name | title | abstract | fulltext | keywords
---|---|---|---|---
288212 | Analysis of windowing mechanisms with infinite-state stochastic Petri nets. | In this paper we present a performance evaluation of windowing mechanisms in world-wide web applications. Previously, such mechanisms have been studied by means of measurements only, however, given suitable tool support, we show that such evaluations can also be performed conveniently using infinite-state stochastic Petri nets. We briefly present this class of stochastic Petri nets as well as the approach for solving the underlying infinite-state Markov chain using matrix-geometric methods. We then present a model of the TCP slow-start congestion avoidance mechanism, subject to a (recently published) typical worldwide web workload. The model is parameterized using measurement data for a national connection and an overseas connection. Our study shows how the maximum congestion window size, the connection release timeout and the packet loss probability influence the expected number of buffered segments at the server, the connection setup rate and the connection time. | Introduction
WHEN modeling and evaluating the performance of
modern distributed systems, complex system behavior
(involving networks, switches, servers, flow control
mechanisms, etc.) as well as very complex workloads (often
a mix of batch data, interactive data and real-time data for
voice and video) need to be taken into account to come up
with trustworthy performance measures. This most often
leads researchers to either use a measurement-based approach
or simulation as performance evaluation technique.
As both these techniques are rather costly, it is worthwhile
to study the suitability of analytic/numerical approaches
based on stochastic Petri nets (SPNs) as well.
As a challenging application for such a suitability study
we have chosen the handling of WWW (World Wide Web)
requests by the hypertext transfer protocol (HTTP) [1].
Clearly, there are many factors involved in this process:
the client issuing the request, the server handling the re-
quest, the type of request (just text, text with embedded
graphics, pictures, or even video), the Internet connecting
the client and the server, as well as the transport protocols
used by client and server (TCP/IP). Taking all these
aspects into account would render an analytical solution
impossible, as also witnessed by the many measurement-based
performance studies in this area ([2], [3]). To the
best of the knowledge of the authors, no analytical performance
studies for such systems have been reported yet.
In this paper we propose to use a special class of SPNs
for the construction and efficient numerical solution of performance
models for the handling of WWW requests. (The authors are with the
Laboratory for Distributed Systems, RWTH Aachen, 52056 Aachen, Germany.
E-mail: {arost,haverkort}@informatik.) Our
models include client characteristics (request pattern) and
request types, the Internet delays, the server speed, and
the influence of the underlying TCP/IP protocol. In par-
ticular, our model includes the explicit connection set-up
(and release) phase of TCP/IP as well as its congestion
avoidance windowing mechanism (known as slow start).
Despite all these details, after a suitable abstraction pro-
cess, the models can still be solved efficiently with numerical
means, thereby using the tool spn2mgm to exploit the
special quasi-birth-death (QBD) structure of the stochastic
process underlying the SPN.
The contribution of this paper is twofold. First, it shows
the suitability of the SPN-based approach for the performance
evaluation of complex systems, provided a suitable
model class and solution method is chosen. Secondly,
the performance results (obtained with realistic parameters
derived from measurements) show the impact of lower-layer
protocols (TCP/IP including its windowing mecha-
nism) on the perceived performance at the application level
(WWW).
This paper is further organized as follows. In Section II,
we concisely describe the class of SPNs and the employed
solution methods, as well as the tool support we have de-
veloped. We then describe the handling of WWW requests
in detail, as well as the corresponding SPN model in Section
III. Section IV is then devoted to a wide variety of
numerical case studies. Section V concludes the paper.
II. The Modeling Framework
The basis of our modeling framework is a special class
of SPNs (so-called infinite-state SPNs), which is suitable
for an efficient numerical analysis. We present the main
properties of these SPNs in the following section. Then,
we discuss some issues concerning the numerical solution
of the underlying Markov chain, and finally present the tool
we developed to support infinite-state-SPN based modeling
and analysis.
A. Matrix-Geometric Stochastic Petri Nets
Infinite-state SPNs are especially suited for modeling
systems which involve large buffers or queues. Based on
the class of (generalized) SPNs proposed by Ciardo in [4],
they allow one distinguished place of unbounded capacity
(graphically represented as a double-circled place) and
represent an extension of the initial work by Florin and
Natkin [5]. The unbounded place is usually used to model
infinite buffers, or to approximate large finite buffers.
While all features of Ciardo's original SPN class are still
Fig. 1. Leveled structure of the underlying Markov chain of an
infinite-state SPN.
available (like immediate transitions, enabling functions,
inhibitor arcs and arc multiplicities), there are some restrictions
concerning the unbounded place:
• The multiplicity of incoming and outgoing arcs is limited
to one.
• Transition rates and weights of immediate transitions
may not depend on the number of tokens in the unbounded
place.
Due to the unbounded place, the Markov chain underlying
the Petri net has an infinite number of states. The
advantage of using infinite-state SPNs instead of classical
ones is that the infinite state space has a special structure,
which makes it eligible to very efficient numerical solution
techniques, leading much quicker to a steady-state solution
than by investigating a large, finite state space.
B. Numerical Solution
The continuous-time Markov chain underlying an
infinite-state SPN is a QBD process [6], [7]. Its state space
can be represented two-dimensionally as a strip which is
unbounded in one direction. In the case of infinite-state
SPNs, the position of a state in the unbounded direction
corresponds to the number of tokens in the unbounded
place. All states which belong to markings having the same
number of tokens in the unbounded place are said to be in
one level. Due to the restrictions mentioned in the previous
section, transitions can only take place between states
which belong to the same level, or between states in adjacent
levels (see Fig. 1). Furthermore, the transition rates
between levels are independent of the level index (except
for a boundary level). This leads to an (infinite) generator
matrix with the following block-tridiagonal structure:
      | B(0,0)  B(0,1)               |
  Q = | B(1,0)  A1      A0           |
      |         A2      A1    A0     |
      |                 A2    A1  .. |
      |                       ..  .. |
Except for the boundary, all transition rates can be kept
in three square matrices A0, A1 and A2. The size of these
matrices equals the number of states in the (non-boundary)
levels (denoted by N in Fig. 1).
Fig. 2. Snapshot of an editing session with agnes.
There exist several efficient techniques for deriving the
steady-state solution of these Markov
chains. The boundary conditions involving B represent a
normal set of linear equations, and the non-boundary part
is reduced to the quadratic matrix polynomial
A0 + R A1 + R^2 A2 = 0.
The solution of this polynomial for R is the core of the solution
techniques. While most methods provide an iterative
solution to this problem (like those presented in [6], [8],
[9]), the approach suggested in [10] transforms the problem
to an Eigenvalue problem. Though the latter method
has some interesting properties, the iterative approaches
proved to be numerically more stable for large models so
far (see [11] for a comparison). The results presented in
this paper were obtained using the LR method [8].
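To make the matrix-geometric computation concrete, here is a small numerical sketch (ours, not the LR method of [8] and not part of spn2mgm): it solves A0 + R A1 + R^2 A2 = 0 by naive fixed-point iteration for a toy one-state-per-level QBD, with illustrative rates only.

```python
import numpy as np

def solve_R(A0, A1, A2, tol=1e-12, max_iter=10000):
    """Minimal nonnegative solution R of A0 + R*A1 + R^2*A2 = 0
    via the naive fixed-point iteration R <- -(A0 + R^2*A2) * inv(A1)."""
    R = np.zeros_like(A0)
    A1_inv = np.linalg.inv(A1)
    for _ in range(max_iter):
        R_next = -(A0 + R @ R @ A2) @ A1_inv
        if np.max(np.abs(R_next - R)) < tol:
            return R_next
        R = R_next
    return R

# Toy QBD: an M/M/1 queue with arrival rate lam and service rate mu,
# viewed as a QBD with one state per level (illustrative values only).
lam, mu = 0.7, 1.0
A0 = np.array([[lam]])          # transitions one level up
A1 = np.array([[-(lam + mu)]])  # local transitions (diagonal block)
A2 = np.array([[mu]])           # transitions one level down

R = solve_R(A0, A1, A2)
print("R =", R[0, 0], "(expected lam/mu =", lam / mu, ")")

# With R known, level probabilities follow the matrix-geometric form
# pi_k = pi_1 * R^(k-1); for this toy chain that is the familiar
# geometric queue-length distribution of the M/M/1 queue.
rho = R[0, 0]
print("P(level 0) =", 1 - rho, " mean level =", rho / (1 - rho))
```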
C. Tool Support
We developed extensive tool support to simplify the use
of infinite-state SPNs for modeling and analysis, and for
abstracting details of the underlying Markov chain. Using
spn2mgm ([12], [13]), it is possible to specify the desired
performance measures easily at the Petri net level in
a reward-based way. Concerning the numerical solution,
the user can choose between the algorithms mentioned in
the previous section; all of them have been implemented
using high-performance linear algebra packages.
For the easy graphical specification of the Petri net,
we adopted the generic net editing system agnes [14] (see
Fig. 2). In addition, a textual specification is still possible.
III. Modeling Windowed Traffic Control Mechanisms
The TCP protocol relies on two window traffic control
mechanisms to handle flow and congestion control. We
focus on TCP's congestion control mechanism, which is
described in the following section. Then, an appropriate
SPN model is proposed, after which we suggest several approaches
for modeling the traffic which has to be delivered
by TCP. Finally, we describe how the model parameters
used in our experiments are derived.
Fig. 3. Accessing WWW documents via TCP/IP.
A. System Description
A TCP connection is associated with two windows: a
receiver window to synchronize the sender's speed with the
speed a receiver is able to process incoming data, and a
congestion window to avoid network overload. The amount
of data sent is given by the minimum of both windows.
Assuming that the receiving station can handle incoming
traffic sufficiently fast, the size of the congestion window is
the limiting factor.
TCP avoids network congestion by limiting the number
of packets traveling through the network simultaneously.
This is accomplished by introducing the congestion win-
dow, holding the remaining tolerable amount of data which
can be fed into the network. Once a packet is acknowl-
edged, the available congestion window space is increased
by the appropriate number of bytes. Since the tolerable
number of packets which can be fed into the network prior
to being acknowledged is not known a priori, TCP adopts
the slow start mechanism suggested by Jacobson [15].
Slow start initializes the congestion window size to one
packet. Whenever a packet gets acknowledged, the window
size is increased by one packet. Thus, the window size
effectively grows exponentially. This exponential growth
phase is bounded by a maximum window size parameter.
Once it is exceeded, the congestion window size increases
in a linear fashion. If a packet gets lost, the maximum
window size parameter is set to half the current window
size, and the current window size is set to one packet.
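As a toy illustration of this behavior, the following sketch (ours; the loss rounds, the initial threshold and the one-segment-per-round linear increase are made-up simplifications) traces the congestion window over a number of transmission rounds.

```python
def slow_start_trace(rounds, loss_rounds, ssthresh=16):
    """Toy trace of TCP slow start / congestion avoidance:
    exponential growth below ssthresh, linear growth above it,
    and on a loss: ssthresh = cwnd/2, cwnd = 1."""
    cwnd = 1
    history = []
    for r in range(rounds):
        history.append(cwnd)
        if r in loss_rounds:          # a segment of this round is lost
            ssthresh = max(1, cwnd // 2)
            cwnd = 1
        elif cwnd < ssthresh:         # exponential growth phase
            cwnd *= 2
        else:                         # linear growth phase
            cwnd += 1
    return history

# Example: losses in rounds 6 and 12 (arbitrary illustration).
print(slow_start_trace(16, loss_rounds={6, 12}))
```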
While slow start provides a suitable algorithm for congestion
avoidance, the initial phase of "probing for band-
width" becomes a problem if a TCP connection is set up
for transferring only a small amount of data. In particu-
lar, the HTTP protocol suffers from this fact, since getting
an HTML document and the images referenced therein
is accomplished by building separate TCP connections for
each of them (see Fig. 3). Since the amount of data per
item is typically small, the slow start mechanism leads to
a situation where just a fraction of the available bandwidth
is used. Furthermore, the establishment of each
connection involves the usual TCP three-way handshake,
increasing delay by one round trip time.
Fig. 4. Petri net for modeling TCP behavior.
In version 1.1 of
the HTTP protocol, the use of persistent connections for
several requests (P-HTTP) is possible and recommended,
thus minimizing the number of necessary slow start phases
and connection setups. Other approaches like T/TCP [16]
and UDP-based mechanisms like ARDP [17] have been addressed
in [3].
B. Model Development
The model presented in this section has been developed
having three main aims in mind. First, the model should
account for the window flow control mechanism and capture
the influence of the slow start mechanism on transmission
performance. Second, connection setups have to
be considered, since we are interested in the gain obtained
by using persistent protocol versions. Third, the complexity
of the overall model should not exceed the numerical
capabilities of our analysis tool, so less important details
had to be omitted.
The model concentrates on the server-side of a client-server
relationship. We assume that the main amount of
traffic arises from server to client (as in the HTTP case),
thus we neglect the impact of flow- and congestion control
mechanisms on traffic which is sent to the server. Instead,
different types of clients will lead to different patterns of
generating segments (or packets) to be processed and transferred
to the client. For the time being, we assume that the
generation takes place according to a Poisson process.
The model is illustrated in Fig. 4. The left hand side of
the model deals with connection setup, while the right hand
side represents windowing and the slow start mechanism.
The segments to be delivered to the client are generated by
the transition arr and put in the unbounded buffer place
buffer. If there is no connection between server and client
yet (as indicated by a token in no conn), transition server
is disabled due to its enabling function and a connection
setup is performed by firing transition connect. The connection
will not be released until all segments in buffer
have been delivered (inhibitor arc to timeout). Once the
buffer is empty and all segments have been transferred
(place net empty), the connection will be released after
an additional delay, modeled by transition timeout.
Each segment in buffer is the result of some operation of
the server. In the HTTP case, the server has at least to perform
some kind of database (or disk) access to retrieve the
desired document. This overhead is modeled by the transition
server.
Now that connection-setup and server overhead
have been accounted for, segments are ready for
transmission. A prerequisite for submitting a segment to
the network is that the congestion window is large enough.
The total size of the congestion window is reflected by the
place cwin. It initially holds one token, representing the
size of one segment. The available size of the congestion
window which currently remains for transmission is represented
by the place tokens. Each segment submitted to
the network takes one token from this place. After the segment
has been acknowledged successfully (represented by
transitions ack and ack 2), the token is put back. Transition
ack 2 is enabled if the maximum allowed window size
(max win) has not been reached yet. In this case, two tokens
are put back into place tokens, and one additional
token is put into cwin to reflect the enlarged congestion
window. Transition ack is only enabled if the maximum
congestion window size has been reached. In this case, the
congestion window is not enlarged any further, and just
one token is put back into tokens (we do not account for
a further linear increase as featured by the original slow
start procedure). As a last possibility, a segment may be
lost in the network, represented by transition loss. In that
case, the segment is put back in the buffer (this implies that
the corresponding lost segment experiences the server delay
induced by server for a second time; for excessive packet
loss ratios, the server rate has to be adjusted appropriately)
and the congestion window size is reset to one by enabling
the immediate transition reset (actually, the congestion
window is only reduced by the size of tokens still available
in the congestion window to limit the size of the model).
This transition is also enabled if no connection exists, thus
effectively initializing the congestion window for the next
connection.
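To give a feel for the dynamics just described, the following sketch simulates a strongly simplified version of this model as a continuous-time Markov chain (Gillespie-style); the event set, the window-reset rule and all rates are our own simplifications and placeholders, not the SPN or the parameters used in the paper.

```python
import random

def simulate(lam=5.0, r_conn=2.0, r_srv=100.0, nu=2.0, r_to=0.1,
             p_loss=0.02, max_win=8, t_end=2000.0, seed=1):
    """Estimate the mean number of buffered segments in a simplified
    buffer / congestion-window / loss model inspired by Fig. 4."""
    rng = random.Random(seed)
    buf, net, win, conn = 0, 0, 1, False   # buffered, in flight, window, connected
    t, area = 0.0, 0.0
    while t < t_end:
        events = [("arr", lam)]
        if not conn and buf > 0:
            events.append(("connect", r_conn))
        if conn and buf > 0 and net < win:
            events.append(("submit", r_srv))
        if net > 0:
            events.append(("ack", net * nu * (1.0 - p_loss)))
            events.append(("loss", net * nu * p_loss))
        if conn and buf == 0 and net == 0:
            events.append(("release", r_to))
        total = sum(rate for _, rate in events)
        dt = rng.expovariate(total)
        area += buf * min(dt, t_end - t)
        t += dt
        x, acc = rng.uniform(0.0, total), 0.0
        for name, rate in events:          # pick an event proportional to rate
            acc += rate
            if x <= acc:
                break
        if name == "arr":
            buf += 1
        elif name == "connect":
            conn, win = True, 1
        elif name == "submit":
            buf -= 1; net += 1
        elif name == "ack":
            net -= 1
            if win < max_win:
                win += 1
        elif name == "loss":
            net -= 1; buf += 1; win = 1    # lost segment re-enters the buffer
        elif name == "release":
            conn = False
    return area / t_end

print("estimated mean number of buffered segments:", round(simulate(), 2))
```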
The exponentially distributed timing of transitions in the
model surely is an approximation of the real-world system;
it may be appropriate to model network delays, but time-out
values usually have a less stochastic character. The
impact of correctly modelling deterministic timeout values
is, however, out of the scope of this paper; see e.g. [18] for
an alternative to model connection release timeout values
by Erlangian distributions.
Note that the firing rates of ack, ack 2 and loss are
proportional to the number of segments submitted to the
network (place net). This approximation of an infinite-
server behavior for network delays accounts for the fact
that the increase of network load by submitting single segments
is negligible. In future models, network delays could
also be described more accurately, e.g. by using appropriate
phase-type distributions as suggested in [19].
1 Note that this represents the overhead per segment of the server
reply, not the overhead per client request. Therefore, per-request
overhead has to be split between the number of segments resulting
from this request.
Fig. 5. Traffic generation model for HTTP.
The most striking difference between this model and the
real slow start mechanism is that the maximum congestion
window is set to a constant size (occurring as max win in
the model), instead of setting it to half its value each time a
segment is lost. Introducing an additional place for holding
the current maximum window size would have led to a too
large state space and has thus been omitted. Since the
maximum window size can not shrink, the results obtained
by analyzing the presented model provide an optimistic
approximation.
C. Workload Models
As mentioned in the previous section, the slow start
mechanism leads to poor bandwidth utilization when there
is only a small amount of data to be transferred per TCP
connection. In order to evaluate the merits of persistent
connection approaches (like P-HTTP) in this case, an appropriate
traffic model has to be developed. We focussed on
modeling HTTP workload, using a characterization similar
to the one presented in [3].
A user request for a web page is satisfied in two successive
steps. First, the HTML document referenced by the
URL is fetched from the server. Afterwards, all images
referenced in this "frame" document are requested from
the server. The traffic model must appropriately generate
the segments which correspond to the server replies to
these individual requests, and put them in the place buffer
(Fig. 4).
The traffic model we propose is shown in Fig. 5. It is
able to account for two different user request types (a small
and a medium one, see next section for details). After an
exponentially distributed user-idle time has passed (tran-
sition leave idle), a probabilistic choice between small
and medium request type takes place. Each request type
consists of two phases, corresponding to generating segments
to satisfy the frame request and segments belonging
to the image requests. While the duration of each phase
corresponds to the time needed to submit the corresponding
request to the server, the rate at which segments are
generated during that phase (transitions T1,T2,T3 and T4)
corresponds to the number of segments the server will send
in reply to the request.
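A rough sketch of how such a two-type, phase-based workload could be produced in a simulation follows; the segment counts are derived from the page sizes quoted later in Section IV (assuming nMSS = 536 bytes), while p_small, t_idle and t_rtt are placeholder values.

```python
import random

def http_workload(n_pages=5, t_idle=30.0, t_rtt=0.3, p_small=0.6, seed=2):
    """Emit (time, page type, phase, #segments) bursts in the spirit of Fig. 5:
    after an idle period a small or medium page is requested; its frame reply
    and, one round trip later, its image replies arrive as bursts of segments."""
    rng = random.Random(seed)
    profiles = {"small": (13, 12), "medium": (7, 140)}  # (frame, image) segments
    t, bursts = 0.0, []
    for _ in range(n_pages):
        t += rng.expovariate(1.0 / t_idle)              # user idle time
        kind = "small" if rng.random() < p_small else "medium"
        frame_segs, image_segs = profiles[kind]
        t += rng.expovariate(1.0 / t_rtt)               # frame request in transit
        bursts.append((round(t, 2), kind, "frame", frame_segs))
        t += rng.expovariate(1.0 / t_rtt)               # image requests in transit
        bursts.append((round(t, 2), kind, "images", image_segs))
    return bursts

for burst in http_workload():
    print(burst)
```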
Fig. 6. Arrival model for one request type with blocking after submitting
a frame request.
As an alternative to the arrival model shown in Fig. 5,
we also investigate simplified versions of it, consisting of
just one request type, simple IPP arrivals, and Poisson ar-
rivals. Since these models do not account for the fact that
prior to submitting an image request the reply to the preceding
frame request has to be completed, we also investigate
a blocking arrival model (see Fig.6). It deals with
one request type only, where all frame segments have to
be delivered before image segments are generated. This is
realized by introducing an additional immediate transition
which is disabled as long as there are any segments left in
the server's outgoing buffer.
D. Parameterization
The parameters for the overall model can be split into three
groups: server, network, and workload parameters (see Table
I). The following paragraphs explain how the SPN
transition rates can be derived from these.
D.1 Server Performance
Server-specific performance data is introduced in the
model by the transition server. Its mean firing time corresponds
to the average workload per segment of an answer to
a client request. We assume that it is due to three param-
eters: computational effort per request, disk seek time per
request and disk transfer time per segment. Per-segment
workloads are obtained by dividing the per-request overheads
by the mean number of segments per request, denoted
by s mean (derived in the workload parameterization
section). In conclusion, the firing rate of transition server
is given by
((t_comp + t_seek)/s_mean + nMSS/disk)^(-1),
where t_seek denotes the disk seek time per request.
D.2 Network parameters
Apart from the usual packet transfer latencies, a connection
setup requires an additional round trip time to account
for the TCP three-way handshake, so the rate of connect is
t_rtt^(-1). The rate of the connection release transition timeout
is given by t_release^(-1).
The time needed to acknowledge a segment sent from
server to client consists of one round trip time plus the
segment transfer time, which depends on segment size
Server characteristics
Computation time per request [s] t_comp
Disk transfer speed [B/s] disk
Network characteristics
Round trip time [s] t_rtt
Bandwidth [B/s] bw
Loss ratio p_loss
Max. TCP segment size [B] nMSS
Connection release timeout [s] t_release
Max. congestion window size [segs] max_cwin
Workload characteristics
User idle time [s] t_idle
Small request probability p_small
Small frame request size [B] b_sf
Number of small image requests n_si
Small image request sizes [B] b_si,k
Medium frame request size [B] b_mf
Number of medium image requests n_mi
Medium image request sizes [B] b_mi,k
TABLE I: Overview of all model parameters.
and bandwidth. Since we also consider packet losses,
the rates of transitions ack and ack 2 are given by
(1 - p_loss)/(t_rtt + nMSS/bw), and the rate of transition
loss equals p_loss/(t_rtt + nMSS/bw).
D.3 Workload parameters
Due to the small size of client requests, we assume that
the submission of a request takes on average one round trip
time. Thus, the firing rates of transitions f1 and f2 equal
t_rtt^(-1). We also assume that all image requests are submitted
simultaneously, thus again involving just one round trip
time and yielding the rate t_rtt^(-1) for transitions i1 and i2 as
well.
The number of segments to be put into the server's
buffer place depends on the size of the reply corresponding
to a request. For small requests, it is given by
s_sf = ⌈b_sf/nMSS⌉ for the initial frame request. The corresponding
total number of segments to be transferred in reply to the
image requests is s_si = sum_k ⌈b_si,k/nMSS⌉. These are
the numbers of segments to be generated on average while
a token is in places frame_s and img_s, so the rates of
the corresponding generation transitions are s_sf/t_rtt and s_si/t_rtt, respectively. The parameters
for the medium size request type can be computed
similarly. Also, using n_si, n_mi and p_small, the mean number
of segments per request can be computed as
s_mean = (p_small (s_sf + s_si) + (1 - p_small)(s_mf + s_mi)) / (p_small (1 + n_si) + (1 - p_small)(1 + n_mi)).
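For illustration, these quantities can be evaluated directly from the request sizes used later in Section IV-A with nMSS = 536 bytes; the value of p_small below is a placeholder, and the averaging used for s_mean follows the per-request reading sketched above.

```python
from math import ceil

nMSS = 536                                  # max TCP segment size [B]
b_sf, b_si = 6651, [3883, 1866]             # small page: frame and images [B]
b_mf, b_mi = 3220, [57613, 2344, 14190]     # medium page: frame and images [B]

s_sf = ceil(b_sf / nMSS)                    # segments for the small frame reply
s_si = sum(ceil(b / nMSS) for b in b_si)    # segments for the small image replies
s_mf = ceil(b_mf / nMSS)
s_mi = sum(ceil(b / nMSS) for b in b_mi)

print("small:  frame", s_sf, "images", s_si)
print("medium: frame", s_mf, "images", s_mi)

# Mean number of segments per individual request, averaging over all frame
# and image requests of a page -- p_small is a placeholder value here.
p_small = 0.6
n_si, n_mi = len(b_si), len(b_mi)
s_mean = (p_small * (s_sf + s_si) + (1 - p_small) * (s_mf + s_mi)) / \
         (p_small * (1 + n_si) + (1 - p_small) * (1 + n_mi))
print("s_mean =", round(s_mean, 2))
```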
Parameter National International
nMSS 536 [B]
TABLE II: Network parameters used in the experiments.
IV. Performance Evaluation
The SPN model proposed in the previous section features
a wealth of model parameters, and a large number of
performance measures can be obtained from its analysis.
We will thus keep many parameters constant throughout
all experiments, and focus on the investigation of a few
aspects only. We present the numerical results after describing
the selected parameters in the next section. Finally,
some information on the model's complexity and the
computational solution effort is given.
A. Parameter selection
Since the following investigations focus on the variation
of a few parameters only, many of the model parameters
given in Table I are kept constant.
Server characteristics. We assume that the computation
time per request is t_comp = 0.01 [s]. The performance
parameters of the disk are t
Network characteristics. Parameters concerning network
performance have been taken from [19], where extensive Internet
performance investigations have been accomplished.
We selected two reference connections, a national one
(RWTH Aachen to University of Karlsruhe, Germany) and
an international connection (RWTH Aachen to Stanford
University, U.S.A.). See Table II for the parameters of
these connections.
If not mentioned otherwise, the connection release time-out
t release has been set to 10 seconds.
Workload characteristics. The parameters for small and
medium request types occurring in the model's workload
characterization have been taken from [3], representing the
structure of some popular Web pages. The small request
type consists of an initial 6651 byte frame page, referencing
two images of size 3883 and 1866 bytes. Medium requests
are formed by a 3220 byte frame and three images of size
57613, 2344 and 14190 bytes. We assume that the probability
for a small request is
B. Numerical Results
Our investigations focus on three main areas. We first
investigate the impact of different workload models on the
obtained performance measures. Here, we were also interested
to which extent the maximum congestion window
size (max cwin) influences the performance characteristics.
Fig. 7. Expected buffer size for the international connection and
different workload models (full HTTP, IPP, one request type, one
request type/blocking, and Poisson).
Fig. 8. Expected buffer size for the national connection and different
workload models.
The connection release timeout plays an important role
when dealing with protocols like persistent HTTP. We thus
show how changing this parameter affects the properties of
the overall system in the next experiment. Finally, the influence
of the segment loss probability on the system has
been investigated.
B.1 Workload models and congestion window size
As a first point, we were interested in how far detailed
workload models influence the results of our investigation.
Since the workload model greatly enlarges the number of
states of the Markov chain underlying the SPN (e.g., the
QBD level size of the arrival model shown in Fig. 5 is five
times as large as a model with simple Poisson arrivals), it
is interesting to see whether this effort pays off.
Fig. 7 shows the mean buffer size (number of tokens in
place buffer) for different workload models. Clearly, increasing
the maximum window size leads to a higher segment
throughput, and thus reduces the buffer filling.
Fig. 9. Probability for an existing connection for different workload
models.
Concerning the different workload models, the results
for the full HTTP model (as shown in Fig. 5) differ significantly
from those of the simplified versions. The results of
the one-request and IPP workload models (with identical
mean segment arrival rate) are very similar and still capture
the qualitative behavior of the original model. How-
ever, due to ignoring the bursty arrival pattern accounted
for by the other workload models, the approximation of the
workload by a Poisson process leads to a dramatic under-estimation
of the expected buffer size. As an interesting
point, the results for the blocking workload model (Fig. 6)
do not differ too much from the non-blocking one-request-
type model, especially for larger maximum connection window
sizes. The absolute values are smaller, since the mean
segment generation rate for this workload model is lower
than for the other models (due to the additional waiting
time in place wait).
Fig. 8 illustrates the same performance measures for the
national Internet connection. Due to the lower round trip
time, the average buffer filling is much lower than in the international
case. Again, the Poisson workload model yields
much too low results.
From now on focusing on the international connection
type in all experiments, we investigate the steady-state
probability for an existing connection (i.e., a token in place
conn) in Fig. 9 . While the results for all bursty workload
models coincide, the Poisson workload distributes segments
much more in time, leading to a situation where the connection
timeout almost never expires. Due to the increased
usable bandwidth for larger maximum connection windows,
the segments in the server's buffer are delivered quicker to
the client, which leads to the connection to be released
quicker. This results in lower connection probabilities for
larger values of max cwin.
Fig. 10 shows the connection setup rate for the different
arrival models, which may represent a cost-factor. As can
be observed, the setup rate increases for larger congestion
window sizes, since requests are satisfied quicker and the
release timeout expires more often in this case. Again, the
Poisson model leads to significantly different results.
Fig. 10. Connection setup rates for different workload models.
Fig. 11. Probability for an existing connection for different connection
release timeouts.
Summarizing, it can be said that ignoring the workload
burstiness dramatically alters the performance measures
obtained from the model, however, thanks to our
modeling environment, bursty arrival patterns can easily
be accounted for. Furthermore, increasing the maximum
congestion window above a minimum value of about 8-
segments leads to much better bandwidth utilization,
thus reducing the buffer size and the time connections
are held, albeit at the cost of increased connection setup
rates. Though increasing the bandwidth utilization by
higher maximum congestion window sizes is generally desir-
able, this may also increase network congestion and packet
losses. However, this aspect can not be considered with the
single client-server model presented here.
Fig. 12. Connection setup rates for different connection release timeouts.
B.2 Influence of connection release timeout.
The introduction of a connection release timeout is crucial
for the reduction of connection-setups and delays when
protocols like P-HTTP are employed. Clearly, when choosing
this parameter it is important to compare the gain of
less connection setups with the higher costs imposed by
maintaining a (mainly unused) connection.
Fig. 11 illustrates the probability of an existing connection
for different values of t release for a maximum connection
window size of 12. Obviously, the probability increases
for larger timeout values. Since the amount of data to be
transferred remains constant, an existing connection is often
unused. On the other hand, the connection setup rate
decreases for larger timeout values, as illustrated in Fig. 12.
B.3 Influence of segment loss ratio
Apart from bandwidth and average round trip time, the
packet loss probability heavily influences the performance
of an Internet connection. The impact of losses on the
windowing system is shown in Fig. 13 for different values
of max cwin. It can be observed that the mean buffer size
of the system increases dramatically for high loss ratios if
max cwin is chosen too small. The system's behavior is
much more robust concerning packet losses if the connection
window size is large enough. This behavior is due to
the fact that higher packet loss ratios effectively increase
the amount of segments to be delivered by the network,
since lost segments are re-submitted for transmission by
transition loss done. For a loss ratio of 0.5, every second
packet has to be retransmitted. Since lost packets are
again subject to loss in the next transmission try, the number
of segments to be delivered effectively doubles. Since
small values of max cwin lead to small effective transmission
performance, the buffer size is particularly sensitive to
packet losses.
Fig. 13. Expected buffer size for different loss probabilities (full HTTP
workload model, max_cwin = 4, 8, 12).
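The doubling argument can be made precise: if every transmission of a segment is lost independently with probability p_loss, the number of transmissions needed per segment is geometrically distributed, so

E[transmissions per segment] = \sum_{k \ge 1} k (1 - p_loss) p_loss^{k-1} = 1 / (1 - p_loss),

which equals 2 for p_loss = 0.5 and grows rapidly as p_loss approaches 1.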
C. Computational effort
The Markov chain underlying the investigated SPN can
grow remarkably large. For example, the SPN shown in
Fig. 4 with the full workload model as shown in Fig. 5 leads
to an underlying QBD process with 765 states per level.
In some experiments (see e.g. Fig. 13), we obtained mean
buffer sizes around 1000. Consequently, the evaluation of
this system by using a large finite Markov chain would
involve the investigation of several thousand levels, leading
to a total number of several million states. While deriving
steady-state measures of Markov chains of this size becomes
a problem when using common numerical or simulation
methods, the QBD-based solution approach leads to results
in a quick and memory-efficient way.
We were able to accomplish the solution of the above-mentioned
model in around 2.5 hours on a SUN SparcStation
20 clocked at 75 MHz. This time includes generating
the state space, recognizing the QBD structure, solving
the QBD Markov chain, and computing the desired performance
measures. However, it should be noted that models
of this size currently represent the upper limit we are able
to solve due to numerical instabilities.
V. Conclusions
To our best knowledge, we presented the first performance
evaluation study of the TCP slow-start mechanism
under P-HTTP workload which is based on the numerical
analysis of a SPN model. It has been shown that the choice
of a reasonably high limit for the maximum congestion window
is crucial for efficiently utilizing the communication
infrastructure. This is especially true for connections with
high packet losses.
We also demonstrated the strong influence of the employed
workload model on the results of the system anal-
ysis. In particular, the approximation of bursty workloads
by a Poisson process yields misleading results.
Our modeling environment, based on powerful yet user-friendly
tools for using efficient numerical techniques and
hiding them behind a SPN-based interface proved to be of
great use in the experiments. By employing QBD-based
methods for the analysis of the underlying Markov chain
we were able to derive numerical results much quicker than
by conventional methods.
Concerning the application investigated here, far more
extensive experiments are possible. We did not present
the impact of varying the speed of the server or the impact
of changing workloads and user behavior. Further
investigations could also focus on variants of the slow-start
algorithm. Of course, while the results presented here
are reasonable, the validation of (especially more detailed)
models by measurements and simulations is an important
topic.
Future work will also focus on the improvement of our
modeling environment. In particular, the limit of about
1000 states in the repeating levels of the underlying QBD
process is often a problem (for example, it led to the relatively
small choice of the maximum congestion window
parameter in the presented experiments). Future work will
concentrate on the combination of sparse matrix computations
with the spectral expansion solution method for QBD
processes [10] to alleviate this problem. Furthermore, this
method would also allow us to drop the first requirement
on the infinite SPN place mentioned in Section II.
Acknowledgments
This work has been supported by the doctorate program
computer science and technology at the RWTH Aachen.
We also thank the department of Prof. Hromkovic for donating
some spare computing resources.
--R
"Hypertext transfer protocol - HTTP/1.1,"
"Internet Web Servers: Workload Characterization and Performance Implications,"
"Modeling the performance of HTTP over several transport protocols,"
"SPNP: Stochastic Petri net package,"
"One place unbounded stochastic Petri nets: Ergodicity criteria and steady-state solution,"
Matrix Geometric Solutions in Stochastic Models: An Algorithmic Approach
"Matrix geometric solutions in Markov models: A mathematical tutorial,"
"A logarithmic reduction algorithm for quasi birth and death processes,"
"Analysis of a finite capacity multi-server delay-loss system with a general Markovian arrival process,"
"Spectral expansion solution for class of Markov models: Application and comparison with the matrix-geometric method,"
"Steady-state analysis of infinite stochastic Petri nets: A comparison between the spectral expansion and the matrix-geometric method,"
"SPN2MGM: Tool support for matrix-geometric stochastic Petri nets,"
spn2mgm Web Page
"Entwurf und Implementierung einer parametrisierbaren Benutzeroberflache fur hierarchische Netz- modelle,"
"Congestion avoidance and control,"
"Extending TCP for transactions - concepts,"
The virtual system model: A scalable approach to organizing large systems
"Untersuchungen zum Verbindungsmanagement bei Videoverkehr mit Matrix- geometrischen stochastischen Petrinetzen,"
Messung und Modellierung der Dienstgüte paketvermittelnder Netze
--TR | matrix-geometric methods;window flow control;congestion control;stochastic Petri nets |
288307 | Filter-based model checking of partial systems. | Recent years have seen dramatic growth in the application of model checking techniques to the validation and verification of correctness properties of hardware, and more recently software, systems. Most of this work has been aimed at reasoning about properties of complete systems. This paper describes an automatable approach for building finite-state models of partially defined software systems that are amenable to model checking using existing tools. It enables the application of existing model checking tools to system components taking into account assumptions about the behavior of the environment in which the components will execute. We illustrate the application of the approach by validating and verifying properties of a reusable parameterized programming framework. | INTRODUCTION
Modern software is, increasingly, built as a collection
of independently produced components which are assembled
to achieve a system's requirements. A typical
software system consists of instantiations of generic,
reusable components and components built specifically
for that system. (This work was supported in part by NSF and DARPA under
grants CCR-9633388, CCR-9703094, and CCR-9708184 and by
NASA under grant NAG-02-1209.) This software development approach
offers many potential advantages, but it also significantly
complicates the process of verifying and validating
the correctness of the resulting software systems.
Developers who wish to validate or verify correctness
properties of software components face a number of
challenges. By definition, reusable components are
built before the systems that incorporate them, thus
detailed knowledge of the context in which a component
will be used is unavailable. Components are oftened designed
to be very general, for breadth of applicability,
yet configurable to the needs of specific systems; this
generality may impede component verification. Typ-
ically, components are subjected to unit-level testing
and are delivered with informal documentation of the
intended component interface behavior and required
component parameter behavior. For high-assurance
systems this is lacking in a number of respects: (i) unit-level
reasoning focuses solely on local properties of the
component under consideration, (ii) informal documentation
cannot be directly incorporated into rigorous reasoning
processes, and (iii) system developers may have
some knowledge about the context of component use,
but no means of exploiting this information. In this pa-
per, we describe an automatable approach to applying
existing model checking tools to the verification of partial
software systems (i.e., systems with some missing
components) that addresses these concerns.
Model checking is performed on a finite-state model of
system behavior not on the actual system artifacts (e.g.,
design, code), thus any application of model checking
to software must describe model construction. In our
approach, models for partial systems are constructed in
two independent steps. First, a partial system is completed
with a source code representation of the behavior
of missing system components. This converts the open
partial-system to a closed system to which model checking
tools can be applied. Second, techniques from partial
evaluation and abstract interpretation [19, 18] are
applied to transform the completed source code into the
input format of existing finite-state system generation
tools. Finite-state models built in this way are safe,
thereby insuring the correctness of verification results,
but may be overly pessimistic with respect to the missing
components' behavior. To enhance the precision of
reasoning, we filter [16] missing components' behavior
based on assumptions about allowable behavior of those
components. Our approach supports model checking
of systems with different kinds of missing components,
including components that call, are called from, and
execute in parallel with components of the partial sys-
tem. This flexibility enables verification of properties of
individual components, collections of components, and
systems.
The work described in this paper extends the applicability
of existing model checking techniques and tools
to partial software systems and illustrates the practical
benefits of the filter-based approach to automated anal-
ysis. We illustrate our approach and its benefits by verifying
correctness properties, expressed in linear temporal
logic (LTL) [24], of realistic generic, reusable com-
ponents, written in Ada, using the SPIN model checker
[20]. In principle, the approach described in this paper
can be used in any setting that supports filter-based
analysis [16].
In the following section we discuss relevant background
material. Section 4 discusses abstractions used in constructing
finite-state software models and Section 5 describes
our approach to completing partial software sys-
tems. We then present our experiences and results from
applying the analysis approach to a generic, reusable
software component in Section 6. Section 7 describes
related work and we conclude, in Section 8, with a summary
and plans for future work.
In this section we overview LTL model checking, a variant
of model checking for open systems called module
checking, and a technique for refining model checking
results using filter formulae. These ideas form the basis
for our approach to constructing and checking finite-state
models of partial software systems.
3.1 Model Checking
Model checking techniques [7, 20] have found success
in automating the validation and verification of properties
of finite-state systems. They have been particularly
effective in the analysis of hardware systems [26]
and communication protocols [20, 30]. Recent work has
seen model checking applied to more general kinds of
software artifacts including requirements specifications
[2, 3], architectures [28], and implementations [13, 9]. In
model checking software, one describes the software as
a finite-state transition system, specifies system properties
with a temporal logic formula, and checks, exhaus-
tively, that the sequences of transition system states
satisfy the formula.
There are a variety of temporal logics that might be
used for coding specifications. We use linear temporal
logic in our work because it supports filter-based
analysis and it is supported by robust, efficient model
checking tools such as SPIN [20]. In LTL a pattern of
states is defined that characterizes all possible behaviors
of the finite-state system. We describe LTL operators
using SPIN's ASCII notation. LTL is a propositional
logic with the standard connectives &&, ->, and !. It
includes three temporal operators: <>p says p holds at
some point in the future, []p says p holds at all points
in the future, and the binary pUq operator says that p
holds at all points up to the first point where q holds.
An example LTL specification for the response property
"all requests for a resource are followed by granting of
the resource" is [](request -> <>granted).
SPIN accepts design specifications written in the
Promela language and it accepts correctness properties
written in LTL. User's specify a collection of interacting
processes whose product defines the finite-state
model of system behavior. SPIN performs an efficient
non-empty language intersection test to determine if
any state sequences in the model conform to the negation
of the property specification. If there are no such
sequences then the property holds, otherwise the sequences
are presented to the user as exhibits of erroneous
system behavior.
3.2 Module Checking
In computer system design, a closed system is a system
whose behavior is completely determined by the state
of the system. An open system (or module [22, 23]) is a
system that interacts with its environment and whose
behavior depends on this interaction. Given an open
system and temporal logic formula, the module checking
problem asks whether for all possible environments,
the composition of the model with the environment satisfies
the formula. Fortunately, for LTL, the module
checking problem coincides with the basic model checking
problem[22]. Often, we don't want to check a formula
with respect to all environments, but only with
respect to those that satisfy some assumptions. In the
assume-guarantee paradigm [29], the specification of a
module consists of two parts. One part describes the
guaranteed behavior of the module; which we encode in
the finite state system to be analyzed. The other part
specifies the assumed behavior of the environment with
which the module is interacting and is combined with
the property formula to be analyzed.
3.3 Filter-based Analysis
Filters [16] are constraints used to incrementally refine
a naively generated state space and help validate properties
of the space via model checking. Filters can be
represented in a variety of forms (e.g., as automata or
temporal logic formulae) and are used in the FLAVERS
static analysis system [14]. Filters in FLAVERS were
originally developed to sharpen the precision of analysis
relative to internal components of a complete software
system that were purposefully modeled in a safe, but
abstract manner. Filters serve equally well in refining
analysis results with respect to external component behavior
in the analysis of partial software systems [11].
In this paper, we encode filters in LTL formulae to perform
assume-guarantee model checking. Given a property P
and formulae F_1, ..., F_n that encode assumptions
about the environment, we model check the combined
formula (F_1 && ... && F_n) -> P. We refer to the individual
F_i as filters and to the combined formula as a
filter formula.
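As a rough illustration of the idea, the sketch below checks a response property only on finite traces that pass a set of filters, i.e., it evaluates the obligation "filters imply property" trace by trace; this is plain Python over invented example traces, not SPIN's automata-based procedure.

```python
def response(trigger, outcome):
    """[] (trigger -> <> outcome) over a finite trace (list of sets of props)."""
    def holds(trace):
        return all(outcome in set().union(*trace[i:])
                   for i, props in enumerate(trace) if trigger in props)
    return holds

def precedence(earlier, later):
    """'later' never appears before the first occurrence of 'earlier'."""
    def holds(trace):
        seen = False
        for props in trace:
            if earlier in props:
                seen = True
            if later in props and not seen:
                return False
        return True
    return holds

def check(traces, prop, filters):
    """Filter-based check: the property need only hold on traces
    that satisfy every filter."""
    return all(prop(t) for t in traces if all(f(t) for f in filters))

traces = [
    [{"request"}, set(), {"granted"}],  # well-behaved environment
    [{"granted"}, {"request"}],         # violates the filter: ignored by check()
    [{"request"}, set()],               # violates the property
]
prop = response("request", "granted")
env_filter = precedence("request", "granted")

print("unfiltered:", all(prop(t) for t in traces))        # False
print("filtered:  ", check(traces, prop, [env_filter]))   # False (third trace)
print("filtered, first two traces:",
      check(traces[:2], prop, [env_filter]))              # True
```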
We note that many specification formalisms support
filter-based analysis, but some popular formalisms, such
as CTL, do not. It may be possible to encode a filter
into a CTL formula. In general, however, because multiple
temporal operators cannot lie directly in the scope
of a single path quantifier, there is no simple method
for constructing a filter formula in CTL.
In principle, model checking can be applied to any
finite-state system. For non-trivial software systems we
cannot render a finite-state system that precisely models
the system's behavior, since, in general, the system
will not be finite-state. Even for finite-state software,
the size of a precise finite-state model will, in general, be
exponential in the number of independent components,
i.e., variables and threads of control. For these rea-
sons, we would like to use finite-state system models
that reflect the execution behavior of the software as
precisely as possible while enabling tractable analysis.
We use techniques from abstract interpretation to construct
such models. In the remainder of this section, we
introduce the notion of abstract interpretation, we then
describe a collection of abstract interpretations that we
use in model construction, and finally, we describe our
approach to the selection of abstractions to be used for
a given software system.
4.1 Safe Models for Verification
We say that a finite-state model of a software system
is safe with respect to model checking of a property
specification if model checking succeeds only when the
property holds on the real system. For LTL model-
checking, which is fundamentally an all-paths analy-
sis, any abstraction of behavior must preserve information
about all possible system executions. This class
of abstractions can be described as abstract interpretations
(AI) [10] over the system's execution semantics.
These abstractions are similar to the kinds of approximations
that are introduced into program representations
(e.g., control flow graphs) used in compiler analyses
[27]. When behaviors are abstracted in this way
and then exhaustively compared to an LTL specification
and found to be in conformance, one can be sure
that the true executable system behaviors conform to
the specification.
One of the strengths of AIs is that they guarantee the
safety of information gathered by analyses that incorporate
them. To achieve this we need to precisely define
each AI. We formalize an AI as a \Sigma-algebra [18] which
defines, for a concrete type signature in the source pro-
gram, a type for the abstract domain of the AI and
abstract definitions for the operations in the signature.
Operationally we view a \Sigma-algebra as a data abstraction
with a defined domain of values and implementations
of operations over that domain. This operational view
allows systematic abstraction by substituting abstract
definitions for concrete definitions for abstracted program
variables. Computation with abstract values and
operations can then proceed in the same way as it would
have for concrete values and operation.
Unlike the concrete operations in a program, we can
define abstract operations to produce sets of values of
the operation's return type. This is a mechanism for encoding
lack of precise information about variable values.
This mechanism can be exploited by the partial evaluation
capabilities discussed in Section 5. The partial
evaluator will treat a set of values returned by an operation
as equally likely possibilities and create a variant
program fragment for each value (i.e., simulating non-deterministic
choice). This allows subsequent analyses
(e.g., model checking) to detect the presence of specific
values that may be returned by the operation. The
technical details of how this is performed is given in
[18] for a simple imperative language; we have scaled
those techniques up and applied them to a real Ada
program in Section 6. Conceptually, one can think of
partial evaluation as the engine that drives the systematic
application of selected abstract interpretations to
a given source program.
4.2 Sample Abstract Interpretations
Space constraints make it impossible to give the complete
formalization of the AIs used in the example in
Section 6. In this section, we describe the main idea of
each AI and illustrate selected abstract operations.
The point AI is the most extreme form of abstraction.
Under this AI, a variable's abstract domain has a single
value, representing the lack of any knowledge about
possible variable values. Abstract operations for the
variable are defined as the constant function producing
the domain value. Abstract relational operations
that test the value of a variable are defined as the constant
function returning the set ftrue; falseg. While
the point AI is extreme in its abstraction, it is never-
theless, not uncommon in existing FSV approaches-
many state reachability analyses (e.g., CATS [32]) use
this abstraction for all program variables.
Closely related to the point AI is the choice AI. This AI
also encodes a complete lack of knowledge about possible
program variables, but it does so in a different way.
Abstract operations are defined to produce the set of all
possible domain values. Abstract relational operations
that test the value of a variable retain their concrete
semantics. In most cases, the possible values reaching
such a test will consist of the set of all domain values
and the result of the test will be the set ftrue; falseg.
Exposing the distinct domain values to a partial evaluator
gives it the opportunity to specialize program fragments
with respect to possible variable values. For ex-
ample, a control flow branch for each variable value
can be introduced into the program, subsequent program
fragments can be hoisted into each branch, and
each fragment can be specialized to the branch variable
value. Any tests of an abstracted variable within such
a branch will have a single domain value flowing into it,
thus, a more refined test result (e.g., true or false, but
not both) may be computed. The resulting program
model can be more precise than using the point AI, but
care should be taken in applying the choice AI since it
will result in a larger program model.
The k-ordered data AI provides the ability to distinguish
the identity of k data elements, but completely
abstracts the values of those elements. For
2-ordered data AI, any pair of concrete data values are
mapped to abstract values d 1 and d 2 ; all other values
are mapped to the ot value. Like in the choice
AI, abstract operations are defined to return sets of
values to model lack of knowledge about specific abstract
values. For all operations, except assignment,
the constant function returning fd 1 ; d 2 ; otg is used. For
assignment the identity function is used. Relational
test operations are slightly more subtle. Relational
operations other than equality, and inequality, return
{true, false}. Equality is defined as:
eq(x, y) = {true} if x = y and x, y in {d_1, d_2}; {true, false} if x = y = ot; {false} if x != y.
Inequality is defined analogously.
A special case of the classic signs AI [1] is the
zero-pos AI which is capable of differentiating between
valuations of a variable that are positive and
zero. The abstract domain ranges over three values:
unknown; zero; positive. For this AI we find it convenient
to introduce the unknown value, which represents
the fact that the variable can have either zero or
positive value. Abstract operations for assignment of
constant zero, increment and decrement by a positive
value, and a greater-than test with zero are defined as:
positive
Other operations are defined analogously.
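A small sketch of how the zero-pos operations could be realized as a data abstraction follows; the treatment of decrement and of the borderline cases is our own guess at natural definitions, not the paper's exact formalization.

```python
UNKNOWN, ZERO, POS = "unknown", "zero", "positive"

def assign_zero(_x):
    return ZERO

def increment(x):
    # adding a positive amount always yields a positive value
    return POS

def decrement(x):
    # subtracting a positive amount: the result could be zero or positive,
    # so we conservatively return "unknown" (this sketch assumes the
    # variable never becomes negative)
    return UNKNOWN

def greater_than_zero(x):
    # relational tests return *sets* of possible outcomes
    if x == POS:
        return {True}
    if x == ZERO:
        return {False}
    return {True, False}   # unknown

# Tiny abstract "execution" of:  x := 0; x := x + 1; if x > 0 ...
x = assign_zero(None)
x = increment(x)
print(x, greater_than_zero(x))           # positive {True}
print(greater_than_zero(decrement(x)))   # {False, True}
```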
It is also possible to define an AI that is safe with
respect to a restricted class of LTL properties. One
such abstraction is used commonly in verifying message
ordering requirements in communication protocols.
Wolper [31] has shown that reasoning about pairwise
ordering questions over a communication channel that
accepts large domains of values can be achieved using
a domain of size three 1 . This can be achieved when
the data in the channel are not modified or tested by
the program 2 . We support reasoning for this class of
questions through the use of a 2-ordered list AI. This AI
represents the behavior of a "list" of data items which
are themselves abstracted by the 2-ordered data AI.
Conceptually, the values of the abstracted list record
whether a specific d i has been inserted into the list and
not removed yet. If both of the non-ot 2-ordered data
are in the list their ordering is also recorded. No attempt
is made to represent the number of ot elements
in the list or their relative position with respect to the
d i values. The abstract list values are:
some : zero or more ot values
d 1 mixed with zero or more ot values
d 2 mixed with zero or more ot values
d 1 and d 2 mixed with zero or more ot values,
with d 1 in front of d 2
d 2 and d 1 mixed with zero or more ot values,
with d 2 in front of d 1
Technically, this AI is not safe for LTL (e.g., it does
not allow for lists with multiple instances of d 1 values),
however, the abstraction is safe for all system executions
for which the d i are inserted at most once into
the list. Thus, the AI is safe for LTL formulae in which
a filter is applied to an arbitrary formula P, where the
call and return prefixes appearing in the filter indicate the
program actions of invoking and returning from an operation.
Such a formula is a filter-formula, as described in Section 3,
that restricts checking of P to paths that are consistent
with the information preserved by the 2-ordered list AI.
1. Requirements involving more than a pair of data items can
be handled by a simple scaling of the approach described here.
2. This condition can be enforced using an approach that is
similar to the restriction of errors discussed in Section 5.
We illustrate abstract list operations for Inserting elements
at the tail and Removing elements at the head; other
operations are defined analogously and return {(ot, L)} in
the cases not distinguished by the abstraction.
Since list operations may both produce values and update
the list contents, we define abstract operations over
tuples. The first component is the return value of the
operation; a distinguished value indicates that no value is
returned. The rest of the components define how the components
of the AI should be updated based on the operation. In
the next section we discuss the use of a 2-ordered abstraction
in completing a partial system that enables
verification of ordering properties of data-independent
systems.
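A possible encoding of the abstract list and of Insertion at the tail, again as an Ada sketch of ours (Remove would additionally return the removed abstract value, following the tuple-valued operations above), is:
package Two_Ordered_List is
   type Abs_Value is (D1, D2, Ot);
   type Abs_List is (Some_Only, Has_D1, Has_D2, D1_Then_D2, D2_Then_D1);
   -- ot insertions are not tracked; inserting d1 or d2 records its
   -- presence and, when the other value is already present, the order;
   -- re-insertion of a recorded value lies outside the safe use of the AI
   function Insert (L : Abs_List; V : Abs_Value) return Abs_List is
     (case V is
        when Ot => L,
        when D1 => (case L is
                      when Some_Only => Has_D1,
                      when Has_D2    => D2_Then_D1,
                      when others    => L),
        when D2 => (case L is
                      when Some_Only => Has_D2,
                      when Has_D1    => D1_Then_D2,
                      when others    => L));
end Two_Ordered_List;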
4.3 Abstraction Selection
Given a collection of program variables and a collection
of AIs we must select, for each variable, the AI which
will define its semantics in the finite-state model. We
do not believe that this process can be completely automated
in all cases. Our experience applying AIs in
model construction, however, has left us with a methodology
and a set of heuristics for selecting abstractions.
In this methodology, we bind variables to AIs, then use
additional program information to refine the modeling
of a variable by binding it to a more precise AI.
Start with the point AI Initially all variables are
modeled with the point AI.
Use choice for variables with very small domains
Variables with domains of size less than 10 that
are used in conditional expressions are modeled
with the choice AI.
Model semantic features in the specification
The property to be checked includes, in the form
of propositions, different semantic features of
the program (e.g., valuations of specific program
variables). These features must be modeled
precisely by an AI to have any hope of checking
the property.
Figure 1: Example for AI Selection (a program fragment
over the variables d, x and y with the conditionals
not d and y > 0 and not d)
Select controlling variables In addition to variables
mentioned explicitly in the specification, we
consider variables on which they are control dependent.
The conditional expressions for these
controlling variables suggest semantic features that
should be modeled by an AI.
Select variables with broadest impact When
confronted with multiple controlling variables to
model, select the one that appears most often in a
conditional.
After the selection process is complete, we generate a
finite-state model using the variable-AI bindings and
check the property. Model checker output either proves
the property or presents a counter-example whose analysis
may lead to further refinement of the AIs used to
model program variables.
To illustrate this methodology, consider the program
fragment in Figure 1 which has variables d, x and y.
Assume we are interested in reasoning about the response
property [](xISzero -> <>call-P). The key
features that are mentioned explicitly in this specification
are values of variable x and calls to procedure P.
We must model x with more precision than the point
AI provides in order to determine the states in which
it has value zero. An effective AI for x must be able to
distinguish zero values from non-zero values; we choose
the zero-pos AI. At this point we could generate an abstracted
model and check the property or consider additional
refinements of the model; we choose the latter for
illustrating our example. Using control dependence information
we can determine the variables that appear in
conditionals that determine whether statements related
to x and P execute. In our example, there are two such
variables d and y. We could refine the modeling of both
of these variables, but, we prefer an incremental refinement
to avoid unnecessary expansion of the model. In
choosing between these variables, we see that d appears
in both conditionals and we choose to model it since it
may have a broader impact than modeling y. Since d is
a boolean variable and the conditional tests for falsity,
we choose for it to retain its concrete semantics. At
this point, we would generate an abstracted model and
check the property. If a true result is obtained then we
are sure that the property holds on the program, even
though the finite-state system only models two variables
with any precision. If a false result is obtained
then we must examine the counter-example produced
by the model checker. It may reveal a true defect in
the program or it may reveal an infeasible path through
the model. In the latter case, we identify the variables
in the conditionals along the counter-example's path as
candidates for binding to more precise AIs.
This methodology is not foolproof. It is based on a fixed
collection of AIs and a given program variable may require
an AI that is not in that collection. Our heuristics
for choosing variables to refine may cause the generation
of finite-state models that are overly precise, and whose
analysis is more costly than is necessary. Nevertheless,
this approach has worked well on a variety of examples
and we will continue to improve it by incorporating
additional AIs and mechanisms for identifying candidates
for refinement. While not fully-automatable, this
methodology could benefit from automated support in
computing controlling variables and in the analysis of
counter-examples. We are currently investigating how
best to provide this kind of support.
A partial system is a collection of procedures and
tasks 3 . We complete a partial system by generating
source-code that implements missing components,
which we call contexts. These context components are
combined with the given partial system.
Stubs and drivers are defined to represent three different
kinds of missing components; calling, called and
parallel contexts. Calling contexts represent the possible
behavior of the portions of an application that
invoke the procedures of the partial system. Called contexts
represent the possible behavior of application procedures
that are invoked by the procedures and tasks
of the partial system. Parallel contexts represent those
portions of an application that execute in parallel with
and engage in inter-task communication with the procedures
and tasks of the partial system.
For simplicity, in our discussion and the examples in
this section, we phrase our system models and properties
in terms of events (i.e., actions performed by the
software). It is often convenient to use a mixture of
event and state-based descriptions in models and properties
and we do so in the example in Section 6.
To begin construction of any system model one must
have a definition of the events that are possible. For
Ada programs, these events include: entry calls or
accepts, calls or returns from procedures, designated
statements being executed or variables achieving a specified
value.
3 The approach in this section can easily be extended to support
packages and other program structuring mechanisms.
procedure P() is
begin
task body T is
begin
accept
accept
procedure stub() is
choice
begin
loop
case choice is
when 1 => A;
when 3 => P();
when 4 => null;
otherwise => exit;
task body driver is
begin
task body T is
choice
begin
accept
- call missing;
loop
case choice is
when 1 => A;
when 3 => null; ?
when 4 => null;
otherwise => exit;
return missing;
accept
task body driver is
choice
begin
loop
case choice is
when 1 => A;
when 3 => T.E; C;
when 4 => null;
otherwise => exit;
Figure 2: Partial Ada System, Stubs and Drivers
We partition the events into those that are internal to
the components being analyzed and external events, which
may be executed by missing components.
Based on this partitioning we construct a stub procedure
that represents all possible sequences of external
events and calls on public routines and entries in the
partial system.
Figure 2 illustrates, on the left side, a partial system
consisting of procedure P and task T. The internal
events are calls on the E entry of T and execution of C.
Two external events are defined as A and B. The stub
procedure and driver task are also given in the figure.
Because there are no external entries there is no parallel
context defined.
Existing model checking tools require a single finite-state
transition system as input. To generate such a
system from a source program with procedures requires
inlining, or some other form of procedure integration.
We describe the construction of a source-level model
for a completed partial system as a series of inlining
operations. We assume that there are no recursive calls
in the system's procedures, stubs and drivers. Given
this assumption, Figure 3 gives the steps to assemble a
completed system. Applying these steps to the example
gives the code on the right side of Figure 2.
The same stub procedure is used to model the behavior
of all missing called components. To enable model
checking based on assumptions about the behavior of
Input: collection of procedures, tasks and a description of the
external alphabet
Output: a source level system without external references
Steps:
1. Generate stub procedure that non-deterministically chooses
between the actions in the external alphabet and calls to the
procedures and entries of the program components. It must
also be capable of choosing to do nothing or to return.
2. Inline calls to procedures made by stubs.
3. Stubs may now contain (inlined) calls to task entries. For
each task that calls a missing component, specialize the stub
so that any calls to that task's entries are replaced by an
error indication.
4. Calls to missing components from tasks are replaced by the
stub routine. Indications of the call to and return from the
missing component are inserted before and after the stub
body.
5. The driver and parallel contexts, if any, are formed by in-lining
the stub body.
Figure 3: System Completion Algorithm
specific missing components we bracket the inlined stub
with indicators of call and return events for the missing
component (e.g., call missing and return missing).
The goal of the system completion process is to produce
legal Ada source code so that subsequent tools can process
the system. This is somewhat at odds with the
fundamental lack of knowledge about event ordering in
missing components. We model this lack of knowledge
with non-determinism by introducing a new variable,
choice, that is tested in the stub conditionals. This
variable will be abstracted with a point AI and subsequent
model construction tools will represent such conditionals
with non-deterministic choice.
We must take care to ensure that potential run-time errors
are preserved in the completed system, since they
contribute to the actual behavior of the software. For
example, it is possible for system tasks to call stubs
which in turn call system procedures containing entry
calls on that task; this is a run-time error in Ada. To
preserve this possible behavior we introduce an error
event. In the example in Figure 2 the point at which
such an event is introduced is marked by a ?. This
allows a user to test for the possibility of the run-time
error or to filter the allowable behavior of missing components
to eliminate the error (i.e., using []!error as
the property to be checked or as a filter). This conversion
is done separately for each task and amounts to
specialization of the stub body for the task by interpreting
self-entry calls as the error event.
Completing a partial program does not yield a finite-state
system. The next step is to selectively abstract
program variables and transform dynamic program behavior
to a static form.
5.1 Automating Model Construction
Our approach to automating the construction of safe
finite-state models of software systems builds on recent
work in abstraction-based program specialization [18].
Figure 4 illustrates the steps in converting Ada source
to Promela which can be submitted to the SPIN model
checker. First, the partial system is completed with a
source-level model of its execution environment. We
apply a source-to-source partial evaluation tool that
transforms the program to a form that is more readily
modeled as a finite-state system. Partial evaluation is
a program transformation and specialization approach
which exploits partial information about program data.
Essentially it performs parts of a program's computation
statically; the result is a simplified program that is
specialized to statically available data values. A wide variety
of source transformations can be applied to aid
in finite-state model construction including: procedure
integration, bounded static variation, and migration of
dynamically allocated data and tasks to compile-time
[19]. A novel feature of the approach we use is its ability
to incorporate AIs for selected variables [18]; we use
the variable-AI bindings discussed in Section 4.
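As a simple illustration of the kind of transformation involved (our fragment, not taken from the replicated workers code), a startup loop whose bound is statically known to be three can be unrolled and its indirect task references replaced by static ones:
for I in 1 .. N loop        -- before specialization, with N known to be 3
   Workers (I).Start;
end loop;

Worker_1.Start;             -- after specialization: unrolled, static references
Worker_2.Start;
Worker_3.Start;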
After partial evaluation, a tool is applied to convert the
resulting Ada to SEDL, an internal form used by the
Inca [4] toolset 4 , which is then converted into Promela.
The Promela is then submitted, along with an LTL
specification, to SPIN which produces either an indication
of a successful model check or a counter-example.
Aside from selection of AIs, this approach is completely
automatable. At present, the system completion and
the partial evaluation tools are not fully-implemented
and were not used in the experiments described in Section
6; all other tools depicted in Figure 4 were run
without user intervention. Stubs, drivers and the ab-
stracted, specialized Ada for those experiments were
constructed by-hand using the algorithms that are being
implemented in our partial evaluation tool. We are
implementing an approach to stub and driver generation
using ideas from work on synthesis of program
skeletons from temporal logic specification [25]. With
this approach we will be able to encode filters on environment
behavior directly into stubs and drivers,
thereby eliminating the need for including those filters
in the formula to be checked. It remains to be seen
whether encoding filters in the transition system or in
the formula to be checked results in better performance;
we plan to explore this question in future work.
4 Inca was previously referred to as the constrained-expressions
toolset.
Figure 4: Model Construction Process (inputs: Ada source,
external events/states, AI-variable bindings, and state/event
predicate definitions; tools: System Completor, AI-based
Partial Evaluator, Ada-to-SEDL, INCA; outputs: Promela
and then true or a counter-example)
6 EXPERIENCES
In this section, we describe our experiences with applying
the techniques described in this paper to model
checking of a real partial software system. We begin
with a description of this partial system.
6.1 Replicated Workers Computations
The replicated workers framework (RWF) is a parameterizable
parallel job scheduler, where the user configures
the computation to be performed in each job, the
degree of parallelism and several pre-defined variations
of scheduler behavior. An instance of this framework
is a collection of similar computational elements, called
workers. Each worker repeatedly accesses data from a
shared work pool, processes the data, and produces new
data elements which are returned to the pool. Users
define the number of workers, the type of work data,
and computations to be performed by a worker on a
data item. A version of this framework, written in Ada
[15], implements workers, the pool and a lock as dynamically
allocated instances of task types.
Figure 5 illustrates the structure of the replicated workers
framework and a sample of its interaction with a
user application; procedure and entry calls are depicted
with dashed and solid arrows, respectively, in the figure.
Applications Create a collection of workers and a work
pool and configure certain details of framework operation
(e.g., whether the Execute routine operates as a
Synchronous or Asynchronous invocation).
Figure 5: The Replicated Workers Framework (a User's
Application issuing c := Create(.), Input(c, v) and Execute
calls, and Worker tasks calling back the user-supplied doWork)
A computation is initialized through calls to the Input routine
and started by calling Execute. Communicating only
by way of the workpool, the collection of workers cooperate
to perform the desired computation and terminate
their execution when complete. Detailed description of
the behavior of the RWF is provided in [15].
The execution state of the RWF consists of the local
control flow states of a single pool, a single lock,
and each of the workers. In addition, each of these
tasks maintains local data. The original Ada code
for the pool task, on the left of Figure 6, has a
boolean variable, executeDone, three natural variables,
numWait, numIdle, and workCount, two linked lists,
WorkPool.List and newWork, two variables of the work
type, and an array of task accesses workers, which is accessed
through the discriminant value C. The lock task
has a single boolean variable. Each worker task has a
boolean variable, done, three linked lists, a task access
variable, and an integer variable. In addition to the internal
state of the RWF, we need to consider the state of
the context, represented by stub and driver code. The
only data component of that state is the work value
passed to Input, which we refer to as driverInput.
We will see, below, that these variables are abstracted
in a variety of different ways in the finite-state models
used for system validation.
6.2 Building RWF Models
We use the approach described in Section 5 to produce
finite-state systems that represent the behavior
of the replicated workers framework. The frame-work
is built of three active components: a task type
(ActivePool) for the pool, a task type (ActiveWorker)
for the worker, and a task which mediates access to a
shared resource (ResultLock). The user is provided
access to framework functionality through a collection
of public procedures: a constructor Create, Input and
task body ActivePool is
Collection is a discriminant
begin
accept StartUp;
workCount := 0;
Outer: loop
loop
select *
or accept Execute;
C.done := FALSE;
for i in 1 . C.max loop ?
exit;
* end select;
loop
select *
or accept Put(newWork:in out WList) do
Remove(newWork, workItem);
for i in 1.Size(newWork) loop ??
Insert(work, workItem);
workCount
Remove(newWork, workItem);
numIdle
* end select;
executeDone := TRUE;
exit when
Synchronous then
accept Complete;
C.done := TRUE;
end loop Outer;
type ZERO POS is (zero, positive);
task body GEN1ActivePool is
choice
begin
accept StartUp;
workCount := zero;
Outer: loop
loop
select *
or accept Execute;
GEN1CollectionInfo done := FALSE;
GEN2ActiveWorker.Execute;
GEN3ActiveWorker.Execute;
exit;
* end select;
loop
select *
or accept Put() do
if choice then ??
workCount := positive;
* end select;
if numIdle=3 and workCount=zero then
executeDone := TRUE;
exit when numWait=3;
accept Complete;
GEN1CollectionInfo done := TRUE;
end loop Outer;
Figure 6: Original Ada and Abstracted, Specialized Ada for ActivePool Task
Output routines, and a routine to Execute the com-
putation. ActiveWorkers call user provided functions
(doWork,doResults) to perform subcomputations on
given work and result data.
In validating the RWF implementation we assume that
only one task creates and accesses each instance of the
framework. This means that a single driver can be used
to complete the system model; if the assumption is relaxed
we would incorporate multiple drivers and a parallel
component. We illustrate the analysis of a configuration
of the RWF with three workers and Synchronous
execution semantics. We will reason about local correctness
properties of this system that are either internal
to the RWF or related to the semantics of the
RWF's application interface. For this reason, the external
alphabet is empty. The stub generated by the
algorithm in Figure 3 in this case consists of choices
among calls to the RWF procedures.
Defining the generic parameters and parameters to the
Create call to be consistent with these assumptions enables
program specialization to eliminate a number of
program variables. In particular, the pool's work variables
and array of task accesses, and each worker's three
linked lists, task access variable, and integer variable
are eliminated. Some of these variables were eliminated
because their values are known to be constant, others
are eliminated, by copy propagation, because they only
transfer values between other variables. A significant
number of variables, ranging over large domains, remain
in the program so we apply the AIs, from Section
4, to the remaining variables to construct three different
abstracted versions of the RWF system.
Model 1. This model is the most aggressively ab-
stracted. The variables numWait, numIdle, workCount,
WorkPool.List, newWork and driverInput are all abstracted
to the point AI. Variables executeDone, done,
and the lock's boolean retain their concrete semantics.
Our initial attempts to validate RWF properties did
not use this model; we used model 2. We developed
this model in order to see if any of the existing properties
could be checked on a more compact model than
2. The results presented below confirm that this was
possible.
Parameters passed to doWork and doResult routines,
which are modeled with stubs, also require abstraction.
In this model the input work parameter is abstracted
with the point AI and the boolean output parameter
uses the choice AI.
Model 2.
Figure 6 gives the original Ada source
and the abstracted, specialized Ada code for the
ActivePool task of the RWF. Due to space limitations
some details of the example are elided from the Figure,
denoted by *, but the most interesting transformations
remain. As with the first model, several local vari-
ables, WorkPool.List, newWork and driverInput, are
abstracted to the point AI. Where those variables can
influence branch decisions, non-determinism is used.
Since there is no non-deterministic choice construct in
Ada, we introduce a new variable choice that indicates,
by convention, to the model construction tools that a
non-deterministic choice of the value of the variable
is desired. Of the remaining variables, executeDone,
numWait and numIdle retain their concrete semantics
and workCount is abstracted with a zero-pos AI.
We note that numWait and numIdle both act as
bounded counters up to the number of workers, thus
they have a relatively small impact on the size of the
model. ActiveWorker tasks for this model are the same
as for model 1.
Some details of the specialization process are illustrated
in Figure 6. Knowledge of the number of workers is
exploited to unroll the ? loop and specialize its body.
As a consequence of this, the resulting Ada contains
only static task references (e.g., GEN1ActiveWorker).
In fact, partial evaluation applied to this example converts
all dynamically allocated data and tasks to a static
form and all indirect data and task references to a static
form. The ?? loop is not unrolled, rather, because of the
zero-pos AI used in its body the specializer determines
that there are only two possible values for workCount
after the loop, unchanged and positive, and produces
the conditional.
Model 3. Model 2 was insufficient for validation
of ordering properties of work items in the RWF. We
constructed a third RWF model that incorporated the
2-ordered data AI for WorkType data and the 2-ordered
list abstraction for the WorkPool.List data. Even
though this model uses a non-trivial domain for variables
of WorkType, the resulting model did not explicitly
require the modeling of the pool's local variables,
since they only serve to transfer values between lists.
It is not possible to generate a compact, finite-state stub
and driver that will input any sequence of work data to
the partial system under analysis. For the properties
we wish to check, such generality is not required of the
stub and driver. In models 2 and 3 we require no information
about the input sequence and consequently
the point AI suffices. In this model, we require a finer
abstraction. The stub and driver for this model incorporated
the 2-ordered environment abstraction binding
the driverInput variable with the 2-ordered data AI.
Thus, input sequences are modeled as sequences of values
from {d 1 , d 2 , ot}.
6.3 The Properties
We model checked a collection of correctness requirements
of the replicated workers framework. The requirements
were derived from an English language description
of the framework and encoded as LTL formulae
using patterns [12]. We expressed all the formulae in
terms of event and state predicates that are converted
automatically by the Inca toolset into propositions for
use in defining LTL formulae for SPIN. An event refers
to the occurrence of a rendezvous, a procedure call, or
some other designated program statement. An event
predicate is true if any task containing the specified
event is in a state immediately following a transition
on that event. State predicates define the points at
which selected program variables hold a given value
(e.g., states in which workCount is zero). We note the
defining boolean expressions for encoding state predicates
can be quite involved in some cases. For example,
the Inca predicate definition for states where workCount
is zero:
(defpredicate "workCountISzero"
(in-task activepool-task (= workCount "zero")))
causes the generation of a disjunction of 123 individual
state descriptions, i.e., one for each state of the
ActivePool task in which workCount has the value
zero. Our experience suggests that automated support
for such definitions is a necessary component of
any finite-state software verification toolset.
A selection of the specifications we checked is given
in Figure 7. All model checks were performed using
SPIN, version 3.09, on a SUN ULTRA5 with a 270Mhz
UltraSparc IIi and 128Meg of RAM. Figure 8 gives the
data for each of the model checking runs; the transition
system model used for the run is given. We report the
user+system time for running SPIN to convert LTL to
the SPIN input format, to compile the Promela into a
model checker, and to execute that model checker. The
model construction tools were run on an AlphaStation
200 4/233 with 128Meg of RAM. The longest time taken
(1) []((call doResults:i && <>return doResults:i) ->
((!call doResults:j) U return doResults:i))
Mutually exclusive execution of doResults.
(2) (<>call Execute) -> ((!call doWork) U call Execute)
No work is scheduled before execution.
(3) []((return Execute && (<>call Execute)) ->
((!call doWork) U call Execute))
No work is scheduled after termination.
(4) [](call Execute -> ((!return Execute) U
(done w1 || done w2 || done w3 ||
workCountEQZero || [](!return Execute))))
Computation terminates when workpool is empty or a
worker signals termination.
(5) ... !(done w1 || done w2 || done w3) -> <>call doWork)
If a worker is ready to get work, the workpool is not empty
and the computation is not done, then work is scheduled.
(6) The RWF schedules work in input order.
(7) After a work item is scheduled, it will not be scheduled again.
(8) After a work item is processed, it will not be scheduled.
Figure 7: LTL Specifications
to convert completed Ada to SEDL was for model 3; it
took 66.3 seconds. Generating Promela from the SEDL
can vary due to differences in the predicate definitions
required for different properties. The longest time taken
for this step was also for model 3; it took 16.4 seconds.
6.4 Discussion
All specified properties were known to hold on the RWF
implementation we analyzed. For specifications 1-3 no
filters were required to obtain true results. The remaining
specifications required some form of filter. Properties
6-8 required the filter in Section 4 to insure the
safety of the model check results under the 2-ordered
AI incorporated in the transition system. We discuss
the filters for properties 4-5 below. Specifications 1 and
7 are short-hand for collections of specifications, one for
each pair of different
worker task ids. The model check times were equal
for the different versions of each specification; one such
time is given in Figure 8.
Modeling missing components using the permissive
stubs and drivers described in Section 5 has the advantage
of yielding safe models for the system configuration
considered. Its drawback is that it may not precisely
describe the required behavior of missing components.
This is the reason that model checks for specifications 4
Property Time Result Model
(4f) 0.7, 3:44.3, 8.8 true 2
(6f) 1.3, 36:58.7, 13:08.1 true 3
Figure 8: Performance Data
and 5 failed. To boost precision in analyzing these prop-
erties, we code assumptions about the required behavior
of missing components (e.g. doWork) as filter-formulae
that are then model checked.
Analysis of the counter example provided by SPIN for
specification 4 showed that doResults calls made from
GEN1ActiveWorker can call the ActivePool.Finished
entry. This is because the stub routine allows
doResults to perform any computation. API documentation
for the RWF warns users against calling
RWF operations from doWork and doResults. If we
assume that users heed this warning, we can define two
filters that eliminate such calls. The resulting filter for-
mula, (4f), is:
([](call-Execute -> ((!return-Execute) U
(workCountISZero || [](!return-Execute)))))
where call stubRWF w and call stubRWF r are rather
large conjunctions of the events that correspond to the
calls of RWF operations from stubs inlined at doWork
and doResult call-sites within workers. The generation
of these propositions is relatively simple using Inca's
predicate definition mechanism. The same filters were
used for specification (5f).
The use of filters in properties (6-8f) was required since
the AIs incorporated in the model were only guaranteed
to be safe under the assumption of a single Insert of
each work datum into the abstracted WorkPool.List.
Even though the unfiltered versions of those properties
returned a true result, those results cannot be trusted.
It may be the case that the AI caused certain possible
system executions to be excluded from the analysis. If
such an execution violated the specified property then
a true result might be returned when there is a defect
in the system. To ensure that this is not the case, we
checked filter formula (6f), which restricts attention to
executions in which each d i is Inserted at most once.
Properties (7-8f) only referred to d1 so they only include
the filter that restricts Inserts of d1.
6.5 Lessons Learned
Our experience using filters for model checking with
this example is consistent with previous work on filter-based
verification [11, 14, 28]. In many cases, no filters
are necessary and when necessary relatively few filters
are sufficient to achieve the level of precision necessary
for property verification. Model checking of properties
for our sample system was fast enough to be usable in
a practical development setting. We note that the
second component of model check time in Figure 8 is the
sum of the time for SPIN to compile Promela input to
a C program and the time to compile that C program.
The bulk of this cost in all cases was compiling the C
program. The reader should not interpret these times
as an inherent component of the cost of using SPIN.
The Inca tools that we use to generate Promela code
encode local task data into the control flow of a Promela
task rather than as Promela variables. This can cause
a dramatic increase in the size of the C program generated
and consequently the compile times are significant.
Further study is required to determine whether a more
direct mapping to Promela would yield significant reductions
in these times.
It is interesting to note that the addition of filters can
in some cases reduce analysis cost (e.g., property (4f))
while in others it can dramatically increase analysis cost
(e.g., property (6f)). Conceptually, analysis cost can be
reduced because paths through the finite-state model
that are inconsistent with the filters are not considered
during model checking. Analysis cost can be increased,
on the other hand, because the effective state space
(i.e., the product of the model and the property) is significantly
larger. Further study is needed to understand
the situations in which reduction or increase in analysis
cost can be expected when using filters.
Our methodology for incorporating AIs into finite-state
models yields aggressively abstracted transition sys-
tems. Nevertheless, as one might expect, even the relatively
small changes in the abstractions we incorporated
into our three models dramatically change the space,
and consequently the time, required for model check-
ing. Checks for properties (1) and (2), using model
1, required on the order of 1000 states to be searched,
whereas checks for properties (6-8), using model 3, required
on the order of 100000 states to be searched. We
believe that constructing compact transition systems,
while retaining sufficient precision to enable successful
model checks, requires that AIs be selected and incorporated
independently for properties that refer to the
same set of propositions. In our experiments, the cost of
model construction is not the dominant factor in analysis
time, so the benefits of independently abstracted
models may yield an overall reduction in analysis time.
It is important to note that these observations are based
on checking of local properties of a cohesive partial
system with a relatively narrow and well-defined interface.
In principle, it may be necessary to include a very
large number of filters which may dramatically increase
model check cost. We believe that application of the approach
described in this paper is most sensible at points
in the development process where unit and integration-
level testing is currently applied. In this context, we
believe that the sub-systems under analysis will be similar
to the RWF system (i.e., highly cohesive and loosely
coupled to the environment). Further evaluation is necessary
to determine this conclusively and to study the
cost of filter-based model checking for partial systems
that are strongly coupled to their environment.
7 RELATED WORK
The work described in this paper touches on model
checking of software systems, model checking of open
or partial systems and abstractions in model checking.
In Section 3, we have already discussed the bulk of the
related work.
There have been some recent efforts to apply model
checking techniques to abstracted software systems
(e.g., [13, 30]). In that work, ad-hoc abstraction was
performed by hand transforming source code into models
suitable for analysis. While automating the selection
of abstractions is a very difficult problem, application
of abstractions is relatively well-understood. Unlike
ad-hoc methods, our work builds off the solid semantic
foundations and rich history of existing abstractions
that have been developed in the twenty year history of
abstract interpretation [10]. Furthermore, we explore
the use of partial evaluation techniques (e.g., [21]) as a
means of automating application of those abstractions.
Our use of filters to refine a model of the environment
is similar to other work on compositional verifica-
tion. These divide-and-conquer approaches decompose
a system into sub-systems, derive interfaces that summarize
the behavior of each sub-system (e.g., [6]), then
perform analyses using interfaces in place of the details
of the sub-systems. This notion of capturing environment
behavior with interfaces also appears in recent developments
on theoretical issues related to model checking
of partial systems (e.g., [22, 23]). There has been
considerably less work on the practical issues involved
with finite-state verification of partial systems. Aside
from our work with FLAVERS, discussed in Section 3,
there are two other recent related practical efforts.
Avrunin, Dillon and Corbett [5] have developed a technique
that allows partial systems to be described in
a mixture of source code and specifications. In their
work, specifications can be thought of as assumptions
or filters on a naive completion of a partial system given
in code. Unlike our work, their approach is targeted to
automated analysis of timing properties of systems.
Colby, Godefroid and Jagadeesan [8] describe an automatable
approach to completing reactive partial sys-
tems. Unlike our approach, their work is aimed at producing
a completed system that is executable in the
context of the VeriSoft toolset [17]. Their system completion
acts as a controlling environment that causes the
given partial system to systematically explore its behavior
and compare it to specifications of correctness prop-
erties. To produce a tractable completion, they perform
a number of analyses to determine which portions of the
partial system can be influenced by external behavior,
for example, tests of externally defined variables are
modeled with non-deterministic choice. This is equivalent
to abstracting all external data with a point AI,
which happens by default in our approach. Our use
of filters allows restriction of external component behavior
which is not possible in their approach. Both
approaches are sensitive to elimination, or abstraction,
of program actions that may cause run-time errors; in
our case this is manifested by modeling self-entry calls
as error actions.
8 CONCLUSIONS
We have described an automatable approach to safely
completing the definition of a partial software system.
We have shown how a completed system can be selectively
abstracted and transformed into a finite-state
system that can be input to existing model checking
tools. We have illustrated that this approach strikes a
balance between size and precision in a way that enables
model checking of system requirements of real software
components. Finally, we have shown how to refine the
representation of system behavior, in cases where the
precision of the base representation is insufficient, to
enable proof of additional system requirements.
There are a number of questions we plan to investigate
as follow on work. In the work described in this pa-
per, we encode filter information into the model check
on-the-fly; an alternative method is to encode it directly
into the finite-state system. We are currently comparing
these two approaches in order to characterize the
circumstances in which one approach is preferable to
the other. In this paper, we consider individual abstrac-
tions, encoded as AIs, but we know of systems where the
desired abstraction is a composition of two AIs. We are
investigating the extent to which construction of such
compositions can be automated. Finally, we are continuing
development of the tools that make up our approach
and we plan to evaluate their utility by applying
them to additional real software systems. Along with a
completed toolset, we plan to produce a library of abstractions
that users can selectively apply to program
variables. This will allow non-expert users to begin to
experiment with model checking of source code for real
component-based software systems.
9 ACKNOWLEDGEMENTS
Thanks to John Hatcliff and Nanda Muhammad for
helping to specialize versions of the replicated workers
framework by hand. Thanks to James Corbett and
George Avrunin for access to the Inca toolset. Special
thanks to James for responding to our request for
a predicate definition mechanism with an implementa-
tion. Thanks to the anonymous referee who gave very
detailed and useful comments on the paper.
--R
Abstract Interpretation of Declarative Languages.
Analyzing partially-implemented real-time systems
Checking subsystem safety properties in compositional reachability analysis.
Automatic verification of finite-state concurrent systems using temporal logic specifications
Automatically closing open reactive programs.
Evaluating deadlock detection methods for concurrent software.
Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints.
Modular flow analysis for concurrent software
Property specification patterns for finite-state verification
Model checking graphical user interfaces using abstractions.
Data flow analysis for verifying properties of concurrent programs.
An application-independent concurrency skeleton in Ada-95
Limiting state explosion with filter-based refinement
Model checking for programming languages using VeriSoft.
Staging static analysis using abstraction-based program specialization
Automatically specializing software for finite-state verification
The model checker SPIN.
Partial Evaluation and Automatic Program Generation.
Module checking.
Module checking revisited
The Temporal Logic of Reactive and Concurrent Systems: Specification.
Synthesis of communicating processes from temporal logic specifications.
Symbolic Model Checking.
Advanced Compiler Design and Implementation
In transition from global to modular temporal reasoning about programs.
Model checking software systems: A case study.
Specifying interesting properties of programs in propositional temporal logics.
--TR
Automatic verification of finite-state concurrent systems using temporal logic specifications
Abstract interpretation of declarative languages
In transition from global to modular temporal reasoning about programs
Automated Analysis of Concurrent Systems with the Constrained Expression Toolset
The temporal logic of reactive and concurrent systems
Partial evaluation and automatic program generation
Data flow analysis for verifying properties of concurrent programs
A concurrency analysis tool suite for Ada programs
Model checking software systems
Checking subsystem safety properties in compositional reachability analysis
Model checking large software specifications
An application-independent concurrency skeleton in Ada 95
Analyzing partially-implemented real-time systems
The Model Checker SPIN
Model checking for programming languages using VeriSoft
Applying static analysis to software architectures
Model checking graphical user interfaces using abstractions
Automatically closing open reactive programs
Advanced compiler design and implementation
Property specification patterns for finite-state verification
Synthesis of Communicating Processes from Temporal Logic Specifications
Expressing interesting properties of programs in propositional temporal logic
Abstract interpretation
Symbolic Model Checking
State-Based Model Checking of Event-Driven System Requirements
Evaluating Deadlock Detection Methods for Concurrent Software
Module Checking
Modular flow analysis for concurrent software
--CTR
Oksana Tkachuk , Matthew B. Dwyer, Adapting side effects analysis for modular program model checking, ACM SIGSOFT Software Engineering Notes, v.28 n.5, September
Frank Huch, Verification of Erlang programs using abstract interpretation and model checking, ACM SIGPLAN Notices, v.34 n.9, p.261-272, Sept. 1999
Matthew B. Dwyer , John Hatcliff , Roby Joehanes , Shawn Laubach , Corina S. Psreanu , Hongjun Zheng , Willem Visser, Tool-supported program abstraction for finite-state verification, Proceedings of the 23rd International Conference on Software Engineering, p.177-187, May 12-19, 2001, Toronto, Ontario, Canada
Patrice Godefroid , Lalita J. Jagadeesan , Radha Jagadeesan , Konstantin Lufer, Automated systematic testing for constraint-based interactive services, ACM SIGSOFT Software Engineering Notes, v.25 n.6, p.40-49, Nov. 2000
John Penix , Willem Visser , Eric Engstrom , Aaron Larson , Nicholas Weininger, Verification of time partitioning in the DEOS scheduler kernel, Proceedings of the 22nd international conference on Software engineering, p.488-497, June 04-11, 2000, Limerick, Ireland
G. J. Holzmann , M. H. Smith, An Automated Verification Method for Distributed Systems Software Based on Model Extraction, IEEE Transactions on Software Engineering, v.28 n.4, p.364-377, April 2002
Victor A. Braberman , Miguel Felder, Verification of real-time designs: combining scheduling theory with automatic formal verification, ACM SIGSOFT Software Engineering Notes, v.24 n.6, p.494-510, Nov. 1999
Ji Zhang , Betty H. C. Cheng, Specifying adaptation semantics, ACM SIGSOFT Software Engineering Notes, v.30 n.4, July 2005
John Hatcliff , Matthew B. Dwyer , Hongjun Zheng, Slicing Software for Model Construction, Higher-Order and Symbolic Computation, v.13 n.4, p.315-353, Dec. 1, 2000
Ji Zhang , Betty H. C. Cheng, Model-based development of dynamically adaptive software, Proceeding of the 28th international conference on Software engineering, May 20-28, 2006, Shanghai, China
John Penix , Willem Visser , Seungjoon Park , Corina Pasareanu , Eric Engstrom , Aaron Larson , Nicholas Weininger, Verifying Time Partitioning in the DEOS Scheduling Kernel, Formal Methods in System Design, v.26 n.2, p.103-135, March 2005 | filter-based analysis;assume-guarantee reasoning;software verification and validation;model checking |
288321 | Automated test data generation using an iterative relaxation method. | An important problem that arises in path oriented testing is the generation of test data that causes a program to follow a given path. In this paper, we present a novel program execution based approach using an iterative relaxation method to address the above problem. In this method, test data generation is initiated with an arbitrarily chosen input from a given domain. This input is then iteratively refined to obtain an input on which all the branch predicates on the given path evaluate to the desired outcome. In each iteration the program statements relevant to the evaluation of each branch predicate on the path are executed, and a set of linear constraints is derived. The constraints are then solved to obtain the increments for the input. These increments are added to the current input to obtain the input for the next iteration. The relaxation technique used in deriving the constraints provides feedback on the amount by which each input variable should be adjusted for the branches on the path to evaluate to the desired outcome.When the branch conditions on a path are linear functions of input variables, our technique either finds a solution for such paths in one iteration or it guarantees that the path is infeasible. In contrast, existing execution based approaches may require an unacceptably large number of iterations for relatively long paths because they consider only one input variable and one branch predicate at a time and use backtracking. When the branch conditions on a path are nonlinear functions of input variables, though it may take more then one iteration to derive a desired input, the set of constraints to be solved in each iteration is linear and is solved using Gaussian elimination. This makes our technique practical and suitable for automation. | Introduction
Software testing is an important stage of software develop-
ment. It provides a method to establish confidence in the
reliability of software. It is a time consuming process and
accounts for 50% of the cost of software development [10].
Given a program and a testing criteria, the generation of
test data that satisfies the selected testing criteria is a very
difficult problem. If test data for a given testing criteria for
a program can be generated automatically, it can relieve the
software testing team of a tedious and difficult task, reducing
the cost of the software testing significantly. Several approaches
for automated test data generation have been proposed
in the literature, including random [2], syntax based
[5], program specification based [1, 9, 12, 13], symbolic evaluation
[4, 6] and program execution based [7, 8, 10, 11, 14]
test data generation.
A particular type of testing criteria is path coverage,
which requires generating test data that causes the program
execution to follow a given path. Generating test data for a
given program path is a difficult task posing many complex
problems [4]. Symbolic evaluation [4, 6] and program execution
based approaches [7, 10, 14] have been proposed for
generating test data for a given path. In general, symbolic
evaluation of statements along a path requires complex algebraic
manipulations and has difficulty in handling arrays and
pointer references. Program execution based approaches can
handle arrays and pointer references efficiently because array
indexes and pointer addresses are known at each step
of program execution. But, one of the major challenges to
these methods is the impact of infeasible paths. Since there
is no concept of inconsistent constraints in these methods,
a large number of iterations can be performed before the
search for input is abandoned for an infeasible path. Existing
program execution based methods [7, 10] use function
minimization search algorithms to locate the values of input
variables for which the selected path is traversed. They consider
one branch predicate and one input variable at a time
and use backtracking. Therefore, even when the branch conditions
on the path are linear functions of input, they may
require a large number of iterations for long paths.
In this paper, we present a new program execution based
approach to generate test data for a given path. It is a novel
approach based on a relaxation technique for iteratively refining
an arbitrarily chosen input. The relaxation technique
is used in numerical analysis to improve upon an approximate
solution of an equation representing the roots of a
function [15]. In this technique, the function is evaluated at
the approximate solution and the resulting value is used to
provide feedback on the amount by which the values in the
approximate solution should be adjusted so that it becomes
an exact solution of the equation. If the function is lin-
this technique derives an exact solution of the equation
from an approximate solution in one iteration. For nonlinear
functions it may take more than one iteration to derive
an exact solution from an approximate solution.
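As a one-dimensional illustration (ours, not taken from the literature cited above): to drive the linear function f(x) = 3x - 6 to zero starting from the guess x = 1, the residual f(1) = -3 and the slope 3 give the correction 1, and the refined value x = 2 already satisfies f(2) = 0; for a nonlinear f the same correction only approximates the desired value and the step is repeated.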
In our method, test data generation for a given path in a
program is initiated with an arbitrarily chosen input from a
given domain. If the path is not traversed when the program
is executed on this input, then the input is iteratively refined
using the relaxation technique to obtain a new input that
results in the traversal of the path. To apply the relaxation
technique to the test data generation problem, we view each
branch condition on the given path as a function of input
variables and derive two representations for this function.
One representation is in the form of a subset of input and
assignment statements along the given path that must be executed
in order to evaluate the function. This representation
is computed as a slicing operation on the data dependency
graph of the program statements on the path, starting at
the predicate under consideration. Therefore, we refer to it
as a predicate slice. Note that a predicate slice always
provides an exact representation of the function computed
by a branch condition. Using this exact representation in
the form of program statements, we derive a linear arithmetic
representation of the function computed by the
branch condition in terms of input variables. An arithmetic
representation of the function in terms of input variables
is necessary to enable the application of numerical analysis
techniques since a program representation of the function is
not suitable for this purpose. If the function computed by
a branch condition is a linear function of the input, then its
linear arithmetic representation is exact. When the function
computed by a branch condition is a nonlinear function of
the input, its linear arithmetic representation approximates
the function in the neighborhood of the current input.
These two representations are used to refine an arbitrarily
chosen input to obtain the desired input as follows. Let
us assume that by executing a predicate slice using the arbitrarily
chosen input, we determine that a branch condition
does not evaluate to the desired outcome. In this case, the
evaluation of the branch condition also provides us with a
value called the predicate residual which is the amount
by which the function value must change in order to achieve
the desired branch outcome. Now using the linear arithmetic
representation and the predicate residual, we derive
a linear constraint on the increments for the current
input. One such constraint is derived for each branch condition
on the path. These linear constraints are then solved
simultaneously using Gaussian elimination to compute the
increments for the current input. A new input is obtained
by adding these increments to the current input. Since the
constraints corresponding to all the branch conditions on
the path are solved simultaneously, our method attempts to
change the current input so that all the branch predicates
on the path evaluate to their desired outcomes when their
predicate slices are executed on the new input.
If all the branch conditions on the path are linear functions
of the input (i.e., the linear arithmetic representations
of the predicate functions are exact), then our method either
derives a desired input in one iteration or guarantees that
the path is infeasible. This result has immense practical
importance in accordance with the studies reported in [6].
A case study of 3600 test case constraints generated for a
group of Fortran programs has shown that the constraints
are almost always linear. For this large class of paths our
method is able to detect infeasibility, even though the problem
of detecting infeasible paths is unsolvable in general. If
such a path is feasible, our method is extremely efficient as
it finds a solution in exactly one iteration.
If at least one branch condition on the path is a nonlinear
function of the input, then the increments for the current input
that are computed by solving the linear constraints on
the increments may not immediately yield a desired input.
This is because the set of linear constraints on the increments
are derived from the linear arithmetic representations
(which in this case are approximate) of the corresponding
branch conditions. Therefore it may take more than one
iteration to obtain a desired input. Even when the branch
predicates on the path are nonlinear functions of the input,
the set of equations to be solved to obtain a new input from
the current input are linear and are solved by Gaussian elim-
ination. Gauss elimination algorithm is widely implemented
and is an established method for solving a system of linear
equations. This makes our technique practical and suitable
for automation.
The important contributions of the novel method presented
in this paper are:
- It is an innovative use of the traditional relaxation technique
for test data generation.
- If all the conditionals on the path are linear functions of
the input, it either generates the test data in one iteration
or guarantees that the path is infeasible. Therefore, it is
efficient in finding a solution as well as powerful in detecting
infeasibility for a large class of paths.
- It is a general technique and can generate test data even if
conditionals on the given path are nonlinear functions of the
input. In this case also, the number of iterations with
inconsistent constraints can be used as an indication of a
potential infeasible path.
- The set of constraints to be solved in this method is always
linear even though the path may involve conditionals that are
nonlinear functions of the input. A set of linear constraints
can be automatically solved using Gaussian elimination, whereas
no direct method exists to solve a set of arbitrary nonlinear
constraints. Gaussian elimination is a widely implemented and
well-established algorithm. This makes the method practical
and suitable for automation.
- It is scalable to large programs. The number of program
executions required in each iteration is independent of the
path length and is bounded by the number of input variables.
The size of the system of linear equations to be solved using
Gaussian elimination increases with the number of branch
predicates on the path, but the increase in cost is
significantly less than that of the existing techniques.
The organization of this paper is as follows. An overview of
the method is presented in the next section. The algorithm
for test data generation is described in section 3. It is illustrated
with examples involving linear and nonlinear paths,
loops and arrays. Related work is discussed in section 4.
The important features of the method are summarized and
our future work is outlined in section 5.
2 Overview
We define a program module M as a directed graph
G = (N, E, s, e), where N is a set of nodes, E is a set of
edges, s is a unique entry node and e is a unique exit node
of M. A node n represents a single statement or a conditional
expression, and a possible transfer of control from
node n_i to node n_j is mapped to an edge (n_i, n_j) in E.
A path P = (n_1, n_2, ..., n_k) in G is a sequence of nodes
such that (n_j, n_{j+1}) is an edge in E for 1 <= j < k.
A variable i_k is an input variable of the module M if it
either appears in an input statement of M or is an input
parameter of M. The domain D_k of input variable i_k is
the set of all possible values it can hold. An input vector
I = (i_1, i_2, ..., i_m), where m is the number of inputs,
is called a Program Input. In this paper, we may refer to
the program input simply as the input and use these terms
interchangeably.
A conditional expression in a multi-way decision statement
is called a Branch Predicate. Without loss of generality,
we assume that the branch predicates are simple relational
expressions (inequalities and equalities) of the form
E1 op E2, where E1 and E2 are arithmetic expressions
and op is one of {<, ≤, =, ≠}.
If a branch predicate contains boolean variables, we represent
the "true" value of the boolean variable by a numeric value zero
or greater and the "false" value by a negative numeric value.
If a branch predicate on a path is a conjunction of two or more
boolean variables, such as (A and B), then such a predicate is
considered as multiple branch predicates, A ≥ 0 and B ≥ 0, that
must simultaneously be satisfied for the traversal of the path.
If a branch predicate on a path is a disjunction of two or more
boolean variables, such as (A or B), then at a time only one of
the branch predicates A ≥ 0 or B ≥ 0 is considered along with
the other branch predicates on the path. If a solution is not
found with one branch predicate then the other one is tried.
Each branch predicate E1 op E2 can be transformed
to the equivalent branch predicate of the form F op 0,
where F is the arithmetic expression E1 - E2. Along a given
path, F represents a real valued function called a Predicate
Function. F may be a direct or indirect function of the
input variables. To illustrate this, let us consider the branch
predicate BP2 for the conditional statement P2 in the example
program in Figure 1. The predicate function F2 corresponding
to the branch predicate BP2 is the expression E1 - E2 of BP2.
Along path P = {0, 1, P1, 2, P2, 4, 5, 6, P4, 9}, the
predicate function F2 indirectly represents a function of the
input variables X, Y, Z.
We now state the problem being addressed in this paper:
Problem Statement: Given a program path P which
is traversed for certain evaluations (true or false) of the
branch predicates BP1, BP2, ..., BPn along P, find a
program input I that causes the branch predicates to
evaluate such that P is traversed.
We present a new method for generating a program input
such that a given path in a program is traversed when the
program is executed using this input. In this method, test
data generation is initiated with an arbitrarily chosen input
from a given domain. If the given path is not traversed
on this input, a set of linear constraints on increments
to the input is derived using a relaxation method. The
increments obtained by solving these constraints are added
to the input to obtain a new input. If the path is traversed
on the new input then the method terminates. Otherwise,
the steps of refining the input are carried out iteratively to
obtain the desired input. We now briefly review the relaxation
technique as used in numerical analysis for refining
an approximation to the solution of a linear equation.
The Relaxation Technique
Let (x0, y0) be an approximation to a solution of the linear
equation

  ax + by + c = 0.   (1)

In general, substituting (x0, y0) in the lhs of the above equation
would result in a nonzero value r0, called the residual, i.e.,

  a x0 + b y0 + c = r0.

If increments Δx and Δy for x0 and y0 are computed such
that they satisfy the linear constraint given by

  a Δx + b Δy + r0 = 0,

then a (x0 + Δx) + b (y0 + Δy) + c = 0. Therefore,
(x0 + Δx, y0 + Δy) is a solution of equation (1).
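As a concrete illustration, the short sketch below computes the residual of an approximate solution and an increment satisfying the relaxation constraint. The coefficients and starting point are arbitrary values chosen by us for illustration, not taken from the paper.

    # Relaxation sketch for a single linear equation a*x + b*y + c = 0.
    a, b, c = 2.0, -3.0, 5.0          # illustrative coefficients
    x0, y0 = 1.0, 1.0                 # current approximation

    r0 = a * x0 + b * y0 + c          # residual of the current approximation
    dy = 0.0                          # choose increments satisfying a*dx + b*dy + r0 = 0
    dx = -(r0 + b * dy) / a
    # (x0 + dx, y0 + dy) now satisfies the original equation:
    assert abs(a * (x0 + dx) + b * (y0 + dy) + c) < 1e-9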
In order to formulate the test data generation problem
as a relaxation technique problem, we view the predicate
function corresponding to each branch predicate on the
path as a function of program input. To apply the above
relaxation technique, a Linear Arithmetic Representation
in terms of the relevant input variables is required
for each predicate function. We first derive an exact
program representation called a Predicate Slice for the
function computed by each predicate function and then use
it to derive a linear arithmetic representation. The two
representations are used in an innovative way to refine the
program input.
The Predicate Slice
The exact program representation of a predicate function,
the Predicate Slice, is defined as follows:
Definition: The Predicate Slice S(BP;P ) of a branch
predicate BP on a path P is the set of statements that
compute values upon which the value of BP may be
directly or indirectly data dependent when execution
follows the path P .
In other words, S(BP, P) is a slice over data dependencies
of the branch predicate BP using a program consisting of
only the input and assignment statements preceding BP on the
path P.

Figure 1: An Example Program and the Predicate Slices S(BP1, P), S(BP2, P) and S(BP4, P) on a path P = {0, 1, P1, 2, P2, 4, 5, 6, P4, 9}

We illustrate the above definition using the example
program in Figure 1. Consider the path
P = {0, 1, P1, 2, P2, 4, 5, 6, P4, 9}.
Let BP_i denote the i-th branch predicate along the path P.
The predicate slices corresponding to the branch predicates
BP1, BP2 and BP4 along path P are shown in Figure 1.
As illustrated by the above examples, predicate slices
include only input and assignment statements. The value
of a predicate function for an input can be computed by
executing the corresponding predicate slice on the input.
Note that a predicate slice is not a conventional static
slice since it is computed over the statements along a path.
It is also not a dynamic slice because it is computed statically
using the input and assignment statements along a
path and is not as precise as the dynamic slice. To illustrate
the latter we consider the code segment given in Figure 2:
input(I, J, Y);
Figure 2: A code segment on a path using an array.
When I ≠ J, the evaluation of BP: (A[J] > 0) is not data dependent
on the assignment statement, whereas if I = J, the
evaluation of BP is data dependent on the assignment statement.
Therefore, the predicate slice for the branch predicate
BP will consist of the input statement as well as the assignment
statement. In other words, the predicate slice is a path
oriented static slice.
The concept of predicate slice enables us to evaluate the
outcome of each branch predicate on the path irrespective
of the outcome of other branch predicates. The predicate
slices for the branch predicates on the path can be executed
using an arbitrary input even though the path may not be
traversed on that input. This is possible because there are
no conditionals in a predicate slice. After execution of a
predicate slice on an input, the value of the corresponding
predicate function can be computed and the branch outcome
evaluated.
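To make this concrete, a predicate slice can be represented as a small piece of straight-line code and executed on its own. The sketch below is purely illustrative; the slice body, function name and numeric values are ours and are not taken from Figure 1.

    # Hypothetical predicate slice for a branch predicate BP of the form E1 > E2.
    # It contains only the input and assignment statements BP depends on along the path.
    def predicate_slice_BP(X, Y, Z):
        A = X + Y              # assignment statements copied from the path (illustrative)
        B = A * Z
        return B - 50          # predicate function F = E1 - E2

    I = (1, 2, 3)                          # an arbitrary input; the path need not be traversed on it
    F_value = predicate_slice_BP(*I)       # value of the predicate function at I
    bp_outcome = F_value > 0               # outcome of BP when its slice is executed on I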
There is a correspondence between the outcomes of the
execution of the predicate slices on an input and the traversal
of the path on that input. If all the branch predicates
on the path evaluate to their desired outcomes, by executing
their respective predicate slices on an input and computing
the respective predicate functions, the path will be traversed
when the program is executed using this input. If any of
the branch predicates on the path does not evaluate to its
desired outcome when its predicate slice is executed on an
input, the path will not be traversed when the program is
executed using this input.
Conceptually, a predicate slice enables us to view a predicate
function on the path as an independent function of
input variables. Therefore, our method can simultaneously
force all branch predicates along the path to evaluate to their
desired outcomes. In contrast, the existing program execution
based methods [7, 10] for test data generation attempt
to satisfy one branch predicate at a time and use backtracking
to fix a predicate satisfied earlier while trying to satisfy a
predicate that appears later on the path. They cannot consider
all the branch predicates on the path simultaneously
because the path may not be traversed on an intermediate
input.
The predicate slice is also useful in identifying the relevant
subset of input variables, on which the value of the
predicate function depends. This subset of input variables
is required so that a linear arithmetic representation of the
predicate function in terms of these input variables can be
derived. The subset of the input variables on which the value
computed by a predicate function depends can only be determined
dynamically as illustrated by the example in Figure
2. Therefore, given an input and a branch predicate on
the path, the corresponding predicate slice is executed using
this input and a dynamic data dependence graph based upon
the execution is constructed. The relevant input variables
for the corresponding predicate function are determined by
taking a dynamic slice over this dependence graph.
Note that if only scalars are referenced in a predicate slice
and the corresponding predicate function, then the subset
of input variables on which the predicate function depends
can be determined statically from the predicate slice. Execution
of the predicate slice on the input data followed
by a dynamic slice to determine relevant input variables is
necessary to handle arrays. We define this subset of input
variables as the Input Dependency Set.
Definition: The Input Dependency Set ID(BP;I;P ) of
a branch predicate BP on an input I along a path P is
the subset of input variables on which BP is, directly or
indirectly, data dependent. These input variables can
be identified by executing the statements in the predicate
slice S(BP;P ) on input I and taking a dynamic
slice over the dynamic data dependence graph.
For example, executing S(BP2, P) on an input I,
we note that the evaluation of BP2 depends
on the input variables X, Y and Z. Therefore,
ID(BP2, I, P) = {X, Y, Z}.
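A minimal sketch of this idea is given below. It assumes a predicate slice already represented as a list of executed assignments with their defined and used variables (a simplification of the dynamic data dependence graph described above; for arrays, the used sets would come from the actual indices referenced during execution). The representation and helper name are ours.

    # Each executed slice statement: (defined_variable, set_of_used_variables).
    def input_dependency_set(slice_stmts, predicate_uses, input_vars):
        # Backward pass over the executed slice: a dynamic slice over data dependences.
        relevant = set(predicate_uses)
        for defined, used in reversed(slice_stmts):
            if defined in relevant:
                relevant.discard(defined)
                relevant.update(used)
        return relevant & set(input_vars)

    # Example: read(X, Y, Z); A = f(X, Y); B = g(A); predicate uses {B, Z}
    stmts = [("A", {"X", "Y"}), ("B", {"A"})]
    print(input_dependency_set(stmts, {"B", "Z"}, {"X", "Y", "Z"}))   # {'X', 'Y', 'Z'}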
Now we explain how we use the input dependency set
to derive the linear arithmetic representation in terms of
input variables for a predicate function for a given input.
Deriving the Linear Arithmetic Representation of a
Predicate Function
Given a predicate function and its input dependency set
ID for an input I, we write a general linear function of
the input variables in ID. Then, we compute the values
of the coefficients in the general linear function so that it
represents the tangent plane to the predicate function at I.
This gives us a Linear Arithmetic Representation for
the predicate function at I.
For example, the predicate function F2 has
ID(BP2, I0, P) = {X, Y, Z} for an input I0. The general
linear function for the inputs in ID is
f(X, Y, Z) = aX + bY + cZ + d.
Here, a, b and c are the slopes of f with respect to input
variables X, Y and Z respectively and d is the constant term.
If the slopes a, b and c above are computed by evaluating
the corresponding derivatives of the predicate function
at the input I0 and the constant term is computed such
that the linear function f evaluates to the same value at
I0 as that computed by executing the corresponding predicate
slice on I0 and evaluating the predicate function, then
f(X, Y, Z) = aX + bY + cZ + d represents the tangent plane to the predicate
function at input I0 . This gives us the linear arithmetic representation
for the predicate function at I0 .
If the predicate function computes a linear function of
the input, then the above tangent plane is the exact
representation of the predicate function, whereas
if a predicate function computes a nonlinear function of the
input, then the above tangent plane f(X, Y, Z) only
approximates the predicate function in the neighborhood of
the input I0.
We illustrate this by deriving the linear arithmetic representation
for the predicate function F2 at the input I0 = (1, 2, 3).
We approximate the derivatives of a predicate
function by its divided differences. To compute a at I0, we
execute S(BP2, P) at I0 and at (2, 2, 3),
where we have chosen an increment of 1 in the
input variable X. Then, we compute the divided difference:

  (F2(2, 2, 3) - F2(1, 2, 3)) / 1 = 2.

This gives the value a = 2. We compute the value of b by
executing the predicate slice S(BP2, P) at I0 and at (1, 3, 3)
and computing the divided difference of F2 at these two
points with respect to Y. This gives b equal to -2. Similarly,
we get c equal to 1. We compute the value of d by solving
for d from the equation

  a X0 + b Y0 + c Z0 + d = F2(I0).

Substituting the values of a, b, c and F2(I0) in this equation
and solving for d, we get d equal to -100. Therefore, we
obtain the linear arithmetic representation for F2 at I0 as

  f(X, Y, Z) = 2X - 2Y + Z - 100.
In this example, F2 computes a linear function of the input.
Therefore, its linear arithmetic representation at I0 computed
as above is the exact representation of the function
of inputs computed by F2. Also, only those input variables
that influence the predicate function F2 appear in this representation.
In this paper, we have approximated the derivatives
of a predicate function by its divided differences. Tools
have been developed to compute derivative of a program
with respect to an input variable [3]. With these tools, we
can get exact derivative values rather than using divided
differences. Therefore, our technique for deriving a linear
arithmetic representation for a predicate function can be
very accurately implemented for automated testing.
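The following sketch shows how the slopes and the constant term of the linear arithmetic representation can be obtained from divided differences. Here predicate_fn stands for executing a predicate slice and evaluating its predicate function on an input vector; the function name and the unit increment are ours, and F2 is the representation reconstructed for the running example.

    def linear_representation(predicate_fn, I, h=1.0):
        # Tangent-plane coefficients of predicate_fn at input vector I (divided differences).
        f0 = predicate_fn(I)
        slopes = []
        for j in range(len(I)):
            I_perturbed = list(I)
            I_perturbed[j] += h                                  # increment only the j-th input variable
            slopes.append((predicate_fn(I_perturbed) - f0) / h)
        d = f0 - sum(a * x for a, x in zip(slopes, I))           # constant term: f matches F at I
        return slopes, d

    F2 = lambda I: 2 * I[0] - 2 * I[1] + I[2] - 100              # F2 of the running example
    print(linear_representation(F2, [1, 2, 3]))                  # ([2.0, -2.0, 1.0], -100.0)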
Using the method explained above, we derive a linear
arithmetic representation at the current input for each
predicate function on the given path. In order to derive a
set of linear constraints on the increments to the current
input from these linear arithmetic representations, we
execute the predicate slices of all the branch predicates on
the current input and compute the values of corresponding
predicate functions. We use these values of the predicate
functions to provide feedback for computing the desired
increments to the current input.
The Predicate Residuals
The values of the predicate functions at an input, defined
as Predicate Residuals, essentially place constraints on the
changes in the values of the input variables that, if satisfied,
will provide us with a new input on which the desired path
is followed.
Definition: The Predicate Residual of a branch predicate
for an input is the value of the corresponding predicate
function computed by executing its predicate slice
at the input.
If a branch predicate has the relational operator "=",
then a nonzero predicate residual gives the exact amount
by which the value of the predicate function should change,
by modifying the input, so that the branch evaluates to its
desired outcome. Otherwise, a predicate residual gives the
least (maximum) value by which the predicate function's
value must be changed (can be allowed to change), by modifying
the program input, such that the branch predicate
evaluates (continues to evaluate) to the desired outcome.
We explain this with examples given below.
If a branch predicate evaluates to the desired outcome
for a given input, then it should continue to evaluate to the
desired outcome. In this case, the predicate residual gives
the maximum value by which the predicate function's value
can be allowed to change, by modifying the program input,
such that the branch predicate continues to evaluate to the
desired outcome. To illustrate this, let us consider the path
P in the example program in Figure 1. Using an input
I = (1, 2, 110), the branch predicate BP2 evaluates to the
desired branch for the path P to be traversed. The value of
the predicate function F2 at I = (1, 2, 110), and hence the
predicate residual at this input, is 8. Therefore, the value of
the predicate function can be allowed to decrease by at most
8 due to a change in the program input, so that the predicate
function continues to evaluate to a positive value.
On the other hand, if a predicate does not evaluate to the
desired outcome, the predicate residual gives the least value
by which the predicate function's value must be changed, by
modifying the program input, such that the branch predicate
evaluates to the desired outcome. For example, using
the input I0 = (1, 2, 3), the branch predicate BP2 does not
evaluate to the desired branch for the path P to be traversed.
The value of the predicate function, and hence the
predicate residual at I0, is -99. Therefore, the input
should be modified such that the value of the predicate
function increases by at least 99 for the branch predicate
BP2 to evaluate to its desired outcome.
The predicate residuals essentially guide the search for
a program input that will cause each branch predicate
on the given path P to evaluate to its desired outcome.
We compute a predicate residual at the current input for
each branch predicate on the given path. Once we have a
predicate residual and a linear arithmetic representation at
the current input for each predicate function, we can apply
the relaxation technique to refine the input.
Refining the input
The linear arithmetic representation and the predicate residual
of a predicate function at an input essentially allow us
to map the change in the value of the predicate function to
changes in the program input. For each predicate function
on the path P , we derive a linear constraint on the increments
to the program input using the linear representation
of the predicate function and the value of the corresponding
predicate residual. This set of linear constraints is then
solved simultaneously using Gaussian elimination to compute
increments to the input. These increments are added
to the input to obtain a new input.
We illustrate the derivation of the linear constraint corresponding
to the predicate function F2. The branch predicate
BP2 evaluates to "false" when S(BP2, P) is executed
on the arbitrarily chosen input I0 = (1, 2, 3), whereas it
should evaluate to "true" for the path P to be traversed.
The residual value -99 and the linear function
f(X, Y, Z) = 2X - 2Y + Z - 100 are used to derive the linear constraint

  2ΔX - 2ΔY + ΔZ > 99.   (2)

Note that the constant term d does not appear in this constraint.
Intuitively, this means that the increments to the input
I0 should be such that the value of the predicate function F2
changes by more than 99, so as to force F2 to evaluate to a positive
value and therefore force the corresponding branch predicate
BP2 to evaluate to its desired outcome, i.e., "true", on
the new input. Any increments satisfying the above constraint
yield a new input on which
BP2 evaluates to "true" when S(BP2, P) is executed.
The linear constraint derived above from the predicate
residual to compute the increments for the current input, is
an important step of this method. It is through this constraint
that the value of a predicate function at the current
input provides feedback to the increments to be computed
to derive a new input. Since this method computes a new
program input from the previous input and the residuals, it
is a relaxation method which iteratively refines the program
input to obtain the desired solution.
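The construction of such a constraint from the tangent-plane slopes, the residual and the desired branch outcome can be sketched as below. The (coefficients, operator, bound) encoding and the function name are ours; only inequality operators are handled, mirroring the operator-reversal rule discussed later in Step 5, and equalities are omitted for brevity.

    def increment_constraint(slopes, residual, op, desired_outcome):
        # The change in F under increments dI is approximately sum_j slopes[j] * dI[j];
        # requiring F + change to satisfy (or violate) "F op 0" gives a constraint on the
        # increments in which the constant term d cancels out.
        if desired_outcome:                      # e.g. BP is "F > 0" and should be true
            return (slopes, op, -residual)       # sum_j slopes[j]*dI[j]  op  -residual
        reversed_op = {">": "<", "<": ">", ">=": "<=", "<=": ">="}[op]
        return (slopes, reversed_op, -residual)  # BP should be false: reverse the operator

    # Running example: F2 has slopes (2, -2, 1), residual -99, BP2 is "F2 > 0", desired true:
    print(increment_constraint([2, -2, 1], -99, ">", True))   # ([2, -2, 1], '>', 99)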
We would like to point out here that when the relational
operator in each branch predicate on the path is "=", this
method reduces to Newton's Method for iterative refinement
of an approximation to a root of a system of nonlinear functions
in several variables. To illustrate this, let us consider
the linear constraint in equation (2). Let us assume that
the relational operator in the corresponding branch predicate
BP2 is "=" and for simplicity let F2 be a function of
a single variable X. Then the linear constraint in equation
(2) reduces to 2ΔX = 99, which is of the form
F2'(X0) ΔX = -F2(X0), i.e., ΔX = -F2(X0)/F2'(X0), the familiar Newton step.
In general, the branch predicates on a path will have equalities
as well as inequalities. In such a case, our method is
different from Newton's Method for computing a root of a
system of nonlinear functions in several variables. But since
the increments for input are computed by stepping along
the tangent plane to the function at the current input, we
expect our method to have convergence properties similar
to Newton's Method.
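The single-variable case can be checked numerically, as in the sketch below. F2 is the running example's predicate function restricted to X (with Y = 2 and Z = 3 held fixed); the increment computed from the residual constraint coincides with a Newton step built from the same divided-difference derivative estimate.

    def F2(x):
        # Running example restricted to X: 2*x - 2*2 + 3 - 100
        return 2 * x - 2 * 2 + 3 - 100

    x0, h = 1.0, 1.0
    slope = (F2(x0 + h) - F2(x0)) / h        # divided-difference estimate of F2' (= 2)
    residual = F2(x0)                        # predicate residual (= -99)
    dx_relaxation = -residual / slope        # from the constraint  slope*dx + residual = 0
    dx_newton = -F2(x0) / slope              # Newton step with the same derivative estimate
    assert dx_relaxation == dx_newton == 49.5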
In our discussion so far, we have assumed that the conditionals
are the only source of predicate functions. However,
in practice some additional predicates should also be considered
during test data generation. First, constraints on
inputs may exist that may require the introduction of additional
predicates (e.g., if an input variable I is required
to have a positive value, then the predicate I ? 0 should
be introduced). Second, we must introduce predicates that
constrain input variables to have values that avoid execution
errors (e.g., array bound checks and division by zero).
By considering the above predicates together with the predicates
from the conditionals on the path a desired input can
be found. For simplicity, in the examples considered in this
paper we only consider the predicates from the conditionals.
3 Description of the Algorithm
In this section, we present an algorithm to generate test
data for programs with numeric input, arrays, assignments,
conditionals and loops. The technique can be extended to
nonnumeric input such as characters and strings by providing
mappings between numeric and nonnumeric values. The
main steps of our algorithm are outlined in Figure 3. We
now describe the steps of our algorithm in detail and at the
same time illustrate each step of the algorithm by generating
test data for a path along which the predicate functions
are linear functions of the input. Examples with nonlinear
predicate functions are given in the next section.
The method begins with the given path P and an arbitrarily
chosen input I0 in the input domain of the program.
The program is executed on I0 . If P is traversed using I0 ,
then I0 is the desired program input and the algorithm ter-
minates; otherwise the steps of iterative refinement using
the relaxation technique are executed.
We illustrate the algorithm using the example from
section 2, where the path P = {0, 1, P1, 2, P2, 4, 5, 6, P4, 9}
in the program of Figure 1, with initial program input
I0 = (1, 2, 3), is considered. The path P is not traversed
when the program is executed using I0 . Thus, the iterative
relaxation method as discussed below is employed to refine
the input.
Step 1. Computation of Predicate Slices.
For each node n_i in P that represents a branch predicate, we
compute its Predicate Slice S(n_i, P) by a backward pass over
the static data dependency graph of input and assignment
statements along the path P before n_i. The predicate slices
for the branch predicates on the path P are shown in Figure 1.
Step 2. Identifying the Input Dependency Sets.
For every node n_i in P that represents a branch predicate,
we identify the input dependency set ID(n_i, Ik, P) of input
variables on which n_i is data dependent by executing the
predicate slice S(n_i, P) on the current input Ik and taking
a dynamic slice over the dynamic data dependence graph.
The input dependency sets for the branch predicates on
the path P are computed by executing the respective predicate
slices on P using the input I0 = (1, 2, 3); for example,
ID(BP2, I0, P) = {X, Y, Z}.
Note that all input and assignment statements along the
path P need be executed at most once to compute the input
dependency sets for all the branch predicates along the path P.
Step 3. Derivation of Linear Arithmetic Representations
of the Predicate Functions.
In this step, we construct a linear arithmetic representation
for the predicate function corresponding to each branch
predicate on P. For each branch predicate n_i in P, we first
formulate a general linear function of the input variables
in the set ID(n_i, Ik, P). For example, the linear formulations
for the predicate functions corresponding to the branch
predicates on path P are of the form
f_i(X, Y, Z) = a_i X + b_i Y + c_i Z + d_i, for i = 1, 2, 4.
The coefficients a i , b i and c i of the input variables in the
above linear functions represent the slopes of the i th predicate
function with respect to input variables X;Y and Z
respectively. We approximate these slopes with respective
divided differences.
To compute the slope coefficient with respect to a
variable i_j, we execute the predicate slice S(n_i, P) and
evaluate the predicate function F at the current input
Ik = (i_1, ..., i_j, ..., i_m) and at the input obtained from Ik
by adding a chosen increment Δi_j to i_j, and we compute
the divided difference
(F(i_1, ..., i_j + Δi_j, ..., i_m) - F(i_1, ..., i_j, ..., i_m)) / Δi_j.
This gives the value of the coefficient of i_j in the linear
function for the predicate function F corresponding to node
n_i in P. Similarly, we compute the other slope coefficients
in the linear function.
In the example being considered, the slope coefficients are
obtained by executing the predicate slices at I0 = (1, 2, 3)
and at the perturbed inputs (2, 2, 3), (1, 3, 3) and (1, 2, 4),
and computing the divided differences with respect to X, Y
and Z respectively. For F2 this gives a2 = 2, b2 = -2 and
c2 = 1; the coefficients for F1 and F4 are obtained in the same way.
To compute the constant term d i , we execute the corresponding
predicate slice at Ik and evaluate the value of the
predicate function. The values of input variables in Ik and
the slope coefficients found above are substituted in the linear
function, and it is equated to the value of the predicate
function at Ik computed above. This gives a linear equation
in one unknown and it can be solved for the value of the
constant term.
For the example being considered, we substitute the
slope coefficients a_i, b_i and c_i computed above and
the input values of I0 in the general linear formulations for
the predicate functions F1, F2 and F4. Then, we equate
the general linear formulations to their respective values at
I0 and obtain one equation in each d_i.
Solving for the constant terms d_i, we get, for F2, d2 = -100.
Therefore, the linear arithmetic representations
for the three predicate functions of P are obtained; for F2
the representation is f2 = 2X - 2Y + Z - 100.
If a predicate function is a linear function of input variables
then the slopes computed above are exact and this method
results in the exact representation of the predicate function.
Input: A path P and an initial program input I0
Output: A program input If on which P is traversed
procedure TESTGEN(P, I0)
  if P is traversed on I0 then If := I0; exit endif
  Done := false; k := 0
  step 1: for each branch predicate n_i on P do compute S(n_i, P) endfor
  while not Done do
    for each branch predicate n_i on P do
      step 2: execute S(n_i, P) on input Ik to compute the input dependency set ID(n_i, Ik, P)
      step 3: compute the linear representation L(ID(n_i, Ik, P)) for the predicate function for n_i
      step 4: compute the predicate residual R(n_i, Ik, P)
      step 5: construct a linear constraint using R(n_i, Ik, P) and L(ID(n_i, Ik, P)) for
              the computation of increments to Ik
    endfor
    step 6: convert inequalities in the constraint set to equalities
    step 7: solve this system of equations to compute increments for the current program input;
            compute the new program input Ik+1 by adding the computed increments to Ik
    if the execution of the program on input Ik+1 follows path P then Done := true; If := Ik+1
    else k := k + 1 endif
  endwhile
endprocedure

Figure 3: Algorithm to generate test data using an iterative relaxation method.
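For reference, the loop of Figure 3 can be organized as in the Python skeleton below. This is a sketch only: the helper functions bundled in the steps object (compute_predicate_slices, input_dependency_set, linear_representation, predicate_residual, build_constraint, solve_increments) stand for Steps 1 through 7 and are assumed to be provided elsewhere, and the max_iterations safeguard is our own addition corresponding to the termination discussion later in this section.

    def testgen(path, I0, program_follows_path, steps, max_iterations=50):
        if program_follows_path(I0):
            return list(I0)
        slices = steps.compute_predicate_slices(path)                  # step 1: dict BP -> slice
        I_k = list(I0)
        for _ in range(max_iterations):
            constraints = []
            for bp, sl in slices.items():
                id_set = steps.input_dependency_set(sl, I_k)           # step 2
                slopes = steps.linear_representation(sl, id_set, I_k)  # step 3
                residual = steps.predicate_residual(sl, I_k)           # step 4
                constraints.append(steps.build_constraint(bp, slopes, residual))  # step 5
            increments = steps.solve_increments(constraints)           # steps 6-7
            I_k = [x + dx for x, dx in zip(I_k, increments)]
            if program_follows_path(I_k):
                return I_k
        return None   # no input found in the allotted iterations (the path may be infeasible)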
If a predicate function computes a nonlinear function, the
linear function computed above represents the tangent
plane to the predicate function at Ik . In the neighborhood
of Ik , the inequality derived from the tangent plane
closely approximates the branch predicate. Therefore, if
the predicate function evaluates to a positive value at a
program input in the neighborhood of Ik , then so does the
linear function and vice versa. These linear representations
and the predicate residuals computed in the subsequent step
are used to derive a set of linear constraints which are used
to refine Ik and obtain Ik+1 .
Step 4: Computation of Predicate Residuals.
We execute the predicate slice corresponding to each branch
predicate on P at the current program input Ik and evaluate
the value of the predicate function. This value of the predicate
function is the predicate residual value R(n_i, Ik, P) at
the current program input Ik for a branch predicate n_i on
P. The predicate residuals at I0 for the branch predicates
on P are computed in this way; for BP2, the residual at I0 is -99.
Step 5: Construction of a System of Linear Constraints
to be solved to obtain increments for the
current input.
In this step, we construct linear constraints for computing
the increments \DeltaI k for the current input Ik , using the linear
representations computed in step 3 and predicate residual
values computed in step 4.
We first convert the linear arithmetic representations of
the predicate functions into a set of inequalities and equal-
ities. If a branch predicate should evaluate to "true" for
the given path to be traversed, the corresponding predicate
function is converted into an inequality/equality with the
same relational operator as in the branch predicate. On the
other hand, if a branch predicate should evaluate to "false"
for the given path, the corresponding predicate function is
converted into an inequality with a reversal of the relational
operator used in the branch predicate. If a branch predicate
has the "=" relational operator and should evaluate to "false"
for the given path to be traversed, then we convert it into
two inequalities, one with the relational operator > and the
other with the relational operator <. If the corresponding
predicate function evaluates to a positive value at Ik, then
we consider the inequality with the > operator, else we consider
the one with the < operator. This discussion also holds when a
branch predicate has the ≠ relational operator and should evaluate
to "true" for the given path to be traversed. This set
of inequalities/equalities gives linear representations of the
branch predicates on P as they should evaluate for the given
path to be traversed.
Converting the linear arithmetic representations for the
predicate functions on the path P into inequalities in this manner,
we obtain, for example, 2X - 2Y + Z - 100 > 0 for BP2.
Now using the linear arithmetic representations at Ik of the
branch predicates as they should evaluate for the traversal
of path P and using the predicate residuals computed at
Ik , we apply the relaxation technique as described in the
previous section to derive a set of linear constraints on the
increments to the input.
By applying the relaxation technique to the linear arithmetic
representations computed above and the predicate
residuals computed in the previous step, the set of linear
constraints on the increments to I0 is derived; for BP2, the
constraint is 2ΔX - 2ΔY + ΔZ > 99.
Note that the constant terms d i from the linear arithmetic
representations do not appear in these constraints.
Step 6: Conversion of inequalities to equalities.
In general, the set of linear constraints on increments derived
in the previous step may be a mix of inequalities and
equalities. For automating the method of computing the
solution of this set of inequalities, we convert it into a system
of equalities and solve it using Gaussian elimination.
We convert inequalities into equalities by the addition of
new variables. A simultaneous solution of this system of
equations gives us the increments for Ik to obtain the next
program input Ik+1 .
Converting the inequalities to equalities in the constraint
set, for the example being considered, by introducing three
new variables u, v and w, we obtain a system of three linear
equations in the unknowns ΔX, ΔY, ΔZ, u, v and w,
where we require that u, v, w > 0.
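A sketch of this conversion, reusing the (coefficients, operator, bound) encoding introduced earlier, is given below; the encoding and function name are ours. One new variable is introduced per inequality, with strict inequalities requiring the new variable to be positive rather than merely non-negative.

    def to_equalities(constraints):
        # Returns equations of the form  coeffs . dI + sign*slack = bound,
        # together with the sign restriction on each slack variable.
        equations, slack_signs = [], []
        for coeffs, op, bound in constraints:
            if op == "=":
                equations.append((coeffs, 0, bound))          # no slack needed
                slack_signs.append(None)
            elif op in (">", ">="):
                equations.append((coeffs, -1, bound))         # coeffs.dI - slack = bound
                slack_signs.append("positive" if op == ">" else "nonnegative")
            else:                                             # "<" or "<="
                equations.append((coeffs, +1, bound))         # coeffs.dI + slack = bound
                slack_signs.append("positive" if op == "<" else "nonnegative")
        return equations, slack_signs

    print(to_equalities([([2, -2, 1], ">", 99)]))
    # ([([2, -2, 1], -1, 99)], ['positive'])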
Step 7: Solution of the System of Linear Equations.
We simultaneously solve the system of linear equations obtained
in the previous step using Gaussian elimination. If
the number of branch predicates on the path is equal to the
number of unknowns (input variables and new variables)
and it is a consistent nonsingular system of equations, then
a straightforward application of Gaussian elimination gives
the solution of this system of equations. If the number of
branch predicates on the path is more than the number of
unknowns, then the system of equations is overdetermined
and there may or may not exist a solution depending on
whether the system of equations is consistent or not. If the
system of equations is consistent then again a solution can
be found by applying Gaussian elimination to a subsystem
with the number of constraints equal to the number of vari-
ables, and verifying that the solution satisfies the remaining
constraints. If the system of equations is not consistent, it is
possible that the path is infeasible. In such a case, a consistent
subsystem of the set of linear constraints is solved using
Gaussian elimination and used as program input for the next
iteration. A repeated occurrence of inconsistent systems of
equations in subsequent iterations strengthens the possibility
of the path being infeasible. A testing tool may choose
to terminate the algorithm after a certain number of occurrences
of inconsistent systems with the conclusion that the
path is likely to be infeasible.
If the number of branch predicates on the path is less
than the number of unknowns, then the system of equations
is underdetermined and there will be an infinite number of solutions
if the system is consistent. In this case, if there are n
branch predicates, we select n unknowns and formulate the
system of n equations expressed in these n unknowns. The
other unknowns are the free variables. The n unknowns are
selected such that the resulting system of equations is a set
of n linearly independent equations. Then, this system of n
equations in n unknowns is solved in terms of free variables,
using Gaussian elimination. The values of free variables are
chosen and the values of n dependent variables are com-
puted. The solution obtained in this step gives the values by
which the current program input Ik has to be incremented
to obtain a next approximation Ik+1 for the program input.
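Numerically, steps 6 and 7 amount to selecting a square, nonsingular subsystem and solving it, with the remaining unknowns treated as free variables fixed to chosen values. The sketch below uses NumPy's numpy.linalg.solve for the elimination step; the selection of pivot columns is simplified compared with the discussion above, and no consistency or singularity checks are performed.

    import numpy as np

    def solve_increments(A, b, pivot_cols, free_values):
        # A: n x m coefficient matrix of the equalities, b: right-hand sides (length n).
        # pivot_cols: a list of n column indices solved for; the remaining columns are
        # free variables whose chosen values are given in free_values (dict col -> value).
        n, m = A.shape
        assert len(pivot_cols) == n
        free_cols = [j for j in range(m) if j not in pivot_cols]
        rhs = b - A[:, free_cols] @ np.array([free_values[j] for j in free_cols])
        x_pivot = np.linalg.solve(A[:, pivot_cols], rhs)      # Gaussian elimination (LU)
        x = np.zeros(m)
        x[pivot_cols] = x_pivot                               # fancy indexing: pivot_cols is a list
        for j in free_cols:
            x[j] = free_values[j]
        return x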
We execute the predicate slices and evaluate the predicate
functions at the new program input Ik+1 . If all the branch
predicates evaluate to their desired branches then Ik+1 is
a solution to the test data generation problem. Otherwise,
the algorithm goes back to step 2 with Ik+1 as the current
program input for the (k+1)th iteration.
As explained in the previous section, input dependency
sets and the linear representations depend on the current
input data. Therefore, they are computed again in the next
iteration using Ik+1 .
In the example considered, there are three linear constraints
and six unknowns. Therefore, it is an underdetermined system
of equations and can be considered as a system
of three equations in three unknowns with the other three
unknowns as free variables. If it is considered as a system of
three equations in the three variables ΔX, ΔY and ΔZ and
Gaussian elimination is used to triangularize the coefficient
matrix, we find that the third equation is dependent
on the first equation because the third row reduces to a row of
zeros, resulting in a singular matrix. Therefore, we consider
it as a system of equations in ΔX, ΔZ and w.
The values of the free variables can be chosen arbitrarily such
that u, v and w > 0. Choosing the free variables u, v and ΔY
equal to 1 and solving for ΔX, ΔZ and w, the increments
are obtained and added to I0 to obtain the new input I1.
Executing the predicate slices on path P using input I1
and evaluating the corresponding predicate functions, we
see that the three branch predicates evaluate to the desired
branch leading to the traversal of P . Therefore, the algorithm
terminates successfully in one iteration.
In this method, a new approximation of the program
input is obtained from the previous approximation and its
residuals. Therefore, it falls in the class of relaxation meth-
ods. The relaxation technique is used iteratively to obtain
a new program input until all branch predicates evaluate
to their desired outcomes by executing their corresponding
predicate slices.
If it is found that the method does not terminate in a
given time, then it is possible that either the time allotted
for test data generation was insufficient or there was an accumulation
of round off errors during the Gauss elimination
method due to the finite precision arithmetic used. Gaussian
elimination is a well established method for solving a
system of linear equations. Its implementations with several
pivoting strategies are available to avoid the accumulation
of round off errors due to finite precision arithmetic. Besides
these two possibilities, the only other reason for the method
not terminating in a given time is that the path is infeasible.
It is clear from the construction of linear representations
in step 3 that if the function of input computed by a
predicate function is linear, then the corresponding linear
representation constructed by our method is
the exact representation of the function computed by the
predicate function. We prove that in this case, the desired
program input is obtained in only one iteration.
Theorem 1
If the functions of input computed by all the predicate
functions for a path P are linear, then either the desired
program input for the traversal of the path P is obtained
in one iteration or the path is guaranteed to be infeasible.
Proof
Let us assume that there are m input variables for the program
containing the given path P and there are n branch
predicates BP1, BP2, ..., BPn on the path P, some of which
use the equality relational operator and the rest inequality
operators. The linear representations for the predicate
functions corresponding to the n branch
predicates on P can be computed by the method described
in Step 3 of the algorithm. Note that these representations
will be exact because the functions of input computed by
the predicate functions are linear. We can write the branch
predicates on path P in terms of these representations as
follows:

  a_{i,1} x_1 + a_{i,2} x_2 + ... + a_{i,m} x_m + d_i  op_i  0,   i = 1, ..., n,   (eq. set 1)

where op_i denotes the relational operator of BP_i as it should evaluate for the traversal of P.
Note that the coefficients corresponding to input variables
not in the input dependency set of a predicate function will
be zero. Let I0 = (x_{1,0}, x_{2,0}, ..., x_{m,0}) be an approximation to the
solution of the above set of equations. Then we have:

  a_{i,1} x_{1,0} + a_{i,2} x_{2,0} + ... + a_{i,m} x_{m,0} + d_i = r_i,   i = 1, ..., n,   (eq. set 2)

where r_i is the residual value obtained by executing the corresponding
predicate slice using I0 and evaluating the corresponding
predicate function. Let If = (x_{1,f}, x_{2,f}, ..., x_{m,f})
be a solution of eq. set 1. Then, executing the given
program at If would result in traversal of the path P. The
goal is to compute this solution. Substituting If in eq. set
1, we get:

  a_{i,1} x_{1,f} + a_{i,2} x_{2,f} + ... + a_{i,m} x_{m,f} + d_i  op_i  0,   i = 1, ..., n.   (eq. set 3)

Now subtracting eq. set 2 from eq. set 3, we get:

  a_{i,1} (x_{1,f} - x_{1,0}) + ... + a_{i,m} (x_{m,f} - x_{m,0})  op_i  -r_i,   i = 1, ..., n.

This is precisely the set of constraints
on the increments to the input that must be satisfied
so as to obtain the desired input. If the increment Δx_i
for x_{i,0} is computed from the above set of constraints, then
(x_{1,0} + Δx_1, ..., x_{m,0} + Δx_m) gives the desired solution If.
As illustrated above, this requires only one iteration. This
indeed is the set of constraints used in Step 5 of our method
for test data generation. Therefore, given any arbitrarily
chosen input I0 in the program domain, our method derives
the desired input in one iteration.
While solving the constraints above, if it is found that
the set is inconsistent then the given path P is infeasible.
Therefore, our method either derives the desired solution in
one iteration or guarantees that the given path is infeasible.
3.1 Paths with Nonlinear Predicate Slices.
If the function of input computed by a predicate function is
nonlinear, the predicate function is locally approximated by
its tangent plane in the neighborhood of the current input
Ik . The residual value computed at Ik provides feedback to
the tangent plane at Ik for the computation of increments
to Ik so that if the tangent plane was an exact representation
of the predicate function, the predicate function will
evaluate to the desired outcome in the next iteration. Because
the slope correspondence between the tangent plane
and the predicate function is local to the current iteration
point Ik , it may take more than one iteration to compute
a program input at which the execution of predicate slice
results in evaluation of the branch predicate to the desired
branch outcome.
Let us now consider a path with a predicate function
computing a second degree function of the input. For
the example program in Figure 1, we take a path P that
includes the branch predicate BP3, with initial program input I0 = (1, 2, 3).
The path P is not traversed on I0 . Therefore, input I0
is iteratively refined. The predicate slices and the input
dependency sets of branch predicates BP1;BP2 and BP4
are the same as in the example on path with linear predicate
slices. For BP3, the predicate slice and input dependency set are computed in the same manner.
Also, the linear representations for the predicate functions
F1; F2 and F4 are the same as for the example in the previous
section. For F3, the linear representation is derived as follows.
The slope of F3 with respect to Z is computed by evaluating
the divided difference at (1,2,3) and (1,2,4). The above
linear function represents the tangent plane at I0 of the
nonlinear function computed by the predicate function corresponding
to branch predicate BP 3.
Converting each of these functions into an inequality
with the relational operator that the branch predicate
should evaluate to, we obtain the linear representations of
the branch predicates for this path.
Note that the relational operator for the representation for
BP2 is different from that of the example in the previous section
because a different branch is taken.
The predicate residuals at I0 for the predicate slices on P
are then computed, and the set of linear constraints used for
computing the increments to I0 is derived from the results of
these two steps.
The inequalities in the above constraint set are converted to
equalities by introducing new variables s, t, u and v. The
resulting system of equations in ΔX, ΔY,
ΔZ and v is solved using Gaussian elimination.
The free variables s, t and u are arbitrarily chosen equal to
1, and the system is solved for ΔX, ΔY, ΔZ and v to obtain
the increments.
These increments are added to I0 to obtain a new input I1 .
Executing the predicate slices on P using the program input
I1 , we find that all the four branch predicates evaluate to
their desired branches resulting in the traversal of P . There-
fore, the algorithm terminates successfully in one iteration.
We summarize the results in a table of per-iteration inputs
and predicate function values.
This example illustrates that the tangent plane at the current
input is a good enough linear approximation for the
predicate function in the neighborhood of the current input.
Now we consider a path with a predicate function computing
the sine function of the input. Let us consider
a path P for the program in Figure 1 that includes the branch
predicates BP1, BP2, BP4 and BP5, with an initial program input I0.
The path P is not traversed on I0 because BP2 evaluates to
an undesired branch on I0 . Therefore, the steps for iterative
refinement of I0 are executed. We summarize the results
of execution of our test data generation algorithm for this
example in a table of per-iteration inputs and predicate function values.
For path P to be traversed, the branch predicates BP1
and BP4 should evaluate to "false" and the branch predicates
BP2 and BP5 should evaluate to "true". As shown in
the table, through iterations 1 to 4 of the algorithm BP 1,
BP 2, and BP4 continue to evaluate to their desired outcomes
and the values of inputs X, Y and Z are incremented
such that F5 moves closer to zero in each iteration. Finally
in iteration 4, F5 becomes positive for program input I4 and
therefore BP5 becomes true.
We would like to point out that if the linear arithmetic
representation of a branch predicate is exact, then
the branch predicate evaluates to its desired outcome in the
first iteration and continues to do so in the subsequent it-
erations. Whereas, if the linear arithmetic representation
approximates the branch predicate in the neighborhood of
current input (as in the case of BP5) by its tangent plane,
then although in each iteration the refined input evaluates
to the desired outcome with respect to the tangent plane, it
may take several iterative refinements of the input for the
corresponding branch predicate to evaluate to its desired
outcome.
In this example, BP5 evaluated to "true"(its desired out-
come) at I0 , but it evaluated to "false" at I1 . This is because
the predicate residual provides the feedback to the linear
representation (i.e., the tangent plane to the
sine function) of BP5 and the input is modified by stepping
along this linear arithmetic representation. As a result, the
linear representation evaluates to a positive value at I1 , but
the change in the program input still falls short of making
the predicate function F5 evaluate to a positive value at I1 .
In the subsequent iterations, the input gets further refined
and finally in the fourth iteration, the predicate function F5
evaluates to a positive value.
As illustrated by this example, after the first iteration,
all the branch predicates computing linear functions of the
input continue to evaluate to their desired outcomes as the
input is further refined to satisfy the branch predicates computing
nonlinear functions of input. During regression test-
ing, a branch predicate or a statement on the given path
may be changed. To generate test data so that the modified
path is traversed, an input on which other branch predicates
already evaluate to their desired outcomes will be a
good initial input to be refined by our method. Therefore,
during regression testing, we can use the existing test data
as the initial input and refine it to generate new test data.
3.2 Arrays and Loops
When arrays are input in a procedure, one of the problems
faced by a test data generation method is the size of the input
arrays. Our test data generation method considers only
those array elements that are referenced when the predicate
slices for the branch predicates on the path are executed and
the corresponding predicate functions are evaluated. The
input dependency set for a given input identifies the input
variables that are relevant for a predicate function. There-
fore, the test data generation algorithm uses and refines only
those array elements that are relevant. This makes the test
data generation independent of the size of input arrays.
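One simple way to realize this is to treat the program input as a sparse mapping and to materialize only the array elements that the executed predicate slices actually reference. The dictionary-based sketch below is our own representation, with a hypothetical default value for elements that are never touched.

    class SparseArrayInput:
        # Only elements referenced by the predicate slices are stored and refined.
        def __init__(self, default=0):
            self.values = {}
            self.default = default
            self.touched = set()
        def get(self, index):
            self.touched.add(index)              # record which elements are relevant
            return self.values.get(index, self.default)
        def set(self, index, value):
            self.values[index] = value
        def add_increment(self, index, delta):
            self.set(index, self.get(index) + delta)

    A = SparseArrayInput()
    _ = A.get(39), A.get(51), A.get(63)          # referenced while executing the slices
    A.add_increment(39, 3)                       # only relevant elements are refined
    print(sorted(A.touched))                     # [39, 51, 63]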
We illustrate how our method handles arrays and loops
by generating test data for a program from [10] given in
Figure 4. We take the same path and initial input as in [10]
so that we can compare the performance of the two program
execution based test data generation methods. Therefore,
the path P is the one given in [10], where P_i^j denotes the
j-th execution of the predicate P_i, and the initial program
input I0 is the one used in [10].
The path P is not traversed on I0. Therefore, the steps for
iterative refinement of I0 are executed. Let l, h and s denote
low, high and step respectively; the predicate slices and input
dependency sets of the branch predicates on P are then computed as in Steps 1 and 2.
The linear representations for the predicate functions corresponding
to the branch predicates on P are derived as in Step 3,
the predicate residuals at I0 are computed as in Step 4, and
the set of linear constraints used for computing the increments
to I0 is constructed from the results of these two steps.
The above inequalities are converted to equalities by introducing
seven new variables a, b, c, d, e, f and g, where
a, d, f > 0 and b, c, e, g ≥ 0. Considering it as a system
of equations expressed in the unknowns Δl, Δs, d, Δx, Δy, b
and e, the unknowns and free variables are selected so as to
obtain a nonsingular system of equations. The values of the
free variables can be chosen arbitrarily subject to the constraints
that a, d, f > 0 and b, c, e, g ≥ 0. The values of the free
variables f, Δz, Δh and g are chosen as 1. The value of the free
variable a is 3 for integer arithmetic. Solving for the unknown
variables using Gaussian elimination, we obtain the increments,
which are added to I0 to generate the new input after the first iteration.
The input values of A[39], A[51] and A[63] are copied into
A[89], A[91] and A[93] respectively, and then the increments
computed in this iteration are added to A[89], A[91] and A[93].
This step is important because the increments computed in
the current iteration have to be added to the input used
in the current iteration but the resulting values have to be
copied into the array elements to be used in the next itera-
tion. Only elements A[89]; A[91]; and A[93] are relevant for
the next iteration.
By executing the predicate slices for P on the program
input I1 and evaluating the corresponding predicate func-
tions, we see that all the branch predicates evaluate to their
desired branch outcomes resulting in the traversal of P . All
the predicate functions corresponding to branch predicates
on P compute linear functions of input. Therefore, as expected
the algorithm terminates successfully in one itera-
tion. We summarize the results of this example in the table
in
Figure
4. Korel in [10] obtains test data for the above
path in 21 program executions, whereas our method finds
a solution in only one iteration with only 8 program exe-
cutions. One program execution is used in the beginning
to test whether path P is traversed on I0 , six additional
program executions are required for all the
slope computations for the linear representations, and one more
program execution is required to test whether path P is traversed
on I1 .
If we choose a path P' that differs from P in the evaluation of
one of its branch predicates, the set of linear constraints
obtained will be inconsistent. Since all the predicate functions
for this path compute linear functions of the input, from Theorem 1,
we conclude that this path must be infeasible. It is easy to check
that P' is indeed an infeasible path.
var
  A: array[1..100] of integer;
  min, max, i: integer;
1: input(low, high, step, A);
2: min := A[low];
4: i := low
P3: if min > A[i] then
8: output(min, max);

Iteration #  low  high  step  A[39]  A[51]  A[63]  A[89]  A[91]  A[93]
Iteration #  BP1^1  BP2^1  BP3^1  BP1^2  BP2^2  BP3^2  BP1^3

Figure 4: An example using an array and a loop.
4 Related Work
The most popular approach to automated test data generation
has been through path oriented methods such as
symbolic evaluation and actual program execution. One of
the earliest systems to automatically generate test data using
symbolic evaluation, for linear path constraints only, was
described in [4]. It can detect infeasible paths with linear
path constraints but is limited in its ability to handle array
references that depend on program input. A more recent attempt
at using symbolic evaluation for test data generation
for fault based criteria is described in [6]. In this work, a
test data generation system based on a collection of heuristics
for solving a system of constraints is developed. The
constraints derived are often imprecise, resulting in an approximate
solution on which the path may not be traversed.
Since the test data is not refined further so as to eventually
obtain the desired input, the method fails when the path is
not traversed on the approximate solution.
A program execution based approach that requires a partial
solution to the test data generation problem to be computed
by hand using values of integer input variables is described
in [14]. There is no indication on how to automate the step
requiring computation by hand. Program execution based
approaches for automated test data generation have been
described in [11, 8], but they have been developed for statement
and branch testing criteria.
An approach to automatically generate test data for a
given path using the actual execution of the program is presented
in [10]. Another program execution based approach
that uses program instrumentation for test data generation
for a given path has been reported in [7]. These approaches
consider only one branch predicate and one input variable at
a time and use backtracking. Therefore, they may require
a large number of iterations even if all the branch conditionals
along the path are linear functions of the input. If
several conditionals on the selected path depend on common
input variables, a lot of effort can be wasted in backtracking.
They cannot consider all the branch predicates on the path
simultaneously because the path may not be traversed on
an intermediate input. The concept of predicate slice allows
us to evaluate each branch predicate on the path independently
on an intermediate input even though the path may
not be traversed on this input. This makes our technique
more efficient than other execution based methods.
Our method is scalable to large programs since the number
of program executions in each iteration is independent
of the path length and at most equal to the number of input
variables plus one. If there are m input variables, in each
iteration, at most m executions of the input and assignment
statements on the given path are required to compute the
slope coefficients. The values of the predicate functions at
input Ik are known from the previous iteration. One execution
of the input and assignment statements on the path is
required to test whether the path is traversed on Ik+1 .
Our method uses Gaussian elimination to solve the system
of linear equations, which is a well established and
widely implemented technique to solve a system of linear
equations. Therefore, our method is suitable for automa-
tion. The size of the system of linear equations to be solved
increases with an increase in the number of branch conditionals
on a path, but the increase in cost in solving a larger
system is significantly less than that of existing execution
based methods.
5 Conclusions
In this paper, we have presented a new program execution
based method, using well established mathematical tech-
niques, to automatically generate test data for a given path.
The method is an innovative application of the traditional
relaxation technique used in numerical analysis to obtain
an exact solution of an equation by iterative improvement
of an approximate solution. The results obtained from this
method for test data generation are very promising. It provides
a practical solution to the automated test data generation
problem. It is easy to implement as the tools required
are already available. It is more efficient than existing program
execution based approaches as it requires fewer program
executions because all the branch predicates on the
path are considered simultaneously, and there is no back-
tracking. It can also detect infeasibility for a large class of
paths in a single iteration. Because it is execution based, it
can handle different programming language features. We are
working on extending the technique for strings and pointers.
--R
"Test Plan Generation using Formal Grammars,"
"Automatic Generation of Random Self-checking Test
"ADIC: An Extensible Automatic Differentiation Tool for ANSI-C,"
"A System to Generate Test Data and Symbolically Execute Programs,"
"A Rule Based Software Test Data Generator,"
"Constraint-based Automatic Test Data Generation,"
"ADTEST: A Test Data Generation Suite for Ada Software Systems,"
"Automatic Test Data Generation using Constraint Solving Techniques,"
"ATLAS - An Automated Software Testing System,"
"Automated Software Test Data Generation,"
A Dynamic Approach of Test Data Generation
"An Automatic Data Generation System for Data Base Simulation and Testing,"
"Automatic Generation of Testcase Datasets,"
"Automatic Generation of Floating-Point Test
"Numerical Analysis,"
Marc Fisher, II , Gregg Rothermel , Darren Brown , Mingming Cao , Curtis Cook , Margaret Burnett, Integrating automated test generation into the WYSIWYT spreadsheet testing methodology, ACM Transactions on Software Engineering and Methodology (TOSEM), v.15 n.2, p.150-194, April 2006 | predicate sliccs;input dependency set;predicate residuals;relaxation methods;dynamic test data generation;path testing |
288811 | Learning to Recognize Volcanoes on Venus. | Dramatic improvements in sensor and image acquisition technology have created a demand for automated tools that can aid in the analysis of large image databases. We describe the development of JARtool, a trainable software system that learns to recognize volcanoes in a large data set of Venusian imagery. A machine learning approach is used because it is much easier for geologists to identify examples of volcanoes in the imagery than it is to specify domain knowledge as a set of pixel-level constraints. This approach can also provide portability to other domains without the need for explicit reprogramming; the user simply supplies the system with a new set of training examples. We show how the development of such a system requires a completely different set of skills than are required for applying machine learning to toy world domains. This paper discusses important aspects of the application process not commonly encountered in the toy world, including obtaining labeled training data, the difficulties of working with pixel data, and the automatic extraction of higher-level features. | Introduction
Detecting all occurrences of an object of interest in a set of images is a problem that
arises in many domains, including industrial product inspection, military surveil-
lance, medical diagnosis, astronomy, and planetary geology. Given the prevalence
of this problem and the fact that continued improvements in image acquisition and
storage technology will produce ever-larger collections of images, there is a clear
need for algorithms and tools that can be used to locate automatically objects of
interest within such data sets.
The application discussed in this paper focuses on data from NASA/JPL's highly
successful Magellan mission to Venus. The Magellan spacecraft was launched from
Earth in May of 1989 with the objective of providing global synthetic aperture radar
(SAR) mapping of the entire surface of Venus. In August of 1990 the spacecraft
entered a polar elliptical orbit around Venus. Over the next four years Magellan
returned more data than all previous planetary missions combined (Saunders et
al., 1992), specifically, over 30,000 1024 \Theta 1024 pixel images covering 98% of the
planet's surface. Although the scientific possibilities offered by this data set are
numerous, the sheer volume of data is overwhelming the planetary geology research
community. Automated or semi-automated tools are necessary if even a fraction of
the data is to be analyzed (Burl et al., 1994a).
1.1. Scientific Importance
Volcanism is the most widespread and important geologic phenomenon on Venus (Saun-
ders et al., 1992), and thus is of particular interest to planetary geologists studying
the planet. From previous low-resolution data, it has been estimated that there are
on the order of one million small volcanoes (defined as less than 20 km in diameter)
that will be visible in the Magellan imagery (Aubele and Slyuta, 1990). Understanding
the global distribution and clustering characteristics of the volcanoes is
central to understanding the geologic evolution of the planet (Guest et al., 1992;
Crumpler et al., 1997). Even a partial catalog including the size, location, and
other relevant information about each volcano would enable more advanced scientific
studies. Such a catalog could potentially provide the data necessary to answer
basic questions about the geophysics of Venus, questions such as the relationship
between volcanoes and local tectonic structure, the pattern of heat flow within the
planet, and the mechanics of volcanic eruption.
1.2. Impact of an Automated System
A catalog of large Venusian volcanoes (greater than 20 km in diameter) has been
completed manually (Crumpler et al., 1997; Stofan et al., 1992). However, by optimistic
estimates the time for a geologist to generate a comprehensive catalog of
small volcanoes on Venus would be ten to twenty man-years. In fact, in our experiments
we have found that humans typically become quite fatigued after labeling
only 50-100 images over a few days. Thus, large-scale sustained cataloging by geologists
is not realistic even if they had the time to devote to this task. An automated
system would provide many benefits, including the ability to maintain a uniform,
objective standard throughout the catalog, thereby avoiding the subjectivity and
drift common to human labelers (Cooke 1991; Poulton 1994). Even a partially automated
system that functions as an "intelligent assistant" would have considerable
impact.
1.3. Motivation for a Learning Approach
There are two approaches one could follow for building an automated volcano cataloging
system. The first would be to develop hand-coded, volcano-specific detectors
based on a high-level description of the problem domain provided by human ex-
perts. There are, however, a number of drawbacks to this method. Geologists are
quite good at identifying examples of the objects of interest, but it is often difficult
for them to identify precisely which characteristics of each volcano in the image
led to its detection. High-level features, such as circularity or the presence of a
summit pit, are difficult to translate into pixel-level constraints. Finally, visual
recognition of localized objects is a problem that arises in many domains; using
a hand-coded approach would require a significant amount of reprogramming for
each new domain.
The second approach is to use learning from examples. Since the geologists can
identify examples of volcanoes with relative ease, their domain knowledge can be
captured implicitly through the set of labeled examples. Using a learning algorithm,
we can extract an appearance model from the examples and apply the model to
find new (previously unseen) volcanoes. This approach can also provide portability
since the user must merely supply a new set of training examples for each new
problem domain-in principle no explicit reprogramming is required.
1.4. Related Work
Most prior work on automated analysis of remotely sensed imagery has focused on
two problems: (1) classification of homogeneous regions into vegetation or land-use
types, e.g., (Richards, 1986) and (2) detection of man-made objects such as
airports, roads, etc. The first technique is not applicable to the volcano detection
problem, and the second is inappropriate because naturally-occurring objects (such
as volcanoes) possess much greater variability in appearance than rigid man-made
objects. Several prototype systems (Flickner et al. 1995; Pentland, Picard, and
Sclaroff, 1996; Picard and Pentland, 1996) that permit querying by content have
been developed in the computer vision community. In general, these systems rely
on color histograms, regular textures, and boundary contours or they assume that
objects are segmented and well-framed within the image. Since the small volcanoes
in the Magellan imagery cannot be characterized by regular textures or boundaries,
none of these approaches are directly applicable to the volcano cataloging problem.
(For example, we found that the edge contrast and noise level in the SAR images
did not permit reliable edge-detection.)
In general, there has been relatively little work on the problem of finding natural
objects in a cluttered background when the objects do not have well-defined edge
or spectral characteristics. Hough transform methods were used for the detection
of circular geologic features in SAR data (Cross, 1988; Skingley and Rye, 1987) but
without great success. For the small volcano problem, Wiles and Forshaw (Wiles
and Forshaw, 1993) proposed using matched filtering. However, as we will see in
Section 3, this approach does not perform as well as the learning system described
in this paper. Fayyad and colleagues (Fayyad et al., 1996) developed a system to
catalog sky objects (stars and galaxies) using decision tree classification methods.
For this domain, segmentation of objects from the background and conversion to
a vector of feature measurements was straightforward. A good set of features
had already been hand-defined by the astronomy community so most of the effort
focused on optimizing classification performance. In contrast, for the Magellan
images, separating the volcanoes from the background is quite difficult and there
is not an established set of pixel-level features for volcanoes.
1.5. The JARtool System
JARtool (JPL Adaptive Recognition Tool) is a trainable visual recognition system
that we have developed in the context of the Magellan volcano problem. The
basic system is illustrated in Figure 1. Through a graphical user interface (GUI),
which is shown in Figure 2, a planetary scientist can examine images from the
Magellan CD-ROMs and label examples in the images. The automated portion
of the system consists of three components: focus of attention (FOA), feature
extraction, and classification. Each of these components is trained for the specific
problem of volcano detection through the examples provided by the scientist.
The specific approach taken in JARtool is to use a matched filter derived from
training examples to focus attention on regions of interest within the image. Principal
components analysis (PCA) of the training examples provides a set of domain-specific
features that map high-dimensional pixel data to a low-dimensional feature
space. Supervised machine learning techniques are then applied to derive a mapping
from the PCA features to classification labels (volcano or non-volcano). The PCA
technique, which is also known as the discrete Karhunen-Loeve transform (Fuku-
naga, 1990), has been used extensively in statistics, signal processing, and computer
vision (Sirovich and Kirby, 1987; Turk and Pentland 1991; Moghaddam and Pent-
land, 1995; Pentland et al., 1996) to provide dimensionality reduction. PCA seeks a
lower-dimensional subspace that best represents the data. An alternate approach is
linear discriminant analysis (LDA) (Duda and Hart, 1973; Swets and Weng, 1996),
which seeks a subspace that maximizes the separability between classes. However,
in the volcano context, the non-volcano class is so complex that LDA methods at
the pixel-level do not work well.
1.6. Outline
In Section 2 the JARtool system design process is described with an emphasis
on the real-world issues that had to be addressed before standard "off-the-shelf"
classification learning algorithms could be applied. In Section 3 we provide an
empirical evaluation of our learning-based system and compare the performance
with that of human experts. Section 4 indicates the current status of the project. In
Section 5 we discuss the lessons learned from the project and how these application
lessons could provide useful directions for future machine learning research. In
Section 6 we conclude with a summary of the main points of the article and indicate
directions for future work.
Figure 1. Overview of the JARtool system: convolve the image with a matched filter and select
the regions with the highest response (focus of attention); project each candidate region onto a
bank of filters derived by principal components analysis (feature extraction); discriminate between
volcanoes and non-volcanoes in the projected feature space (classification).
Figure 2. In addition to the standard image browsing and labeling capabilities, the JARtool
graphical user interface enables the user to learn models of an object and then look for novel
instances of the object. The image displayed here is a 30km \Theta 30km region on Venus containing
a number of small volcanoes. (See Figure 5 to find out where the volcanoes are located.)
2. System Design
2.1. Magellan Imagery
Pettengill and colleagues (Pettengill et al., 1991) give a complete description of
the Magellan synthetic aperture radar system and associated parameters. Here we
focus only on how the imaging process affects the appearance of the volcanoes in
the dataset.
Figure 3. Because of the topography, the near-range volcano flanks scatter more energy back to
the radar and appear bright. In contrast, the far-range flanks scatter energy away and appear
dark.
Figure
2 shows a 30km \Theta 30km area of Venus as imaged by Magellan. This
area is located near lat . Illumination is from the lower left and
the pixel spacing 1 is 60m. Observe that the larger volcanoes in the image have
the classic radar signature one would expect based on the topography; that is, the
side of the volcano closest to the radar (near-range) appears bright and the side
away from the radar (far-range) appears dark. The reason is that the near-range
side scatters more energy back to the sensor than the surrounding flat plains, while
the far-range side scatters most of the energy off into space. The brightness of
each pixel is proportional to the log of the returned energy, so volcanoes typically
appear as a bright-dark pair within a circular planimetric outline. Near the center,
there is often a visible summit pit that appears as a dark-bright pair since the radar
energy backscatters strongly from the far-range rim. However, if the pit is too small
relative to the image resolution, it may not appear at all or may appear just as a
bright spot. A high-level illustration of the imaging process is given in Figure 3.
These topography-induced features are the primary visual cues that geologists
report using to locate volcanoes. However, there are a number of other, more
subtle cues. The apparent brightness of an area in a radar image depends not
only on the macroscopic topography but also on the surface roughness relative to
the radar wavelength. Thus, if the flanks of a volcano have different roughness
properties than the surrounding plains, the volcano may appear as a bright or dark
circular area instead of as a bright-dark pair. Volcanoes may also appear as radial
flow patterns, texture differences, or as disruptions of graben. (Graben are ridges or
grooves in the planet surface, which appear as bright lines in the radar imagery; see
Figure 2.)
2.2. Obtaining a Labeled Training Database
Although the Magellan imagery of Venus is the highest resolution available, expert
geologists are unable to determine with 100% certainty whether a particular image
feature is indeed a volcano. This ambiguity is due to a variety of factors such as
image resolution, signal-to-noise level, and difficulties associated with interpreting
SAR data. For the same image, different geologists will produce different labelings,
and even the same geologist may produce different labelings at different times.
To help quantify this uncertainty, the geologists are asked to assign the training
examples to subjective probability "categories." Based on extensive discussions
with the geologists, five categories are used.
Category 1 - almost certainly a volcano (p \approx 0.98); the image clearly shows a
summit pit, a bright-dark pair, and a circular planimetric outline.
Category 2 - probably a volcano (p \approx 0.80); the image shows only two of the
three category 1 characteristics.
Category 3 - possibly a volcano (p \approx 0.60); the image shows evidence of bright-dark
flanks or a circular outline; summit pit may or may not be visible.
Category 4 - a pit (p \approx 0.50); the image shows a visible pit but does not provide
conclusive evidence for flanks or a circular outline.
Category 5 - not a volcano (p \approx 0.0).
The probability p attached to category i is interpreted as follows. Given that a geologist
has assigned an image feature to category i, the probability that the feature
is truly a volcano is approximately p_i. Figure 4 shows some typical volcanoes from
each category. The use of quantized probability bins to attach levels of certainty
to subjective image labels is not new. The same approach is used routinely in
the evaluation of radiographic image displays to generate subjective ROC (receiver
operating characteristic) curves (Bunch, 1978; Chesters, 1992).
A simple experiment was conducted to assess the variability in labeling between
two planetary geologists, who will be referred to as A and B. Both of these geologists
were members of the Volcanism Working Group of the Magellan science team and
have extensive experience in studying Earth-based and planetary volcanism. They
have published some of the standard reference works on Venus volcanism (Guest et
al., 1992; Aubele and Slyuta, 1990; Crumpler et al., 1997). Each geologist separately
labeled a set of four images known as HOM4. The labels were then compared using
a simple spatial thresholding step to determine the correspondence between label
events from the two geologists. (A label event simply refers to a labeler circling an
image feature and assigning a subjective confidence label.) The resulting confusion
matrix is given in Table 1.
Figure 4. A selection of volcanoes from each of the confidence categories.
Table 1. Confusion matrix of geologist A vs. geologist B on HOM4 (rows: labels 1-4 assigned by
geologist A; columns: labels 1-5 assigned by geologist B).
The (i, j)th element of the confusion matrix counts the number of times that
labeler A assigned a visual feature to category i while labeler B assigned the same
feature to category j. Two label events are considered to belong to the same visual
feature if they are within a few pixels of each other. The (i, 5) entries count the
instances where labeler A provided label i, but labeler B did not provide any label
(and vice versa for the (5, j) entries). Entry (5,5) is not well-defined.
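As an illustration of how such a confusion matrix can be assembled, the sketch below (in Python with numpy) matches label events from two labelers by spatial proximity and tallies the agreement; the distance threshold, the (x, y, category) data layout, and the greedy matching order are assumptions made for the illustration, not necessarily the exact procedure used in the study.
import numpy as np

def confusion_matrix(events_a, events_b, max_dist=4.0):
    # events_a, events_b: lists of (x, y, category) with category in {1,...,4}
    # returns a 5 x 5 array C where C[i-1, j-1] counts features labeled i by A
    # and j by B; row/column 5 holds events seen by only one labeler, and the
    # (5, 5) entry is left at zero because it is not well-defined
    C = np.zeros((5, 5), dtype=int)
    used_b = set()
    for (xa, ya, ca) in events_a:
        best_j, best_d = None, max_dist          # nearest unmatched event of B
        for j, (xb, yb, cb) in enumerate(events_b):
            if j not in used_b and np.hypot(xa - xb, ya - yb) <= best_d:
                best_j, best_d = j, np.hypot(xa - xb, ya - yb)
        if best_j is None:
            C[ca - 1, 4] += 1                    # A labeled it, B did not
        else:
            used_b.add(best_j)
            C[ca - 1, events_b[best_j][2] - 1] += 1
    for j, (xb, yb, cb) in enumerate(events_b):
        if j not in used_b:
            C[4, cb - 1] += 1                    # B labeled it, A did not
    return C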
If both labelers agreed completely (same location and label for all events), the
confusion matrix would have only diagonal entries. In the case shown in Table 1,
there is clearly substantial disagreement, as evidenced by the off-diagonal elements
in the matrix. For example, label 3's are particularly noisy in both "directions."
Label 3's are actually noisier than label 4's because there is greater variability in the
appearance of category 3's compared to category 4's (4's are simple pits, while 3's
are less well-defined). About 50% of the objects assigned label 3 by either labeler
are not detected at all by the other labeler. On the other hand, only about 30% of
the objects assigned label 4 and 10% of the objects assigned label 1 by one labeler
are missed by the other.
The confusion matrix clearly illustrates that there is considerable ambiguity in
small volcano identification, even among experts. Success for the task can only be
measured in a relative manner. To evaluate performance, we treat one set of labels
as ground truth and measure how well an algorithmic detector agrees with this set
of reference labels. In this paper, reference labels 1-4 are all considered to be true
volcanoes for the purpose of performance evaluation. An alternative "weighted"
performance metric is discussed in (Burl et al., 1994b). We also measure how well
human labelers agree with the reference labels. Ideally, an algorithm should provide
the same consistency with the reference labels as the human experts. A consensus
labeling generated by a group of geologists working together and discussing the
merits of each image feature is often used as the reference, since in general this
labeling will be a more faithful representation of the actual ground truth. A typical
consensus labeling is shown in Figure 5. From the geologists' point of view, it is a
useful achievement to detect most of the category 1's and 2's, as the category 3's
and 4's would not be used as part of a conservative scientific analysis.
2.3. Focus of Attention
The first component in the JARtool system is a focus of attention (FOA) algorithm
that is designed to take as input an image and produce as output a discrete list of
candidate volcano locations. In principle, every pixel in the image could be treated
as a candidate location; however, this is too expensive computationally. A better
approach is to use the FOA to quickly exclude areas that are void of any volcanoes.
Only local regions passing the FOA are given to subsequent (computationally more
expensive) processes. Hence, the FOA should operate in an aggressive, low-miss-
rate regime because any missed volcanoes at this stage will be lost for good. The
rate of false positives (false alarms) from the FOA is not critical; these may still be
rejected by later stages (classification).
Given the constraints of the FOA (low miss rate and low computational cost), a
reasonable approach is to use a matched filter, i.e., a linear filter that matches the
signal one is trying to find. The matched filter is optimal for detecting a known
signal in white (uncorrelated) Gaussian noise (Duda and Hart, 1973). Of course,
the volcano problem does not quite satisfy these underlying assumptions. Specifi-
cally, the set of observed volcanoes show structured variations due to size, type of
volcano, etc. rather than "isotropic" variations implicit with a signal plus white
noise model. Likewise, the clutter background cannot be properly modeled as white
noise. Despite these caveats, we have found empirically that the following modified
matched filtering procedure provides a reasonable focus of attention mechanism.
Let v_i denote the k \Theta k pixel region around the i-th training volcano. There is some
loss of information due to this windowing process (especially for larger volcanoes).
However, in our experiments, the results have not been particularly sensitive to the
exact value of k (Burl et al., 1996). This may indicate that
most of the information is concentrated at the center of the volcano (for example,
Figure
5. Consensus labeling of a Magellan SAR image of Venus. The labeling shows the size,
location, and subjective uncertainty of each image feature. The dashed box corresponds to the
subimage shown in Figure 2.
the transition in shading and presence of a summit pit) or that the matched filter is
not able to exploit the information from the periphery-both explanations probably
have merit.
Each k \Theta k region can be normalized with respect to the local average image
brightness (DC level) and contrast as follows:
\tilde{v}_i = (v_i - \mu_i 1) / \sigma_i,                                   (1)
where \mu_i is the mean of the pixels in v_i, \sigma_i is their standard deviation, and 1
is a k \Theta k matrix of ones. This normalization is essential since there are large
fluctuations in the DC and contrast between images and even between local areas
of the same image. A modified matched filter f is constructed by averaging the
normalized volcano examples from the training set. Figure 6 shows the resulting
filter.
Figure 6. The matched filter displayed as a template (left) and as a surface plot (right). The
matched filter captures many of the characteristics that planetary geologists report using when
manually locating volcanoes. In particular, there is a bright central spot corresponding to the
volcanic summit pit and left-to-right bright-dark shading.
Applying the matched filter to an image involves computing the normalized cross-correlation
of f with each k \Theta k image patch. The cross-correlation can be computed
efficiently using separable kernel methods to approximate the 2-D kernel f as a sum
of 1-D outer products (Treitel and Shanks, 1971). High response values indicate
that there is strong correlation between the filter and the image patch. Candidate
volcano locations are determined by thresholding the response values and spatially
aggregating any threshold crossings that are within a prescribed distance from each
other (default distance = 4 pixels).
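To make the FOA procedure concrete, the following sketch implements the steps described above: per-window normalization, construction of the modified matched filter as the mean of the normalized training windows, normalized cross-correlation against the image, and spatial aggregation of threshold crossings. It is a simplified illustration rather than the JARtool implementation; in particular it scans windows directly instead of using the separable-kernel approximation, and the function names and default values are our own.
import numpy as np

def normalize(patch):
    # DC- and contrast-normalize a k x k window (cf. Equation 1)
    return (patch - patch.mean()) / (patch.std() + 1e-9)

def build_filter(training_patches):
    # modified matched filter: mean of the normalized training volcanoes
    return np.mean([normalize(p) for p in training_patches], axis=0)

def focus_of_attention(image, f, threshold=0.35, min_sep=4):
    # return (score, row, col) candidates whose normalized cross-correlation
    # with the filter exceeds the threshold, keeping one per neighborhood
    k = f.shape[0]
    fu = normalize(f).ravel()
    candidates = []
    for r in range(image.shape[0] - k + 1):
        for c in range(image.shape[1] - k + 1):
            w = normalize(image[r:r + k, c:c + k]).ravel()
            score = float(np.dot(w, fu)) / len(fu)     # correlation in [-1, 1]
            if score > threshold:
                candidates.append((score, r + k // 2, c + k // 2))
    candidates.sort(reverse=True)                      # strongest responses first
    kept = []
    for s, r, c in candidates:
        if all(max(abs(r - r2), abs(c - c2)) > min_sep for _, r2, c2 in kept):
            kept.append((s, r, c))
    return kept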
Obviously one concern with such a simple FOA is that if the population of volcanoes
contains significant subclasses then a single filter would not be expected to
perform well. However, experiments with an alternative mechanism that uses clustering
to find several matched filters has provided only limited improvement (Stough
and Brodley, 1997).
2.4. Feature Extraction
A region of interest (ROI) identified by the focus of attention algorithm can be
viewed as a point in a k^2-dimensional space by stringing the k \Theta k pixel values
out into a long vector. Note, however, that there is a loss of spatial neighborhood
information. Algorithms that treat the data in this form will not explicitly know
that certain pixels were adjacent in the original image data. Also, given the small
number of training examples relative to the dimensionality of the data, there is little
hope of learning anything useful without additional constraints. Experimental results
with a variety of feedforward neural network classification models verified this
hypothesis (Baldi, 1994). The training data were often linearly separable in pixel
space resulting in an underconstrained training procedure that allowed the model
to essentially memorize the training data perfectly, but with poor generalization to
unseen data. Thus, direct use of the pixels as input to a classification algorithm is
not practical.
To work around the small number of training examples, we make use of the fact
that for visual data, there is additional prior information that helps constrain the
problem. Specifically, there is reason to believe that the volcanoes "live" on a low-dimensional
manifold embedded in k^2-dimensional space. Although the manifold
is certainly nonlinear, we make use of the principal components analysis (PCA)
paradigm to approximate the manifold with a low-dimensional hyperplane. This
approximation can be viewed as a mapping from the high-dimensional pixel space
to a lower dimensional feature space in which the features consist of linear combinations
of the pixel values. We have also experimented with clustering the training
data in pixel space and applying PCA separately to each cluster. This extension
yields an approximation to the manifold by a union of hyperplanes. (See Section 2.7
for additional discussion.)
Before presenting a more detailed view of the PCA approach, we remark that
PCA is not the only method available for linear feature extraction. The assumption
behind PCA is that it is important to find features that represent the data. Other
approaches, such as linear discriminant analysis (LDA), seek to find discriminative
features that separate the classes. In the context of finding volcanoes, however,
the "other" class is quite complex consisting of all patterns that are not volcanoes.
Direct application of LDA in pixel space leads to poor results.
Recently, a method was proposed that combines PCA and LDA to find "most discriminative
features" (Swets and Weng, 1996). In this approach, PCA is used on
the pooled set of examples (volcanoes and non-volcanoes) to project the pixel data
to a lower dimensional feature space. LDA methods are then applied in the projected
space. Effectively this amounts to using a "linear machine" classifier (Duda
and Hart, 1973) in the space of principal components features. In Section 3 we
demonstrate that by performing PCA on only the positive examples and allowing
more complex classifiers in PCA space, the JARtool algorithm is able to outperform
the method of Swets and Weng by a significant margin.
PCA can be summarized as follows. The goal is to find a q-dimensional subspace
such that the projected data is closest in L 2 norm (mean square error) to the
original data. This subspace is spanned by the eigenvectors of the data covariance
matrix having the highest corresponding eigenvalues. Often the full covariance
matrix cannot be reliably estimated from the number of examples available, but
the approximate highest eigenvalue basis vectors can be be computed using singular
value decomposition (SVD).
Each normalized training volcano is reshaped into a vector and placed as a column
in an n \Theta m matrix X, where n is the number of pixels in an ROI (n = k^2) and
m is the number of volcano examples. The SVD produces a factorization of X as
follows:
X = U S V^T                                   (2)
For notational convenience, we will assume m is less than n. Then in Equation 2,
U is an n \Theta m matrix such that U^T U = I_{m \Theta m}, S is an m \Theta m diagonal matrix with
the elements on the diagonal (the singular values) in descending order, and V is m \Theta m
with V^T V = V V^T = I_{m \Theta m}. Notice that any column of X (equivalently, any ROI)
can be written exactly as a linear combination of the columns of U . Furthermore,
if the singular values decay quickly enough, then the columns of X can be closely
approximated using linear combinations of only the first few columns of U . That
is, the first few columns of U serve as an approximate basis for the entire set of
examples in X .
Thus, the best q-dimensional subspace on which to project is spanned by the
first q columns of U . An ROI is projected into this q-dimensional feature space as
follows:
y_i = u_i^T x,   i = 1, ..., q                                   (3)
where x is the ROI reshaped as an n-dimensional vector of pixels, u_i is the i-th
column of U, and y is the q-dimensional vector of measured features.
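A minimal numpy sketch of this feature-extraction step is given below: the normalized training volcanoes are stacked as the columns of X, the SVD supplies the principal-component templates, and each ROI is projected onto the first q columns of U (Equations 2 and 3). The variable names and the default q = 6 are illustrative choices.
import numpy as np

def learn_pca_features(training_patches, q=6):
    # training_patches: list of m normalized k x k arrays; returns the n x q
    # matrix whose columns are the principal-component templates (n = k * k)
    X = np.column_stack([p.ravel() for p in training_patches])   # n x m
    U, s, Vt = np.linalg.svd(X, full_matrices=False)             # X = U diag(s) Vt
    return U[:, :q]

def project(roi, Uq):
    # map a k x k ROI to its q-dimensional feature vector (y_i = u_i^T x)
    return Uq.T @ roi.ravel()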
Figure
7-b shows the columns of U reshaped as ROIs. The templates are ordered
according to singular value so that the upper left template corresponds to the
maximum singular value. Notice that the first ten templates (top row) exhibit
structure while the remainder appear very random. This suggests using a subspace
of dimension - 10. The singular value decay shown in Figure 7-c also indicates that
6 to 10 features will be adequate to encode most of the information in the examples.
Indeed, parameter sensitivity experiments, which are reported in (Burl et al., 1996)
show that values of q in the range 4-15 yield similar overall performance.
2.5. Classification
The FOA and feature extraction steps transform the original Magellan images into a
discrete set of feature vectors that can be classified with "off-the-shelf" learning al-
gorithms. The remaining step is to classify ROIs into volcano or non-volcano. FOA
and feature learning are based exclusively on positive examples (volcanoes). The
classifier could also be trained in this manner. However, there are arguments (Fuku-
naga, 1990) showing that single-class classifiers are subject to considerable error
even in relatively low dimensions because the location of the "other" distribution
Figure
7. (a) The collection of volcanoes used for feature synthesis. (b) The principal components
derived from the examples. (c) The singular values indicate the importance of each of the features
for representing the examples.
is unknown. Experiments based on non-parametric density estimation of the volcano
class verified this hypothesis: the method gave poorer performance than the
two-class methods described below.
The negative examples were not used in the FOA and feature learning steps due
to the complexity of the non-volcano class. Nonetheless these steps provide substantial
conditioning of the data. For example, the FOA centers objects within a
k \Theta k window. The feature extraction step uses prior knowledge about visual data
(i.e., the fact that certain object classes can be modeled by linear combinations
of basis functions) to map the data to a lower-dimensional space in which there
is an improved opportunity for learning a model that generalizes to unseen data.
Hence, in the PCA space it is reasonable to use supervised two-class learning tech-
niques. We have experimented with a variety of algorithms including quadratic (or
Gaussian) classifiers, decision trees, linear discriminant analysis, nearest neighbors
using Euclidean and spatially weighted distance measures (Turmon, 1996), tangent
distance (Simard, le Cun, and Denker, 1993), kernel density estimation, Gaussian
mixture models, and feedforward neural networks (Asker and Maclin, 1997b;
Cherkauer, 1996).
All of these methods (with the exception of linear discriminant analysis) yielded
similar performance on an initial test set of images. We interpret this to mean
that the critical system design choices were already made, specifically in the feature
learning stage; the choice of classifier is of secondary importance. In the experiments
reported in Section 3, the quadratic classifier is used as the default since it is
optimal for Gaussian data and provides posterior probability estimates, which can
be thresholded to vary the trade-off between detection and false alarm rate. Letting
\omega_1 designate the volcano class and \omega_2 the non-volcano class, we have the following
from Bayes' rule:
P(\omega_i | y) = p(y | \omega_i) P(\omega_i) / ( p(y | \omega_1) P(\omega_1) + p(y | \omega_2) P(\omega_2) )
where y is the observed feature vector. For the quadratic classifier, the posterior
probabilities are estimated by assuming the class-conditional densities are Gaussian:
p(y | \omega_i) = (2\pi)^{-q/2} |\Sigma_i|^{-1/2} exp( -(1/2) (y - \mu_i)^T \Sigma_i^{-1} (y - \mu_i) )
where the statistics of each class (\mu_i and \Sigma_i) are estimated from labeled training
data.
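Such a quadratic classifier can be realized in a few lines, as sketched below: class means and covariances are estimated from the labeled feature vectors, and the posterior probability of the volcano class follows from Bayes' rule with Gaussian class-conditional densities. The empirical class priors and the small regularization added to the covariance are our own choices; thresholding the returned posterior at different values traces out a detection/false-alarm trade-off curve.
import numpy as np

class QuadraticClassifier:
    def fit(self, Y, labels):
        # Y: N x q array of feature vectors; labels: 1 = volcano, 0 = non-volcano
        Y, labels = np.asarray(Y), np.asarray(labels)
        self.params = {}
        for c in (0, 1):
            Yc = Y[labels == c]
            cov = np.cov(Yc, rowvar=False) + 1e-6 * np.eye(Y.shape[1])
            self.params[c] = (Yc.mean(axis=0), cov, len(Yc) / len(Y))
        return self

    def _log_joint(self, y, c):
        # log p(y | class c) + log P(class c) for a Gaussian class-conditional
        mu, cov, prior = self.params[c]
        d = y - mu
        _, logdet = np.linalg.slogdet(cov)
        return (-0.5 * (d @ np.linalg.solve(cov, d) + logdet
                        + len(y) * np.log(2 * np.pi)) + np.log(prior))

    def posterior_volcano(self, y):
        # P(volcano | y) via Bayes' rule, computed stably in the log domain
        l0, l1 = self._log_joint(y, 0), self._log_joint(y, 1)
        m = max(l0, l1)
        return np.exp(l1 - m) / (np.exp(l0 - m) + np.exp(l1 - m))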
2.6. Summary of the Training Procedure
In summary, training consists of a three-step process based on the geologist-labeled
images:
1. Construct the FOA detection filter from the volcanoes labeled in the training
images. Apply the FOA to the training images and then use the "ground truth"
labels to mark each candidate ROI as a volcano or non-volcano.
2. Determine principal component directions from the ROIs that were detected in
step 1 and marked as volcanoes.
3. Estimate the parameters of a classifier from the labeled feature vectors obtained
by projecting all of the training data detected in step one onto the PCA templates
of step two. ROIs marked as true volcanoes in step one serve as the
positive examples, while ROIs marked as non-volcanoes serve as the negative
examples.
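Schematically, the three training steps can be chained as in the sketch below, which reuses the helper functions sketched earlier in this section (normalize, build_filter, focus_of_attention, learn_pca_features, project, QuadraticClassifier); the glue code, the proximity rule for matching ROIs to ground-truth labels, and the border handling (windows that fall off the image are simply discarded) are our own simplifications.
import numpy as np

def train_jartool_like(images, ground_truth, k=15, q=6):
    # images: list of 2-D arrays; ground_truth: per-image list of (row, col)
    # volcano centers.  Returns (matched filter, PCA basis, classifier).
    h = k // 2
    def window(img, r, c):
        w = img[r - h:r + h + 1, c - h:c + h + 1]
        return w if w.shape == (k, k) else None          # drop border windows
    # step 1: build the FOA filter from the labeled volcanoes
    patches = [w for img, gt in zip(images, ground_truth)
               for (r, c) in gt if (w := window(img, r, c)) is not None]
    f = build_filter(patches)
    # run the FOA and mark each candidate ROI as volcano / non-volcano
    rois, labels = [], []
    for img, gt in zip(images, ground_truth):
        for _, r, c in focus_of_attention(img, f):
            w = window(img, r, c)
            if w is None:
                continue
            hit = any(max(abs(r - rv), abs(c - cv)) <= 4 for rv, cv in gt)
            rois.append(w); labels.append(int(hit))
    labels = np.array(labels)
    # step 2: PCA basis from the detected true volcanoes
    Uq = learn_pca_features([normalize(rois[i]) for i in np.flatnonzero(labels)], q)
    # step 3: classifier on the projected features
    Y = np.array([project(normalize(w), Uq) for w in rois])
    return f, Uq, QuadraticClassifier().fit(Y, labels)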
Figure
8. Example volcanoes from four different clusters and their respective cluster centers. Each
row represents a sample of volcanoes that have been clustered together using K-means.
Comment: This training procedure contains some non-idealities. For example, the
positive examples supplied to the classifier are the same examples used to derive the
features (principal component directions). It would clearly be better if the classifier
were to receive a disjoint set of positive training examples, but given the limited
number of examples, we compromised on the procedure described above.
2.7. Extension to the Basic Algorithm
One objection to the baseline approach presented thus far is that there are various
subtypes of volcanoes, each with unique visual characteristics. One would not
expect the (approximate) hyperplane assumption implicit in the PCA approach to
hold across different volcano subtypes. This limitation could affect the algorithm's
ability to generalize across different regions of the planet, and in fact in the experiments
reported later (Section 3), we have observed that the baseline system
performs significantly worse on heterogeneous sets of images selected from various
areas of the planet.
One solution we investigated involves using a combination of classifiers in which
each classifier is trained to detect a different volcano subclass. The outputs from
all the classifiers are then combined to produce a final classification. Subclasses
of volcanoes are found automatically by clustering the raw pixel representation
of the volcanoes in the training set using k-means (Duda and Hart, 1973). In
Figure
8 we show the results of clustering the volcanoes into four classes. Each row
corresponds to a cluster; the first column shows the cluster center, while the other
columns show selected instances. For each cluster, principal components analysis is
performed separately yielding a set of features (basis functions) specific to a subclass
of volcanoes. A classifier is then trained for each subclass, and in the final step the
predictions of all the classifiers are combined into one. Details of the method for
combining classifiers are given in (Asker and Maclin, 1997a). Experimental results
comparing the combined classifier approach with the baseline are given in Section 3.
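A condensed sketch of this extension follows: the normalized training volcanoes are clustered with k-means in pixel space, a separate PCA basis and classifier are fit per cluster (reusing the helpers sketched earlier), and at detection time the per-cluster posteriors are combined. Taking the maximum posterior is used here only as a placeholder for the combination rule of Asker and Maclin (1997a).
import numpy as np

def kmeans(X, n_clusters=4, n_iter=50, seed=0):
    # plain k-means on the rows of X; returns assignments and centers
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), n_clusters, replace=False)]
    for _ in range(n_iter):
        d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        assign = d.argmin(axis=1)
        centers = np.array([X[assign == c].mean(axis=0) if np.any(assign == c)
                            else centers[c] for c in range(n_clusters)])
    return assign, centers

def fit_cluster_experts(volcano_patches, rois, labels, q=6, n_clusters=4):
    # one (PCA basis, classifier) pair per volcano subclass
    V = np.array([normalize(p).ravel() for p in volcano_patches])
    assign, _ = kmeans(V, n_clusters)
    experts = []
    for c in range(n_clusters):
        members = [volcano_patches[i] for i in np.flatnonzero(assign == c)]
        Uq = learn_pca_features([normalize(p) for p in members], q)
        Y = np.array([project(normalize(r), Uq) for r in rois])
        experts.append((Uq, QuadraticClassifier().fit(Y, labels)))
    return experts

def combined_posterior(roi, experts):
    # placeholder combination rule: the most confident expert wins
    return max(clf.posterior_volcano(project(normalize(roi), Uq))
               for Uq, clf in experts)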
3. Performance Evaluation
Initial experiments were conducted using a small set of images called HOM4 (denoting
a set of four images which were relatively homogeneous in appearance). The results
from these experiments were used to provide feedback in the algorithm development
process and also served to fix the values of miscellaneous parameters such as the
ROI window size, FOA threshold, number of principal components, and so forth.
Because of this feedback, however, performance on HOM4 cannot be considered as a
fair test of the system (since in effect one is training on the test data). In addition,
HOM4 did not include enough images to provide a thorough characterization of the
system performance and generalization ability.
After these initial experiments, the algorithm and all the miscellaneous parameters
were frozen at specific values (listed in the Appendix). Based on empirical
sensitivity studies (Burl et al., 1996), we believe the system is relatively insensitive
to the exact values of these parameters. Note that "freezing" does not apply to
parameters normally derived during learning such as the matched filter, principal
components, or statistics used by the classifier. These are recalculated for each
experiment from the stated set of training examples.
Extensive tests were conducted on three large image sets (HOM38, HOM56, HET36).
The naming convention for the image sets is to use HOM if the set is considered
homogeneous (images from the same local region) and HET if the set is heterogeneous
(images selected from various locations). The numerical suffix indicates the number
of images in the data set. Note that the smallest of these datasets covers an area
of 450km \Theta 450km.
A summary of the experiments and image sets is given in Table 2. The number
of volcanoes listed corresponds to the number of label events in the "ground-truth"
reference list, i.e., each label event is counted as a volcano regardless of the assigned
confidence. The main conclusion from these tests was that the baseline system
performed well on homogeneous sets in which all images were taken from the
same region of the planet, but performed poorly on heterogeneous sets in which
images were selected randomly from various locations on the planet.
To better understand this difference in performance, we conducted a follow-up
experiment using a small set of heterogeneous images HET5. Our initial hypothesis
was that the discrepancy occurred because the volcanoes from different regions
looked different. However, what we found was that "knowing the local volcanoes"
was not nearly as important as knowing the local non-volcanoes. The argument
used to arrive at this conclusion is somewhat subtle, but is explained in detail in
Section 3.4.
Table
2. Experiments and image sets used to evaluate system performance.
Experiment            Image Set   #Volcanoes   Description
Initial Testing       HOM4        160          4 images from lat
Extended Testing      HOM38       480          38 images from lat
                      HOM56       230          56 images from lat
                      HET36       670          36 images from various locations
Follow-up             HET5        131          5 images from various locations
3.1. ROC and FROC
As explained in Section 2.2, we evaluate performance by measuring how well a
detector (algorithmic or human) agrees with a set of reference labels. A "detection"
occurs if the algorithm/human indicates the presence of an object at a location
where a volcano exists according to the reference list. Similarly, a "false alarm"
occurs if the algorithm/human indicates the presence of an object at a location
where a volcano does not exist according to the reference list. Consider a system
which produces a scalar quantity indicating detection confidence (e.g., the estimated
posterior class probability). By comparing this scalar to a fixed threshold, one can
estimate the number of detections and false alarms for that particular threshold. By
varying the threshold one can estimate a sequence of detection/false-alarm points.
The resulting curve is known as the receiver operating characteristic (ROC) curve
(Green and Swets, 1966; MacMillan and Creelman, 1991; Spackman, 1989; Provost
and Fawcett, 1997).
The usual ROC curve plots the probability of detection versus the probability of
false alarm. The probability of detection can be estimated by dividing the number of
detections by the number of objects in the reference list. Estimating the probability
of false alarm, however, is problematic since the number of possible false alarms
in an image is not well-defined. A practical alternative is to use a "free-response"
ROC (Chakraborty and Winter, 1990), which shows the probability of
detection versus the number of false alarms (often normalized per image or per
unit area). The FROC methodology is used in all of the experiments reported in this
paper; in particular, the x-axis corresponds to the number of false alarms per square
kilometer.
The FROC shares many of the properties of the standard ROC 2 . For example, the
best possible performance is in the upper left corner of the plot so an FROC curve
that is everywhere above and to the left of another has better performance. The
FROC curve is implicitly parameterized by the decision threshold, but in practice
the geologist would fix this threshold thereby choosing a particular operating point
on the curve.
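For reference, the FROC curves used in the remainder of this section can be computed as in the following sketch: scored detections are matched to the reference list by proximity, and sweeping down the score ranking yields pairs of (false alarms per km^2, detection rate). The matching radius and the area normalization are assumptions made for the illustration.
import numpy as np

def froc(detections, reference, area_km2, max_dist=4.0):
    # detections: list of (score, row, col); reference: list of (row, col)
    # returns arrays (false alarms per km^2, detection rate), one point per
    # detection as the implicit score threshold is lowered
    matched = [False] * len(reference)
    hits = fas = 0
    fa_per_km2, det_rate = [], []
    for score, r, c in sorted(detections, reverse=True):
        best, best_d = None, max_dist
        for i, (rr, cc) in enumerate(reference):
            d = np.hypot(r - rr, c - cc)
            if not matched[i] and d <= best_d:
                best, best_d = i, d
        if best is None:
            fas += 1
        else:
            matched[best] = True
            hits += 1
        fa_per_km2.append(fas / area_km2)
        det_rate.append(hits / max(len(reference), 1))
    return np.array(fa_per_km2), np.array(det_rate)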
3.2. Initial Experiments
Experiments on HOM4 were performed using a generalized form of cross-validation
in which three images were used for training and the remaining image was reserved
for testing; the process was repeated four times so that each image served once as
the test image. This type of training-testing procedure is common in image analysis
problems (Kubat, Holte, and Matwin, 1998).
The system output was scored relative to the consensus labeling with all subjective
confidence categories treated as true volcanoes. The FROC performance curve is
shown in Figure 9a. The horizontal dashed line across the top of the figure (labeled
FOA=0.35) shows the best possible system performance using an FOA threshold of
0.35. (The line is not at 100% because the FOA misses some of the true volcanoes.)
The performance points of two individual geologists are also shown in the figure.
Geologist A is shown with the '*' symbol, while geologist B is shown with the '+'.
Note that for these images the system performance (at an appropriately chosen
operating point) is quite close to that of the individual geologists. The effect of
using different operating points is shown in table form in Figure 10a.
3.3. Extended Performance Evaluation
3.3.1. Homogeneous Images Given the encouraging results on HOM4, we proceeded
to test the system on larger images sets. The HOM4 images were part of a 7
\Theta 8 block of images comprising a full-resolution Magellan "data product." Within
this block 14 images were blank due to a gap in the Magellan data acquisition
process. The remaining 38 (56 minus 4 minus 14) images were designated as image
set HOM38. Training and testing were performed using generalized cross-validation
in which the set of images was partitioned into six groups or "folds." Two of the
images did not contain any positive examples, so these were used only for training.
The other 36 images were partitioned randomly into six groups of six with the
constraint that each group should have approximately the same number of positive
examples. Five folds were used for training and the remaining fold was used for
testing; the process was repeated so that each fold served once as the test set. This
leave-out-fold method was used rather than leave-out-image to reduce the run time.
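One simple way to realize this fold construction (a random partition into six groups with roughly equal numbers of positive examples per group) is the greedy assignment sketched below; the greedy rule is our own choice and not necessarily the procedure used in the original experiments.
import numpy as np

def make_folds(n_positives_per_image, n_folds=6, seed=0):
    # greedy partition of images into folds with roughly balanced numbers of
    # positive examples; images with zero positives can be appended to every
    # training set afterwards
    rng = np.random.default_rng(seed)
    order = list(rng.permutation(len(n_positives_per_image)))
    order.sort(key=lambda i: -n_positives_per_image[i])   # largest first
    folds = [[] for _ in range(n_folds)]
    totals = [0] * n_folds
    for i in order:
        f = int(np.argmin(totals))                        # currently lightest fold
        folds[f].append(int(i))
        totals[f] += n_positives_per_image[i]
    return folds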
The FROC performance is shown in Figure 9b (solid line). Since we did not
have consensus labeling available for the entire image set, the labels of geologist A
were used as the reference. The '+' symbol shows the performance of geologist B
(relative to A), while the 'o' symbol shows the performance of one of the non-
geologist authors (Burl). The performance of the algorithm is similar to the HOM4
case except at higher false alarm rates where the HOM38 performance is lower by
approximately 12%. The discrepancy is probably due to differences in the FOA
performance. Note that for HOM4 the FOA asymptote is around 94%, while it is
only at 83% for HOM38.
For comparison FROC curves are plotted for two other methods. The dashed
curve labeled "FOA" shows the performance that could be achieved by using
only a matched filter but with a lower threshold. The combination of matched
filter and classification yields better performance than the matched filter alone.
(Matched filtering was proposed as a possible solution to the volcano-detection
problem in (Wiles and Forshaw, 1993)). Also shown is the FROC for the discriminative
Karhunen-Loeve (DKL) approach (Swets and Weng, 1996), which combines
principal components and linear discriminant analysis. Observe that the JARtool
approach provides significantly better performance (an increase in detection rate
by 10 percentage points or more for a given false alarm rate).
For the HOM38 experiments, the training images and test images were geographically
close together. To test the system's generalization ability, another experiment
was performed in which training was carried out on HOM4+HOM38 and testing was
carried out on a geographically distinct set of homogeneous images HOM56. The
images were from the same latitude as the training images and visually appeared
to have similar terrain and volcano types. For this data set, reference labels
were provided by one of the non-geologist authors (Burl).
The baseline performance is shown as a solid curve in Figure 9c. The clustering
extension to the baseline algorithm was also applied to the data. The corresponding
FROC is shown with the dashed curve. The clustering approach appears to provide
a slight improvement over the baseline algorithm, consistent with other results
reported in (Asker and Maclin, 1997a). However, the baseline algorithm is still used
in the fielded system because of its simplicity and shorter training time. (These
factors are believed to outweigh the marginal loss in performance.)
3.3.2. Heterogeneous Images Finally, the system was evaluated under the most
difficult conditions. A set of 36 images (HET36) was selected from random locations
on the planet. These images contained significantly greater variety in appear-
ance, noisiness, and scale than the previous image sets. Training was done on
HOM4+HOM38. The system FROC performance (relative to consensus) is shown in
Figure 9d, and the performance at selected operating points is shown in Figure 10d.
Here the classifier performs much worse than on the more homogeneous data sets.
For example at 0.001 false alarms/km 2 the detection performance is in the 35-40%
range whereas for all homogeneous image sets, the detection rates were consistently
in the 50-65% range. For the few images where we also have individual labels, the
geologists' detection performance is roughly the same as it was on the homogeneous
images. From these results it appears that human labelers are much more robust
with respect to image inhomogeneity.
3.4. Follow-Up Analysis
To better understand the decreased performance on heterogeneous images, we conducted
follow-up experiments on a smaller set of images (HET5). Performance on
this set was also poor, and our initial hypothesis was that the degradation occurred
because the volcanoes were somehow different from image to image. To investigate
this possibility, we performed experiments with two different training paradigms:
(1) cross-validation in which one image was left out of the training set and (2)
cross-validation in which one example was left out of the training set. The first
method will be referred to as LOI for "leave-out-image"; the second method will
be referred to as LOX for "leave-out-example."
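The two paradigms can be stated precisely as index-set generators over a table of examples tagged with their source image, as in the sketch below; the data layout is an assumption made for the illustration.
def loi_splits(image_ids):
    # leave-out-image: all examples from one image form the test set
    for img in sorted(set(image_ids)):
        test = [i for i, im in enumerate(image_ids) if im == img]
        train = [i for i, im in enumerate(image_ids) if im != img]
        yield train, test

def lox_splits(image_ids):
    # leave-out-example: one example is held out at a time, so the classifier
    # still sees other examples (including non-volcanoes) from the same image
    n = len(image_ids)
    for j in range(n):
        yield [i for i in range(n) if i != j], [j]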
Figure 9. FROC curves showing the performance of the baseline algorithm on four image sets:
(a) HOM4, (b) HOM38, (c) HOM56, (d) HET36. Each panel shows the trade-off between detection
rate and the number of false alarms per area. Note that the algorithm performs considerably better
on the homogeneous image sets (a,b,c) than on the heterogeneous set (d). The discrete symbols
(*,+,O) in (a) and (b) show the performance of human labelers.
Two nearest-neighbor classification algorithms were evaluated in addition to the
baseline Gaussian classifier. The nearest-neighbor algorithms were applied directly
to the pixel-space regions of interest (ROIs) identified by the FOA algo-
rithm. To allow for some jitter in the alignment between ROIs, we used the peak
cross-correlation value over a small spatial window as the similarity measure. One
nearest-neighbor algorithm was the standard two-class type in which an unknown
(a) HOM4
Operating point:              OP1    OP2    OP3    OP4    OP5    OP6
Threshold:                    0.75   0.80   0.85   0.90   0.95   0.99
Detected Category 1 (%)       88.9   88.9   86.1   80.6   72.2   63.9
Detected Category 2 (%)       89.7   89.7   86.2   79.3   72.4   65.5
Detected Volcanoes (%)        82.2   81.0   79.8   74.9   69.3   62.6
False Alarms per image        19.5   18.5   15.0   13.0    9.5    6.0
False Alarms per 10^4 km^2
(b) HOM38
Operating point:              OP1    OP2    OP3    OP4    OP5    OP6
Threshold:                    0.75   0.80   0.85   0.90   0.95   0.99
Detected Category 1 (%)       92.0   92.0   88.0   84.0   84.0   76.0
Detected Category 2 (%)       80.8   78.2   75.6   71.8   65.4   50.0
Detected Volcanoes (%)        68.0   65.0   63.5   60.3   54.6   48.4
False Alarms per image        10.6    8.8    7.3    5.8    3.9    2.1
False Alarms per 10^4 km^2
(c) HOM56
Operating point:              OP1    OP2    OP3    OP4    OP5    OP6
Threshold:                    0.75   0.80   0.85   0.90   0.95   0.99
Detected Category 1 (%)      100.0  100.0   91.7   91.7   91.7   91.7
Detected Category 2 (%)       84.2   84.2   81.6   79.0   79.0   60.5
Detected Volcanoes (%)        79.0   77.7   75.1   73.4   70.4   63.1
False Alarms per image        42.8   35.5   28.7   21.4   13.2    5.0
False Alarms per 10^4 km^2
(d) HET36
Operating point:              OP1    OP2    OP3    OP4    OP5    OP6
Threshold:                    0.75   0.80   0.85   0.90   0.95   0.99
Detected Category 1 (%)       90.3   87.1   87.1   83.9   79.0   64.5
Detected Category 2 (%)       84.6   82.4   80.2   78.7   75.0   62.5
Detected Volcanoes (%)        74.1   72.0   69.8   66.2   60.9   47.8
False Alarms per image        50.2   43.7   37.1   29.7   20.5   10.6
False Alarms per 10^4 km^2    89.2   77.6   65.9   52.7   36.4   18.8
Figure
10. Performance of the baseline system at various operating points along the FROC curve.
Figure 11. Performance results of svd-gauss (the baseline), 1-class nearest neighbor, and 2-class
nearest-neighbor algorithms under the leave-out-image (LOI) and leave-out-example (LOX) training
paradigms; each panel plots detection rate versus false alarms, with curves labeled baseline - tts,
baseline - loi, 1-class nn - lox, 1-class nn - loi, 2-class nn - lox, and 2-class nn - loi. (a) Results on
a set of four homogeneous images from one area of the planet. (b) Results on a set of five
heterogeneous images selected from different areas of the planet. Refer to the text for an
interpretation of the results.
test example is assigned to the same class as its nearest neighbor in the reference
library. The other was a one-class version in which the reference library contains
only positive examples (volcanoes); an unknown example is assigned to the volcano
class if it is similar enough to some member of the reference library.
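These nearest-neighbor classifiers can be sketched as follows: similarity between two ROIs is taken as the peak normalized cross-correlation over a small window of relative shifts, the two-class variant copies the label of the most similar reference ROI, and the one-class variant accepts an ROI as a volcano if its best match to any reference volcano exceeds a threshold. The shift range, the acceptance threshold, and the function names are illustrative assumptions.
import numpy as np

def peak_xcorr(a, b, max_shift=2):
    # peak normalized cross-correlation between two equally sized ROIs over a
    # small range of relative shifts, to tolerate alignment jitter
    best = -1.0
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            aa = a[max(dr, 0):a.shape[0] + min(dr, 0), max(dc, 0):a.shape[1] + min(dc, 0)]
            bb = b[max(-dr, 0):b.shape[0] + min(-dr, 0), max(-dc, 0):b.shape[1] + min(-dc, 0)]
            av = (aa - aa.mean()).ravel()
            bv = (bb - bb.mean()).ravel()
            denom = np.linalg.norm(av) * np.linalg.norm(bv)
            if denom > 0:
                best = max(best, float(av @ bv) / denom)
    return best

def nn2_classify(roi, library):
    # two-class nearest neighbor: library is a list of (roi, label), label 0 or 1
    return max(library, key=lambda e: peak_xcorr(roi, e[0]))[1]

def nn1_classify(roi, volcano_library, threshold=0.6):
    # one-class variant: accept as volcano if similar enough to any known volcano
    return int(max(peak_xcorr(roi, v) for v in volcano_library) >= threshold)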
Performance was evaluated on both the HOM4 and HET5 datasets using the three
classifiers (baseline, 1-class nearest neighbor, and 2-class nearest neighbor) and
two training paradigms (LOI and LOX). The results are shown in Figure 11. For
computational reasons, the baseline method was trained and tested on the same
data (TTS) rather than leaving out an example. The effect of including the test
example in the training set is minimal in this case since one example has little effect
on the class-conditional mean and covariance estimates.
The following is the key observation: on HET5 the baseline and 2-class nearest-neighbor
algorithms work significantly better under the LOX training paradigm
than under the LOI training paradigm; however, the 1-class algorithm works the
same under both training paradigms. If having knowledge about the local volcanoes
were the critical factor, the 1-class algorithm should have worked significantly better
under LOX than under LOI. Instead we conclude that access to the local non-
volcanoes is the critical factor. The 1-class algorithm completely ignores the non-
volcanoes and hence does not show any difference between LOI and LOX. The other
methods do use the non-volcanoes, and these show a dramatic improvement under
LOX.
On HOM4 there is little difference between the LOI and LOX results. Since these
images are from the same area of the planet, the appearance of the non-volcanoes
is similar from image to image. Thus, leaving out one example or leaving out one
image from the training set does not have much effect. The non-volcanoes in HET5
and other heterogeneous image sets vary considerably from image to image and
this may be the source of the degradation in performance (the training data is
inadequate for learning the non-volcano distribution).
4. Project Status
Participating in the development of the JARtool system were two planetary geologists
(Aubele and Crumpler) who were members of the Volcanism Working Group
of the Magellan Science team and principal investigators in NASA's Planetary Mapping
and Venus Data Analysis Programs.
The geologists have been evaluating the JARtool approach both in terms of the
scientific content provided by the analyzed images and as a tool to aid in further
cataloging. From the planetary geologists' point of view, the primary goal was to
achieve annotation of 80% or more of the small volcanoes in the analyzed datasets.
A secondary goal was to obtain accurate diameter estimates for each volcano. Locating
different morphologic types of small volcanoes was also of interest. However,
it was recognized up front that some of the types would be easy to detect and some
would be difficult (both for human experts and for algorithms). To the geologists,
the system should be considered a success if it detects a high percentage of the
"easy" volcanoes (category 1 and 2). Our test results indicate that this level of
performance is achieved on homogeneous image sets. However, we have not succeeded
in developing a reliable method for measuring volcano diameters. Hence,
sizing capability is not included in the fielded system.
Our experiments and those of the scientists have indicated that the choice of
operating point will vary across different areas of the planet, dependent on factors
such as terrain type and local volcano distributions. Hence, the operating point is
left "open" for the scientists to choose. Although the original intent was for the
JARtool system to provide a fully-automated cataloging tool, it appears that the
system will be most useful as an "intelligent assistant" that is used in an interactive
manner by the geologists.
The capabilities of the system were recently expanded through integration of the
Postgres database (Stonebraker and Kemnitz, 1991). A custom query tool supports
arbitrary SQL queries as well as a set of common "pre-canned" queries. JARtool is
also being evaluated for use in other problem domains. Researchers or scientists who
are interested in the software can direct inquiries to [email protected].
5. Lessons Learned and Future Directions
Real-world applications of machine learning tend to expose practical issues which
otherwise would go unnoticed. We share here a list of "lessons learned" which can
be viewed as a "signpost" of potential dangers to machine learning practitioners. In
addition, for each "lesson learned" we discuss briefly related research opportunities
in machine learning and, thus, provide input on what topics in machine learning research
are most likely to have practical impact for these types of large-scale projects
in the future.
1. Training and testing classifiers is often only a very small fraction of the project
effort. On this project, certainly less than 20%, perhaps as little as 10% effort
was spent on classification aspects of the problem. This phenomenon has been
documented before across a variety of applications (Langley and Simon, 1994;
Brodley and Smyth, 1997). Yet, this is in sharp contrast to the level of effort
spent on classification algorithms in machine learning research, which has traditionally
focused heavily on the development of classification algorithms. One
implication is that classification technology is relatively mature and it is time
for machine learning researchers to address the "larger picture." A difficulty
with this scenario is that these "big picture" issues (some of which are discussed
below) can be awkward to formalize.
2. A large fraction of the project effort (certainly at least 30%) was spent on "feature
engineering," i.e., trying to find an effective representation space in which to
apply the classification algorithms. This is a very common problem in applications
involving sensor-level data, such as images and time-series. Unfortunately,
there are few principled design guidelines for "feature engineering" leading to
much trial-and-error in the design process. Commonly used approaches in machine
learning and pattern recognition are linear projection techniques (such as
PCA) and feature selection methods. Non-linear projection techniques can be
useful but are typically computationally complex. A significant general problem
is the branching factor in the search space for possible feature representations.
There are numerous open problems and opportunities in the development of
novel methods and systematic algorithms for feature extraction. In particular,
there is a need for robust feature extraction techniques which can span non-standard
data types, including mixed discrete and real-valued data, time series
and sequence data, and spatial data.
3. Real-world classification systems tend to be composed of multiple components,
each with their own parameters, making overall system optimization difficult if
not impossible given finite training sets. For JARtool, there were parameters
associated with FOA, feature extraction, and classification. Joint optimization
of all of these parameters was practically impossible. As a result many parameters
(such as the window size for focus of attention) were set based on univariate
sensitivity tests (varying one parameter while keeping all others fixed at reasonable
values). Closer coupling of machine learning algorithms and optimization
methods would appear to have significant potential payoffs in this context.
4. In many applications classification labels are often supplied by experts and may
be much noisier than initially expected. At the start of the volcano project, we
believed the geologists would simply tell us where all the volcanoes were in the
training images. Once we framed the problem in an ROC context and realized
that the resolution of the images and other factors led to inherent ambiguity in
volcano identification, we began to understand the noisy, subjective nature of
the labeling process. In fact, the geologists were also given cause to revise their
opinions on the reliability of published catalogs. As real-world data sets continue
to grow in size, one can anticipate that the fraction of data which is accurately
labeled will shrink dramatically (this is already true for many large text, speech,
and image databases). Research areas such as coupling unsupervised learning
with supervised learning, cognitive models for subjective labeling, and active
learning to optimally select which examples to label next, would appear to be
ripe for large-scale application.
5. In applying learning algorithms to image analysis problems, spatial context is
an important factor and can considerably complicate algorithm development
and evaluation. For example, in testing our system we gradually realized that
there were large-scale spatial effects on volcano appearance, and that training
a model on one part of the planet could lead to poor performance elsewhere.
Conversely, evaluating model performance on images which are spatially close
can lead to over-optimistic estimates of system performance. These issues might
seem trivial in a machine learning context where independence of training and
test sets is a common mantra, yet the problem is subtle in a spatial context
(How far away does one have to go spatially to get independent data?) and
widely ignored in published evaluations of systems in the image analysis and
computer vision communities. Thus, there is a need to generalize techniques
such as cross-validation, bootstrap, and test-set evaluation, to data sources
which exhibit dependencies (such as images and sequences).
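To act on this last lesson in practice, ordinary cross-validation can be generalized so that entire images (or spatial blocks) are held out together, which keeps spatially adjacent examples from appearing in both training and test sets. The Python sketch below is a generic grouped-holdout loop of our own devising; train_and_score stands in for whatever fit-and-evaluate routine is used and is a placeholder, not a JARtool function.

import numpy as np

def leave_one_group_out(features, labels, group_ids, train_and_score):
    # Hold out all examples belonging to one group (e.g., one image) at a time.
    # train_and_score(train_X, train_y, test_X, test_y) -> score is supplied by
    # the caller; features, labels, and group_ids are parallel numpy arrays.
    scores = []
    for g in np.unique(group_ids):
        test = (group_ids == g)
        scores.append(train_and_score(features[~test], labels[~test],
                                       features[test], labels[test]))
    return float(np.mean(scores))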
A common theme emerging from the above "lessons" is that there is a need for
a systems viewpoint towards large-scale learning applications. For example, in
retrospect, it would have been extremely useful to have had an integrated software
infrastructure to support data labeling and annotation, design and reporting of
experiments, visualization, classification algorithm application, and database support
for image retrieval. (For JARtool development, most of these functions were
carried out within relatively independent software environments such as standalone
C programs, Unix shell scripts, MATLAB, SAOimage, and so forth). Development
of such an integrated infrastructure would have taken far more resources than were
available for this project, yet it is very clear that such an integrated system to support
application development would have enabled a much more rapid development
of JARtool.
More generally, with the advent of "massive" data sets across a variety of
disciplines, it behooves machine learning researchers to pay close attention to overall
systems issues. How are the data stored, accessed, and annotated? Can one develop
general-purpose techniques for extracting feature representations from "low-level"
data? How can one best harness the prior knowledge of domain experts? How can
success be defined and quantified in a way which matches the user's needs?
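On the second of these questions, the most common baseline for extracting feature representations from low-level image data is a linear projection such as PCA, as already noted in lesson 2 above. The sketch below is a minimal illustration of our own; the 15-by-15 window size, the choice of six components, and the random data are assumptions made purely for demonstration.

import numpy as np

def pca_features(windows, k):
    # Flatten each window, center the data, and project onto the top-k
    # principal directions obtained from an SVD of the centered matrix.
    X = windows.reshape(len(windows), -1).astype(float)
    mean = X.mean(axis=0)
    Xc = X - mean
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:k]                  # shape (k, number_of_pixels)
    return Xc @ components.T, mean, components

# Illustrative usage: 200 random 15x15 "chips" reduced to 6 features each.
chips = np.random.rand(200, 15, 15)
features, mean, components = pca_features(chips, k=6)
print(features.shape)                    # (200, 6)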
6. Conclusion
The Magellan image database is a prime example of the type of dataset that motivates
the application of machine learning techniques to real-world problems. The
absence of labeled training data and predefined features imposes a significant challenge
to "off the shelf" machine learning algorithms.
JARtool is a learning-based system which was developed to aid geologists in cataloging
the estimated one million small volcanoes in this dataset. The system is
trained for the specific volcano-detection task through examples provided by the
geologists. Experimental results show that the system approaches human performance
on homogeneous image sets but performs relatively poorly on heterogeneous
sets in which images are selected randomly from different areas of the planet. The
effect on system performance of a particular classification algorithm was found to
be of secondary importance compared to the feature extraction problem.
Acknowledgments
The research described in this article has been carried out in part by the Jet Propulsion
Laboratory, California Institute of Technology, under contract with the National
Aeronautics and Space Administration. Support was provided by the NASA
Office of Advanced Concepts and Technology (OACT - Code CT), a JPL DDF
award, NSF research initiation grant IRI 9211651, and a grant from the Swedish
Foundation for International Cooperation in Research and Higher Education (Lars
Asker).
We would like to thank Michael Turmon for his help and for performing some
of the experiments. We would also like to thank Saleem Mukhtar, Maureen Burl,
and Joe Roden for their help in developing the software and user-interfaces. The
JARtool graphical user interface is built on top of the SAOtng image analysis
package developed at the Smithsonian Astrophysical Observatory (Mendel et al., 1997).
Notes
1. The nominal pixel spacing in the highest resolution Magellan data products is 75m, but this
image was resized slightly.
2. One difference is that the area under an FROC curve cannot be interpreted in the same way
as for a true ROC curve.
--R
Small domes on Venus: characteristics and origins.
Personal communication.
Applying classification algorithms in practice.
Trainable cataloging for digital image libraries with applications to volcano detection.
Personal communication.
Human visual perception and ROC methodology in medical imaging.
Experts in Uncertainty.
Detection of circular geological features using the Hough transform.
Volcanoes and centers of volcanism on Venus.
Pattern Classification and Scene Analysis.
A learning approach to object recognition: applications in science image analysis.
Query by image and video content - the QBIC system
Introduction to Statistical Pattern Recognition.
Signal Detection Theory and Psychophysics.
Small volcanic edifices and volcanism in the plains of Venus.
Machine learning for the detection of oil spills in satellite radar images.
Applications of machine learning and rule induction.
Signal Detection Theory: A User's Guide.
SAOimage: the next generation.
Maximum likelihood detection of faces and hands.
Magellan: Radar Performance and Data Products.
Introduction to the special section on digital libraries: representation and retrieval
Behavioral Decision Theory: A New Approach.
Analysis and visualization of classifier performance: comparison under imprecise class and cost distributions.
Remote Sensing for Digital Image Analysis.
Magellan Mission Summary.
Efficient pattern recognition using a new transformation distance.
Low dimensional procedure for the characterization of human faces.
The Hough transform applied to SAR images for thin line detection.
Signal detection theory: valuable tools for evaluating inductive learning.
Global distribution and characteristics of coronae and related features on Venus - Implications for origin and relation to mantle processes
The POSTGRES next generation database management system
Image feature reduction through spoiling: its application to multiple matched filters for focus of attention.
Using discriminant eigenfeatures for image retrieval.
The design of multistage separable planar filters.
Eigenfaces for recognition.
Personal communication.
Recognition of volcanoes using correlation methods.
--TR
The Hough transform applied to SAR images for thin line detection
Introduction to statistical pattern recognition (2nd ed.)
Signal detection theory: valuable tools for evaluating inductive learning
The POSTGRES next generation database management system
Applications of machine learning and rule induction
Introduction to the Special Section on Digital Libraries
Using Discriminant Eigenfeatures for Image Retrieval
Photobook
Machine Learning for the Detection of Oil Spills in Satellite Radar Images
Remote Sensing
Applying classification algorithms in practice
Query by Image and Video Content
Efficient Pattern Recognition Using a New Transformation Distance
--CTR
Steve Chien , Rob Sherwood , Daniel Tran , Benjamin Cichy , Gregg Rabideau , Rebecca Castano , Ashley Davies , Rachel Lee , Dan Mandl , Stuart Frye , Bruce Trout , Jerry Hengemihle , Jeff D'Agostino , Seth Shulman , Stephen Ungar , Thomas Brakke , Darrell Boyer , Jim Van Gaasbeck , Ronald Greeley , Thomas Doggett , Victor Baker , James Dohm , Felipe Ip, The EO-1 Autonomous Science Agent, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.420-427, July 19-23, 2004, New York, New York
Rie Honda , Shuai Wang , Tokio Kikuchi , Osamu Konishi, Mining of Moving Objects from Time-Series Images and its Application to Satellite Weather Imagery, Journal of Intelligent Information Systems, v.19 n.1, p.79-93, July 2002
Steve Chien , Rob Sherwood , Daniel Tran , Benjamin Cichy , Gregg Rabideau , Rebecca Castao , Ashley Davies , Dan Mandl , Stuart Frye , Bruce Trout , Jeff D'Agostino , Seth Shulman , Darrell Boyer , Sandra Hayden , Adam Sweet , Scott Christa, Lessons learned from autonomous sciencecraft experiment, Proceedings of the fourth international joint conference on Autonomous agents and multiagent systems, July 25-29, 2005, The Netherlands
Jianming Liang , Marcos Salganicoff, On the medical frontier: the 2006 KDD Cup competition and results, ACM SIGKDD Explorations Newsletter, v.8 n.2, p.39-46, December 2006
Steve Chien , Rob Sherwood , Gregg Rabideau , Rebecca Castano , Ashley Davies , Michael Burl , Russell Knight , Tim Stough , Joe Roden , Paul Zetocha , Ross Wainwright , Pete Klupar , Jim Van Gaasbeck , Pat Cappelaere , Dean Oswald, The Techsat-21 autonomous space science agent, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 2, July 15-19, 2002, Bologna, Italy
George Gigli , loi Boss , George A. Lampropoulos, An optimized architecture for classification combining data fusion and data-mining, Information Fusion, v.8 n.4, p.366-378, October, 2007
Tom Fawcett , Foster Provost, Activity monitoring: noticing interesting changes in behavior, Proceedings of the fifth ACM SIGKDD international conference on Knowledge discovery and data mining, p.53-62, August 15-18, 1999, San Diego, California, United States
Chandrika Kamath , Erick Cant-Paz , Imola K. Fodor , Nu Ai Tang, Classifying of Bent-Double Galaxies, Computing in Science and Engineering, v.4 n.4, p.52-60, July 2002
Foster Provost , Ron Kohavi, Guest Editors Introduction: On Applied Research in MachineLearning, Machine Learning, v.30 n.2-3, p.127-132, Feb./ March, 1998
Miroslav Kubat , Robert C. Holte , Stan Matwin, Machine Learning for the Detection of Oil Spills in Satellite Radar Images, Machine Learning, v.30 n.2-3, p.195-215, Feb./ March, 1998
Paul Stolorz , Peter Cheeseman, Onboard Science Data Analysis: Applying Data Mining to Science-Directed Autonomy, IEEE Intelligent Systems, v.13 n.5, p.62-68, September 1998
Dennis Decoste , Bernhard Schlkopf, Training Invariant Support Vector Machines, Machine Learning, v.46 n.1-3, p.161-190, 2002
M. A. Maloof , P. Langley , T. O. Binford , R. Nevatia , S. Sage, Improved Rooftop Detection in Aerial Images with Machine Learning, Machine Learning, v.53 n.1-2, p.157-191, October-November | automatic cataloging;data mining;detection of natural objects;principal components analysis;JARtool;volcanoes;trainable;learning from examples;pattern recognition;venus;machine learning;large image databases;Magellan SAR |
288858 | Interference-Minimizing Colorings of Regular Graphs. | Communications problems that involve frequency interference, such as the channel assignment problem in the design of cellular telephone networks, can be cast as graph coloring problems in which the frequencies (colors) assigned to an edge's vertices interfere if they are too similar. The paper considers situations modeled by vertex-coloring d-regular graphs with n vertices using a color set 1, 2,..., n, where colors i and j are said to interfere if their circular distance $\min \{ | i-j | , n- | i-j | \}$ does not exceed a given threshold value $\alpha$. Given a d-regular graph G and threshold $\alpha$, an interference-minimizing coloring is a coloring of vertices that minimizes the number of edges that interfere. Let $I_\alpha (G)$ denote the minimum number of interfering edges in such a coloring of $G$. For most triples $(n, \alpha ,d),$ we determine the minimum value of $I_\alpha (G)$ over all d-regular graphs and find graphs that attain it. In determining when this minimum value is 0, we prove that for $r \geq 3$ there exists a d-regular graph G on n vertices that is r-colorable whenever $d \leq (1- \frac{1}{r}) n-1$ and nd is even. We also study the maximum value of $I_\alpha (G)$ over all d-regular graphs and find graphs that attain this maximum in many cases. | Introduction
This paper is motivated by telecommunication problems such as the design of planar regions
for cellular telephone networks and the assignment of allowable frequencies to the regions. In
our graph abstraction, vertices are regions, edges are pairs of contiguous regions, and colors
correspond to frequencies. We presume that every region has the same number d of neighbors,
which leads to considering degree-regular graphs. Interference occurs between two regions if
they are neighbors and their frequencies lie within an interference threshold. We adopt the
simplifying assumption that the number of colors available equals the number n of regions, and
let ff denote the threshold parameter so that colors i and j in f1; ng interfere precisely
when their circularly-measured scalar distance is less than or equal to ff. Precedents for the
use of circularly-measured distance in graph coloring include Vince (1988) and Guichard and
Krussel (1992).
Our formulation leads to several interesting graph-theoretic problems. One is to determine
for any given d-regular graph G and threshold α the minimum number I_α(G) of interfering
edges over the possible colorings of G. Another is: given parameters n, α, and d, determine
the minimum and maximum values of I_α(G) and find graphs G that attain these values. We
focus on the latter problem. More specifically, let G(n, d) denote the set of undirected d-regular
graphs on n vertices, which have no loops or multiple edges, but may be disconnected. We
wish to determine the (global) minimum interference level ℓ(n, α, d), which is the minimum of
I_α(G) over G(n, d). For comparison purposes, we also wish to determine the (global) minimax
interference level L(n, α, d), which is the maximum of I_α(G) over G(n, d). This latter problem
measures how badly off you would be if an adversary gets to choose G ∈ G(n, d), and you can
then color G to minimize interference.
Our graph-theoretic model is an approximation to the frequency assignment problem for
cellular networks studied in Benveniste et al. (1995). In that paper the network of cellular
nodes is viewed as vertices of a hexagonal lattice Λ in R², and the graph G is specified by a
choice of sublattice Λ′ of Λ, with n = [Λ : Λ′] being the index of the sublattice Λ′ in Λ. More
precisely, the vertices of G are cosets of Λ′, and we draw an edge between two cosets if the
cosets are "close" in the sense that they contain vectors v, v′ respectively with ||v − v′|| ≤ x,
where || · || is a given norm on R² and x is a cutoff value. Such graphs¹ G are d-regular for
¹ The graph G represents a fundamental domain of Λ′. In the cellular terminology a fundamental domain for
Λ′ is called a "reuse group." More generally a "reuse group" is a collection of contiguous cells that exhausts
all frequencies, with no two cells in the group using the same frequency.
some value of d; the usual nearest-neighbors case is the one treated in Benveniste et al. (1995).
The frequency spectrum is also divided into cosets (modulo n), and nodes in the same coset
(mod Λ′) are assigned a fixed coset of frequencies (mod n). In cellular problems the graph G
is fixed (depending on Λ′). Typical parameters under consideration have n/α about
2 or 3. From this standpoint the quantities ℓ(n, α, d) and L(n, α, d) represent
lower and upper bounds for attainable levels of interference.
Related coloring problems motivated by the channel assignment problem are studied in Hale
(1980), Cozzens and Roberts (1982), Bonias (1991), Liu (1991), Tesman (1993), Griggs and
Liu (1994), Raychaudhuri (1994), Troxell (1996) and Guichard (1996) among others. Roberts
(1991) surveys the earlier part of this work. Factors that distinguish prior work from the
present investigation include our focus on regular graphs and the inevitability of interference
when certain relationships hold among n; ff and d.
Our main results give near-optimal bounds for ℓ(n, α, d) and L(n, α, d) and identify d-regular
graphs and colorings that attain extremal values. Many interference-minimizing designs
use only a fraction of the available colors or frequencies. The most common number of colors
used in these optimal designs is γ := ⌊n/(α + 1)⌋,
which is the maximum number of mutually noninterfering colors from {1, . . . , n} at threshold
α. Detailed statements of theorems for ℓ(n, α, d) and L(n, α, d) appear in Section 2. Proofs
follow in Sections 3 to 7.
In the course of our analysis we derive a graph-theoretic result of interest in its own right,
which is a condition for the existence of a d-regular graph having chromatic number ≤ r.
Theorem 1.1. If r ≥ 3, then G(n, d) contains an r-colorable graph if nd is even and
d ≤ (1 − 1/r)n − 1.
This result is proved in Section 5, and the proof can be read independently of the rest of the
paper. Note that if nd is odd then G(n; d) is the empty set.
We preface the results in the next section with a few comments to indicate where we are
headed. The case α = 0 corresponds to no interference because the number of available colors
equals the number of vertices, and therefore ℓ(n, 0, d) = L(n, 0, d) = 0. We assume that α ≥ 1
in the rest of the paper.
For degrees near 0 or n, namely d ∈ {0, 1, n − 2, n − 1}, the set G(n, d) contains only one
unlabelled graph, so these cases are essentially trivial. We note at the end of Section 4 that
Corollary 4.2 gives the exact common value of ℓ(n, α, n − 1) and L(n, α, n − 1).
Our first main result in the next section, Theorem 2.1, applies to degree 2 and shows that
most values of ℓ and L for d = 2 equal 0; the exception is that L(n, α, 2) is approximately
n/3 when γ = 2.
Subsequent results focus on d ≥ 3, where we use the maximum number of noninterfering
colors γ to express the results. The case γ = 1 is excluded because then all colors interfere with
each other, so that ℓ(n, α, d) = L(n, α, d) = #(edges of G) = nd/2; for γ ≥ 2 and most values of
(n, α, d), ℓ(n, α, d) is approximately
max{0, nd/2 − (1 − 1/γ)n²/2}.
Moreover, L(n, α, d) = 0 whenever γ > d, whereas if n is much larger than d, and d is
somewhat larger than γ, then L(n, α, d) is approximately nd/(2γ).
Extremal graphs which attain ℓ(n, α, d) when ℓ > 0 are usually connected, and the associated
coloring can often be achieved using γ noninterfering colors. On the other hand, graphs
that attain L(n, α, d) when L > 0 are usually disconnected and contain many copies of the
complete graph K_{d+1}. There are exceptions, however.
Our results imply that there is often a sizable gap between the values of ℓ and L. The
smallest instance of ℓ < L occurs at (n, α, d) = (6, 2, 2). Figure 1.1
shows the two graphs in G(6, 2) with interference-minimizing colorings for α = 2.
Figure 1.1 about here
A qualitative comparison of the regions where ℓ and L equal 0 and are positive is given in
Figure 1.2, where the coordinates are d/n and γ/n.
Figure 1.2 about here
2. Main Results
An undirected graph is simple if it has no loops or multiple edges. Let G(n, d) denote the
set of d-regular graphs on n vertices which are simple but which are not necessarily connected.
Let [n] := {1, 2, . . . , n} be a set of n colors with circular distance measure
D(i, j) := min{|i − j|, n − |i − j|},
and let α ∈ {0, 1, 2, . . .} be the threshold-of-interference parameter. A coloring of the vertex
set V(G) of a graph G ∈ G(n, d) is a map f : V(G) → [n]. The interference
I_α(G, f) of the coloring f of G at threshold α is
I_α(G, f) := |{{x, y} ∈ E(G) : D(f(x), f(y)) ≤ α}|.
The minimum interference in G at threshold α is
I_α(G) := min_{f : V(G) → [n]} I_α(G, f).
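To make these definitions concrete, the following small sketch (ours, not part of the paper) computes I_α(G, f) for a given coloring and finds I_α(G) for very small graphs by exhaustive search over all n^n colorings; the search is exponential and is meant only to illustrate the definitions.

from itertools import product

def circular_distance(i, j, n):
    # D(i, j) = min(|i - j|, n - |i - j|) on the color set {1, ..., n}
    diff = abs(i - j)
    return min(diff, n - diff)

def interference(edges, coloring, n, alpha):
    # I_alpha(G, f): edges whose endpoint colors lie within circular distance alpha
    return sum(1 for x, y in edges
               if circular_distance(coloring[x], coloring[y], n) <= alpha)

def min_interference(num_vertices, edges, alpha):
    # I_alpha(G): brute force over all colorings f : V -> [n] (tiny graphs only)
    n = num_vertices
    best = len(edges)
    for colors in product(range(1, n + 1), repeat=n):
        best = min(best, interference(edges, dict(enumerate(colors)), n, alpha))
    return best

# Two disjoint triangles on 6 vertices (a 2-regular graph) with alpha = 2:
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5)]
print(min_interference(6, edges, alpha=2))   # 2: one interfering edge per triangle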
We study the (global) minimum interference level
ℓ(n, α, d) := min_{G ∈ G(n, d)} I_α(G)    (2.1)
and the (global) minimax interference level
L(n, α, d) := max_{G ∈ G(n, d)} I_α(G).    (2.2)
We first note restrictions on the parameter space. Since all graphs in G(n, d) have nd/2 edges,
it follows that
n and d cannot both be odd.    (2.3)
We restrict attention to the threshold range
1 ≤ α < n/2,    (2.4)
because α ≥ n/2 implies that all colors interfere, in which case every coloring of every graph
in G(n, d) has interference nd/2.
Our first result concerns ℓ and L for degree 2.
Theorem 2.1. Let 2.
(a) For all
(b) For all fl - 3,
(c) If
This is proved in Section 3.
We now consider d in the range
for the minimum interference level '. The cases of are treated separately. We
obtain an almost complete answer for 2.
Theorem 2.2. Suppose that 2.
(a) If n is even, then
and
nis even, or nand d are both odd
and d is even :
(b) If n is odd, then
and
(c) If n is odd and in the remaining range n\Gamma2ff - d - n, then '(n; ff; d) - d. Furthermore:
there is an integer 2s
(ii) '(n; ff; d) - d
2 is even, and there is an integer 4s
Case (c) above is the only case not completely settled. Instances of it are illustrated in
Figure
2.1. The number beside each vertex clump gives the color assigned to those vertices, and
the number on a line between noninterfering clumps is the number of edges between them. Case
analyses, omitted here, show that no improvements are possible in part (c) of the theorem when
Given n - 21, (i) has three realizations, namely '(15; 5;
0, (ii) has only the realization at the bottom of Figure 2.1, and cases.
Figure
2.1 about here
We remark that the bounds on '(n; ff; d) for d ? n
are obtained using a variant of Turan's
theorem on extremal graphs (Tur'an, 1941; Bondy and Murty, 1976, p. 110). Theorem 2.2 is
proved in Section 4.
We now consider the minimum interference level ' when fl - 3. To handle this case we use
Theorem 1.1, which is proved in Section 5. Let p and q be the unique nonnegative integers
that satisfy
that is
c and
Our bounds for are given in the next two theorems for respectively,
and are proved in Section 6. The case is somewhat simpler.
Theorem 2.3. Suppose that fl - 3 and that fl divides n, i.e.
(a) If d -
(b) If d ?
is odd or if
even and p is even
even and p is odd :
Theorem 2.4 Suppose that fl - 3 and fl doesn't divide n, i.e. q - 1.
(a) If d
(b) If d -
where
even and p is even
even and p is odd :
We turn next to results for the minimax interference level L. We first distinguish cases
Theorem 2.5. Suppose that 3 - d - 2. Then:
(a) L(n; ff; d)
(b) L(n; ff; d) ? 0 for
The only cases in the parameter range 1 - ff - n\Gamma 1 and fl - 2 not settled by this theorem
are those with
Both occur in this exceptional case, e.g. for
Our final main result provides bounds for L. Set
c
and
In view of Theorem 2.5 we consider only the range that 2 - fl - d.
Theorem 2.6. Suppose that 3 - d -
and
In the special case that d can be written more simply as
nd
This applies in particular when which case the upper and lower bounds coincide,
yielding (1.1). If n is substantially larger than d, and d is somewhat larger than fl, then L is
closely approximated by nd
Theorems 2.5 and 2.6 are proved in Section 7.
3. Elementary Facts: Theorem 2.1
We derive general conditions that guarantee
graphs (Theorem 2.1).
Lemma 3.1. If 1 ≤ α < n/2, then:
(a) ℓ(n, α, d) = 0 if d ≤ n − 2α − 1,    (3.1)
and
(b) ℓ(n, α, d) = 0 if d ≤ n/2 and n is even.    (3.2)
Proof. (a) Given d ≤ n − 2α − 1, set V = {x_1, . . . , x_n} and consider the coloring f(x_i) = i
for every i. We construct a suitable G starting with the edge set of all pairs {x_i, x_j} with
D(i, j) > α; each vertex lies in exactly n − 2α − 1 such pairs. If n is odd, or if n is even and
d is odd, edges can be selected from this set so that every vertex has degree d and
every edge has D > α, so ℓ(n, α, d) = 0. If n and d are both even, a similar selection of such
edges works: again, every
vertex has degree d and every edge has D > α, so ℓ(n, α, d) = 0.
(b) Let χ_G denote the chromatic number of the graph G. The definition implies that
I_α(G) = 0 whenever χ_G ≤ γ.    (3.3)
If n is even and d ≤ n/2, then G(n, d) contains a bipartite graph with n/2 vertices in each part,
so (b) follows from (3.3), since γ ≥ 2. \Xi
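One concrete way to realize the zero-interference graphs of part (a) is a circulant-style construction: color vertex i with color i + 1 and join it to the vertices whose colors lie at circular distance α + 1, α + 2, . . . away. The particular offsets used in the sketch below are our illustrative choice and need not coincide with the edge sets chosen in the proof, but every edge produced joins colors at circular distance greater than α.

def circulant_zero_interference(n, alpha, d):
    # Build a d-regular graph on vertices 0..n-1 (vertex i colored i + 1) with no
    # interfering edges, assuming d <= n - 2*alpha - 1 and, when d is odd, that n
    # is even (so the antipodal offset n // 2 is available).
    offsets = list(range(alpha + 1, alpha + 1 + d // 2))
    if d % 2 == 1:
        offsets.append(n // 2)
    edges = set()
    for i in range(n):
        for k in offsets:
            edges.add(frozenset({i, (i + k) % n}))
    return edges

E = circulant_zero_interference(n=10, alpha=2, d=4)
print(sorted(sum(1 for e in E if v in e) for v in range(10)))   # ten 4's, no interfering edge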
We remark that the construction in part (a) uses all n colors, and when d > n − 2α − 1 the
same construction gives many interfering edges. It is natural to consider the opposite extreme,
which is to use only a maximal set of γ mutually
noninterfering colors. This leads to part (b).
The restriction in part (b) that n be even is crucial, because no d-regular bipartite graph
exists for odd n. Indeed there are exceptions where ℓ(n, α, d) > 0 for some d < n/2 with n odd:
see Theorems 2.1 and 2.2. These exceptions occur when γ = 2 but are not an issue for γ ≥ 3.
We obtain bounds on the minimax interference level L using the following well-known
bound for the chromatic number χ_G of a graph G.
Proposition 3.1. For every finite simple graph G, χ_G ≤ Δ_G + 1,
where Δ_G is the maximum degree of a vertex of G. Furthermore χ_G ≤ Δ_G provided that no
connected component of G is an odd cycle or a complete graph.
Proof. Brooks (1941); Bondy and Murty (1976, pp. 118 and 122). \Xi
This result immediately yields the following condition for the minimax interference level
L(n, α, d) to be 0.
Lemma 3.2. If 1 ≤ α < n/2, then
L(n, α, d) = 0 whenever γ ≥ d + 1.    (3.5)
Proof. The definition of L(n, α, d) gives
L(n, α, d) = 0 if χ_G ≤ γ for every G ∈ G(n, d).    (3.6)
Since Δ_G = d for a d-regular graph, (3.5) follows from Proposition 3.1 via (3.6). \Xi
Proof of Theorem 2.1. (a) Since follows from (3.2) if n is even, and from
if n is odd and ff - n
(b) follows from Lemma 3.2.
(c) Given every graph in G(n; 2) is a sum of vertex-disjoint cycles. Suppose
1. Then an even cycle has minimum interference 0, a 3-cycle has minimum
interference 1, and an odd cycle with five or more vertices has minimum interference 0 or 1.
It follows that
one 4-cycle), and L 2 2. The last case uses M \Gamma 1 3-cycles and one
5-cycle. When the 5-cycle's vertices are colored successively as 1, ff
ff, it has no interference if [n
so 1 in this case. More generally, suppose one vertex of the 5-cycle is colored 1.
Its neighbors must have colors in [ff to avoid interference. Then their uncolored
neighbors, which are adjacent, must have colors in [2ff to avoid
interference. This set has which is - ff if (2n \Gamma 4)=5 - ff.
Hence
4. Minimal Interference Level: Theorem 2.2
We prove Theorem 2.2 in this section. The ranges stated where ℓ(n, α, d) = 0 follow from
Lemma 3.1, so the main content of parts (a) and (b) of Theorem 2.2 concerns the values
ℓ(n, α, d) for d > n/2. To obtain these we use a variant of Turán's theorem (Turán, 1941; Bondy
and Murty, 1976, p. 110), which we state as a lemma. An application of the lemma at the
end of the section yields the exact value of L(n, α, n − 1) as well as ℓ(n, α, n − 1). Recall that
an equi-t-partition of a vertex set V is a partition {V_1, . . . , V_t} of V in which any two parts
differ in size by at most one.
Lemma 4.1. The maximum number of noninterfering edges in the complete graph K_n with
vertex set V and threshold parameter α is attained only by a coloring f : V → [n] that has
D(f(x), f(y)) > α whenever x and y are in different parts of an equi-γ-partition of V.
Proof. Suppose that a coloring f of the complete graph K n has f i vertices of color i
ab denote the number of vertices of
colors other than a and b that interfere with a and not b. If all color-i vertices are recolored
j, the net increase in interference is f i (m vertices are recolored i, the
net increase in interference if f Hence at least one of the recolorings does not
increase interference. Continuing this recoloring process implies that noninterference in K n is
maximized by a fl-partite partition of V such that D(f(x); f(y)) ? ff whenever x and y are in
different parts of the partition. Tur'an's theorem then implies that maximum noninterference
obtains only when the partition is an equi-fl-partition. \Xi
We can assume without loss of generality that the coloring f found in Lemma 4.1 is constant
on each part of an equi-γ-partition, with the γ values f(V_1), . . . , f(V_γ) mutually noninterfering. If the
interfering edges are then dropped from K_n, we obtain a complete equi-γ-partite graph with
zero interference and chromatic number γ. This graph is regular if and only if γ divides n and
each part of the partition has n/γ vertices.
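As a small numerical companion to Lemma 4.1, the sketch below (ours) counts the within-part, and hence interfering, edges of K_n when the vertex set is split into an equi-γ-partition and each part receives one of γ mutually noninterfering colors; here γ = ⌊n/(α + 1)⌋ as in Section 1.

def equi_partition_interference_in_Kn(n, alpha):
    # gamma parts: q of size p + 1 and gamma - q of size p, where n = p*gamma + q.
    # Only edges inside a part interfere, since distinct parts receive mutually
    # noninterfering colors.
    gamma = n // (alpha + 1)
    p, q = divmod(n, gamma)
    return q * (p + 1) * p // 2 + (gamma - q) * p * (p - 1) // 2

print(equi_partition_interference_in_Kn(6, 2))   # gamma = 2, parts of size 3 + 3 -> 6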
Proof of Theorem 2.2. Throughout this proof
We consider first (a) and (b). The ranges given where '(n; ff; d) = 0 come from Lemma 3.1.
So assume now that d ? n
. Let G 0 be a complete bipartite graph fA; Bg such that
Lemma 4.1 implies that two-coloring G 0 using noninterfering colors for A and B uniquely
maximizes the number of edges with no interference when 2. Therefore ' - nd=2 \Gamma jAjjBj .
(a) Suppose n is even. If n=2 and d are odd, the number of edges needed within each part
of G 0 to increase all degrees to d is (n=2)(d \Gamma n=2)=2, which is an integer since d \Gamma n=2 is even.
It follows that if n=2 is even, or if n=2 and d are odd, then
If instead n=2 is odd, and d is even, then (n=2)(d \Gamma n=2) is odd, G 0 is not part of any
graph in G(n; d), and ' ? replacing G 0 with
a complete bipartite graph G 1 with bipartition fA
Beginning with G 1 , each vertex in A 0 requires more degrees to have degree d, and
each vertex in B 0 requires edges added to have degree d. Both d
and are even, so edge additions as needed can be made within A 0 and B 0 to obtain
1 in this case; and (2.9)
is proved.
(b) Suppose n is odd, so d is even by (2.3). Beginning with G 0 , each of the (n
vertices in A requires d \Gamma (n \Gamma 1)=2 more incident edges added to have degree d, and each of the
incident edges added to have degree d. Each
of f(n contains an even integer, so we can
make the required additions of edges within A and B. Hence
It remains to prove (c), which has three parts (i)-(iii). Assume henceforth that n is odd
even because n is odd. Augmented equi-bipartite graphs,
illustrated at the top of Figure 2.1, show that ' - d=2 since they require d=2 edges within the
1)=2-vertex part to obtain degree d for every vertex. Sometimes case in point
is the largest possible ff for
Suppose 1g. Each vertex in the
color set [n] has exactly two others for which D ? ff, and the graph of noninterfering colors is
an n-cycle whose successive colors are 1)=2. If every color
were assigned to some vertex in G 2 G(n; d), there would be at least n(d \Gamma 2)=2 interference
edges. But n(d must avoid at least one color to attain '. Deletion of one
color from the n-cycle of noninterfering colors breaks the cycle and leaves the noninterference
graph
Because all x i colors interfere with each other, and all y i colors interfere with each other, we
can presume that f uses only one x i and an adjacent y j . This yields the augmented bipartite
structure of the preceding paragraph, and it follows from maximization of between-parts edges
that This completes the proof of (iii).
For (i) and (ii), assume ff ! (n \Gamma 3)=2 and consider an odd r - 5 sequence of colors c 1 ,
1. The tightest such sequence
has to color
k. It follows that the final color c r can be chosen not to interfere with c
i.e., if
We usually consider the smallest such odd r - 5 because this allows the
the largest d values. Our approach, illustrated on the lower part of Figure 2.1, is to assign
clumps of vertices to the c i in such a way that all edges for G 2 G(n; d) are between adjacent
clumps on the noninterference color cycle c
Suppose (4:2) holds for a fixed odd r - 5. We assume that r ! n because the ensuing
analysis requires this for d - 3. Let a and b be nonnegative integers that satisfy
We prove (i), then conclude with (ii). The analysis for (i) splits into three cases depending on
the parity of a and br=4c.
Case 1: a odd
Case 2: a even, br=4c odd
Case 3: a even, br=4c even.
Because n is odd, Case 1 requires b to be even and Cases 2 and 3 require b to be odd.
Case 1. Given an odd a, we partition the n vertices into b clumps of a vertices each
clumps of a vertices each. The clumps are assigned to colors in the noninterference
cycle so that the clumps of each type are contiguous. Cases for
are illustrated at the top of Figure 4.1. We begin at the central (top) a clump and proceed
symmetrically in both directions around the color cycle, assigning between-clumps edges as
we go so that all vertices end up with degree 2a. The required edges into the next clump
encountered are distributed as equally as possible to the vertices in that clump. When we get
into the clumps with a+ 1 vertices, the number of between-clumps edges needed will generally
be less than the maximum possible number of (a . Numbers of between-clumps edges
used to get degree 2a for every vertex are shown on the noninterference lines between the c i
on
Figure
4.1.
Figure
4.1 about here
The preceding construction yields '(n; ff; d) = 0 for even d is less
than 2a, say we modify the procedure by using fewer between-clumps
edges for the required vertex degrees: clump sizes are unchanged. Because n=r
yields the contradiction that n ! ra, it follows for Case 1 that
Case 2. With a even and br=4c odd, we have b odd and r 2 f5; 7; 13; 15; :g. In this
case we assign a \Gamma 1 vertices to c 1 and proceed in each direction around the c i cycle, assigning
a, a vertices to the next (r \Gamma 1)=2 c i in order. The
penultimate number a 0 equals a if r 2 f5; 13; and is a :g. The
ultimate number a in chosen so that there are [n \Gamma (a \Gamma 1)]=2 vertices (excluding the a \Gamma 1 for
1 ) on each side of the color cycle. If a
a 1)=2. The two cases are shown on the lower left of Figure 4.1 with numbers of
between-clumps edges that give degree d = 2a for every vertex. If even d is less than 2a, fewer
edges are used, as needed, down the two sides. As in Case 1, we get
Case 3. With a even and br=4c even, we have b odd and r 2 f9; 11; 17; 19; 25;
Here we assign a+ 1 vertices to c 1 and proceed with a, a \Gamma 1, a, a+ 1, a, a \Gamma 1, a, a+
a vertices assigned to the next (r \Gamma 1)=2 c i in each direction away from c 1 . We get a
and a
:g. The two cases are shown on the lower right of Figure 4.1. As before,
if d - (2=r)n.
This completes the proof of (i), after defining s by 1. We have also checked that
the construction used here cannot yield unless the conditions of (i) hold.
There is however one other set of circumstances where this construction yields a value of
r n, and these circumstances are exactly the hypotheses of (ii), namely:?
d - 8; d=2 is even
is odd since d=2 is even)
In this case we partition the vertices into (r +1)=2 clumps of vertices each and (r \Gamma 1)=2
clumps of d=2 The clumps are arranged
around the noninterference color cycle c 1 , c
We use all possible between-clumps edges. This gives degree d for every
vertex except those in the c 2 clump, which has incoming edges
from c 1 and c 3 . The degree total for c 2 should be d, so we need to add
to get degree d for each c 2 vertex. Prior to the additions,
each c 2 vertex has degree d \Gamma 2 by our equalization construction, so the additions can be made
by a complete cycle within the clump. It follows that ' - d=2 \Gamma 1, proving (ii). \Xi
We conclude this section by noting that the modified Turán's theorem (Lemma 4.1) easily
allows us to completely settle the case of degree d = n − 1.
Corollary 4.2. For 1 ≤ α < n/2,
ℓ(n, α, n − 1) = L(n, α, n − 1) = q(p + 1)p/2 + (γ − q)p(p − 1)/2,    (4.4)
where p = ⌊n/γ⌋ and q = n − pγ.
Proof. Write n = pγ + q with 0 ≤ q ≤ γ − 1, so p = ⌊n/γ⌋.
An equi-γ-partition of an n vertex set has
q parts, each with p + 1 vertices, and γ − q parts,
each with p vertices.
Now the unique graph G ∈ G(n, n − 1) is K_n; applying Lemma 4.1, we have
I_α(K_n) = q(p + 1)p/2 + (γ − q)p(p − 1)/2,
which is (4.4). \Xi
5. Chromatic Number Bound: Theorem 1.1
This section gives a self-contained proof of Theorem 1.1. We first recall two preliminary
facts, stated as propositions.
Proposition 5.1. (Dirac (1942)) Let G be a simple graph. If every vertex of G is of degree
at least jV (G)j=2, then G is Hamiltonian, that is, G has a cycle of length jV (G)j.
Proof. See Bondy and Murty (1976), p. 54. \Xi
Recall that a matching in a simple graph G is a subset of mutually vertex-disjoint edges of
G. A matching is perfect if every vertex in G is on some edge of the matching. The following
is a consequence of a well-known theorem of Hall (1935).
Proposition 5.2. (Marriage Theorem) If G is a d-regular bipartite graph with d ? 0, then
G has a perfect matching.
Proof. See Bondy and Murty (1976), p. 73. \Xi
We study the function OE(n; d; r) defined by
1 if there exists an n-vertex d-regular r-colorable graph,
When OE(n; d; denote such a d-regular r-colorable (that is, r-partite)
graph having n vertices. We consider only values in which nd is even.
Our first observation is that because an r-colorable graph is also (r + 1)-colorable,
The purpose of the next two lemmas is to prove that OE(n; d; r) is monotone when r - 3 is
held fixed and d varies over values where nd is even.
Lemma 5.1 (a) If d - nand if either r - 3 or
(b) If d - n
If in addition n is even, then
Proof. (a) Suppose that n is even. The inequality (5.1) implies that it is enough to show
OE(n; d;
We use reverse induction on d - n=2. For the base case the complete equi-2-partite
graph gives OE(n; n=2; 1. For the induction step, suppose we know that OE(n; d;
a d-regular bipartite graph G(n; d; 2) exists, and by Proposition 5.2 it has a perfect matching
M . Remove all edges in M from G to obtain a (d \Gamma 1)-regular bipartite graph G(n; d \Gamma
Hence OE(n; d \Gamma
Suppose n is odd. Then (5.1) implies that it is enough to show
OE(n; d;
Now d must be even by (2.3), and d - (n \Gamma 1)=2. Because
by (5.5). Consider G := G(n \Gamma 5.2 we may find a
perfect matching of G, say
from G the edges and add to G a new vertex z and the edges fz; x i g and
it is easy to see that the resulting graph is a d-regular 3-partite
graph with n vertices, which proves (5.6).
(b) Let G = G(n; d; r), which exists by hypothesis. Since d - n, Proposition 5.1 guarantees
that G has a Hamilton cycle C. Removing all edges from C yields a G(n; d \Gamma 2; r), so OE(n; d \Gamma
1. If moreover n is even, then C has even length and we get a perfect matching M
by taking alternate edges in C. Removing all edges in M from G yields a G(n; d \Gamma 1; r), so
1 in this case. \Xi
Lemma 5.2. If r - 3, then
provided that nd 1 and nd 2 are both even.
Proof. Suppose
are done.
Suppose used inductively on decreasing d gives
For odd n, since nd 1 and nd 2 are both even, both d 1 and d 2 must be even. Now Lemma 5.1(b)
gives
so (5.7) follows. \Xi
Proof of Theorem 1.1. To commence the proof, we define p and q by
that is
r c - 1. Note that r divides only if In terms of p and q the
assertions of the theorem then become:
(i) If
(iii) If 2.
To prove (i)-(iii), we use the complete equi-r-partite graph G r (n) defined as follows. The
graph G r (n) has vertices we define the vertex sets
The edge set of G r (n) is
E(G r
Here is an equi-r-partition of V with
For 1 - a - b - r we let G r
a;b denote the induced subgraph of G r (n) on the vertex set
To prove (i), if is an (n \Gamma p)-regular graph, hence
Lemma 5.2 implies OE(n; d;
To prove (ii), let
q+1;r . Then (5.10) shows that H is a p(r \Gamma q \Gamma 1)-regular graph
having vertices. Now r \Gamma q - 2 implies that H has degree p(r \Gamma q \Gamma 1), which is greater
than half its vertices, so H has a Hamilton cycle C by Proposition 5.1.
If p(r \Gamma q) is even, then H has a perfect matching M obtained by taking every other edge
in C. Removing all edges in M from G r (n), the resulting graph is (n
then completes the proof of (ii).
If p(r \Gamma q) is odd, then p is odd, hence so is
by (2.3). Thus it suffices to show that
this case, for then Lemma 5.2 gives OE(n; d; 2.
1;q . Then H 0 is a (p vertices. If q ? 1
then
hence H 0 is Hamiltonian. Since (p + 1)q is even, H 0 has a perfect matching M 0 . Removing all
edges in M 0 [C from G r (n), the resulting graph is (n
Suppose 1. Notice that since p(r \Gamma 1) is odd, r 6= 3, hence r - 4. Let H 00 be the
induced subgraph of G r (n) on the set
r
and
Then the number of vertices of H 00 is p(r and the minimum degree of H 00 is
Proposition 5.1 implies that H 00 has a Hamiltonian cycle C 00 . By removing all edges in C 00 [E
from G r (n) we have an (n
To prove (iii) we proceed by induction on r, with an induction step from r to r 2. There
are two base cases,
Base Case r = 3. We have 2. Let
Consider the graph G obtained by removing from G 3 (n) all edges in g.
Then it is easy to see that G is (n
gives OE(n; d; 2.
Base Case 4. We have 3. Suppose first that p is odd. We relabel
the vertices of G 4 (n) so that the sets X j in (5.9) become
Let H be the subgraph of G 4 (n) induced on the vertex set fw 3g. Then
even and H is Hamiltonian. Thus H has a perfect matching, call it M .
Consider the edge set
and form a graph G by removing all edges in E [ M from G 4 (n). Then G is an (n
regular subgraph of G 4 (n), hence OE(n;
Lemma 5.2.
Suppose now that p is even. Then
is forbidden by (2.3). It suffices therefore to show that OE(n; this case, for
then Lemma 5.2 gives OE(n; d; 2. We use
the vertex labelling (5.12), and let H be the subgraph of G 4 (n) induced on fw 3pg.
Then jV so H has a perfect matching M . Consider
the edge set
Form a graph G by removing (n). It is an (n
Induction Step. Fix r - 5 and define
nd is eveng ;
so 3. It is enough to show that OE(n; d
yields OE(n; d;
To do this, set
furthermore we
easily check that
We may apply the induction hypothesis at r
to conclude that there exists a d 0 -regular (r \Gamma 2)-partite graph
a d 1 -regular bipartite graph with 2(p vertices disjoint from those of G; such a graph H
exists by Lemma 5.1(a). Take the disjoint union of G and H and add in all edges between
V (G) and V (H) to obtain a new graph G 0 on n vertices which is d 0 (n; r)-regular, according to
(5.13). Thus OE(n; d completing the induction step for (iii). \Xi
6. Minimal Interference Level: Theorems 2.3 and 2.4
In this section we study the range fl - 3 and prove Theorems 2.3 and 2.4. The cases where
'(n; ff; d) = 0, i.e. for d smaller than about follow from Theorem 1.1 applied with
For the remaining cases, the harder step in the proofs is obtaining the (exact) lower bounds
for '(n; ff; d). The upper bounds are obtained by explicit construction.
We proceed to derive a lower bound for '(n; ff; d) stated as Lemma 6.2 below. Let G be
any d-regular graph on n-vertices, let ng be a given coloring of G, and
let ff also be given. We begin by partitioning the n colors into fl groups f ~
that each group ~
A i consists of consecutive colors and the groups ~
A fl are themselves
consecutively arranged with respect to the cyclic ordering of colors (mod n), with all groups
but ~
A 1 containing exactly ff
A 1 contains the remaining ff
m is given by
and such a partition is completely determined by the choice of ~
We now choose ~
A 1 so as to minimize the number of vertices v in G that are assigned colors
f(v) in ~
A 1 . After doing this, we have the freedom to cyclically relabel the colors (via the map
affecting which edges have vertex colors that interfere. We use
this freedom to specify that
~
in which case
~
see
Figure
6.1. Notice that for 2 - i - fl any two colors in ~
A i interfere with each other.
Figure
6.1 about here
This partition of the colors induces a corresponding partition of the vertices of G into the
color classes
Now set
a
We now count the edges in G and in its complement -
various ways. For any
two subsets V and W of vertices, let e(V; W ) count the number of edges between vertices in V
and those in W , and let -
count the number of edges between A i and
A j that are not in G, which is
-a i;j := a i a
Along with this we define
a
The d-regularity of G then yields
a
The potential interfering edge set B i;j between vertices in A i and those in A j is
The actual interfering edge set is and we set
c i;j :=
We clearly have
- a i;j
Finally, let ffi and ffi count the potential and actual non-interfering edges in A 1 , respectively,
i.e.
Certainly ffi - ffi. Since all edges between the vertices in the same component A i interfere,
except for ffi edges in A 1 , we obtain the bound
I ff G;
To bound this further, we need the following bounds for edges connecting a vertex in the color
set ~
A 1 to a vertex in its two neighboring color sets ~
A 2 and ~
A fl .
Lemma 6.1. We have
and
Proof. We start with (6.6). By (6.4) it is enough to show that
It suffices to show for fixed v 2 A 1 with ff
because, using ff - m, this implies that, for sums over v 2 A 1 with ff
To prove (6.8), given v 2 A 1 with ff we define the vertex set
This is a set of ff by the minimizing property of
the color set ~
A 1 . Now ff implies that
Thus
which is exactly (6.8). Thus (6.6) follows.
The proof of (6.7) is analogous. \Xi
To state the lower bound lemma, recall that the quantities p and q are defined by
so
Lemma 6.2. If d -
Proof. We derive this result from the general bound
I ff (G; f) - \Sigma fl
where a for the vertex partition (6.2). To establish (6.10), we first note that Lemma 6.1
Together with (6.3), this yields
Since the left side of this inequality is an integer,
However, (6.3) also gives
Substituting these bounds in (6.5) yields (6.10).
To derive (6.9), we minimize the right side of (6.10) over all possible values: a i - 0 subject
to \Sigma fl
It is easy to verify that this occurs when all the a i 's are as equal as possible,
i.e. 8
q of the a i take the value
of the a i take the value
Thus
I ff (G; f) - qd(p
which gives (6.9). \Xi
Proof of Theorem 2.3. (a) This bound follows from Theorem 1.1, taking noting
that
(b) For d ? first establish the lower bounds
where
is odd or if n \Gamma d is even
and p is even;
even and p is odd ;
using Lemma 6.2. The case simplifies to
Now (6.12) follows on determining the cases for which p(d
To show that this bound is attained, we simply construct the graph G with the coloring f
that makes (6.11) hold. The constructions are easy and are left to the reader. \Xi
Proof of Theorem 2.4. (a) The bounds where '(n; ff; d) = 0 follow from Theorem 1.1
with
There remains the case in which
(where Theorem 1.1 does not apply). We must show that
For the upper bound ' - p, it suffices to construct an appropriate graph. Note first that p
must be even since if p is odd then
is also odd, contradicting (2.3). Now consider the equi-fl-partite graph G fl (n) defined in the
proof of Theorem 1.1. We take a perfect matching M from the induced subgraph of G fl (n)
on the vertex set (X fl We remove all the edges in M from G fl (n) and add
the edges g. Then it is straightforward to
check that the resulting graph G is (n 1)-regular and it clearly has exactly pinterfering
edges when the sets X i are colored with fl mutually noninterfering colors.
To show the lower bound ' - p
G be an (n f an n-coloring
of V (G) such that
I ff (G;
Take the partition fA associated to f constructed at the beginning of
this section. We consider cases.
Case (i). a 1 2.
The minimality property of A 1 implies that, for all
(G)nfvg such that jf(v) \Gamma f(w)j - ff.
Thus I ff (G; f) - n=2 ? p=2.
Case (ii). a
Here the equality in (6.5) combined with
I ff (G; f) - \Sigma fl
Using (6.3) we then have
I ff (G; f) -
Case (iii). All a
case requires that of the a i equal one a i equals p.
Suppose first that a p. Observe that (6.14) and (6.3) yield
I ff (G; f) - \Sigma fl
Now (6.2) and a
a
Substituting this in (6.15) gives I ff (G; f) - pSuppose finally that a only one a
a both. We treat only the case that a since the argument for a
is similar. Let a i 0
p. Now by (6.5) and (6.3)
I ff (G; f) - 1
a i
Lemma 6.1 gives -a 1;2
I ff (G; f) - 1
completing case (iii).
(b) We start from the formula (6.9) of Lemma 6.2, which gives a lower bound. We claim
that equality occurs. This formula of '(n; ff; d) splits into several cases, according to when
are integers or half-integers, and consideration of
the parities of n \Gamma d and p leads to the formulas for ' in (2.12).
For the upper bound, obtaining equality in the formula for '(n; ff; d) requires (6.11) to hold,
and this easily determines the construction of a suitable graph G and a coloring f . We omit
the details. \Xi
7. Minimax Interference Level: Theorems 2.5 and 2.6
We conclude by proving the bounds for L(n; ff; d) stated in Section 2.
Proof of Theorem 2.5. To show part (a), the condition L(n; ff; d) certainly holds
if the chromatic number -G - fl for all G 2 G(n; d). This holds for fl ? d by Brooks' theorem
(Proposition 3.1). For the case we use the strong version of Brooks' theorem, which
states that -G - \Delta G if no component of G is an odd cycle or a complete subgraph. Here
d, and d - 3 implies there are no odd cycles, while the condition
any connected component being the complete subgraph K d , for any other components must
be d-regular but have at most d vertices, a contradiction.
To show part (b), suppose that n - 2(d 1). Let G 2 G(n; d) consist of a complete graph
K d+1 plus a d-regular graph G 0 on the other vertices. If d is odd then n is
even, so that even, and the existence of G 0 is assured by a theorem of Erd-os and
Gallai (1960) for simple graphs with specified degree sequences. If fl - d, at least two vertices
of K d+1 interfere, so L ? 0.
Suppose that 1. This implies d - 2a because we presume that
2. Let G consist of two disjoint copies of K d+1\Gammaa , adding edges between them that
increase every degree to d. Each vertex requires a such edges, and this is feasible because
a, at least two vertices of K d+1\Gammaa interfere, so L ? 0.
Suppose finally that 1. Then n is odd, so d must be even.
consist of two disjoint graphs G
additions and deletions as follows. Add a edges from each G 1
vertex to G 2 vertices in as equal a way as possible for resulting vertex degrees in G 2 . Then
each vertex in G 1 has degree d, x vertices in G 2 have degree d + 1, and y vertices in G 2 have
degree d, where
These equations imply that so x is even. We then remove x=2 edges within
so that all vertices have degree d. We thus arrive at a graph G 2 G(n; d). If fl - d \Gamma a then
at least two vertices in G 1 interfere, so L ? 0. Thus part (b) holds. \Xi
Proof of Theorem 2.6. Suppose that 3 - d -
W be nonnegative integers that satisfy
To derive the upper bound on L in Theorem 2.6, let G be any graph in G(n; d). Let S
denote the family of all partitions of the vertex set of G into fl groups, with q groups of size
groups of size p. We adopt a probability model for S that assigns probability
1=jSj to each partition. Whichever partition obtains, we use fl mutually noninterfering colors
for the fl groups in the partition. Suppose fu; vg is an edge in G. The probability that u and
v lie in the same part of a member of S, so that fu; vg is an interference edge, is
The expected number E[I ] of interference edges is nd=2 times this amount, i.e.,
so some member of S has a coloring that gives less than or equal to E[I ] edges whose vertices
interfere. This is true for every G 2 G(n; d). Therefore we get the upper bound
For the lower bound, assume initially that (d
Let G consist of U disjoint copies of K d+1 . Then L(n; ff; d) - UL(d
is the minimum number of interfering edges in K d+1 for an [n]. The analysis in
Lemma 4.1 shows that L(d + 1; ff) is attained by an equi-fl-partition of V d+1 with f constant
in each part. Since an equi-fl-partition of V d+1 has!
Q groups of
of P vertices each,
we have
To
form G we begin with U disjoint copies of K d+1 and a disjoint KW . Each vertex in KW needs
incident edges, so we add a total of W (d
the K d+1 in such a way that W (d edges can be removed from within the K d+1 to
end up with degree d for every vertex. Note that W (d
and d would be odd. We ignore possible interference within KW and allow for the possibility
that every edge removed from the K d+1 is an interference edge to get the lower bound
--R
On sublattices of the hexagonal lattice
Graph Theory with Applications
On coloring the nodes of a network
Some theorems on abstract graphs
Graphs with prescribed degrees of vertices (in Hungarian)
The channel assignment problem for mutually adjacent sites
Pair labellings of graphs
theory and application
On representatives of subsets
Graph homomorphisms and the channel assignment problem
Further results on T
An extremal problem in graph theory (in Hungarian)
--TR | graph coloring;interference threshold;regular graph |
288859 | Combinatorial Properties and Constructions of Traceability Schemes and Frameproof Codes. | In this paper, we investigate combinatorial properties and constructions of two recent topics of cryptographic interest, namely frameproof codes for digital fingerprinting and traceability schemes for broadcast encryption. We first give combinatorial descriptions of these two objects in terms of set systems and also discuss the Hamming distance of frameproof codes when viewed as error-correcting codes. From these descriptions, it is seen that existence of a c-traceability scheme implies the existence of a c-frameproof code. We then give several constructions of frameproof codes and traceability schemes by using combinatorial structures such as t-designs, packing designs, error-correcting codes, and perfect hash families. We also investigate embeddings of frameproof codes and traceability schemes, which allow a given scheme to be expanded at a later date to accommodate more users. Finally, we look briefly at bounds which establish necessary conditions for existence of these structures. | Introduction
Traceability schemes for broadcast encryption were defined by Chor, Fiat and Naor [8], and
frameproof codes for digital fingerprinting were proposed by Boneh and Shaw [4]. Although
these two objects were designed for different purposes, they have some similar aspects. One
of the purposes of this paper is to investigate the relations between traceability schemes and
frameproof codes. We first give combinatorial descriptions of these two objects in terms of
set systems, and also discuss the Hamming distance of frameproof codes when viewed as
error-correcting codes. From these descriptions, it is seen that existence of a c-traceability
scheme implies the existence of a c-frameproof code.
In [4, 8], some constructions of frameproof codes and traceability schemes were pro-
vided. We will provide new (explicit) constructions by using combinatorial structures such
as t-designs, packing designs, error-correcting codes and perfect hash families. We also
investigate embeddings of frameproof codes and traceability schemes, which allow a given
scheme to be expanded at a later date to accommodate more users. Finally, we look briefly
at bounds which establish necessary conditions for existence of these structures.
In this rest of this section we review the definitions of c-frameproof codes and c-
traceability schemes which were given in [4] and [8], respectively.
1.1 Frameproof codes
In order to protect a product (such as computer software, for example), a distributor marks
each copy with some codeword and then ships each user his data marked with that codeword
(for some examples of how this might be done in practice, see [5]). This marking allows
the distributor to detect any unauthorized copy and trace it back to the user. Since a
marked object can be traced, the users will be deterred from releasing an unauthorized
copy. However, a coalition of users may detect some of the marks, namely the ones where
their copies differ. They can then change these marks arbitrarily. To prevent a group of
users from "framing" another user, Boneh and Shaw [4] defined the concept of c-frameproof
codes. A c-frameproof code has the property that no coalition of at most c users can frame
a user not in the coalition.
Let v and b be positive integers (b denotes the number of users in the scheme). A set
is called a (v; b)-code and each w (i) is called a codeword.
So a codeword is a binary v-tuple. We can use a b \Theta v matrix M to depict a (v; b)-code, in
which each row of M is a codeword in \Gamma.
\Gamma be a (v; b)-code. Suppose \Gamma. For
we say that bit position i is undetectable for C if
Let U(C) be the set of undetectable bit positions for C. Then
is called the feasible set of C. (If then we define F .) The feasible
set F (C) represents the set of all possible v-tuples that could be produced by the coalition
C by comparing the d codewords they jointly hold. If there is a codeword w (j) 2 F (C)nC,
then user j could be "framed" if the coalition C produces the v-tuple w (j) . The following
definition from [4] is motivated by the desire for this situation not to occur.
Definition 1.1 A (v; b)-code \Gamma is called a c-frameproof code if, for every W ' \Gamma such that
We will say that \Gamma is a c-FPC(v; b) for short.
Thus, in a c-frameproof code, the only codewords in the feasible set of a coalition of at
most c users are the codewords of the members of the coalition. Hence, no coalition of at
most c users can frame a user who is not in the coalition.
Example 1.1 ([4]) For any integer b, there exists a b-FPC(b; b). The matrix depicting the
code is a b \Theta b identity matrix.
Example 1.2 There exists a 2-FPC(3; 4). The matrix depicting the code is as
1.2 Traceability schemes
In many situations, such as a pay-per-view television broadcast, the data is only available
to authorized users. To prevent an unauthorized user from accessing the data, the data
supplier will encrypt the data and give the authorized users keys to decrypt it. Some
unauthorized users (pirate users) might obtain some decryption keys from a group of one
or more authorized users (called traitors). Then the pirate users can decrypt data that
they are not entitled to. To prevent this, Chor, Fiat and Naor [8] devised a traitor tracing
scheme, called a traceability scheme, which will reveal at least one traitor on the confiscation
of a pirate decoder.
Suppose there are a total of b users. The data supplier generates a base set T of v keys
and assigns k keys to each user. These k keys comprise a user's personal key, and we will
denote the personal key for user U by P (U ). A message consists of an enabling block and a
cipher block. A cipher block is the encryption of the actual plaintext data using some secret
S. The enabling block consists of data, which is encrypted using some or all of the v
keys in the base set, the decryption of which will allow the recovery of S. Every authorized
user should be able to recover S using his or her personal key, and then decrypt the cipher
block using S to obtain the plaintext data.
Some traitors may conspire and give an unauthorized user a "pirate decoder", F . The
pirate decoder F will consist of k base keys, chosen from T , such that F ' [U2C P (U ),
where C is the coalition of traitors. An unauthorized user may be able to decrypt S using a
pirate decoder F . The goal of the data supplier is to assign keys to the users in such a way
that when a pirate decoder is captured and the keys it possesses are examined, it should be
possible to detect at least one traitor in the coalition C, provided that jCj - c (where c is a
predetermined threshold).
Traitor detection would be done by computing jF "P (U)j for all users U . If jF "P (U)j -
users V 6= U , then U is defined to be an exposed user.
Definition 1.2 Suppose any exposed user U is a member of the coalition C whenever a
pirate decoder F is produced by C and jCj - c. Then the scheme is called a c-traceability
scheme and it is denoted by c-TS(k; b; v).
Let us now briefly discuss the difference between our scheme and that of [8]. In [8],
nk for some integer n, and the set T of base keys is partitioned into k subsets S i , each
of size n. We will denote S Each personal key P (U) is
a transversal of (S contains exactly one key from each S i ). Suppose the
secret key S is chosen from an abelian group G. To encrypt S, the data supplier splits
G such that
every share r i with each of the n keys in S i by computing t . The nk values t i;j
comprise the enabling block. Each authorized user has one key from S i , so he or she can
decrypt every r i , and thus compute S.
In our definition, we do not require that each personal key be a transversal. A personal
key can be made up of any selection of k base keys from the set T . The data supplier can
use a k out of v threshold scheme (such as the Shamir scheme [13], for example) to construct
v shares of the key S, and then encrypt each share r i with the key s i , for every s
Note that our definition is a generalization of the one given in [8]. However, the generalization
has to do with the way that the enabling block is formed, and not with the
traceability property of the scheme. Our definition of the traceability property is the same
as in [8].
Example 1.3 We present a 2-TS(5; 21; 21). The set of base keys is Z 21 . The personal key
for user i (0 - i - 20) is
where all arithmetic is done in Z 21 . (This is an application of a construction we will present
in Theorem 3.5.) It can be shown that any two base keys occur together in exactly one
personal key. Now, consider what happens when two traitors U and V construct a pirate
decoder, F . The pirate decoder F must contain at least three personal keys from P (U) or
However, for any other user W 6= U; V , 2. Hence either U or V will
be the exposed user if the pirate decoder F is examined.
1.3 Previous results
In the construction of frameproof codes and traceability schemes, the main goal is to accommodate
as many users as possible. In other words, we want to find constructions with b as
large as possible, given values for the parameters c and v (and k, in the case of traceability
schemes). In general, we would prefer explicit constructions for these objects as opposed to
non-constructive existence results.
For example, Boneh and Shaw [4] proved the following interesting result.
Theorem 1.1 For any integers c; v ? 0, there exists a c-FPC
However, as noted in [4], the proof is not constructive. Hence, they also provide an explicit
construction for a c-FPC
Similarly, Chor, Fiat and Naor [8] gave an interesting non-constructive existence result
for traceability schemes, as follows.
Theorem 1.2 For any integers c; v ? 0, there exists a c-TS(v=(2c 2
We will provide several explicit constructions for frameproof codes and traceability
schemes later in this paper. Although our constructions may not be as good asymptotically
as those in [4] and [8], they will often be better for relatively small values of c and
v. (For example, in order to obtain b - 2 in Theorem 1.1, it is necessary to take v - 16c 2 ,
so the construction is not useful for small values of v.) As well, our constructions are very
simple and could be implemented very easily and efficiently.
Combinatorial descriptions
In this section, we give combinatorial descriptions of c-frameproof codes and c-traceability
schemes. From these descriptions, it is fairly easy to see that the existence of a c-TS(k; b; v)
implies the existence of a c-FPC(v; b).
We will use the terminology of set systems. A set system is a pair (X; B) where X is a
set of elements called points, and B is a set of subsets of X, the members of which are called
blocks. A set system can be described by an incidence matrix. Let (X; B) be a set system
g. The incidence matrix of (X; B) is
the b \Theta v matrix
ae
Conversely, given an incidence matrix, we can define an associated set system in an obvious
way.
2.1 Description of c-frameproof codes
Since a c-FPC(v; b) is a b \Theta v (0; 1)-matrix, we can view a frameproof code as an incidence
matrix or as a set system, as defined above. We have the following characterization of
frameproof codes as set systems.
Theorem 2.1 There exists a c-FPC(v; b) if and only if there exists a set system (X; B)
such that and for any subset of d - c blocks there does
not exist a block B 2 BnfB such that
d
d
Proof. Suppose are d codewords in a c-FPC(v; b) (d - c). Without loss
of generality, assume that in these codewords the first s bit positions are 0, the next t bit
positions are 1, and in every other bit position at least one of the d codewords has the value
0 and at least one has the value 1. (Hence, the undetectable bit positions are the first s
bit positions.) Then, it is not hard to see that the frameproof property is equivalent to
saying that any other codeword w has at least one 1 in the first s bit positions, or at least
one 0 in the next t bit positions. In other words, there does not exist a codeword with 0's
in the first s bit positions and 1's in the next t bit positions.
are the blocks in the set system corresponding to the d codewords
d
and
d
Hence the frameproof condition is equivalent to saying that there does not exist a block B
such that "B
2.2 Description of c-traceability schemes
Since a c-TS(k; b; v) consists of b k-subsets of a v-set, we can think of it as a set system,
where X is the set of base keys and B is the set of personal keys.
Theorem 2.2 There exists a c-TS(k; b; v) if and only if there exists a set system (X; B)
such that B, with the property that for every
choice of d - c blocks for any k-subset F ' [ d
there does not
exist a block B 2 BnfB such that
Proof. Suppose (X; B) is a c-TS(k; b; v). For every set of d - c personal
B, for any k-subset F ' [ d
(i.e., a pirate decoder) and for any other personal key
B, there exists a d) such that there is no block
d. The converse is also
straightforward.
2.3 Relationship of traceability schemes and frameproof codes
We prove the following theorem relating traceability schemes and frameproof codes.
Theorem 2.3 If there exists a c-TS(k; b; v), then there exists a c-FPC(v; b).
Proof. Let (X; B) be the set system corresponding to a c-TS(k; b; v). We prove that (X; B)
is a c-FPC(v; b). Suppose not; then there exist d - c blocks, B, and a block
such that
But this contradicts Theorem 2.2 (letting
2.4 Hamming distance of 2-frameproof codes
Now we investigate some properties of the Hamming distance of c-frameproof codes. For
any (v; b)-code, let d(x; y) denote the Hamming distance of two codewords x; y.
Denote
and
Theorem 2.4 A (v; b)-code \Gamma is 2-frameproof if and only if
for all i
Proof. Let w (i) ; w (j) and w (h) be any three distinct codewords. Without loss of generality,
assume that U(fw (i) ; w (j) so the first r bits of w (i) and w (j) are the same.
We have that d(w (i) ; w (j) Hamming distance is a metric, we have that
it will be the case that
if and only if there is at least one bit position within the first r bit positions such that w (h)
is different from w (i) and w (j) . But this is just the condition that the code is 2-frameproof
(as stated in the proof of Theorem 2.1).
The following result is an immediate corollary of the previous lemma.
Corollary 2.5 A (v; b)-code \Gamma is 2-frameproof if d
We give an example to illustrate the application of this corollary. In [6], a simple
explicit construction is given for a (q; (q
prime power q. Hence, for q ? 81, we see that d
In fact, we have verified by computer that d for the codes produced by this
construction for all odd prime powers q such that 31 - q - 79. Applying Corollary 2.5, we
obtain the following result.
Theorem 2.6 For any odd prime power q - 31, there exists a 2-FPC(q; (q
3 Constructions from combinatorial structures
In this section, we will give some constructions of frameproof codes and traceability schemes
from certain combinatorial designs, including t-designs, packing designs and orthogonal
arrays. All the results on design theory that we require can be found in standard references
such as the CRC Handbook of Combinatorial Designs [9].
3.1 Constructions using t-designs
First we give the definition of a t-design.
Definition 3.1 A t-(v; k; -) design is a set system (X; B), where
B, and every t-subset of X occurs in exactly - blocks in B.
Note that, by simple counting, the number of the blocks in a t-(v; k; 1) design is
. We will use t-(v; k; 1) designs to construct frameproof codes and traceability
schemes, as described in the following theorems.
Theorem 3.1 If there exists a t-(v; k; 1) design, then there exists a c-FPC(v;
proof. Denote distinct blocks, and let
g. If
there exists a d, such that
t. Since we have a t-design with
Hence, for any B 2 BnfB g, we have that B 6' [ d
. The t-design is a set
system satisfying the conditions of Theorem 2.1, so the conclusion follows.
Similarly, we can construct traceability schemes from t-(v; k; 1) designs; the value of c
obtained is smaller, however.
Theorem 3.2 If there exists a t-(v; k; 1) design, then there exists a c-TS(k;
proof. Suppose there exists a t-(v; k; 1) design (X; B). Let d be d - c distinct
blocks. g. If F ' [ d
there exists a B i ,
d, such that
c
r
On the other hand, since we have
Hence, it follows that This shows that the t-design is a set system
satisfying the conditions of Theorem 2.2, and the conclusion follows.
There are many known results on existence and construction of t-(v; k; 1) designs for
3. On the other hand, no is known to exist for
t - 6. However, known infinite classes of 2- and 3-designs provide some nice infinite classes
of frameproof codes and traceability schemes. We illustrate with a few samples of typical
results that can be obtained.
First, for 3 - k - 5, a 2-(v; k; 1) design exists if and only if v j 1 or k mod
[9, Chapter I.2]. Hence, we obtain the following.
Theorem 3.3 There exist frameproof codes as follows:
1. There exists a 2-FPC(v; v(v \Gamma 1)=6) for all v j
2. There exists a 3-FPC(v;
3. There exists a 4-FPC(v;
Similarly, we have the following theorem about the existence of 2-traceability schemes
(note that to get c - 2 when Theorem 3.2, we need k - 5).
Theorem 3.4 There exists a 2-TS(5; v(v \Gamma 1)=20; v), for all v j
A design is known as a projective plane of order q; such a design
exists whenever q is a prime power (see [9, Chapter VI.7]). In a projective plane we have
so the frameproof codes obtained from it are not interesting (in view of Example 1.1,
which does better). However, the traceability schemes will be of interest.
Theorem 3.5 There exists a b p qc-TS(q 1), for all prime powers q.
Example 1.3 is in fact obtained from the case Theorem 3.5.
We give another class of examples derived from 3-(q 2 +1; q +1; 1) designs (these designs
are called inversive planes and exist if q is a prime power; see [9, Chapter VI.7]).
Theorem 3.6 For any prime power q, there exists a
3.2 Constructions using packing designs
Another type of combinatorial design which can be used to construct frameproof codes and
traceability schemes are packing designs. We give the definition as follows.
Definition 3.2 A t-(v; k; -) packing design is a set system (X; B), where
for every B 2 B, and every t-subset of X occurs in at most - blocks in B.
Using the same argument as in the proof of Theorem 3.1, we have the following construction
for frameproof codes.
Theorem 3.7 If there exists a t-(v; k; 1) packing design having b blocks, then there exists
a c-FPC(v; b), where
Similarly, we have the following construction for traceability schemes, using the same
argument as in the proof of Theorem 3.2.
Theorem 3.8 If there exists a t-(v; k; 1) packing design having b blocks, then there exists
a c-TS(k; b; v), where
We mentioned previously that no t-(v; k; 1) designs are known to exist if v
However, for any t, there are infinite classes of packing designs with a "large" number of
blocks (i.e., close to
). These can be obtained from designs known as orthogonal
arrays, which are defined as follows.
Definition 3.3 An orthogonal array OA(t; k; s) is a k \Theta s t array, with entries from a set
of s - 2 symbols, such that in any t rows, every t \Theta 1 column vector appears exactly once.
It is easy to obtain a packing from an orthogonal array, as shown in the next lemma.
Lemma 3.9 If there is an OA(t; k; s), then there is a t-(ks; k; 1) packing design that contains
blocks.
proof. Suppose that there is a OA(t; k; s) with entries from the set f0;
1g. For every column (y in the
orthogonal array, define a block consist of the
blocks thus constructed. It is easy to check that (X; B) is a t-(ks; k; 1) packing design.
The following lemma ([9, Chapter VI.7]) provides infinite classes of orthogonal arrays,
for any integer t.
Lemma 3.10 If q is a prime power and t ! q, then there exists an OA(t; q
hence a t-
packing design with q t blocks exists.
From Theorem 3.7 and Lemma 3.10, we obtain the following.
Theorem 3.11 For any prime power q and any integer t ! q, there exists a
In this construction, b - 2
2c (for frameproof codes) and b - 2
traceability
schemes). Also, the resulting traceability schemes are of the "transversal type" considered
in [8].
3.3 Constructions using perfect hash families
In this section, we present another method to construct frameproof codes, which uses a
perfect hash family.
Definition 3.4 An (n; m;w)-perfect hash family is a set of functions F such that f :
ng ! f1; ng such that
there exists at least one f 2 F such that f j X is one-to-one.
When , an (n; m;w)-perfect hash family will be denoted by PHF(N ; n; m;w).
Observe that a PHF(N ; n; m;w) can be depicted as an N \Theta n matrix with entries from
having the property that in any w columns there exists at least one row such
that the w entries in the given w columns are distinct. Results on perfect hash families
can be found in numerous textbooks and papers. Mehlhorn [12] is a good textbook source;
more recent constructions can be found in the papers [2] and [3].
The following theorem tells us how to use a perfect hash family to enlarge a frameproof
code.
Theorem 3.12 If there exists a PHF(N ; n; m; c + 1) and a c-FPC(v; m), then there exists
a c-FPC(Nv; n).
proof. be a c-FPC(v; m), and let F be a PHF(N ; n; m; c+1).
be the (Nv; n)-code consisting of the n codewords
means concatenation of strings. We will show that \Gamma 0 is a c-
FPC(Nv;n).
g. Recall that U(W ) is the set of undetectable
bit positions of W . Assume that there exists a codeword u (i c+1
F is a PHF(N ; n; m; c + 1), there exists
an h 2 F such that hj C is one-to-one, where g. Thus we have c
different codewords w 1, such that w (h(i c+1 )) is in the feasible set of
cg. This contradicts the fact that \Gamma is c-frameproof.
In [4], the following construction of c-frameproof codes from error-correcting codes is
given.
Theorem 3.13 If there exists a c-FPC(v; q) and an (N; n) q-ary code with minimum Hamming
distance d min ? there exists a c-FPC(vN; n).
Alon [1] gave a construction of perfect hash families from error-correcting codes. We
observe that if we use a perfect hash family constructed by Alon's method to obtain a c-
frameproof code by applying Theorem 3.12, then the resulting code is essentially the same as
the one constructed using Theorem 3.13. However, it is possible to use other constructions
for perfect hash families to obtain new examples of frameproof codes. We provide one
illustration now, which uses the following recursive construction from [3].
Lemma 3.14 Suppose there exists a PHF(N
1. Then
there exists a PHF
for any integer j - 1.
Example 3.1 There exists a PHF(2; 5; 4; 3) as follows:
Theorem 3.15 For any integer j - 1, there exists a 2-FPC(6 \Theta 4
proof. From Lemma 3.14 and Example 3.1, we obtain a PHF(2 \Theta 4
Combine this perfect hash family with the 2-FPC(3; 4) given in Example 1.2, and apply
Theorem 3.12.
4 Embeddings
In many cases the number of users of a scheme will increase after the system is set up.
Initially, the data supplier will constuct a scheme that will accommodate a fixed number of
users (which we denoted by b). If the number of users eventually surpasses b, we would like
a simple method of extending the scheme which is "compatible" with the existing scheme.
In the case of a traceability scheme, we do not want to change the personal keys already
issued when the scheme is expanded. In the case of a frameproof code, we do not want to
have to recall software that has already been sold.
To solve this problem, we will introduce the concept of embedding frameproof codes and
traceability schemes in larger ones.
Definition 4.1 Let \Gamma be a c-FPC(v; b) and let \Gamma 0 be a c-FPC(v
Suppose that, for every codeword w 2 \Gamma, there exists a codeword w such that the first
bit positions of w 0 are the same as w, and the remaining v positions of w 0 are all
0's. Then we say that \Gamma is embedded into
Initially, the distributor could use the code \Gamma to mark the products. When the number
of users surpasses b, then codewords in \Gamma 0 n\Gamma are used. Note that the embedding property
ensures that the codewords in \Gamma do not have to be changed when we proceed to the larger
code.
A similar definition can be given for traceability schemes.
Definition 4.2 Let T be the set of v base keys of a c-TS(k; b; v), and let T 0 be the set of v 0
base keys of a c-TS(k; b Suppose that every personal
key of the c-TS(k; b; v) is also a personal key of the c-TS(k; b we say that the
first scheme is embedded into the second scheme.
Note that the definition of embedding is even simpler if we consider the set system
formulation of frameproof codes and traceability schemes. Namely, we say that (X; B) is
embedded into
Since t-designs and packing designs are set systems, the above definition of embedding
applies. In fact, embeddings of combinatorial designs have been extensively studied, so we
have a convenient method of constructing embeddible frameproof codes and traceability
schemes.
For example, in the case of 2-designs, we have the following result.
Theorem 4.1 If there exists a 2-(v; k; 1) design that can be embedded into a 2-(v
design, then there exists a that can be embedded into a
\Xip
that can
be embedded into a
\Xip
We give a couple of illustrations of this idea. For necessary and sufficient
conditions for embedding 2-(v; k; 1) designs into 2-(v designs are known, namely v j
result is known as the "Doyen-Wilson Theorem" [9, Chapter I.4]; for
III.1].) This provides a convenient way of embedding 2- and 3-frameproof codes into larger
ones by application of Theorem 4.1. The following theorems are obtained.
Theorem 4.2 For all v j 6 such that v 0 - 2v+1, there exists
a 2-FPC(v; (v that can be embedded into a 2-FPC(v
Theorem 4.3 For all v j
exists a 3-FPC(v; (v that can be embedded into a 2-FPC(v
Here is a small example to illustrate.
Example 4.1 Given an embedding of a 2-(7; 3; 1) design into a 2-(15; 3; 1) design, a 2-
FPC(7; 7) can be embedded into a 2-FPC(15; 35). The 35 codewords of the 2-FPC(15; 35)
are given in Figure 1 (the first seven codewords, when restricted to the first seven bit
positions, form the embedded 2-FPC(7; 7)).
@
1101000 00000000
1000000 00110000
0100000 00011000
1000000 00001010
0100000 10000100
0001000 10100000
1000000 01000100
0010000 10010000
0001000 01001000
1000000 10000001
A
Figure
1: A 2-FPC(7; 7) embedded into a 2-FPC(15; 35).
It is also well-known that for any prime power q and for any integers i - j, there exists
a which can be embedded into a 2-(q j ; q; 1) design (in other words, the
affine geometry AG(i; q) is a subgeometry of AG(j; q); see [9, Chapter VI.7]). The following
result is obtained.
Theorem 4.4 Let q be a prime power, and let i and j be positive integers such that i - j.
Then there exists a which can be embedded into a
can be embedded into a
5 Bounds
In this section, we investigate necessary conditions for existence for frameproof codes and
traceability schemes. These take the form of upper bounds on b, as a function of c and v
(and k, in the case of traceability schemes).
First we will give a bound for frameproof codes. be a c-
FPC(v; b). Recall that U(C) denotes the set of undetectable bit positions for a subset
denotes the feasible set of C. For 1 - d - c, let
We begin by stating and proving a simple lemma.
Lemma 5.1 Suppose is a c-FPC(v; b), and suppose t are as
defined above. Then
proof. Suppose t be such that
g. Clearly U(C) ' U(C 0 ); however, since
which contradicts
Definition 1.1.
The next result provides an upper bound on b which depends on t c\Gamma1 .
Theorem 5.2 Suppose is a c-FPC(v; b), and suppose t are as
defined above. Then
l t c\Gamma1m
proof. Let W ' \Gamma be chosen such that jW
any codeword w (i) 2 \GammanW , let R It is easy to see that R i ' R for all
which contradicts the fact that \Gamma is c-frameproof). In other words,
the subsets R i constructed above form a Sperner family with respect to the ground set R.
By Sperner's Theorem (see, for example [11, Theorem 6.3]), we see that
dt
1, the result follows.
The following bound on b is an immediate corollary.
Corollary 5.3 If \Gamma is a c-FPC(v; b), then
proof. From Lemma 5.1, we have that t 2. The conclusion follows from
Theorem 5.2.
Recall that Example 1.1 gave a construction for c-FPC(c; c), and we constructed a
FPC(3; in Example 1.2. In both of these examples, the bound of Corollary 5.3 is met
with equality.
Now we turn our attention to traceability schemes, where we provide an upper bound
on b. In [8], it was shown that b - v k=c if a c-TC(k; v; b) exists. We give a stronger bound,
which is also based on the following observation made in [8].
Lemma 5.4 Suppose (X; B) is a c-TC(k; v; b). Then, for any subset of d - c blocks
there does not exist a block B 2 BnfB such that B '
proof. The proof is essentially the same as the proof of Theorem 2.3.
For obvious reasons, the collection of subsets B is called c-cover-free. Now, applying [10,
Proposition 2.1], which gives an upper bound on the cardinality of a c-cover-free collection
of sets, the following result is immediate.
Theorem 5.5 If a c-TS(k; b; v) exists, then the following bound holds:
c e.
6 Comments
Further results on frameproof codes can be found in the PhD thesis of Yeow Meng Chee [7,
Chapter 9]. Chee gives a probabilistic construction for 2-frameproof codes that improves
upon results in [4], and provides efficient explicit constructions for frameproof codes using
the idea of superimposed codes.
Acknowledgements
We thank the referee for several helpful comments. The authors' research is supported by
NSF grant CCR-9402141.
--R
Explicit construction of exponential sized family of k-independent sets
Some recursive constructions for perfect hash families
Electronic marking and identification techniques to discourage document copying.
On a class of constant weight codes
CRC Handbook of Combinatorial Designs
A Course in Combinatorics
Data Structures and Algorithms
How to share a secret
--TR
--CTR
Sylvia Encheva , Grard Cohen, Frameproof codes against limited coalitions of pirates, Theoretical Computer Science, v.273 n.1-2, p.295-304, February 2002
Omer Berkman , Michal Parnas , Jii Sgall, Efficient dynamic traitor tracing, Proceedings of the eleventh annual ACM-SIAM symposium on Discrete algorithms, p.586-595, January 09-11, 2000, San Francisco, California, United States
Yan Zhu , Wei Zou , Xinshan Zhu, Collusion secure convolutional fingerprinting information codes, Proceedings of the 2006 ACM Symposium on Information, computer and communications security, March 21-24, 2006, Taipei, Taiwan
J. Lfvenberg, Binary Fingerprinting Codes, Designs, Codes and Cryptography, v.36 n.1, p.69-81, July 2005
Maura Paterson, Sliding-window dynamic frameproof codes, Designs, Codes and Cryptography, v.42 n.2, p.195-212, February 2007
Alexander Barg , Gregory Kabatiansky, A class of I.P.P. codes with efficient identification, Journal of Complexity, v.20 n.2-3, p.137-147, April/June 2004
Tran Van Trung , Sosina Martirosyan, On a Class of Traceability Codes, Designs, Codes and Cryptography, v.31 n.2, p.125-132, February 2004
G. Cohnen , S. Encheva , S. Litsyn , H. G. Schaathun, Intersecting codes and separating codes, Discrete Applied Mathematics, v.128 n.1, p.75-83, 15 May
Wen-Guey Tzeng , Zhi-Jia Tzeng, A Public-Key Traitor Tracing Scheme with Revocation Using Dynamic Shares, Designs, Codes and Cryptography, v.35 n.1, p.47-61, April 2005
Satoshi Obana , Kaoru Kurosawa, Bounds and Combinatorial Structure of
Yevgeniy Dodis , Nelly Fazio , Aggelos Kiayias , Moti Yung, Scalable public-key tracing and revoking, Proceedings of the twenty-second annual symposium on Principles of distributed computing, p.190-199, July 13-16, 2003, Boston, Massachusetts
Tran Van Trung , Sosina Martirosyan, New Constructions for IPP Codes, Designs, Codes and Cryptography, v.35 n.2, p.227-239, May 2005
Yevgeniy Dodis , Nelly Fazio , Aggelos Kiayias , Moti Yung, Scalable public-key tracing and revoking, Distributed Computing, v.17 n.4, p.323-347, May 2005
X. Ma , R. Wei, On a Bound of Cover-Free Families, Designs, Codes and Cryptography, v.32 n.1-3, p.303-321, May-July 2004
Dan Boneh , Brent Waters, A fully collusion resistant broadcast, trace, and revoke system, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
Dan Boneh , Brent Waters, A fully collusion resistant broadcast, trace, and revoke system, Proceedings of the 13th ACM conference on Computer and communications security, October 30-November 03, 2006, Alexandria, Virginia, USA
Carlo Blundo , Paolo Darco , Alfredo De Santis , Massimiliano Listo, Design of Self-Healing Key Distribution Schemes, Designs, Codes and Cryptography, v.32 n.1-3, p.15-44, May-July 2004
Stelvio Cimato , Antonella Cresti , Paolo D'Arco, A unified model for unconditionally secure key distribution, Journal of Computer Security, v.14 n.1, p.45-64, January 2006
Charles J. Colbourn, Multiple access communications using combinatorial designs, Theoretical aspects of computer science: advanced lectures, Springer-Verlag New York, Inc., New York, NY, 2002 | traceability scheme;hash family;frameproof code;t-design |
288861 | A Randomness-Rounds Tradeoff in Private Computation. | We study the role of randomness in multiparty private computations. In particular, we give several results that prove the existence of a randomness-rounds tradeoff in multiparty private computation of $\fxor$. We show that with a single random bit, $\Theta(n)$ rounds are necessary and sufficient to privately compute $\fxor$ of n input bits. With $d\ge 2$ random bits, $\Omega(\log n/ d)$ rounds are necessary, and $O(\log n/ \log d)$ are sufficient. More generally, we show that the private computation of a boolean function f, using $d\ge 2 $ random bits, requires $\Omega(\log S(f)/ d)$ rounds, where S(f) is the sensitivity of f. Using a single random bit, $\Omega(S(f))$ rounds are necessary. | Introduction
A 1-private (or simply, private) protocol A for computing a function f is a protocol that allows
possessing an individual secret input, x i , to compute the value
of f(~x) in a way that no single player learns more about the initial inputs of other players
than what is revealed by the value of f(~x) and its own input 1 . The players are assumed to
be honest but curious. Namely, they all follow the prescribed protocol A but they could try
to get additional information by considering the messages they receive during the execution of
the protocol. Private computations in this setting were the subject of a considerable amount of
An early version of this paper appeared in Advances in Cryptology, Proceedings of Crypto '94, Y. Desmedt,
ed., Springer-Verlag, Lecture Notes in Computer Science, Vol. 839, pp. 397-410, 1994.
y Dept. of Computer Science, Technion, Haifa, Israel. e-mail: [email protected];
http://www.cs.technion.ac.il/-eyalk; Work on this research was supported by the E. and J. Bishop Research
Fund, and by the Fund for the Promotion of Research at the Technion. Part of this research was performed
while the author was at Aiken Computation Laboratory, Harvard University, supported by research contracts
ONR-N0001491-J-1981 and NSF-CCR-90-07677.
z Dept. of Computer Science, Tel-Aviv University, Tel-Aviv, Israel. e-mail: [email protected]
1 In the literature a more general definition of t-privacy is given. The above definition is the case
work, e.g., [BGW88, CCD88, BB89, CK89, K89, B89, FY92, CK92, CGK90, CGK92, KMO94].
One crucial ingredient in private protocols is the use of randomness. Quantifying the amount of
randomness needed for computing functions privately is the subject of the present work.
Randomness as a resource was extensively studied in the last decade. Methods for saving
random bits range over pseudo-random generators [BM84, Y82, N90], techniques for re-cycling
random bits [IZ89, CW89], sources of weak randomness [CG88, VV85, Z91], and constructions
of different kinds of small probability spaces [NN90, AGHP90, S92, KM93, KM94, KK94] (which
sometimes even allow to eliminate the use of randomness). A different direction of research is a
quantitative study of the role of randomness in specific contexts, e.g., [RS89, KPU88, BGG90,
CG90, BGS94, BSV94]. In this work, we initiate a quantitative study of randomness in private
computations. We mainly concentrate on the specific task of computing the xor of n input bits.
However, most of our results extend to any boolean function. The task of computing xor was the
subject of previous research due to its being a basic linear operation and its relative simplicity
[FY92, CK92].
It is known as a "folklore theorem" (and is not difficult to show) that private computation of
xor cannot be carried out deterministically (for n - 3). On the other hand, with a single random
bit such a computation becomes possible: At the first round player P n chooses a random bit r
and sends to P 1 the bit x n \Phi r. Then, in round i xors its bit x i\Gamma1 with
the message it received in the previous round, and sends the result to P i . Finally, P n xors the
message it received with the random bit r. Both the correctness and privacy of this protocol are
easy to verify. The main drawback of this protocol is that it takes n rounds. Another protocol
for this task computes xor in two rounds but requires a linear number of random bits: In the
first round each player P i chooses a random bit r i . Then, player P i sends x i \Phi r i to P 1 and r i to
In the second round P 2 xors all the (random) bits it received in the first round and sends
the result to P 1 which xors all the messages it received during the protocol to get the value of
the function. Again, both the correctness and privacy of this protocol are not hard to verify.
In this work we prove that there is a tradeoff between the amount of randomness and the
number of rounds in private computations of xor. For example, we show that while with a single
random bit \Theta(n) rounds are necessary and sufficient 2 , with two random bits O(log n) rounds
suffice. 3 Namely, with a single additional random bit, the number of rounds is significantly
reduced. Additional bits give a much more "modest" saving. More precisely, we prove that with
bits O(log n= log d) rounds suffice
and\Omega\Gammad/3 n=d) rounds are required. Our upper
bound is achieved using a new method that enables us to use linear combinations of random bits
again and again (while preserving the privacy). The lower bounds are proved using combinatorial
arguments, and they are strong in the sense that they also apply to protocols that are allowed to
precisely, dn=2e rounds. This upper bound is achieved by a slight modification of the first protocol
above. Assume, for simplicity, that n is even. At the first round, player Pn sends xn \Phi r to player P 1 , and at
the same time sends r to player Pn\Gamma1 . The players then continue as in the above protocol, forwarding messages
in parallel until the two messages meet. More precisely, in round xors the message it
received with its own input and sends it to player P i and player P n\Gamma(i\Gamma1) xors the message it received with its
own input and sends it to player Pn\Gammai . In round n, player P nreceives two messages and can compute the value
of the function by xoring the two messages together with its own input.
3 All logarithms are base 2, unless otherwise indicated.
make errors, and that they actually show a lower bound on the expected number of rounds. We
also show that if protocols are restricted to certain natural types (that include, in particular, the
protocol that achieves the upper bound) we can even improve the lower bound and show that
\Theta(log n= log d) rounds are necessary and sufficient.
Our lower bound techniques apply not only to the xor function, but in fact give lower bounds
on the number of rounds for any boolean function in terms of the sensitivity of the function.
Namely, we prove that with d - 2 random bits \Omega\Gammats/ S(f)=d) rounds are necessary to privately
compute a boolean function f , whose sensitivity is S(f ). With a single random bit
rounds are necessary.
The question whether private computations in general can be carried out in constant number
of rounds was previously addressed [BB89, BFKR90]. In light of our results, a promising approach
to investigate this question may be by proving that if a constant number of rounds is sufficient
then a large number of random bits is required.
Subsequent to our work, several other works were done regarding the amount of randomness
in privacy. In particular, the amount of randomness required for computing the function xor
t-privately, for t - 2, was studied in [BDPV95, KM96]; in [KOR96] it is shown that the boolean
functions that can be computed privately with a constant number of random bits are exactly the
functions that have linear size boolean circuits.
The rest of the paper is organized as follows: In Section 2 we give some definitions. In Section
3 we give an upper bound on the number of rounds required to privately compute xor. In section
4 we give lower bounds on the number of rounds to privately compute a boolean function, in
terms of its sensitivity. We conclude in Section 5 with lower bounds on the expected number
of rounds in terms of the average sensitivity of the function being computed. The appendix
contains the improved lower bounds for restricted types of protocols.
Preliminaries
We give here a description of the protocols we consider, and define the privacy property of
protocols. More rigorous definitions of the protocols are given in Section 4.1.
1g be any boolean function. A set of n players P i (1 - i - n), each
possessing a single input bit x i (known only to it), collaborate in a protocol to compute the value
of f(~x). The protocol operates in rounds. In each round each player may toss some coins, and
then sends messages to the other players (messages are sent over private channels so that other
than the intended receiver no other player can listen to them). It then receives the messages
sent to it by the other players. In addition, each player at a certain round chooses to output the
value of the function. We assume that each player knows its serial number and the total number
of players n. We may also assume that each player P i is provided with a read-only random tape
R i from which it reads random bits (rather than toss coins).
Each player P i receives during the execution of the protocol a sequence of messages. Let c i
be a random variable of the communication string seen by player P i , and let C i be a particular
communication string seen by P i . Informally, privacy with respect to player P i means that player
anything (in particular, the inputs of the other players) from C i , except what is
implied by its input bit, and the value of the function computed. Formally,
Definition 1: (Privacy) A protocol A for computing a function f is private with respect to
player P i if for any two input vectors ~x and ~y, such that
sequence of messages C i , and for any random tape R i provided to P i ,
where the probability is over the random tapes of all other players.
3 Upper Bound
This section presents a protocol which allows n players to use d - 2 random bits for computing
xor privately. This protocol takes O(log d n) rounds. (For the case similar protocol that
uses dn=2e rounds was already described in the introduction.) All arithmetic operations in this
section are done modulo 2.
Consider the following protocol (which we call the basic protocol): First organize the n players
in a tree. The degree of the root of the tree is d + 1, and the degree of any other internal node is
d (assume for simplicity that n is such that this forms a complete tree). The computation starts
from the leaves and goes towards the root by sending messages (each of them of a single bit) as
follows: Each leaf player P i sends its input bit x i to its parent in the tree. Each internal node,
after receiving messages from its d children, sums them up (modulo 2) together with its input
bit x i and sends the result to its parent. Finally, the root player sums up the d
receives together with its input bit and the result is the output of the protocol.
While a simple induction shows the correctness of this protocol, and it clearly runs in O(log d n)
rounds, it is obvious that it does not maintain the required privacy. The second idea will be to
"mask" each of the messages sent in the basic protocol by an appropriate random bit (constructed
using the d random bits available), in a way that these masks will disappear at the end, and
we will be left with the (un-masked) output. To do so we assign the nodes of the above tree
vectors in GF [2 d ] as follows (the meaning of those vectors will become clear soon): Assign to
the root the vector 0). The children of the root will be assigned d vectors
such that the vectors in any d-size subset of them are linearly independent and the sum of all
the vectors is (for example, the d unit vectors together with the
satisfy these requirements). Finally, in a recursive way, given an internal node which is assigned
a vector v, we assign to its d children d linearly independent vectors whose sum is v (note that
in particular none of these vectors is the ~ 0 vector) 4 .
We now show how to use the vectors we assigned to the nodes, so as to get a private protocol.
We will assume that the random bits b are chosen by some external processor. We will
4 For example, such a collection of d vectors can be constructed as follows: Since v 6= ~ 0 there exists an index i
such that v 1. The first d \Gamma 1 vectors will be the . The last vector
will be
Obviously the sum of these d vectors is v and they are linearly independent.
later see that this assumption can be eliminated easily. Let v be the vector assigned to some player
which is a leaf in the tree. We will give this player a single bit r
the vector consisting of the d random bits, and the product is an inner product (modulo 2). The
players will use the basic protocol, described above, with the modification that a player in a leaf
also xors its message with the bit r v it received (the other players behave exactly as before). We
claim that for every player P i , if in the basic protocol it sends the message m when the input
vector is ~x, then in the modified protocol it sends the message m+ (v i \Delta b), where v i is the vector
assigned to this player. The proof goes by induction: It is trivially true for the leaf players. For
internal nodes the message is calculated by adding the input of the players to the sum of the
incoming messages. Using, the induction hypothesis this sum is
the message received from the k'th child in the basic protocol, and v k is the vector assigned to the
k'th child. Since the construction is such that v i , the vector assigned to P i , satisfies v
then a simple algebraic manipulation proves the induction step. In particular, since the root is
assigned the vector its output equals the output of the basic protocol. Hence, the
correctness follows.
We now prove the privacy property of the protocol. The leaf players do not receive any
message, hence there is nothing to prove. Let P j be an internal node in the tree. Denote by
d the messages it receives. We claim that for every vector
and for any input vector, we have
where the probability is over the random choice of b (note that in this protocol the
players do not make internal random choices). In other words, fix any specific input vector ~x,
then for every vector w, there exists exactly one choice of values for b , such that the
messages that P j receives, when the protocol is executed with input ~x, are the vector w. Denote
by d the vectors corresponding to the d children of P j in the tree, and let
the messages they have to send in the basic protocol given the input vector ~x. As claimed, for
every d, the message that the k-th child sends in the modified protocol can be expressed
as ~r). With this notation, for having s the following linear
system has to be satisfied: 2
are linearly independent, this system has exactly one solution, as needed.
As for the root player the same argument can be applied to any fixed d-size subset of the
receives. This gives us that given any input vector ~x, for all d-size messages
vectors ~
Now take two input vectors ~x and ~y such that x root = y root and such that
by the correctness of the protocol, given a specific d-size messages-vector, the d 1'st message
is the same for ~x and ~y. Thus the privacy property holds with respect to the root too.
Finally, note that we assumed that the random choices were made by some external processor.
However, we can let one of the leaf players randomly choose the bits b supply each
of the leaf players with the appropriate bit r v . As the leaf players only send messages in the
protocol, the special processor that selects the random bits gets no advantage.
Note that if a player is non-honest it can easily prevent the other players from computing the
correct output. However, it cannot get any additional information in the above protocol, since
the only message each player gets after sending its own message is the value of the function. We
have thus proved the following theorem:
Theorem 1: The function xor on n input bits can be computed privately using d - 2 random
bits in O(log n= log d) rounds.
4 Lower Bounds
In this section we prove several lower bounds on the number of rounds required to privately
compute a boolean function, given that the total number of random bits the players can toss
is d. The lower bound is given in terms of the sensitivity of the function. In Section 4.1 we
give some formal definitions. In Section 4.2 we introduce the notion of sensitivity and present
a lemma, central to our proofs, about sensitivity of functions. The proof of the lower bound
appears in Section 4.3.
4.1 Preliminaries
We first give a formal definition for the protocols. A protocol operates in rounds. In each round
each player P i , based on the value of its input bit x i , the values of the messages it received in
previous rounds, and the values of the coins it tossed in previous rounds, tosses a certain number
of additional coins, and sends messages to the other players. The values of these messages may
depend on all of the above, including the coins just tossed by P i . The player may also choose to
output the value of the function as calculated by itself (this is done only once by each player).
Then, each player P i receives the messages sent to it by the other players. To define the protocol
more formally, we give the following definition:
Definition 2: (View)
ffl A time-t partial view of player P i consists of its input bit x i , the messages it has received
in the first t \Gamma 1 rounds, and the coins it tossed in the first t \Gamma 1 rounds. We denote it by
ffl A time-t view of player P i consists of its input bit x i , the messages it has received in the
first rounds, and the coins it tossed in the first t rounds. We denote it by V iew t
Intuitively, the partial view of a player P i in round t determines how many coins (if at all)
toss in round t. Then, its view (which includes those newly tossed coins) determines the
messages P i will send in round t, and whether and which value it will output as the value of f .
The formal definition of a protocol is given below:
Definition 3: A protocol consists of a set of functions R k
which determine how
many coins are tossed by P i in round k, and a set of functions M k
(where M is a finite domain of possible message values), which determine the message sent by
P i to P j at round k.
To quantify the amount of randomness used by a protocol we give the following definition:
Definition 4: A d-random protocol is a protocol such that for any input assignment, the total
number of coins tossed by all players in any execution is at most d.
Note that the definition allows that in different executions different players will toss the coins.
This may depend on both the input of the players, and previous coin tosses. Next, we define the
correctness of a protocol. We usually consider protocols that are always correct; protocols that
are allowed to err will be considered in Section 5.1.
Definition 5: A protocol to compute a function f is a protocol such that for any input vector
~x and every i, player P i always correctly outputs the value of f(~x).
It is sometime convenient to assume that each player P i is provided with a random tape
R i , from which it reads random bits (rather than to assume that the player tosses random
coins). The number of random coins tossed by player P i is thus the rightmost position of this
tape that it reads. We thus denote by R i a specific random tape provided to player P i , and
by ~
the vector or the random tapes of all the players
denote the random variable for these tapes and vector of tapes). Note that if we fix ~
R, we
obtain a deterministic protocol. Furthermore, V iew t
i , for any i and t, is a function of the input
assignment ~x, and the random tapes of the players. We can thus write it as V iew t
denote by T i (~x; ~
R)) the round number in which player P i outputs its result given input assignment
~x and random tapes for the players ~
R.
Definition (Rounds Complexity) An r-round protocol to compute a function f is a protocol
to compute f such that for all i, ~x, ~
R, we have T i (~x; ~
For the purpose of our proofs, we slightly modify our view of the protocol in the following
way. Fix an arbitrary binary encoding for the messages in M . We will consider a protocol where
each player sends instead of a single message from M , a set of boolean messages that represent
the binary encoding of the message to be sent in the original protocol. These messages are sent
"in parallel" in the same round. Henceforth when we refer to messages we refer to these binary
messages. Clearly, the number of rounds remains the same.
4.2 Sensitivity of Functions
In this section we include some definitions related to functions f
finite domain. Then, we present some useful properties related to these definitions.
Definition 7: (Sensitivity)
ffl For a binary vector Y , denote by Y (i) the binary vector obtained from Y by flipping the i'th
entry.
ffl A function f is sensitive to its i-th variable on assignment Y , if f(Y ) 6= f(Y (i) ).
is the set of variables to which the function f is sensitive on assignment Y .
ffl The sensitivity of a function f , denoted S(f), is S(f) 4
ffl The average sensitivity of a function f , denoted AS(f), is the average of jS f (Y )j. That is,
Y 2f0;1g n jS f (Y )j.
ffl The set of variables on which f depends, denoted D(f), is D(f) 4
)g.
we say that f depends on its i-th variable.
The following claim gives a lower bound on the degree of error if we evaluate a function f by
means of another function g, in terms of their average sensitivities. We use this property in our
proofs.
1: Consider any two functions f; for at most
Proof: Consider the n-dimensional hypercube. An f -good edge is an edge ~y) such that
f(~y). By the definitions, the number of f-good edges is exactly 2 n AS(f). Therefore, there
are at least 2 n AS(f)\GammaAS(g)edges which are f-good but not g-good. For each such edge
either f(~x) 6= g(~x) or f(~y) 6= g(~y). Since the degree of each vertex in the hypercube is n there
must be at least 2 n \Delta AS(f)\GammaAS(g)
inputs on which f and g do not agree.
Next, we prove a lemma that bounds the growth of the sensitivity of a combination of func-
tions. This lemma plays a central role in the proofs of our lower bounds, and any improvement
on it will immediately improve our lower bounds.
Lemma 2: Let be a set of m functions
Assume j. Define the function F (Y ) 4
F assumes at
most 2 d different values (different vectors), then the sensitivity of F is at most C \Delta
5 An obvious bound is S(F m. However, for reasons that will become clear soon we are interested in
bounds which are independent of m.
Proof: Let Y be the assignment on which F has the largest sensitivity, i.e. jS f (Y )j - jS f (Y 0 )j
for any assignment Y 0 . Without loss of generality assume that F (Y Consider the
set of neighbors of Y on which F has a value different than (the cardinality of this set
is the sensitivity of F ). There are at most 2 d \Gamma 1 values of F attained on the assignments in
this set. Consider one such value q 2 f0; 1g m . There is at least one index j such that q
and since the sensitivity of f j is at most C, there can be at most C assignments Y (i) with the
value q. We get that the total number of assignments Y (i) for which F has a value other than
is at most C \Delta
4.3 Lower Bound on the Number of Rounds
In this subsection we prove the following theorem.
Theorem 3: Let A be an r-round d-random (d - 2) private protocol to compute a boolean
function f . Then, r
The first step of our proof uses the d-randomness property of the protocol to show that the
number of views a player can see on a fixed input ~x is at most 2 d (over the different random
tapes of all the players). Note that this is not obvious; although only d coins are tossed during
every execution, the identity of the players that toss these coins may depend on the outcome of
previous coin tosses.
Lemma 4: Consider a private d-random protocol to compute a boolean function f . Fix an
input ~x. Let C k
be the communication string seen by player P i up to round k on input ~x
and vector of random tapes ~r. Then, for every player P i , C k
can assume at most 2 d different
values (over the different vectors of random tapes ~r).
Proof: For each execution we can order the coin tosses (i.e., readings from the local random
tapes) according to the rounds of the protocol and within each round according to the index of
the players that toss them. The identity of the player to toss the first coin is fixed by ~x. The
identity of the player to toss any next coin is determined by ~x, and the outcome of the previous
coins. Therefore, the different executions on input ~x can be described using the following binary
tree: In each node of the tree we have a name of a player P j that tosses a coin. The two outgoing
edges from this node, labeled 0 and 1 according to the outcome of the coin, lead to two nodes
labeled P k and P ' respectively (k; ', and j need not be distinct) which is the identity of the player
to toss the next coin. If no additional coin toss occurs, the node is labeled "nil"; there are no
outgoing edges from a nil node. By the d-randomness property of the protocol, the depth of the
above tree is at most d, hence it has at most 2 d root-to-leaf paths. Every possible run of the
protocol is described by one root-to-leaf path. Such a path determines all the messages sent in
the protocol, which player tosses coins and when, and the outcome of these coins. In particular
each path determines for any P_i the value of C^k_i(~x, ~r) (for every k). Hence, C^k_i(~x, ~r) can assume at most 2^d different values.
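A toy illustration (not from the paper) of this counting: even when the identity of the next tosser depends adaptively on earlier outcomes, a tree of depth at most d has at most 2^d root-to-leaf paths, hence at most 2^d executions. The schedule below is a made-up example.

d = 3

def next_tosser(outcomes):
    # hypothetical adaptive schedule: who tosses the next coin may depend on past outcomes
    if len(outcomes) == d:
        return None
    if len(outcomes) >= 2 and outcomes[-1] == 1:
        return None                     # this branch happens to stop early
    return len(outcomes) % 4            # an arbitrary outcome-dependent choice of player

def executions(prefix=()):
    # enumerate all root-to-leaf paths of the coin-toss tree
    if next_tosser(prefix) is None:
        yield prefix
        return
    for bit in (0, 1):
        yield from executions(prefix + (bit,))

leaves = list(executions())
assert len(leaves) <= 2 ** d
print(len(leaves), "executions, bounded by 2^d =", 2 ** d)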
In the following proof we restrict our attention to a specific deterministic protocols derived
from the original protocol by fixing specific vector of random tapes ~
players. In such a deterministic protocol the views of the players are functions of only the input
assignment ~x.
Lemma 5: Consider a private d-random protocol to compute a boolean function f. Fix random tapes ~R. Recall that View^k_i(~y, ~R) is the view of player P_i at round k on input ~y and vector of random tapes ~R. Then, for any P_i, View^k_i(~y, ~R) can assume at most 2^{d+2} different values (over the values of ~y).
Proof: Partition the input assignments ~x into 4 groups according to the value of x i (0 or 1),
and the value of f(~x) (0 or 1). We argue that the number of different values the view can assume
within each such group is at most 2 d . Fix an input ~x in one of these 4 groups and consider any
other input ~y pertaining to the same group. Recall that C k
R) is the communication string
seen by player P i until round k on input ~y and when the random tapes of the players are ~
R.
If the value of C^k_i(~y, ~R) is some communication string C_i, then by the privacy requirement 6, communication C_i must also occur by round k when the input is ~x and the vector of random tapes is some ~r. Thus, the value of C^k_i(~y, ~R) must also appear as C^k_i(~x, ~r) for some vector of random tapes ~r. However, by Lemma 4, C^k_i(~x, ~r) can assume at most 2^d values (over the values of ~r). Thus, C^k_i(~y, ~R) can assume at most 2^d values over the possible input assignments that pertain to the same group.
Now, observe that View^k_i(~y, ~R) is determined by the input bit y_i, the communication string C^k_i(~y, ~R), and the random tape R_i. Therefore, on ~R and on two input assignments ~y and ~y' of the same group (in particular y_i = y'_i), if C^k_i(~y, ~R) = C^k_i(~y', ~R) then View^k_i(~y, ~R) = View^k_i(~y', ~R). Thus, View^k_i(~y, ~R) can assume at most 2^d different values over the input assignments that pertain to the same group.
The following lemma gives an upper bound on the sensitivity of the view of a player at a
given round, in terms of the number of random bits and the round number. This will enable us
to give a lower bound on the number of necessary rounds.
Lemma 6: Consider a private d-random protocol to compute a boolean function f, and consider a specific vector of random tapes ~R, and the deterministic protocol derived by it. Then for every player P_i and every round k, View^k_i(~x, ~R) (as a function of ~x only) has sensitivity of at most Q(k) ≜ 2^{(d+2)(k−1)}.
Proof: First note that since we fix the random tapes, the views of the players are functions of the input assignment ~x only. We prove the lemma by induction on k. For k = 1 the view of any player depends only on its single input bit. Thus, the claim is obvious. For k > 1 assume the claim holds for any ℓ < k. This implies in particular that all messages received by player P_i and included in the view under consideration have sensitivity of at most Q(k − 1). Clearly the
6 The privacy requirement is defined on the final communication string, but this clearly implies the same
requirement on any prefix of it.
input bit itself has sensitivity 1, which is at most Q(k − 1). Thus the view under consideration is composed of bits each having sensitivity at most Q(k − 1). Moreover, by Lemma 5 the view can assume at most 2^{d+2} values. It follows from Lemma 2 that the sensitivity of the view under consideration is at most Q(k − 1) · (2^{d+2} − 1) ≤ Q(k). (Note that Lemma 2 allows us to give a bound which does not depend on the number of messages received by P_i.)
We can now give the lower bound on the number of rounds, in terms of the sensitivity of the
function and the number of random bits.
Theorem 7: Given a private d-random protocol (d ≥ 2) to compute a boolean function f, consider the deterministic protocol derived from it by any given random tapes ~R. For any player P_i there is at least one input assignment ~x such that T_i(~x, ~R) ≥ log S(f)/(d + 2) + 1.
Proof: Consider a fixed but arbitrary player P_i. Denote by t the largest round number in which P_i outputs a value, i.e., t = max_{~x} {T_i(~x, ~R)}. We claim that as long as the sensitivity of the view of P_i does not reach S(f), there is at least one input assignment for which P_i cannot output the correct value of f. Let Y be an input assignment on which the sensitivity S(f) is obtained. That is, the value of f(Y) is different than the value of f on S(f) of Y's "neighbors". Hence, if the sensitivity of the view of P_i is less than S(f), then the output of P_i must be wrong on either Y or on at least one of these "neighbors" (as the sensitivity of the view is an upper bound on the sensitivity of the output). Thus, t is such that S(View^t_i) ≥ S(f). By Lemma 6, we get 2^{(d+2)(t−1)} ≥ S(f), i.e., t ≥ log S(f)/(d + 2) + 1.
This proves Theorem 3; moreover, it shows not only that there is an input assignment ~x and
random tapes ~
R for which the protocol runs "for a long time", but also that for each vector of
random tapes ~
R there is such input assignment. The following corollary follows for the function
xor (using the fact that S(xor) = n).
Corollary 8: Let A be an r-round d-random private protocol (d ≥ 2) to compute xor of n bits. Then r ≥ log n/(d + 2).
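The bound in the corollary is a reconstruction here; the quantitative content comes from the inequality 2^{(d+2)(t−1)} ≥ S(f) in the proof of Theorem 7. The following Python sketch, assuming that inequality, evaluates the implied round lower bound for xor, where S(xor) = n.

def min_rounds(sensitivity, d):
    # smallest t with 2^((d+2)(t-1)) >= sensitivity
    t = 1
    while 2 ** ((d + 2) * (t - 1)) < sensitivity:
        t += 1
    return t

for n in (16, 256, 4096):
    for d in (2, 3, 5):
        print("xor of", n, "bits with d =", d, "random bits needs at least", min_rounds(n, d), "rounds")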
4.3.1 Lower Bound for a Single Random Bit
For the case of a single random bit (d = 1), we have the following lower bound:
Theorem 9: Let A be an r-round 1-random private protocol to compute a boolean function f .
To prove the theorem, we restrict our attention to one of the two deterministic protocols derived from the original protocol by fixing the value of the random bit 7. The messages and views in this protocol are functions of the input vector, ~x, only. Let Y be an assignment on which S(f), the sensitivity of f, is obtained. For a given function m, a variable x_j is called good for m on Y if both m and f are sensitive to x_j on Y. We denote by G_m(Y) the set of good variables on Y, i.e., G_m(Y) ≜ {j : both m and f are sensitive to x_j on Y}. We first prove the following two lemmas:
7 We let the identity of the player that tosses this coin to possibly depend on the input ~x. However note that
if we want that the privacy and 1-randomness properties hold, this cannot be the case.
Lemma 10: Consider any player P i . Denote by m 1 a message that P i receives such that
1. Then for any other message m 2 received by P i such that jG m 2
either (a) Gm 1
or (b) jG m 1
Proof: Assume towards a contradiction, that both (a) and (b) do not hold. First, since
are two variables x k 2 Gm 1
such that k 6= i and ' 6= i. Moreover, by the assumption that (a) does not hold, we can assume
without loss of generality (as to the names of m 1 and
By the assumption that (b) does not hold, there is a variable x j (j 6= i) such that f is sensitive
to x j on Y , but both m 1 and m 2 are not sensitive to x j on Y . Now consider the following three
input assignments:
Consider V iew i on the above 3 inputs and assume, without loss of generality, that m 1 (Y
are not sensitive to x j on Y , then m 1 (Y (j)
sensitive to x k on Y , then m 1 (Y sensitive
to x ' on Y , but m 1 is not, then m 1 (Y (')
different values for Y 0 , Y 1 , and Y 2 . The function f is sensitive on Y to all of j; k and ', therefore,
is equal in all three assignments. However, in the proof of Lemma
5, it is shown that the number of values of V iew i corresponding to inputs with the same value
of f and the same value of x i is at most 2
The following lemma gives an upper bound on the sensitivity of the view of the player in
terms of the round number. We then use this lemma to give a lower bound on the number of
necessary rounds.
Lemma 11: Let t - (S(f)\Gamma1)=2 be a round number and P i be any player. Then, jG V iew t
t.
Proof: We prove the claim by induction on t. For
getting any messages, the view depends only on x i ). For assume the claim
holds for any k ! t. Denote by M the set of messages received by P i and included in the view
under consideration. Clearly G V iew t
There could be one of three
cases:
1. For any message In this case the claim clearly holds.
2. Any two messages , such that jG m 1
g. It follows that jG V iew t
by the induction hypothesis jG
t, then jG V iew t
t.
3. There are two messages , such that jG m 1
but Gm 1
g. By Lemma 10, jG m 1
and (without loss of
This contradicts the induction
hypothesis as received in some round k ! t - (S(f) \Gamma 1)=2, and therefore generated
by a view of round k. By the induction hypothesis jG
We can now give the proof of Theorem 9.
Proof of Theorem 9. Consider any player P_i. Denote by t the largest round number in which P_i outputs a value, i.e. t = max_{~x} {T_i(~x, 0)}. As in the proof of Theorem 7, it must be that
For the function xor we have the following corollary.
Corollary 12: Let A be an r-round 1-random private protocol to compute xor of n bits. Then
5 Lower Bounds on the Expected Number of Rounds
As the protocols we consider are randomized, it is possible that for the same input ~x, different
random tapes for the players will result in executions that run for different number of rounds.
Hence, it is natural to consider not only the worst case running time but also the expected running
time. Usually, saying that a protocol has expected running time r means that for every input ~x
the expected time until all players finish the execution is bounded by r (where the expectation is
over the choices of the random tapes of the players). Here we consider a weaker definition, which
requires only the existence of a player i whose expected running time is bounded by r. As we are
proving a lower bound, this only makes our result stronger: It would mean that for every player
there is an input assignment for which the expected running time is high. Note that it is not
necessarily the case that the first player that computes the value of the function can announce
this value (and thus all players compute the value within one round). The reason is that the fact
that a certain player computes the function at a certain round may reveal some information on
the inputs, and hence such announcing may violate the privacy requirement (see [CGK90]). We
first define the expected rounds complexity of a protocol.
Definition 8: (Expected Rounds Complexity) An expected r-round protocol to compute
a function f is a protocol to compute f such that there exists a player P i such that for all ~x,
The lower bound that we prove in this section is in terms of the average sensitivity of the
computed function. In particular, we prove
an Ω(log n/d) lower bound on the expected number
of rounds required by protocols that privately compute xor of n bits. We will prove the following
theorem:
Theorem 13: Let f be a boolean function and let A be an expected r-round d-random private
protocol (d - 2) to compute the function f . Then,
To prove the theorem we consider a protocol A and fix any player P_i. We say that the protocol is late on input ~x and vector of random tapes ~r if T_i(~x, ~r) ≥ log AS(f)/(2(d + 2)) + 1. We define L(~x, ~r) to be 1 if and only if the protocol is late on ~x and ~r. For the
purpose of our proofs in this section we define a uniform distribution on the 2 n input assignments
(this is not to say that the input are actually drawn by such distribution). Moreover, note that
the domain of vectors of random tapes is enumerable.
We first show that for any deterministic protocol derived from a private protocol to compute
f , not only there is at least one input on which the protocol is late, but that this happens for a
large fraction of the inputs.
Lemma 14: Consider a player P_i and any fixed vector of random tapes ~R. Then E_{~x}[L(~x, ~R)] ≥ (AS(f) − √AS(f))/(2n).
Proof: Consider the views of P_i, given the vector of random tapes ~R. For any round t such that t < log AS(f)/(2(d + 2)) + 1, the sensitivity of the view at round t is, by Lemma 6, less than √AS(f); a function g computed from such a view can have at most the same sensitivity, and thus clearly an average sensitivity of at most √AS(f). By Claim 1, such a function g can have the correct value for the function f for at most 2^n − 2^{n−1}(AS(f) − √AS(f))/n assignments. Since we assume that A is correct for all input assignments, it follows that at least 2^{n−1}(AS(f) − √AS(f))/n input assignments are late.
We can now give a lower bound on the expected number of rounds.
Lemma 15: Consider a player P_i. There is at least one input assignment ~x for which E_{~r}[T_i(~x, ~r)] ≥ ((AS(f) − √AS(f))/(2n)) · (log AS(f)/(2(d + 2)) + 1).
Proof: By Lemma 14, E_{~r,~x}[L(~x, ~r)] ≥ (AS(f) − √AS(f))/(2n). Hence, there is at least one input assignment ~x for which E_{~r}[L(~x, ~r)] ≥ (AS(f) − √AS(f))/(2n). For such ~x we get E_{~r}[T_i(~x, ~r)] ≥ ((AS(f) − √AS(f))/(2n)) · (log AS(f)/(2(d + 2)) + 1), as needed.
Theorem 13 follows from the above lemma. The following corollary applies to the function
xor.
Corollary 16: Let A be an expected r-round d-random private protocol (d ≥ 2) to compute xor of n bits. Then, r = Ω(log n/d).
Proof: Follows from Theorem 13 and the fact that AS(xor) = n.
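The fact AS(xor) = n (every input of xor has full sensitivity n) can be confirmed by brute force; a small Python sketch, not part of the paper.

from itertools import product

def average_sensitivity(f, n):
    total = 0
    for x in product((0, 1), repeat=n):
        for i in range(n):
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            total += f(x) != f(y)
    return total / 2 ** n

xor = lambda x: sum(x) % 2
for n in range(1, 7):
    assert average_sensitivity(xor, n) == n
print("AS(xor of n bits) = n confirmed for n = 1, ..., 6")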
5.1 Weakly Correct Protocols
In this section we consider protocols that are allowed to make a certain amount of errors. Given
a protocol A, denote by A i (~x; ~r) the output of the protocol in player P i , given input assignment
~x and vector of random tape
Definition 9: For 0 ≤ δ < 1, a (1 − δ)-correct protocol to compute a function f is a protocol that for every player P_i and every input vector ~x satisfies Pr_{~r}[A_i(~x, ~r) = f(~x)] ≥ 1 − δ.
Note that while designing a protocol one usually wants a stronger requirement; that is, with high
probability all players compute the correct value. With the above definition, it is possible that
in every execution of the protocol at least one of the players is wrong. However, as our aim now
is to prove a lower bound this weak definition only makes our result stronger.
In the following theorem we give lower bounds on the number of rounds and on the expected
number of rounds for weakly correct protocols.
Theorem 17: Let f be a boolean function.
ffl Let A be a (1 \Gamma ffi)-correct r-round d-random private protocol (d - 2) to compute f . If
AS(f)
then r
AS(f)=d).
ffl Let A be a expected r-round d-random private protocol (d - 2) to compute
f . Then
AS(f)
Proof: We first prove the lower bound on the number of rounds, and then turn our attention
to the expected number of rounds. The correctness requirement implies that for any player P i ,
This implies that there exists a vector of random tapes ~
R
such that for at least 2 n (1 \Gamma ffi) input assignments ~x, A i (~x; ~
As in the proof of Lemma
14 (using Claim 1), it follows that before round number log AS(f)
1, the protocol can be correct
on at most 2 n
AS(f)
inputs (with random tapes ~
R). Since we require that at least
are correct, we have that at least
AS(f)
AS(f)
inputs are late. To get a lower bound on r for an r-round protocol, it is sufficient to have a single
input vector ~x such that the execution on (~x; ~
R) is "long". For this, note that if
AS(f)
then (for random tapes ~
R) the number of late inputs is greater than 0. This gives us a lower
bound of r = Ω(log AS(f)/d) for any (1 − δ)-correct r-round d-random protocol, with δ as above.
We now turn to the lower bound on the expected number of rounds of (1 \Gamma ffi)-correct protocols.
Consider a player P i . Define a to be 1 if and only if A i (~x;
Then, the correctness requirement implies that E ~r [G(~x; ~r)] all ~x. It follows that
for any ~
R the probability that ~
ffiis at least 1 \Gamma
2ffi. 8 For any
such vector of random tapes, ~
R, consider the deterministic protocol derived from it. In such a
protocol there are at least
s
AS(f)
AS(f)
s
A
late input assignments; that is, E ~x [L(~x; ~
AS(f)
ffi). Thus
AS(f)
s
It follows that there is at least one input assignment ~x for which
AS(f)
s
which implies that
AS(f)
s
A \Delta
log AS(f)
d
as claimed.
The following gives the lower bounds for the function xor.
Corollary 18: For fixed δ < 1/2, let A be a (1 − δ)-correct d-random expected r-round private protocol to compute xor of n bits. Then r = Ω(log n/d). (Obviously the same lower bound holds for r-round protocols.)
Proof: Follows from Theorem 17 and the fact that AS(xor) = n. Note that the expression involved is greater than 0 for any δ < 1/2 (and sufficiently large n).
. Thus there is at
least one input assignment ~x such that E ~r [G(~x; which is a contradiction to the protocol being
correct.
6 Conclusion
In this paper we initiate the quantitative study of randomness in private computations. As
mentioned in the introduction, our work was already followed by additional work on this topic
[BDPV95, KM96, KOR96].
We give upper bounds and lower bounds on the number of rounds required for computing
xor privately with a given number of random bits. Alternatively, we give bounds on the number
of random bits required for computing xor privately within a given number of rounds. Our lower
bounds extend to other functions in terms of their sensitivity (and average sensitivity).
An obvious open problem is to close the gap between the upper bound and the lower bound
for computing xor using d random bits. One possible way of doing this is to improve the bound
given by Lemma 2.
Acknowledgments
We thank Gábor Tardos for improving the constant and simplifying the
proof of Lemma 2, and Demetrios Achlioptas for his help in simplifying the proof of Lemma 19.
We also thank Benny Chor for useful comments.
--R
"Simple constructions of almost k-wise independent random variables"
"Non-Cryptographic Fault-Tolerant Computing in a Constant Number of Rounds"
"Perfect Privacy for Two-Party Protocols"
"Security with Low Communication Overhead"
"Randomness in Interactive Proofs"
"Completeness Theorems for Non-Cryptographic Fault-Tolerant Distributed Computation"
"How to Generate Cryptographically Strong Sequences Of Pseudo-Random Bits"
"On the Dealer's Randomness Required in Secret Sharing Schemes"
"Randomness in Distribution Protocols"
"On the Number of Random Bits in Totally Private Computations"
"Bounds on Tradeoffs between Randomness and Communication Complexity"
"Multiparty Unconditionally Secure Protocols"
"A Zero-One Law for Boolean Privacy"
"A Communication-Privacy Tradeoff for Modular Addition"
"Unbiased Bits from Sources of Weak Randomness and Probabilistic Communication Complexity"
"Dispersers, Deterministic Amplification, and Weak Random Sources"
"Communication Complexity of Secure Computation"
"How to Recycle Random Bits"
"(De)randomized Construction of Small Sample Spaces in NC"
"Constructing Small Sample Spaces Satisfying Given Con- straints"
"On Construction of k-wise Independent Random Variables"
"A Time-Randomness Tradeoff for Oblivious Routing"
"Privacy and Communication Complexity"
"Randomness in Private Computations"
"Reducibility and Completeness in Multi-Party Private Computations"
"Characterizing Linear Size Circuits in Terms of Privacy"
"Small-Bias Probability Spaces: Efficient Constructions and Appli- cations"
"Pseudorandom Generator for Space Bounded Computation"
"Memory vs. Randomization in On-Line Algorithms"
"Sample Spaces Uniform on Neighborhoods"
"Random Polynomial Time is Equal to Slightly-Random Polynomial Time"
"Theory and Applications of Trapdoor Functions"
"Simulating BPP Using a General Weak Random Source"
--TR
--CTR
Balogh , Jnos A. Csirik , Yuval Ishai , Eyal Kushilevitz, Private computation using a PEZ dispenser, Theoretical Computer Science, v.306 n.1-3, p.69-84, 5 September
Anna Gal , Adi Rosen, Lower bounds on the amount of randomness in private computation, Proceedings of the thirty-fifth annual ACM symposium on Theory of computing, June 09-11, 2003, San Diego, CA, USA
Anna Gl , Adi Rosn, A theorem on sensitivity and applications in private computation, Proceedings of the thirty-first annual ACM symposium on Theory of computing, p.348-357, May 01-04, 1999, Atlanta, Georgia, United States
Eyal Kushilevitz , Rafail Ostrovsky , Adi Rosn, Amortizing randomness in private multiparty computations, Proceedings of the seventeenth annual ACM symposium on Principles of distributed computing, p.81-90, June 28-July 02, 1998, Puerto Vallarta, Mexico | lower bounds;randomness;private distributed computations;sensitivity |
288866 | The Number of Intersection Points Made by the Diagonals of a Regular Polygon. | We give a formula for the number of interior intersection points made by the diagonals of a regular n-gon. The answer is a polynomial on each residue class modulo 2520. We also compute the number of regions formed by the diagonals, by using Euler's formula V − E + F = 2. | Introduction
We will find a formula for the number I(n) of intersection points formed inside
a regular n-gon by its diagonals. The case n = 30 is depicted in Figure 1. For a
generic convex n-gon, the answer would be \binom{n}{4}, because every four vertices would
be the endpoints of a unique pair of intersecting diagonals. But I(n) can be less,
because in a regular n-gon it may happen that three or more diagonals meet at an
interior point, and then some of the
intersection points will coincide. In fact,
if n is even and at least 6, I(n) will always be less than \binom{n}{4}, because there will be n/2 diagonals meeting at the center point. It will result from our analysis that
4, the maximum number of diagonals of the regular n-gon that meet at a
point other than the center is
3 if n is even but not divisible by 6;
5 if n is divisible by 6 but not 30, and;
7 if n is divisible by 30:
with two exceptions: this number is 2 if In particular, it is
impossible to have 8 or more diagonals of a regular n-gon meeting at a point other
than the center. Also, by our earlier remarks, the fact that no three diagonals meet
when n is odd will imply that I(n) = \binom{n}{4} for odd n.
Date: January 30, 1995.
1991 Mathematics Subject Classification. Primary 51M04; Secondary 11R18.
Key words and phrases. regular polygons, diagonals, intersection points, roots of unity, adventi-
tious quadrangles.
The first author is supported by an NSF Mathematical Sciences Postdoctoral Research Fellowship.
Part of this work was done at MSRI, where research is supported in part by NSF grant DMS-9022140.
Figure 1. The 30-gon with its diagonals. There are 16801 interior intersection points: 13800 two line intersections, 2250 three line intersections, 420 four line intersections, 180 five line intersections, 120 six line intersections, 30 seven line intersections, and 1 fifteen line intersection
A careful analysis of the possible configurations of three diagonals meeting will
provide enough information to permit us in theory to deduce a formula for I(n). But
because the explicit description of these configurations is so complex, our strategy
will be instead to use this information to deduce only the form of the answer, and
then to compute the answer for enough small n that we can determine the result
precisely.
In order to write the answer in a reasonable form, we define δ_m(n) to be 1 if n is a multiple of m, and 0 otherwise.
Theorem 1. For n - 3,
Further analysis, involving Euler's formula
the number R(n) of regions that the diagonals cut the n-gon into.
Theorem 2. For n - 3,
These problems have been studied by many authors before, but this is apparently
the first time the correct formulas have been obtained. The Dutch mathematician
Gerrit Bol [1] gave a complete solution in 1936, except that a few of the coefficients in
his formulas are wrong. (Some misprints and omissions in Bol's paper are mentioned in [11].)
The approaches used by us and Bol are similar in many ways. One difference
(which is not too substantial) is that we work as much as possible with roots of
unity whereas Bol tended to use more trigonometry (integer relations between sines
of rational multiples of -). Also, we relegate much of the work to the computer,
whereas Bol had to enumerate the many cases by hand. The task is so formidable
that it is amazing to us that Bol was able to complete it, and at the same time not
so surprising that it would contain a few errors!
Bol's work was largely forgotten. In fact, even we were not aware of his paper
until after deriving the formulas ourselves. Many other authors in the interim solved
special cases of the problem. Steinhaus [14] posed the problem of showing that no
three diagonals meet internally when n is prime, and this was solved by Croft and
Fowler [3]. (Steinhaus also mentions this in [13], which includes a picture of the
23-gon and its diagonals.) In the 1960s, Heineken [6] gave a delightful argument
which generalized this to all odd n, and later he [7] and Harborth [4] independently
enumerated all three-diagonal intersections for n not divisible by 6.
The classification of three-diagonal intersections also solves Colin Tripp's problem
[15] of enumerating "adventitious quadrilaterals," those convex quadrilaterals
for which the angles formed by sides and diagonals are all rational multiples of -.
See Rigby's paper [11] or the summary [10] for details. Rigby, who was aware of
Bol's work, mentions that Monsky and Pleasants also each independently classified
all three-diagonal intersections of regular n-gons. Rigby's papers partially solve
Tripp's further problem of proving the existence of all adventitious quadrangles using
only elementary geometry; i.e., without resorting to trigonometry.
All the questions so far have been in the Euclidean plane. What happens if we
count the interior intersections made by the diagonals of a hyperbolic regular n-gon?
The answers are exactly the same, as pointed out in [11], because if we use Bel-
trami's representation of points of the hyperbolic plane by points inside a circle in
the Euclidean plane, we can assume that the center of the hyperbolic n-gon corresponds
to the center of the circle, and then the hyperbolic n-gon with its diagonals
looks in the model exactly like a Euclidean regular n-gon with its diagonals. It is
equally easy to see that the answers will be the same in elliptic geometry.
2. When do three diagonals meet?
We now begin our derivations of the formulas for I(n) and R(n). The first step
will be to find a criterion for the concurrency of three diagonals. Let A, B, C, D, E, F be six distinct points in order on a unit circle, dividing up the circumference into arc lengths u, x, v, y, w, z, and assume that the three chords AD, BE, CF meet at P (see Figure 2).
By similar triangles, the point P divides each of the three chords in ratios that can be written in terms of the six chord lengths AB, BC, CD, DE, EF, FA. Multiplying these relations together yields
(AF)(BC)(DE) = (AB)(CD)(EF),
and so
sin(x/2) sin(y/2) sin(z/2) = sin(u/2) sin(v/2) sin(w/2). (1)
Conversely, suppose six distinct points A; B; C; D; E; F partition the circumference
of a unit circle into arc lengths u; x; v; z and suppose that (1) holds. Then the
three diagonals AD;BE;CF meet in a single point which we see as follows. Let lines
Figure 2.
AD and BE intersect at P 0 . Form the line through F and P 0 and let C 0 be the other
intersection point of FP 0 with the circle. This partitions the circumference into arc
lengths As shown above, we have
and since we are assuming that (1) holds for u; x; v; z we get
sin(y 0 =2)
Substituting above we get
sin(y 0 =2)
and so
cot(y
Now . Thus, the three diagonals
AD;BE;CF meet at a single point.
So (1) gives a necessary and sufficient condition (in terms of arc lengths) for the
chords AD;BE;CF formed by six distinct points A; B; C; D; E; F on a unit circle to
meet at a single point. In other words, to give an explicit answer to the question in
the section title, we need to characterize the positive rational solutions to
sin(πU) sin(πV) sin(πW) = sin(πX) sin(πY) sin(πZ). (2)
(Here U + V + W + X + Y + Z = 1.) This is a trigonometric diophantine equation in the sense
of [2], where it is shown that in theory, there is a finite computation which reduces
the solution of such equations to ordinary diophantine equations. The solutions to
the analogous equation with only two sines on each side are listed in [9].
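The displayed equations are damaged in this copy; assuming the classical reading of (1) used above, namely sin(u/2) sin(v/2) sin(w/2) = sin(x/2) sin(y/2) sin(z/2) for the arcs u, x, v, y, w, z in circular order, the criterion can be checked numerically for any three concurrent chords. The Python sketch below picks an arbitrary interior point and three chords through it.

import math, cmath

def chord_endpoints(p, theta):
    # intersect the line through p with direction angle theta with the unit circle
    d = complex(math.cos(theta), math.sin(theta))
    b = (p * d.conjugate()).real
    root = math.sqrt(b * b + 1 - abs(p) ** 2)
    return [p + t * d for t in (-b - root, -b + root)]

p = complex(0.31, -0.24)                  # an arbitrary interior point
points = []
for theta in (0.4, 1.3, 2.5):             # three arbitrary chords through p
    points += chord_endpoints(p, theta)
points.sort(key=cmath.phase)              # circular order A, B, C, D, E, F
angles = [cmath.phase(q) for q in points]
arcs = [(angles[(i + 1) % 6] - angles[i]) % (2 * math.pi) for i in range(6)]
u, x, v, y, w, z = arcs
lhs = math.sin(u / 2) * math.sin(v / 2) * math.sin(w / 2)
rhs = math.sin(x / 2) * math.sin(y / 2) * math.sin(z / 2)
assert abs(lhs - rhs) < 1e-9
print("products agree:", lhs, rhs)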
If in (2), we substitute multiply both sides by (2i) 3 , and
expand, we get a sum of eight terms on the left equalling a similar sum on the right,
but two terms on the left cancel with two terms on the right since U
leaving
\Gammae i-(V +W \GammaU
\Gammae i-(Y +Z \GammaX
If we move all terms to the left hand side, convert minus signs into e \Gammai- , multiply by
we obtainX
in which
Conversely, given rational numbers
necessarily positive) which sum to 1 and satisfy (3), we can
recover U; V; W;X;Y;Z, (for example, but we must check
that they turn out positive.
3. Zero as a sum of 12 roots of unity
In order to enumerate the solutions to (2), we are led, as in the end of the last
section, to classify the ways in which 12 roots of unity can sum to zero. More
generally, we will study relations of the form
a
where the a i are positive integers, and the j i are distinct roots of unity. (These
have been studied previously by Schoenberg [12], Mann [8], Conway and Jones [2],
and others.) We call
a i the weight of the relation S. (So we shall be
particularly interested in relations of weight 12.) We shall say the relation (4) is
minimal if it has no nontrivial subrelation; i.e., if
implies either b By induction on the weight, any
relation can be represented as a sum of minimal relations (but the representation
need not be unique).
Let us give some examples of minimal relations. For each n ≥ 1, let ζ_n = exp(2πi/n) be the standard primitive n-th root of unity. For each prime p, let R_p be the relation
1 + ζ_p + ζ_p^2 + · · · + ζ_p^{p−1} = 0.
Its minimality follows from the irreducibility of the cyclotomic polynomial. Also
we can "rotate" any relation by multiplying through by an arbitrary root of unity
to obtain a new relation. In fact, Schoenberg [12] proved that every relation (even
those with possibly negative coefficients) can be obtained as a linear combination
with positive and negative integral coefficients of the R p and their rotations. But we
are only allowing positive combinations, so it is not clear that these are enough to
generate all relations.
In fact it is not even true! In other words, there are other minimal relations. If we subtract R_3 from R_5, cancel the 1's and incorporate the minus signs into the roots of unity, we obtain a new relation
ζ_5 + ζ_5^2 + ζ_5^3 + ζ_5^4 + (−ζ_3) + (−ζ_3^2) = 0, (5)
which we will denote (R_5 : R_3). In general, if S and T_1, ..., T_j are relations, we will use the notation (S : T_1, ..., T_j) to denote any relation obtained by rotating
the T i so that each shares exactly one root of unity with S which is different for each i,
subtracting them from S, and incorporating the minus signs into the roots of unity.
For notational convenience, we will write (R
example. Note that although (R 5 unambiguously (up to rotation) the
relation listed in (5), in general there will be many relations of type
up to rotational equivalence. Let us also remark that including R 2 's in the list of T 's
has no effect.
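Relation (5) is partly lost in this copy; under the reconstruction used above, (R_5 : R_3) is the vanishing sum ζ_5 + ζ_5^2 + ζ_5^3 + ζ_5^4 + (−ζ_3) + (−ζ_3^2) of six roots of unity. A quick numerical confirmation in Python:

import cmath

def zeta(n, k=1):
    # the k-th power of the standard primitive n-th root of unity
    return cmath.exp(2j * cmath.pi * k / n)

R5 = [zeta(5, k) for k in range(5)]       # 1 + z_5 + z_5^2 + z_5^3 + z_5^4 = 0
R3 = [zeta(3, k) for k in range(3)]       # 1 + z_3 + z_3^2 = 0
# subtract R_3 from R_5: the shared 1's cancel and the minus signs are absorbed into the roots
relation = [zeta(5, k) for k in range(1, 5)] + [-zeta(3, 1), -zeta(3, 2)]
assert abs(sum(R5)) < 1e-12 and abs(sum(R3)) < 1e-12
assert abs(sum(relation)) < 1e-12
print("(R_5 : R_3) is a vanishing sum of", len(relation), "roots of unity")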
It turns out that recursive use of the construction above is enough to generate all
minimal relations of weight up to 12. These are listed in Table 1. The completeness
and correctness of the table will be proved in Theorem 3 below. Although there are
107 minimal relations up to rotational equivalence, often the minimal relations within
Weight Relation type Number of relations of that type
6 (R 5
(R
9 (R
(R
(R
(R
(R
(R
(R
(R
(R
(R
(R
Table
1. The 107 minimal relations of weight up to 12.
one of our classes are Galois conjugates. For example, the two minimal relations of
type (R are conjugate under Gal(Q (i 15
)=Q ), as pointed out in [8].
The minimal relations with defined as in (4)) had been previously catalogued
in [8], and those with k - 9 in [2]. In fact, the a i in these never exceed 1, so
these also have weight less than or equal to 9.
Theorem 3. Table 1 is a complete listing of the minimal relations of weight up to
12 (up to rotation).
The following three lemmas will be needed in the proof.
Lemma 1. If the relation (4) is minimal, then there are distinct primes
so that each j i is a -th root of unity, after the relation has
been suitably rotated.
Proof. This is a corollary of Theorem 1 in [8].
Lemma 2. The only minimal relations (up to rotation) involving only the 2p-th roots
of unity, for p prime, are R 2 and R p .
Proof. Any 2p-th root of unity is of the form \Sigmai i . If both +i i and \Gammai i occurred in
the same relation, then R 2 occurs as a subrelation. So the relation has the form
By the irreducibility of the cyclotomic polynomial,
are independent
over Q save for the relation that their sum is zero, so all the c i must be equal. If
they are all positive, then R p occurs as a subrelation. If they are all negative, then
rotated by -1 (i.e., 180 degrees) occurs as a subrelation.
Lemma 3. Suppose S is a minimal relation, and are picked as
in Lemma 1 with (or a rotation) is
of the form (R ps are minimal relations not equal to R 2
and involving only roots of unity, such that
Proof. Since every p 1 -th root of unity is uniquely expressible as a p 1
th root of unity and a p s -th root of unity, the relation can be rewritten as
ps
where each f i is a sum of p 1 roots of unity, which we will think of as a
sum (not just its value).
Let Km be the field obtained by adjoining the p 1 roots of unity to Q .
the only linear
relation satisfied by ps
ps over K s\Gamma1 is that their sum is zero. Hence (6)
forces the values of the f i to be equal.
The total number of roots of unity in all the f i 's is w(S) ! 2p s , so by the pigeonhole
principle, some f i is zero or consists of a single root of unity. In the former case,
each f j sums to zero, but at least two of these sums contain at least one root of
unity, since otherwise s was not minimal, so one of these sums gives a subrelation
of S, contradicting its minimality. So some f i consists of a single root of unity. By
rotation, we may assume f sums to 1, and if it is not simply the
single root of unity 1, the negatives of the roots of unity in f i together with 1 form
a relation T i which is not R 2 and involves only roots of unity, and it
is clear that S is of type (R ps
If one of the T 's were not minimal,
then it could be decomposed into two nontrivial subrelations, one of which would not
share a root of unity with the R ps , and this would give a nontrivial subrelation of S,
contradicting the minimality of S. Finally, w(S) must equal the sum of the weights
of R ps and the T 's, minus 2j to account for the roots of unity that are cancelled in
the construction of (R ps
Proof of Theorem 3. We will content ourselves with proving that every relation of
weight up to 12 can be decomposed into a sum of the ones listed in Table 1, it then
being straightforward to check that the entries in the table are distinct, and that
none of them can be further decomposed into relations higher up in the table.
Let S be a minimal relation with w(S) - 12. Pick
In particular, p s - 12, so
Case 1:
Here the only minimal relations are R 2 and R 3 , by Lemma 2.
Case 2:
If w(S) ! 10, then we may apply Lemma 3 to deduce that S is of type (R
Each T must be R 3 (since p by the last
equation in Lemma 3. The number of relations of type (R 5 : jR 3 ), up to rotation,
is
=5. (There are
ways to place the R 3 's, but one must divide by 5 to avoid
counting rotations of the same relation.)
as in (6). If some f i consists of zero or one
roots of unity, then the argument of Lemma 3 applies, and S must be of the form
(R which contradicts the last equation in the Lemma. Otherwise
the numbers of (sixth) roots of unity occurring in f must be 2,2,2,2,2
or 2,2,2,2,3 or 2,2,2,3,3 or 2,2,2,2,4 in some order. So the common value of the f i
is a sum of two sixth roots of unity. By rotating by a sixth root of unity, we may
assume this value is 0, 1, or 1 If it is 0 or 1, then the arguments in the proof of
Lemma 3 apply. So assume it is 1 . The only way two sixth roots of unity can
sum to 1+ i 6 is if they are 1 and i 6 in some order. The only ways three sixth roots of
unity can sum to 1 are
6 or i 6
6 . So if the numbers of roots
of unity occurring in f are 2,2,2,2,2 or 2,2,2,2,3, then S will contain R 5
or its rotation by i 6 , and the same will be true for 2,2,2,3,3 unless the two f i with
three terms are 1
6 , in which case S contains (R 5
Finally, it is impossible to as a sum of sixth roots of unity without using
1 or i 6 , so if the numbers are 2,2,2,2,4, then again S contains R 5 or its rotation by
. Thus there are no minimal relations S with
Case 3:
7, we can apply Lemma 3. Now the sum of w(T
required to be w(S) \Gamma 7 which at most 5, so the T 's that may be used are R 3 , R 5 ,
(R and the two of type (R
and 5, respectively. So the problem is reduced to listing the partitions of w(S) \Gamma 7
into parts of size 1, 3, 4, and 5.
If all parts used are 1, then we get (R 7, and there are
=7 distinct relations in this class. Otherwise exactly one part of size 3, 4, or 5 is
used, and the possibilities are as follows. If a part of size 3 is used, we get (R 7
Partition Relation type
(R
(R
(R
(R
(R
(R
(R
(R
(R
(R
Partition Relation type
(R
R 7 +R 5
(R
Table
2. The types of relations of weight 12.
(R weights 10, 11, 12 respectively. By rotation, the
R 5 may be assumed to share the 1 in the R 7 , and then there are
ways to place the
R 3 's where i is the number of R 3 's. If a part of size 4 is used, we get (R
of weight 11 or (R 7 By rotation, the (R 5 may be
assumed to share the 1 in the R 7 , but any of the six roots of unity in the (R 5
may be rotated to be 1. The R 3 can then overlap any of the other 6 seventh roots
of unity. Finally, if a part of size 5 is used, we get (R There are two
different relations of type (R that may be used, and each has seven roots of
unity which may be rotated to be the 1 shared by the R 7 , so there are 14 of these all
together.
Case 4:
Applying Lemma 3 shows that the only possibilities are R 11 of weight 11, and
(R
Now a general relation of weight 12 is a sum of the minimal ones of weight up to 12,
and we can classify them according to the weights of the minimal relations, which form
a partition of 12 with no parts of size 1 or 4. We will use the notation (R
for example, to denote a sum of three minimal relations of type (R
and R 3 .
Table
2 lists the possibilities. The parts may be rotated independently,
so any category involving more than one minimal relation contains infinitely many
relations, even up to rotation (of the entire relation). Also, the categories are not
mutually exclusive, because of the non-uniqueness of the decomposition into minimal
relations.
Figure
3. A surprising trivial solution for the 16-gon. The intersection
point does not lie on any of the 16 lines of symmetry of the 16-gon.
4. Solutions to the trigonometric equation
Here we use the classification of the previous section to give a complete listing of
the solutions to the trigonometric equation (2). There are some obvious solutions
to (2), namely those in which U; V; W are arbitary positive rational numbers with
sum 1=2, and X; Y; Z are a permutation of U; V; W . We will call these the trivial
solutions, even though the three-diagonal intersections they give rise to can look
surprising. For example, see Figure 3 for an example on the 16-gon.
The twelve roots of unity occurring in (3) are not arbitrary; therefore we must
go through Table 2 to see which relations are of the correct form, i.e., expressible
as a sum of six roots of unity and their inverses, where the product of the six is -1.
Because of the large number of cases, we perform this calculation using Mathematica.
Each entry of Table 2 represents a finite number of linearly parameterized (in the
exponents) families of relations of weight 12. For each parameterized family, we check
to see what additional constraints must be put on the parameters for the relation to
be of the form of (3). Next, for each parameterized family of solutions to (3), we
calculate the corresponding U; V; W;X;Y;Z and throw away solutions in which some
of these are nonpositive. Finally, we sort U; V; W and X; Y; Z and interchange the
two triples if U > X, in order to count the solutions only up to symmetry.
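The search itself was carried out in Mathematica; purely as an illustration, here is a Python sketch of the normalization step just described (discarding nonpositive solutions and counting up to symmetry), with a made-up example input.

from fractions import Fraction as Fr

def canonical(U, V, W, X, Y, Z):
    # None if some entry is nonpositive, otherwise a representative up to the symmetry described above
    vals = [U, V, W, X, Y, Z]
    if any(v <= 0 for v in vals):
        return None
    left, right = sorted(vals[:3]), sorted(vals[3:])
    return tuple(min(left + right, right + left))   # interchange the two triples if needed

a = canonical(Fr(1, 8), Fr(1, 8), Fr(1, 4), Fr(1, 4), Fr(1, 8), Fr(1, 8))
b = canonical(Fr(1, 4), Fr(1, 8), Fr(1, 8), Fr(1, 8), Fr(1, 8), Fr(1, 4))
assert a is not None and a == b
print(a)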
The results of this computation are recorded in the following theorem.
U
Table
3. The nontrivial infinite families of solutions to (2).
Theorem 4. The positive rational solutions to (2), up to symmetry, can be classified
as follows:
1. The trivial solutions, which arise from relations of type 6R 2 .
2. Four one-parameter families of solutions, listed in Table 3, which arise from
relations of type
3. Sixty-five "sporadic" solutions, listed in Table 4, which arise from the other types
of weight 12 relations listed in Table 2. The only duplications in this list are
that the second family of Table 3 gives a trivial solution for t = 1/12, and that
the first and fourth families of Table 3 give the same solution when
both.
Some explanation of the tables is in order. The last column of Table 3 gives
the allowable range for the rational parameter t. The entries of Table 4 are sorted
according to the least common denominator of U; V; W;X;Y;Z, which is also the
least n for which diagonals of a regular n-gon can create arcs of the corresponding
lengths. The reason 11 does not appear in the least common denominator for any
sporadic solution is that the relation (R 11 : R 3 ) cannot be put in the form of (3) with
the ff j summing to 1, and hence leads to no solutions of (2). (Several other types of
relations also give rise to no solutions.)
Tables
3 and 4 are the same as Bol's tables at the bottom of page 40 and on page 41
of [1], in a slightly different format.
The arcs cut by diagonals of a regular n-gon have lengths which are multiples of
2-=n, so U , V , W , X, Y and Z corresponding to any configuration of three diagonals
meeting must be multiples of 1=n. With this additional restriction, trivial solutions
to (2) occur only when n is even (and at least 6). Solutions within the infinite families
of
Table
occur when n is a multiple of 6 (and at least 12), and there t must be a
multiple of 1=n. Sporadic solutions with least common denominator d occur if and
only if n is a multiple of d.
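As printed, the first row of Table 4 reads U, V, W = 1/15, 1/6, 4/15 and X, Y, Z = 1/10, 1/10, 3/10 (the extracted table appears to have lost its denominator column, here 30). Assuming equation (2) has the form sin(πU) sin(πV) sin(πW) = sin(πX) sin(πY) sin(πZ), the row can be checked numerically in Python:

import math
from fractions import Fraction as Fr

U, V, W = Fr(1, 15), Fr(1, 6), Fr(4, 15)
X, Y, Z = Fr(1, 10), Fr(1, 10), Fr(3, 10)
assert U + V + W + X + Y + Z == 1
lhs = math.prod(math.sin(math.pi * t) for t in (U, V, W))
rhs = math.prod(math.sin(math.pi * t) for t in (X, Y, Z))
assert abs(lhs - rhs) < 1e-12
print("sporadic solution with least common denominator 30 verified:", lhs, rhs)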
5. Intersections of more than three diagonals
Now that we know the configurations of three diagonals meeting, we can check
how they overlap to produce configurations of more than three diagonals meeting.
1=15 1=6 4=15 1=10 1=10 3=10
1=15 1=15 7=15 1=15 1=10 7=30
1=42 3=14 5=14 1=21 1=6 4=21
1=42 1=6 19=42 1=14 2=21 4=21
1=42 1=6 13=42 1=21 1=14 8=21
1=42 1=21 13=21 1=42 1=14 3=14
1=12 2=15 19=60 1=10 3=20 13=60
1=15 11=60 13=60 1=12 1=10 7=20
1=60 4=15 23=60 1=12 1=10 3=20
1=60 4=15 3=10 1=20 1=12 17=60
1=60 13=60 9=20 1=12 1=10 2=15
1=60 13=60 5=12 1=20 2=15 1=6
1=60 1=6 31=60 1=15 1=10 2=15
1=60 1=6 5=12 1=20 1=15 17=60
84 1=12 3=14 19=84 11=84 13=84 4=21
1=14 11=84 23=84 1=12 2=21 29=84
1=42 1=12 7=12 1=21 1=14 4=21
1=84 25=84 5=14 5=84 1=12 4=21
1=84 5=21 5=12 5=84 1=14 17=84
1=84 3=14 37=84 1=21 1=12 17=84
1=84 1=6 43=84 1=21 1=14 4=21
90 1=18 13=90 7=18 11=90 2=15 7=45
1=90 23=90 31=90 2=45 1=15 5=18
1=90 17=90 47=90 1=18 4=45 2=15
1=12 19=120 29=120 1=10 13=120 37=120
1=60 13=120 73=120 1=20 1=12 2=15
1=120 7=20 43=120 7=120 11=120 2=15
1=120 13=60 61=120 1=20 1=12 2=15
1=35 2=15 97=210 1=14 17=210 47=210
Table
4. The solutions to (2).
We will disregard configurations in which the intersection point is the center of the
n-gon, since these are easily described: there are exactly n=2 diagonals (diameters)
through the center when n is even, and none otherwise.
When k diagonals meet, they form 2k arcs, whose lengths we will measure as a
fraction of the whole circumference (so they will be multiples of 1=n) and list in
counterclockwise order. (Warning: this is different from the order used in Tables 3
and 4.) The least common denominator of the numbers in this list will be called the
denominator of the configuration. It is the least n for which the configuration can be
realized as diagonals of a regular n-gon.
Lemma 4. If a configuration of k - 2 diagonals meeting at an interior point other
than the center has denominator dividing d, then any configuration of diagonals meeting
at that point has denominator dividing LCM(2d; 3).
Proof. We may assume 2. Any other configuration of diagonals through the
intersection point is contained in the union of configurations obtained by adding one
diagonal to the original two, so we may assume the final configuration consists of
three diagonals, two of which were the original two. Now we need only go through
our list of three-diagonal intersections.
It can be checked (using Mathematica) that removing any diagonal from a sporadic
configuration of three intersecting diagonals yields a configuration whose denominator
is the same or half as much, except that it is possible that removing a diagonal from a
three-diagonal configuration of denominator 210, yields one of denominator 70, which
proves the desired result for this case. The additive group generated by 1=6 and the
normalized arc lengths of a configuration obtained by removing a diagonal from a
configuration corresponding to one of the families of Table 3 contains 2t where t is the
parameter, (as can be verified using Mathematica again), which means that adding
that third diagonal can at most double the denominator (and throw in a factor of 3, if
it isn't already there). Similarly, it is easily checked (even by hand), that the subgroup
generated by the normalized arc lengths of a configuration obtained by removing one
of the three diagonals of a configuration corresponding to a trivial solution to (2) but
with intersection point not the center, contains twice the arc lengths of the original
configuration.
Corollary 1. If a configuration of three or more diagonals meeting includes three
forming a sporadic configuration, then its denominator is 30, 42, 60, 84, 90, 120,
168, 180, 210, 240, or 420.
Proof. Combine the lemma with the list of denominators of sporadic configurations
listed in Table 4.
For k - 4, a list of 2k positive rational numbers summing to 1 arises this way if
and only if the lists of length 2k \Gamma 2 which would arise by removing the first or second
diagonal actually correspond to intersecting diagonals. Suppose 4. If we
Range
Table
5. The one-parameter families of four-diagonal configurations.
specify the sporadic configuration or parameterized family of configurations that arise
when we remove the first or second diagonal, we get a set of linear conditions on the
eight arc lengths. Corollary 1 tells us that we get a configuration with denominator
among 30, 42, 60, 84, 90, 120, 168, 180, 210, 240, and 420, if one of these two is
sporadic. Using Mathematica to perform this computation for the rest of possibilities
in Theorem 4 shows that the other four-diagonal configurations, up to rotation and
reflection, fall into 12 one-parameter families, which are listed in Table 5 by the eight
normalized arc lengths and the range for the parameter t, with a finite number of
exceptions of denominators among 12, 18, 24, 30, 36, 42, 48, 60, 84, and 120.
We will use a similar argument when 5. Any five-diagonal configuration containing
a sporadic three-diagonal configuration will again have denominator among
30, 42, 60, 84, 90, 120, 168, 180, 210, 240, and 420, again. Any other five-diagonal
configuration containing one of the exceptional four-diagonal configurations will have
denominator among 12, 18, 24, 30, 36, 42, 48, 60, 72, 84, 96, 120, 168, and 240, by
Lemma 4. Finally, another Mathematica computation shows that the one-parameter
families of four-diagonal configurations overlap to produce the one-parameter families
listed (up to rotation and reflection) in Table 6, and a finite number of exceptions of
denominators among 12, 18, 24, and 30.
For six-diagonal configuration containing a sporadic three-
diagonal configuration will again have denominator among 30, 42, 60, 84, 90, 120,
168, 180, 210, 240, and 420. Any six-diagonal configuration containing one of the
exceptional four-diagonal configurations will have denominator among 12, 18, 24,
30, 36, 42, 48, 60, 72, 84, 96, 120, 168, and 240. Any six-diagonal configuration
containing one of the exceptional five-diagonal configurations will have denominator
among 12, 18, 24, 30, 36, 48, and 60. Another Mathematica computation shows that
Range
Table
6. The one-parameter families of five-diagonal configurations.
the one-parameter families of five-diagonal configurations cannot combine to give a
six-diagonal configuration.
Finally for k - 7, any k-diagonal configuration must contain an exceptional configuration
of 3, 4, or 5 diagonals, and hence by Lemma 4 has denominator among 12,
We summarize the results of this section in the following.
Proposition 1. The configurations of k - 4 diagonals meeting at a point not the
center, up to rotation and reflection, fall into the one-parameter families listed in
Tables
5 and 6, with finitely many exceptions (for fixed of denominators among
12, 18, 24, 30, 36, 42, 48, 60, 72, 84, 90, 96, 120, 168, 180, 210, 240, and 420.
In fact, many of the numbers listed in the proposition do not actually occur as
denominators of exceptional configurations. For example, it will turn out that the
only denominator greater than 120 that occurs is 210.
6. The formula for intersection points
Let a k (n) denote the number of points inside the regular n-gon other than the center
where exactly k lines meet. Let b k (n) denote the number of k-tuples of diagonals
which meet at a point inside the n-gon other than the center. Each interior point
at which exactly m diagonals meet gives rise to \binom{m}{k} such k-tuples, so we have the relationship
b_k(n) = Σ_{m ≥ k} \binom{m}{k} a_m(n). (7)
Since every four distinct vertices of the n-gon determine one pair of diagonals which intersect inside, the number of such pairs is exactly \binom{n}{4}, but if n is even, then \binom{n/2}{2} of these are pairs which meet at the center, so
b_2(n) = \binom{n}{4} − \binom{n/2}{2} δ_2(n). (8)
(Recall that δ_m(n) is defined to be 1 if n is a multiple of m, and 0 otherwise.)
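Relation (7) is inverted later by reverse induction on k to recover a_k(n) from the b_k(n); a small illustrative Python sketch of that inversion (not from the paper), tested on made-up counts:

from math import comb

def a_from_b(b):
    # invert b_k = sum_{m >= k} C(m, k) a_m by reverse induction on k
    K = max(b)
    a = {}
    for k in range(K, 1, -1):
        a[k] = b.get(k, 0) - sum(comb(m, k) * a[m] for m in range(k + 1, K + 1))
    return a

a_true = {2: 100, 3: 12, 4: 3, 5: 1}
b = {k: sum(comb(m, k) * a_true.get(m, 0) for m in range(k, 6)) for k in range(2, 6)}
assert a_from_b(b) == a_true
print(a_from_b(b))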
We will use the results of the previous two sections to deduce the form of b k (n)
and then the form of a k (n). To avoid having to repeat the following, let us make a
definition.
Definition . A function on integers n - 3 will be called tame if it is a linear combination
(with rational coefficients) of the functions n 3 ,
Proposition 2. For each k - 2, the function b k (n)=n on integers n - 3 is tame.
Proof. The case k = 2 is handled by (8), so assume k ≥ 3. Each list of 2k normalized
arc lengths as in Section 5 corresponding to a configuration of k diagonals meeting
at a point other than the center, considered up to rotation (but not reflection),
contributes n to b k (n). (There are n places to start measuring the arcs from, and
these n configurations are distinct, because the corresponding intersection points
differ by rotations of multiples of 2-=n, and by assumption they are not at the
center.) So b_k(n)/n counts such lists.
First suppose k = 3. When n is even, the family of trivial solutions to the trigonometric equation (2) has U = a/n, V = b/n, W = c/n, where a, b, c are positive integers with sum n/2, and X, Y, and Z are some permutation of U, V, W. Each permutation
gives rise to a two-parameter family of six-long lists of arc lengths, and the number
of lists with each family is the number of partitions of n=2 into three positive parts,
which is a quadratic polynomial in n. Similarly each family of solutions in Table 3
gives rise to a number of one-parameter families of lists, when n is a multiple of 6,
each containing dn=6e \Gamma 1 or dn=12e \Gamma 1 lists. These functions of n (extended to be
when 6 does not divide n) are expressible as a linear combination of nffi 6 (n), ffi 6 (n),
and Finally the sporadic solutions to 2 give rise to a finite number of lists,
having denominators among 30, 42, 60, 84, 90, 120, and 210, so their contribution to
3 (n)=n is a linear combination of ffi
But these families of lists overlap, so we must use the Principle of Inclusion-Exclusion
to count them properly. To show that the result is a tame function, it
suffices to show that the number of lists in any intersection of these families is a tame
function. When two of the trivial families overlap but do not coincide, they overlap
where two of the a, b, and c above are equal, and the corresponding lists lie in one of
the one-parameter families (t; t; t; t;
(with 1=4), each of which contain dn=4e \Gamma 1 lists (for n even). This function
of n is a combination of nffi 2 (n), hence it is tame. Any other intersection
of the infinite families must contain the intersection of two one-parameter
families which are among the two above or arise from Table 3, and a Mathematica
computation shows that such an intersection consists of at most a single list of denominator
among 6, 12, 18, 24, and 30. And, of course, any intersection involving a
single sporadic list, can contain at most that sporadic list. Thus the number of lists
within any intersection is a tame function of n. Finally we must delete the lists which
correspond to configurations of diagonals meeting at the center. These are the lists
within the trivial two-parameter family (t; u;
number is also a tame function of n, by the Principle of Inclusion-Exclusion again.
Thus b 3 (n)=n is tame.
Next suppose 4. The number of lists within each family listed in Table 5, or
the reflection of such a family, is (when n is divisible by 6) the number of multiples
of 1=n strictly between ff and fi, where the range for the parameter t is ff
This number is dfine the table shows that ff and fi are always
multiples of 1=24, this function of n is expressible as a combination of nffi 6 (n) and a
function on multiples of 6 depending only on n mod 24, and the latter can be written
as a combination of ffi 6 (n), so it is tame. Mathematica
shows that when two of these families are not the same, they intersect in at most a
single list of denominator among 6, 12, 18, and 24. So these and the exceptions of
Proposition 1 can be counted by a tame function. Thus, again by the Principle of
Inclusion-Exclusion, b 4 (n)=n is tame.
The proof for identical to that of 4, using Table 6 instead of Table 5,
and using another Mathematica computation which shows that the intersections of
two one-parameter families of lists consist of at most a single list of denominator 24.
The proof for k - 6 is even simpler, because then there are only the exceptional
lists. By Proposition 1, b k (n)=n is a linear combination of ffi m (n) where m ranges
over the possible denominators of exceptional lists listed in the proposition, so it is
tame.
Lemma 5. A tame function is determined by its values at
10, 12, 18, 24, 30, 36, 42, 48, 54, 60, 66, 72, 84, 90, 96, 120, 168, 180, 210, and
420.
Proof. By linearity, it suffices to show that if a tame function f is zero at those
values, then f the zero linear combination of the functions in the definition of a tame
function. The vanishing at forces the coefficients of n 3 ,
1 to vanish, by Lagrange interpolation. Then comparing the values at
shows that the coefficient of ffi 4 (n) is zero. The vanishing at
forces the coefficients of to vanish. Comparing the values
at shows that the coefficient of nffi 6 is zero. Comparing the values
at shows that the coefficient of
At this point, we know that f(n) is a combination of ffi m (n), for
30, 36, 42, 48, 60, 72, 84, 90, 96, 120, 168, 180, 210, and 420. For each m in turn,
now implies that the coefficient of ffi m (n) is zero.
Proof of Theorem 1. Computation (see the appendix) shows that the tame function
b 8 (n)=n vanishes at all the numbers listed in Lemma 5. Hence by that lemma, b 8
0 for all n. Thus by (7), a k (n) and b k (n) are identically zero for all k - 8 as well.
By reverse induction on k, we can invert (7) to express a k (n) as a linear combination
of b m (n) with m - k. Hence a k (n)=n is tame as well for each k - 2. Computation
shows that the equations
a
a 3
a 4
a 5
a 6
a 7
hold for all the n listed in Lemma 5, so the lemma implies that they hold for all
- 3. These formulas imply the remarks in the introduction about the maximum
number of diagonals meeting at an interior point other than the center. Finally
I(n) = δ_2(n) + Σ_{k ≥ 2} a_k(n),
which gives the desired formula. (The δ_2(n) in the expression for I(n) is to account for the center point when n is even, which is the only point not counted by the a_k.)
7. The formula for regions
We now use the knowledge obtained in the proof of Theorem 1 about the number
of interior points through which exactly k diagonals pass to calculate the number of
regions formed by the diagonals.
Proof of Theorem 2. Consider the graph formed from the configuration of a regular
n-gon with its diagonals, in which the vertices are the vertices of the n-gon together
with the interior intersection points, and the edges are the sides of the n-gon together
with the segments that the diagonals cut themselves into. As usual, let V denote the
number of vertices of the graph, E the number of edges, and F the number of regions
formed, including the region outside the n-gon. We will employ Euler's Formula V − E + F = 2.
Clearly V = n + I(n). We will count edges by counting their ends, which are 2E in number. Each vertex of the n-gon has n − 1 edge ends, the center (if n is even) has n edge ends, and any other interior point through which exactly k diagonals pass has 2k edge ends, so
2E = n(n − 1) + n δ_2(n) + Σ_{k ≥ 2} 2k a_k(n).
So the desired number of regions, not counting the region outside the n-gon, is
R(n) = F − 1 = E − V + 1.
Substitution of the formulas derived in the proof of Theorem 1 for a k (n) and I(n)
yields the desired result.
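For reference, the bookkeeping can be collected in one place. The display below is a compact restatement under the edge-end counts reconstructed above (in particular the $\delta_2(n)$ term for the center and the count of $n-1$ edge ends per polygon vertex), not a quotation of the original derivation:
```latex
\[
V = n + I(n), \qquad
2E = n(n-1) + n\,\delta_2(n) + \sum_{k\ge 2} 2k\,a_k(n),
\]
\[
R(n) = F - 1 = E - V + 1
     = \frac{n(n-1)}{2} + \frac{n}{2}\,\delta_2(n) + \sum_{k\ge 2} k\,a_k(n) \;-\; n - I(n) + 1 .
\]
```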
Appendix: computations and tables
In Table 7 we list $I(n)$, $R(n)$, and the $a_k(n)$. To determine
the polynomials listed in Theorem 1, more data was needed, especially for the $a_k(n)$.
The largest $n$ for which this was required was 420. For speed and memory conser-
vation, we took advantage of the regular $n$-gon's rotational symmetry and focused
our attention on only $2\pi/n$ radians of the $n$-gon. The data from this computation is
found in Table 8. Although we only needed to know the values at those $n$ listed in
Lemma 5 of Section 6, we give a list for a larger range of $n$ so that the nice patterns
can be seen.
The numbers in these tables were found by numerically computing (using a C
program and 64 bit precision) all possible
intersections, and sorting them by
their x coordinate. We then focused on runs of points with close x coordinates,
looking for points with close y coordinates.
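A minimal sketch of this numerical search (in Python rather than the authors' C program, and without the 40-digit Maple verification described below) might look as follows; the function names and tolerances are illustrative only.
```python
# Sketch: compute the intersection points of all pairs of diagonals of a regular
# n-gon, sort them by x coordinate, and group nearly coincident points.
from itertools import combinations
from math import cos, sin, pi

def diagonal_intersections(n, eps=1e-9):
    verts = [(cos(2*pi*i/n), sin(2*pi*i/n)) for i in range(n)]
    # keep only true diagonals (sides never create interior intersections)
    diags = [(i, j) for i, j in combinations(range(n), 2)
             if (j - i) % n not in (1, n - 1)]
    points = []
    for (a, b), (c, d) in combinations(diags, 2):
        if len({a, b, c, d}) < 4:
            continue                      # diagonals sharing an endpoint
        (x1, y1), (x2, y2) = verts[a], verts[b]
        (x3, y3), (x4, y4) = verts[c], verts[d]
        den = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
        if abs(den) < eps:
            continue                      # parallel diagonals
        t = ((x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)) / den
        u = ((x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)) / den
        if eps < t < 1 - eps and eps < u < 1 - eps:
            points.append((x1 + t * (x2 - x1), y1 + t * (y2 - y1)))
    return sorted(points)                 # lexicographic: by x, then y

def group_close(points, tol=1e-7):
    # greedily merge points whose coordinates agree up to tol; a point where k
    # diagonals concur shows up as a run of C(k,2) pairwise intersections
    groups = []
    for p in points:
        for g in groups:
            q = g[0]
            if abs(p[0] - q[0]) < tol and abs(p[1] - q[1]) < tol:
                g.append(p)
                break
        else:
            groups.append([p])
    return groups

if __name__ == "__main__":
    n = 12
    groups = group_close(diagonal_intersections(n))
    print("interior intersection points I(%d) =" % n, len(groups))
```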
Several checks were made to eliminate any fears (arising from round-off errors) of
distinct points being mistaken as close. First, the C program sent data to Maple
which checked that the coordinates of close points agreed to at least 40 decimal
places. Second, we verified for each $n$ that close points came in counts of the form
$\binom{k}{2}$ ($k$ diagonals meeting at a point give rise to $\binom{k}{2}$ close points. Hence, any run
whose length is not of this form indicates a computational error).
A second program was then written and run on a second machine to make the
computations completely rigorous. It also found the intersection points numerically,
sorted them and looked for close points, but, to be absolutely sure that a pair of close
points were actually the same, it checked that, for the two pairs of diagonals involved,
the corresponding triples of diagonals each divided the circle into arcs of lengths consistent with Theorem 4. Since this test
only involves comparing rational numbers, it could be performed exactly.
A word should also be said concerning limiting the search to $2\pi/n$ radians of
the $n$-gon. Both programs looked at slightly smaller slices of the $n$-gon to avoid
problems caused by points near the boundary. More precisely, we limited our search
to points whose angle with the origin fell in an interval of length $2\pi/n - \varepsilon$ starting at $c_1$,
and made sure not to include the origin in the count. Here
$\varepsilon$ was chosen to be $.00000000001$ and $c_1$ was chosen to be $.00000123$ ($c_1 = 0$ would
have led to problems since there are many intersection points with angle $0$ or $2\pi/n$).
To make sure that no intersection points were omitted, the number of points found
(counting multiplicity) was compared with the total number of crossing pairs of diagonals.
Acknowledgements
We thank Joel Spencer and Noga Alon for helpful conversations. Also we thank
Jerry Alexanderson, Jeff Lagarias, Hendrik Lenstra, and Gerry Myerson for pointing
out to us many of the references below.
--R
Beantwoording van prijsvraag No.
Trigonometric Diophantine equations (On vanishing sums of roots of unity)
On a problem of Steinhaus about polygons
Diagonalen im regularen n-Eck
Number of intersections of diagonals in regular n-gons
On linear relations between roots of unity
Rational products of sines of rational angles
Adventitious quadrangles: a geometrical approach
Multiple intersections of diagonals of regular polygons
A note on the cyclotomic polynomial
Mathematical Snapshots
Problem 225
Adventitious angles
--TR | intersection points;diagonals;adventitious quadrangles;regular polygons;roots of unity |
288868 | Rankings of Graphs. | A vertex (edge) coloring $\phi:V\rightarrow \{1,2,\ldots ,t\}$ ($\phi':E\rightarrow \{1,2,\ldots,$ $t\}$) of a graph G=(V,E) is a vertex (edge) t-ranking if, for any two vertices (edges) of the same color, every path between them contains a vertex (edge) of larger color. The {\em vertex ranking number} $\chi_{r}(G)$ ({\em edge ranking number} $\chi_{r}'(G)$) is the smallest value of t such that G has a vertex (edge) t-ranking. In this paper we study the algorithmic complexity of the {\sc Vertex Ranking} and {\sc Edge Ranking} problems. It is shown that $\chi_{r}(G)$ can be computed in polynomial time when restricted to graphs with treewidth at most k for any fixed k. We characterize the graphs where the vertex ranking number $\chi_{r}$ and the chromatic number $\chi$ coincide on all induced subgraphs, show that $\chi_{r}(G)=\chi (G)$ implies $\chi (G)=\omega (G)$ (largest clique size), and give a formula for $\chi_{r}'(K_n)$. | Introduction
In this paper we consider vertex rankings and edge rankings of graphs. The vertex
ranking problem, also called the ordered coloring problem [15], has received
much attention lately because of the growing number of applications. There
are applications in scheduling problems of assembly steps in manufacturing systems
[19], e.g., edge ranking of trees can be used to model the parallel assembly
of a product from its components in a quite natural manner [6, 12, 13, 14].
Department of Computer Science, Utrecht University, P.O. Box 80.089, 3508
the Netherlands. Email: [email protected]. This author was partially supported by the ESPRIT
Basic Research Actions of the EC under contract 7141 (project ALCOM II)
y Department of Computer Science and Engineering, University of Nebraska - Lincoln,
Lincoln, NE 68588-0115, U.S.A. This author was partially supported by the Office of Naval
Research under Grant No. N0014-91-J-1693
z Institut für Informatik, TU München, 80290 München, Germany. On leave from Universität Trier, Germany
x Department of Mathematics and Computing Science, Eindhoven University of Technology,
P.O.Box 513, 5600 MB Eindhoven, the Netherlands
- IRISA, Campus Universitaire de Beaulieu, 35042 Rennes Cedex, France. On leave
from Friedrich-Schiller-Universität, Jena. This author was partially supported by the Deutsche
Forschungsgemeinschaft under Kr 1371/1-1
k Fakultät für Mathematik und Informatik, Friedrich-Schiller-Universität, Universitätshochhaus, 07740 Jena, Germany
Computer and Automation Institute, Hungarian Academy of Sciences, H-1111 Budapest,
Kende u. 13-17, Hungary. This author was partially supported by the "OTKA" Research
Fund of the Hungarian Academy of Sciences, Grant No. 2569
Furthermore the problem of finding an optimal vertex ranking is equivalent to
the problem of finding a minimum-height elimination tree of a graph [6, 7].
This measure is of importance for the parallel Cholesky factorization of matrices
[3, 9, 18]. Yet other applications lie in the field of VLSI-layout [17, 26].
The vertex ranking problem 'Given a graph G and a positive integer t,
decide whether - r (G) - t ' is NP-complete even when restricted to cobipartite
graphs since Pothen has shown that the equivalent minimum elimination tree
height problem remains NP-complete on cobipartite graphs [20]. A short proof
of the NP-completeness of vertex ranking is given in Section 3. Much work
has been done in finding optimal rankings of trees. For trees there is a linear-time
algorithm finding an optimal vertex ranking [24]. For the closely related
edge ranking problem on trees a O(n 3 ) algorithm was given in [8]. Recently,
Zhou and Nishizeki obtained an O(n log n) algorithm for optimally edge ranking
trees [28] (see also [29]). Efficient vertex ranking algorithms for permutation,
trapezoid, interval, circular-arc, circular permutation graphs, and cocompara-
bility graphs of bounded dimension are presented in [7]. Moreover, the vertex
ranking problem is trivial on split graphs and it is solvable in linear time on
cographs [25].
In [15], typical graph theoretical questions, as they are known from the
coloring theory of graphs, are investigated. This also leads to an $O(\sqrt{n})$ bound
for the vertex ranking number of a planar graph, and the authors describe a
polynomial-time algorithm which finds a vertex ranking of a planar graph using
only $O(\sqrt{n})$ colors. For graphs in general there is an approximation algorithm
of performance ratio $O(\log^2 n)$ for the vertex ranking number [3, 16]. In [3]
it is also shown that one plus the pathwidth of a graph is a lower bound for
the vertex ranking number of the graph (hence a planar graph has pathwidth
$O(\sqrt{n})$, which is also shown in [16] using different methods).
Our goal is to extend the known results in both the algorithmic and graph
theoretic directions. The paper is organized as follows. In Section 2 the necessary
notions and preliminary results are given. We study the algorithmic
complexity of determining whether a graph $G$ fulfills $\chi_r(G) \le t$ and $\chi'_r(G) \le t'$,
respectively, in Sections 3, 4, and 5. In Section 6 we characterize those graphs
for which the vertex ranking number and the chromatic number coincide on
every induced subgraph. Those graphs turn out to be precisely those containing
no path and cycle on four vertices as an induced subgraph; hence, we
obtain a characterization of the trivially perfect graphs [11] in terms of rankings.
Moreover we show that $\chi_r(G) = \chi(G)$ implies that the chromatic number of
G is equal to its largest clique size. In Section 7 we give a recurrence relation
allowing us to compute the edge ranking number of a complete graph.
Preliminaries
We consider only finite, undirected and simple graphs E). Throughout
the paper n denotes the cardinality of the vertex set V and m denotes that of the
edge set E of the graph E). For graph-theoretic concepts, definitions
and properties of graph classes not given here we refer to [4, 5, 11].
Let $G = (V,E)$ be a graph. A subset $U \subseteq V$ is independent if each pair
of vertices of $U$ is nonadjacent. A graph $G = (V,E)$ is bipartite if there is
a partition of $V$ into two independent sets $A$ and $B$. The complement of the
graph $G = (V,E)$ is the graph $\overline{G}$ having vertex set $V$ and edge set $\{\{v,w\} \mid v \neq w,\ \{v,w\} \notin E\}$. For $W \subseteq V$ we denote by $G[W]$ the subgraph of $G = (V,E)$
induced by the vertices of $W$, and for $X \subseteq E$ we write $G[X]$ for the graph with vertex set $V$
and edge set $X$.
Definition 1 Let $G = (V,E)$ be a graph and let $t$ be a positive integer. A
(vertex) $t$-ranking, called ranking for short if there is no ambiguity, is a coloring
$c: V \to \{1, \ldots, t\}$ such that for every pair of vertices $x$ and $y$ with $c(x) = c(y)$
and for every path between $x$ and $y$ there is a vertex $z$ on this path with $c(z) >
c(x)$. The vertex ranking number of $G$, $\chi_r(G)$, is the smallest value $t$ for which
the graph $G$ admits a $t$-ranking.
By definition adjacent vertices have different colors in any $t$-ranking, thus any $t$-ranking is a proper $t$-coloring. Hence $\chi_r(G)$ is bounded below by the chromatic
number $\chi(G)$. A vertex $\chi_r(G)$-ranking of $G$ is said to be an optimal (vertex)
ranking of $G$.
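As a small illustration of the definition (not part of the original paper), the following Python sketch tests whether a given coloring is a vertex ranking, using the equivalent formulation that two vertices of the same color must not be connected within the subgraph induced by colors no larger than theirs.
```python
# Sketch: check whether a coloring c is a vertex ranking of a graph.
from itertools import combinations

def is_vertex_ranking(vertices, edges, c):
    adj = {v: set() for v in vertices}
    for u, w in edges:
        adj[u].add(w)
        adj[w].add(u)
    for u, w in combinations(vertices, 2):
        if c[u] != c[w]:
            continue
        # search restricted to vertices whose color is at most c[u]
        allowed = {v for v in vertices if c[v] <= c[u]}
        seen, stack = {u}, [u]
        while stack:
            x = stack.pop()
            for y in adj[x]:
                if y in allowed and y not in seen:
                    seen.add(y)
                    stack.append(y)
        if w in seen:
            return False   # a path between u and w avoids all larger colors
    return True

# Example: a path on four vertices needs three colors (chi_r(P4) = 3).
P4_vertices = [1, 2, 3, 4]
P4_edges = [(1, 2), (2, 3), (3, 4)]
print(is_vertex_ranking(P4_vertices, P4_edges, {1: 1, 2: 2, 3: 1, 4: 2}))  # False
print(is_vertex_ranking(P4_vertices, P4_edges, {1: 1, 2: 3, 3: 1, 4: 2}))  # True
```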
The edge ranking problem is closely related to the vertex ranking problem.
Definition 2 Let $G = (V,E)$ be a graph and let $t$ be a positive integer. An
edge $t$-ranking is an edge coloring $c': E \to \{1, \ldots, t\}$ such that for every pair
of edges $e$ and $f$ with $c'(e) = c'(f)$ and for every path between $e$ and $f$ there is
an edge $g$ on this path with $c'(g) > c'(e)$. The edge ranking number $\chi'_r(G)$ is
the smallest value of $t$ such that $G$ has an edge $t$-ranking.
Remark 3 There is a one-to-one correspondence between the edge $t$-rankings
of a graph $G$ and the vertex $t$-rankings of its line graph $L(G)$. Hence $\chi'_r(G) = \chi_r(L(G))$.
An edge $t$-ranking of a graph $G$ is a particular proper edge coloring of $G$. Hence
$\chi'_r(G)$ is bounded below by the chromatic index $\chi'(G)$. An edge $\chi'_r(G)$-ranking
of $G$ is said to be an optimal edge ranking of $G$.
As shown in [7], the vertex ranking number of a connected graph is equal
to its minimum elimination tree height plus one. Thus (vertex) separators and
edge separators are a convenient tool for investigating rankings of graphs. A
subset $S \subseteq V$ of a graph $G = (V,E)$ is said to be a separator if $G[V \setminus S]$ is
disconnected. A subset $R \subseteq E$ of a graph $G = (V,E)$ is said to be an edge
separator (or edge cut) if $G[E \setminus R]$ is disconnected.
In this paper we use the separator tree for studying vertex rankings. This
concept is closely related to elimination trees (cf.[3, 7, 18]).
Definition 4 Given a vertex $t$-ranking $c$ of a connected
graph $G = (V,E)$, we assign a rooted tree $T(c)$ to it by an inductive construction,
such that a separator of a certain induced subgraph of $G$ is assigned to each
internal node of $T(c)$ and the vertices of each set assigned to a leaf of $T(c)$ have
different colors:
1. If no color occurs more than once in $G$, then $T(c)$ consists of a single
vertex $r$ (called root), assigned to the vertex set of $G$.
2. Otherwise, let $i$ be the largest color assigned to more than one vertex by $c$.
Then $S = \{v \in V \mid c(v) > i\}$ has to be a separator of $G$. We create a root $r$ of
$T(c)$ and assign $S$ to $r$. (The induced subgraph of $G$ corresponding to the
subtree of $T(c)$ rooted at $r$ will be $G$ itself.) Assuming that a separator tree
$T_i(c)$ has already been defined for each connected component
$G_i$ of the graph $G[V \setminus S]$, the children of $r$ in $T(c)$ will be the roots $r_i$ of the trees $T_i(c)$,
and the subtree of $T(c)$ rooted at $r_i$ will be $T_i(c)$.
Notice that all vertices of G assigned to nodes of T (c) on a path from a leaf to
the root have different colors.
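A direct transcription of Definition 4 into code can serve as an illustration. The sketch below assumes the input coloring is a valid ranking of a connected graph and represents each tree node as a small dictionary; these representational choices are ours, not the paper's.
```python
# Sketch: build a separator tree T(c) from a vertex ranking c of a connected graph.
def separator_tree(vertices, edges, c):
    def components(vs):
        vs = set(vs)
        adj = {v: [] for v in vs}
        for u, w in edges:
            if u in vs and w in vs:
                adj[u].append(w)
                adj[w].append(u)
        comps, seen = [], set()
        for v in vs:
            if v in seen:
                continue
            comp, stack = [], [v]
            seen.add(v)
            while stack:
                x = stack.pop()
                comp.append(x)
                for y in adj[x]:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
        return comps

    colors = [c[v] for v in vertices]
    repeated = [a for a in set(colors) if colors.count(a) > 1]
    if not repeated:
        # no color occurs twice: a single node assigned the whole vertex set
        return {"set": list(vertices), "children": []}
    i = max(repeated)                                      # largest repeated color
    S = [v for v in vertices if c[v] > i]                  # separator assigned to the root
    children = [separator_tree(comp, edges, c)
                for comp in components(set(vertices) - set(S))]
    return {"set": S, "children": children}
```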
3 Unbounded ranking
It is still unknown whether the edge ranking problem 'Given a graph G and
a positive integer t, decide whether - 0
Remark 3 this problem is equivalent to the vertex ranking problem 'Given
a graph G and a positive integer t, decide whether - r (G) - t ' when restricted
to line graphs.
On the other hand, it is a consequence of the NP-completeness of the minimum
elimination tree height problem shown by Pothen in [20] and the equivalence
of this problem with the vertex ranking problem [6, 7] that the latter
is NP-complete even when restricted to graphs that are the complement of
bipartite graphs, the so-called cobipartite graphs.
For reasons of self-containedness, we start with a short proof of the NP-completeness
of vertex ranking, when restricted to cobipartite graphs. The
following problem, called balanced complete bipartite subgraph (abbre-
viated bcbs) is NP-complete. This is problem [GT24] of [10].
Instance: A bipartite graph E) and a positive integer k.
Question: Are there two disjoint subsets such that
and such that u implies that
Theorem 5 vertex ranking is NP-complete for cobipartite graphs.
Proof: Clearly the problem is in NP. NP-hardness is shown by reduction from
bcbs.
Let a bipartite graph $G = (V_1 \cup V_2, E)$ and a positive integer $k$ be given. Let
$\overline{G}$ be the complement of $G$; thus $\overline{G}$ is a cobipartite graph.
We claim that $G$ has a balanced complete bipartite subgraph with $2k$
vertices if and only if $\overline{G}$ has an $(n - k)$-ranking.
Suppose we have sets $W_1 \subseteq V_1$ and $W_2 \subseteq V_2$ with $|W_1| = |W_2| = k$ such that $\{u,w\} \in E$ for all $u \in W_1$ and $w \in W_2$. We now construct an $(n-k)$-ranking of $\overline{G}$. Write $W_1 = \{v^{(1)}_1, \ldots, v^{(1)}_k\}$, $W_2 = \{v^{(2)}_1, \ldots, v^{(2)}_k\}$ and $V \setminus (W_1 \cup W_2) = \{u_1, \ldots, u_{n-2k}\}$. We define a vertex ranking $c$ of $\overline{G}$ as follows:
$c(v^{(1)}_i) = c(v^{(2)}_i) = i$ for $1 \le i \le k$, and $c(u_j) = k + j$ for $1 \le j \le n-2k$.
One easily observes that $c$ is a vertex $(n - k)$-ranking.
Next, let $c$ be an $(n-k)$-ranking for $\overline{G}$. Since $\overline{G}$ is a cobipartite graph, for each
color there can be at most two vertices with that color, one lying in $V_1$ and the
other in $V_2$. Therefore, we have at least $k$ pairs $v^{(1)}_i \in V_1$, $v^{(2)}_i \in V_2$
with $c(v^{(1)}_i) = c(v^{(2)}_i)$, and we
can assume that $W_1 = \{v^{(1)}_1, \ldots, v^{(1)}_k\}$ and $W_2 = \{v^{(2)}_1, \ldots, v^{(2)}_k\}$.
Now we show that the subgraph induced by the set $W_1 \cup W_2$ forms a balanced
complete bipartite subgraph in $G$. To show this, we prove that each pair of
vertices $v^{(1)}_i \in W_1$, $v^{(2)}_j \in W_2$ is not adjacent in $\overline{G}$. Suppose $v^{(1)}_i$ and $v^{(2)}_j$ are
adjacent in $\overline{G}$. Then the colors of these vertices must be different, so in particular $i \neq j$. Furthermore,
assume w.l.o.g. that $c(v^{(1)}_i) < c(v^{(2)}_j)$. Then we have a path $(v^{(1)}_j, v^{(1)}_i, v^{(2)}_j)$ in $\overline{G}$ whose only inner vertex satisfies
$c(v^{(1)}_i) < c(v^{(1)}_j) = c(v^{(2)}_j)$,
contradicting the fact that $c$ is a ranking. Hence the
subgraph induced by $W_1 \cup W_2$ is indeed a balanced complete bipartite subgraph in $G$.
This proves the claim, and the NP-completeness of vertex ranking. □
We show that the analogous result holds for bipartite graphs as well.
Theorem 6 vertex ranking remains NP-complete for bipartite graphs.
Proof: The transformation is from vertex ranking for arbitrary graphs
without isolated vertices. Given the graph $G = (V,E)$, we construct a graph $G' = (V', E')$
with $V' = V \cup (E \times \{1, \ldots, t+1\})$ and $E' = \{\{v, (e,l)\} \mid v \in V,\ e \in E,\ v \in e,\ 1 \le l \le t+1\}$.
Clearly, the constructed graph $G'$ is a bipartite graph. Now we show that $G$
has a $t$-ranking if and only if $G'$ has a $(t+1)$-ranking.
Suppose $G$ has a $t$-ranking $c: V \to \{1, \ldots, t\}$. We construct a coloring $\bar c$ for
$G'$ in the following way. For the vertices $v \in V$ we set $\bar c(v) = c(v) + 1$, and for
the vertices $(e,l)$ we set $\bar c((e,l)) = 1$. It is easy to see that $\bar c$ is a $(t+1)$-ranking of
$G'$.
On the other hand, let $\bar c: V' \to \{1, \ldots, t+1\}$ be a $(t+1)$-ranking of $G'$. We
show that $\bar c(v) > 1$ for every vertex $v \in V$. Suppose not, and let $v$ be a vertex
of $V$ with $\bar c(v) = 1$; let $e = \{v, w\}$ be an edge incident to $v$ in $G$. Hence $v$ is
adjacent to $(e,1), \ldots, (e,t+1)$, none of which can have color 1. Since $\bar c$ uses only
$t+1$ colors, there are $l, l'$ with $l \neq l'$ such
that $\bar c((e,l)) = \bar c((e,l'))$,
implying a path $((e,l), v, (e,l'))$ whose only inner vertex has the smaller color 1, contradicting
the assumption that $\bar c$ is a ranking. This proves that $\bar c(v) > 1$ holds for every
vertex $v \in V$. As a consequence, for each edge $e = \{v,w\} \in E$ there is a
vertex $(e,l)$ with $\bar c((e,l)) < \min(\bar c(v), \bar c(w))$; otherwise two of the $t+1$ copies of $e$ would share a color and the path between them through $v$ or $w$ would contradict the fact that $\bar c$ is a $(t+1)$-ranking of $G'$. Now
we define $c(v) = \bar c(v) - 1$ for all $v \in V$. The coloring $c$ is a $t$-ranking of
$G$, since the existence of a path between two vertices $v$ and $w$ of $G$ such that
$c(v) = c(w)$ and all inner vertices have smaller colors implies the existence of a
path from $v$ to $w$ in $G'$ with $\bar c(v) = \bar c(w)$ and all inner vertices having smaller
colors, contradicting the fact that
$\bar c$ is a $(t+1)$-ranking of $G'$. □
4 Bounded ranking
We show that the 'bounded' ranking problems 'Given a graph $G$, decide whether
$\chi_r(G) \le t$' and 'Given a graph $G$, decide whether $\chi'_r(G) \le t$' are solvable in linear time for any fixed $t$. This will be
done by verifying that the corresponding graph classes are closed under certain
operations.
Definition 7 An edge contraction is an operation on a graph G replacing two
adjacent vertices u and v of G by a vertex adjacent to all vertices that were
adjacent to u or v. An edge lift is an operation on a graph G replacing two
adjacent edges fv; wg and fu; wg of G by one edge fu; vg.
Definition 8 A graph H is a minor of the graph G if H can be obtained from
G by a series of the following operations: vertex deletion, edge deletion, and
edge contraction. A graph class G is minor closed if every minor H of every
graph G 2 G also belongs to G.
Lemma 9 The class of graphs satisfying $\chi_r(G) \le t$ is minor closed for any
fixed $t$.
Proof: Since vertex/edge deletion cannot create new paths between
monochromatic pairs of vertices, we only have to show that edge contraction
does not increase the ranking number. Let $G = (V,E)$ be a graph with $\chi_r(G) \le
t$, and assume $H$ is obtained from $G$ by contracting the edge $\{u,v\}$ into a
new vertex $c_{uv}$. Suppose $c$ is a $t$-ranking of $G$. We construct a coloring $\bar c$
of $H$ as follows: $\bar c(x) = c(x)$ for $x \neq c_{uv}$, and $\bar c(c_{uv}) = \max(c(u), c(v))$.
Suppose $\bar c$ is not a $t$-ranking of $H$. Then there is a path $P$ between two vertices of equal color on which all inner vertices have smaller colors.
Since $c$ is a $t$-ranking of $G$, the vertex $c_{uv}$ must occur in the path. Depending on its
neighbors in $P$ we can 'decontract' $c_{uv}$ in the path $P$ into $u$, $v$, or the subpath $u,v$ (resp. $v,u$),
getting a path $P'$ of $G$ violating the ranking condition, in contradiction to the
choice of $c$. □
Corollary 10 For each fixed t, the class of graphs satisfying - r (G) - t can be
recognized in linear time.
Proof: In [1], using results from Robertson and Seymour [22, 23], it is shown
that every minor closed class of graphs that does not contain all planar graphs,
has a linear time recognition algorithm. The result now follows directly from
Lemma 9. 2
As regards edge rankings, a simple argument yields a much stronger assertion
as follows.
Theorem 11 For each fixed $t$, the class of connected graphs satisfying $\chi'_r(G) \le
t$ can be recognized in constant time.
Proof: For any fixed $t$, there are only a finite number of connected graphs $G$
with $\chi'_r(G) \le t$, as necessary conditions are that the maximum degree of $G$ is
at most $t$, and the diameter of $G$ is bounded by $2^t - 1$. □
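The two necessary conditions used in this proof are easy to test. The following sketch checks them for a connected graph; passing the test of course does not certify that the edge ranking number is at most t, and the function name is ours.
```python
# Sketch: necessary conditions from the proof of Theorem 11 for a connected graph.
from collections import deque

def may_have_edge_ranking_number_at_most(vertices, edges, t):
    adj = {v: [] for v in vertices}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    if any(len(adj[v]) > t for v in vertices):
        return False                       # maximum degree exceeds t

    def eccentricity(s):                   # BFS distances from s
        dist = {s: 0}
        q = deque([s])
        while q:
            x = q.popleft()
            for y in adj[x]:
                if y not in dist:
                    dist[y] = dist[x] + 1
                    q.append(y)
        return max(dist.values())

    diameter = max(eccentricity(v) for v in vertices)
    return diameter <= 2 ** t - 1          # necessary, not sufficient
```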
Certainly, the above theorem immediately implies that the graphs $G$ satisfying
$\chi'_r(G) \le t$ can be recognized in linear time, by inspecting the connected
components separately. This result might have also been obtained via more
involved methods, by using results of Robertson and Seymour on graph immersions
[21]. Similarly, one can show that for fixed t and d, the class of connected
graphs with - r (G) - t and maximum vertex degree d can be recognized in
constant time.
Definition 12 A graph H is an immersion of the graph G if H can be obtained
from G by a series of the following operations: vertex deletion, edge deletion
and edge lift. A graph class G is immersion closed if every immersion H of a
graph G 2 G also belongs to G.
The proof of the following lemma is similar to the one of Lemma 9 and therefore
omitted.
Lemma 13 The class of graphs satisfying $\chi'_r(G) \le t$ is immersion closed for
any fixed $t$.
Linear-time recognizability of the class of graphs satisfying $\chi'_r(G) \le t$ now also
follows from Lemma 13, the results of Robertson and Seymour, and the fact
that graphs with - 0
have treewidth at most 2t 2.
5 Computing the vertex ranking number on graphs
with bounded treewidth
In this section, we show that one can compute - r (G) of a graph G with
treewidth at most k in polynomial time, for any fixed k. Such a graph is
also called a partial k-tree. This result implies polynomial time computability
of the vertex ranking number for any class of graphs with a uniform upper
bound on the treewidth, e.g., outerplanar graphs, series-parallel graphs, Halin
graphs.
The notion of treewidth has been introduced by Robertson and Seymour
(see e.g., [22]).
Definition 14 A tree-decomposition of a graph $G = (V,E)$ is a pair $(\{X_i \mid i \in I\}, T = (I,F))$ with $\{X_i \mid i \in I\}$ a collection of subsets of $V$, and $T = (I,F)$ a
tree, such that
• $\bigcup_{i \in I} X_i = V$,
• for all edges $\{v,w\} \in E$ there is an $i \in I$ with $v, w \in X_i$,
• for all $i, j, k \in I$: if $j$ is on the path from $i$ to $k$ in $T$, then $X_i \cap X_k \subseteq X_j$.
The width of a tree-decomposition $(\{X_i \mid i \in I\}, T = (I,F))$ is $\max_{i \in I} |X_i| -
1$. The treewidth of a graph $G = (V,E)$ is the minimum width over all tree-decompositions
of $G$.
We often abbreviate $(\{X_i \mid i \in I\}, T = (I,F))$ to $(X,T)$. When the
treewidth of $G = (V,E)$ is bounded by a constant $k$, one can find in $O(n)$
time a tree-decomposition $(X,T)$ of width at most $k$, such that $|I| = O(n)$ and
$T$ is a rooted binary tree [1]. Denote the root of $T$ as $r$. We say $(X,T)$ is a
rooted binary tree-decomposition.
Definition 15 A terminal graph is a triple (V; E; Z), with (V; E) an undirected
graph, and Z ' V a subset of the vertices, called the terminals.
To each node i of a rooted binary tree-decomposition (X; T ) of graph
E), we associate the terminal graph G
is a descendant of ig, and g. As
shorthand notation we write p(v; w; G; c; ff), iff there is a path in G from v to
w with all internal vertices having colors, smaller than ff under coloring c. If
G; c; ff), we denote with P (v; w; G; c; ff) the set of paths in G from v to
w with all internal vertices having colors (using color function c), smaller than
ff. In the following, suppose t is given.
be a terminal graph, and let c
be a vertex t-ranking of (V; E). The characteristic of c, Y (c), is
the quadruple (cj Z
ffl cj Z is the function c, restricted to domain Z.
defined true if and only
or there is a vertex x 2 V with G; c; i).
defined
and only if there is a vertex x 2 V with G; c; i) and
G; c; i).
t; 1g is defined by: f 3 (v; w) is the smallest integer
t 0 such that p(v; w; G; c; t 0 ). If there is no path from v to w in G, then
Definition 17 A set of characteristics S of vertex t-rankings of a terminal
graph G is a full set of characteristics of vertex t-rankings for G (in short: a
full set for G), if and only if for every vertex t-ranking c of G, Y (c) 2 S.
set C of vertex t-rankings of a terminal graph G is an example set of
vertex t-rankings for G (in short: an example set for G), if and only if for every
vertex t-ranking c of G, there is an c 0 2 C with Y
the set of characteristics of the elements of C forms a full set of characteristics
of vertex t-rankings for G.
then a full set of characteristics of vertex t-rankings of
(with jZj polynomial in V : there
are O(log k+1 n) possible values for cj Z , 2 O((k+1) log n) possible values for f 1 ,
log n) possible values for f 2 , and there are O(log 1k(k+1) n) possible values
for f 3 . The following lemma, given in [3], shows that we can ensure this
property for graphs with treewidth at most k for fixed k.
Lemma 18 If the treewidth of $G = (V,E)$ is at most $k$, then $\chi_r(G) = O(k \log n)$.
Let (X; T ) be a rooted binary tree-decomposition of G. Suppose j 2 I is a
descendant of i 2 I in T . Suppose c is a vertex t-ranking of G i . The restriction of
c to G j is the function cj G j
defined by 8v
Clearly, cj G j
is a vertex t-ranking of G j . If c 0 is another vertex t-ranking of G j ,
we define the function R(c; c
Lemma 19 Let (X; T ) be a rooted binary tree-decomposition of E).
Let j be a descendant of i. Let c be a vertex t-ranking of G i , and c 0 be a vertex
t-ranking of G j . If Y (cj G j
vertex t-ranking of G i ,
and Y
Proof: For brevity, we write c
and we write Y (cj G j
We start with proving two claims.
Proof: Let v; w 2 W 1 , and suppose we have a path
We consider those parts of the path p that are part of G
such that each p ff (0 - ff - r) is a path
with all vertices in W 1 , and each p 0
a path in G j .
(Each path is a collection of successive edges, i.e., the last vertex of a path
is the first vertex of the next path.) Write v ff for the first vertex on path
ff and w ff for the last vertex on path p 0
1). Note that
. We now have that there also
exists a path p 00
(In words: there exists a path from v
to w in G j such that all colors of internal vertices are smaller than t 0 , using
coloring c (or, equivalently cj G j
As cj G j
and c 0 have the same characteristics,
there also exists such a path using color function c 0 .) Now, the path formed by
the sequence (p
is a path in G i between v and w
with all colors of internal vertices smaller than t 0 , hence p(v; w; G
shows: p(v; w; G
can be shown in the same way. 2
there exists a vertex
only if there exists a vertex w
(w
claim 20, we have p(v; w; G x be the last vertex on a path
that belongs to W 1 . Write is the last
vertex of p 0 and the first vertex of p 00 . p exists
a path q
Using equality of the characteristics of cj G j
and c 0 , we have that there exists a
vertex
a path from v to w 0 in G i with all internal vertices of color (under color
function c 00 ) smaller that t 0 , hence p(v; w The reverse implication of
the claim can be shown in a similar way. 2
We now show that c 00 is a vertex t-ranking, or, equivalently, that for all
We consider four cases:
1. v; w
c is not a vertex ranking,
contradiction.
2. there exists a
again c is not a
vertex ranking, contradiction.
3. w . Similar to Case 2.
4. v; w all vertices on p belong to W 2 ,
then p is a path in G j , and hence c 0 was not a vertex ranking of G j ,
contradiction. So, there exist vertices on p that belong to W 1 .
Let x be the first vertex on p that belongs to W 1 . Then
must exist vertices
with
a path from v 0 to w 0 with all internal vertices of color (with color function
c) less than t 0 . Hence c is not a vertex ranking, contradiction.
It remains to show that Y
. Suppose
3 ). It follows directly from
Consider v; w
be the vertex with
true. If x
can
and y 2 X j . (Let y be the last vertex in X j on p 1 .) Similarly, we can write
This implies that f 2 (y; z; t 0 ) is true. Hence, there is a vertex x 0 with paths
Also, by Lemma 20, we
have paths p 0
using
path (p 0
12 ) from v to x 0 and path (p 0
22 ) from w to x 0 , it follows that
almost identical argument
shows
.
Finally, it follows directly from Claim 20 that g
We now describe our algorithm. After a rooted binary tree-decomposition
E) has been found (in linear time [1]), the algorithm computes
a full set and an example set for every node i 2 I, in a bottom-up order. Clearly,
when we have a full set for the root node of T , we can determine whether G
has a vertex t-ranking, as we only have to check whether the full set of the root
is non-empty. If so, any element of the example set of the root node gives us a
vertex t-ranking of G.
It remains to show that we can compute for any node i 2 I a full set and
an example set, given a full set and an example set for each of the children
of i 2 I. This is straightforward for the case that i is a leaf node: enumerate
all functions c for each such function c, test whether it is a
vertex t-ranking of G i , and if so, put c in the example set, and Y (c) in the full
set of characteristics.
Next suppose i 2 I has two children j 1 and j 2 . (If i has one child
we can add another child j 2 , which is a leaf in T and has
we have example sets
. We compute a full set S and an
example set Q for G i in the following way:
Initially, we take S and Q to be empty.
For each triple is an element of Q 1 , c 2 is an element of
3 is an arbitrary function c
do the
ffl Check whether for all
(v). If not, skip the
following steps and proceed with the next triple.
ffl Compute the function c : defined as follows:
ffl Check whether c is a vertex t-ranking of G i . If not, skip the following
steps and proceed with the next triple.
ffl If Y (c) 62 S, then put Y (c) in S and put c in Q.
We claim that the resulting sets S and Q form a full set and an example
set for G i . Consider an arbitrary vertex t-ranking c 0 of G i . Let c 1 2 Q 1 be the
vertex t-ranking of Y
that has the same characteristic as c 0
. By definition
of example set, c 1 must exist. Similarly, let c
be defined by c 3
). When the algorithm processes the triple first
test will hold. Suppose c is the function, computed in the second test. Now
note that Hence, by Lemma 19, c is a vertex t-ranking
and has the same characteristic as c 0 . Hence, S will contain Y (c), and Q will
contain a vertex t-ranking of G i with the same characteristic as c and c 0 .
As the size of a full set, and hence of an example set for graphs G i , i 2 I is
polynomial, it follows that the computation of a full set and example set from
these sets associated with the children of the node, can be done in polynomial
time. (There are a polynomial number of triples For each triple, the
computation given above costs polynomial time.) As there are a linear number
of nodes of the tree-decomposition, computing whether there exists a vertex
t-ranking costs polynomial time (assuming testing for each
applicable value of t (see Lemma 18) for the existence of vertex t-rankings of
G, we obtain the following result:
Theorem 22 For any fixed k, there exists a polynomial time algorithm, that
determines the vertex ranking number of graphs G with treewidth at most k,
and finds an optimal vertex ranking of G.
6 The equality $\chi_r(G) = \chi(G)$
In this section we consider questions related to the equality of the chromatic
number and the vertex ranking number of graphs.
Theorem 23 If $\chi_r(G) = \chi(G)$ holds for a graph $G$, then $G$ also satisfies $\chi(G) = \omega(G)$.
Proof: Suppose that $G = (V,E)$ has a vertex $t$-ranking $c$
with $t = \chi_r(G) = \chi(G)$. We are going to consider the separator tree $T(c)$ of this $t$-
ranking. Recall that T (c) is a rooted tree and that every internal node of
T (c) is assigned to a subset of the vertex set of G which is a separator of the
corresponding subgraph of G, namely more than one component arises when
all subsets on the path from the node to the root are deleted from the graph.
Furthermore, all vertices assigned to the nodes of a path from a leaf to the root
of T (c) have pairwise different colors.
The goal of the following recoloring procedure is to show that either
!(G) or we can recolor G to obtain a proper coloring with a smaller number of
colors. However, the latter contradicts the choice of the -(G)-ranking c.
We label the nodes of the tree T (c) according to the following marking rules:
1. Mark a node s of T (c) if the union U(s) of all vertex sets assigned to all
nodes on the path from s to the root is not a clique in G.
2. Also, mark a leaf l of T (c) if the union U(l) of all vertex sets assigned to
all nodes on the path from l to the root is a clique in G, but jU(l)j ! t.
Case 1: There is an unmarked leaf l.
We have $|U(l)| = t$ and $U(l)$ is a clique. Hence $\omega(G) \ge t = \chi(G) \ge \omega(G)$, i.e., $\chi(G) = \omega(G)$.
Case 2: There is no unmarked leaf.
We will show that this would enable us to recolor G saving one color, contradicting
the choice of c.
Since every leaf of T (c) is marked, every path from a leaf to the root consists
of marked nodes eventually followed by unmarked nodes. Consequently, there
is a collection of marked branches of T (c), i.e., subtrees of T (c) induced by one
node and all its descendants for which all nodes are marked and the father of
the highest node of each branch is unmarked or the highest node is the root of
T (c) itself.
If the root of T (c) is marked then we have exactly one marked branch,
namely T (c) itself. Then, by definition, the separator S assigned to the root is
not a clique. However, none of its colors is used by the ranking for vertices in $V \setminus S$.
Simply, any coloring of the separator $S$ with fewer than $|S|$ colors will
produce a coloring of $G$ with fewer than $\chi(G)$ colors; contradiction.
If the root is unmarked, then we have to work with a collection of b marked
branches, b ? 1. Notice that all color-1 vertices of G are assigned to leaves of
T (c) and that any leaf of T (c) belongs to some marked branch B. We are going
to recolor the graph G by recoloring the marked branches one by one such that
the new coloring of G does not use color 1. Let us consider a marked branch
B. Let h be its highest node in T (c), and S(h) the set assigned to h. Since h
is marked but the root is unmarked, there must exist a vertex x of S(h) and a
vertex y belonging to U(h) which are nonadjacent. Then c(x) 6= c(y) since all
vertices of U(h) have pairwise different colors.
Assume first that $c(x) = 1$ or $c(y) = 1$; then $x$ or $y$,
respectively, is assigned to a leaf of $T(c)$ and is the only color-1 vertex of $G$ assigned to a node of $B$. We simply
recolor $x$ and $y$ with $\max(c(x), c(y))$.
Finally consider the case c(x) 6= 1 and c(y) 6= 1. All color-1 vertices in the
subgraph of G corresponding to B are recolored with c(x) and x is recolored
with c(y). By the construction of T (c), this does not influence other parts of
the graph, since they are separated by vertex sets with higher colors.
Having done this operation in every marked branch, eventually we get a
new color assignment of G which is still a proper coloring (though usually
not a ranking). Since all leaves of T (c) are marked, and no internal node
of T (c) contains color-1 vertices, color 1 is eliminated from G, contradicting
the assumption $\chi_r(G) = \chi(G)$. Consequently, Case 2 cannot occur, implying $\chi(G) = \omega(G)$.
This completes the proof. □
Note that $\chi_r(G) = \chi(G)$ does not imply that $G$ is a perfect graph. (Trivial
counterexamples are obtained from an arbitrary
imperfect graph by adding a sufficiently large disjoint clique.) On the other hand, if we require the equality on all induced
subgraphs, then we remain with a relatively small class of graphs that is also
called 'trivially perfect' in the literature (cf. [11]).
Theorem 24 A graph $G = (V,E)$ satisfies $\chi_r(G[W]) = \chi(G[W])$ for every $W \subseteq V$
if and only if neither $P_4$ nor $C_4$ is an induced subgraph of $G$.
Proof: The condition is necessary since $\chi_r(P_4) = \chi_r(C_4) = 3$, while $\chi(P_4) = \chi(C_4) =
2$.
Now let G be a P 4 -free and C 4 -free graph. The graphs with no induced
are precisely those in which every connected induced subgraph H
contains a dominating vertex w, i.e., w is adjacent to all vertices of H [27].
Hence, the following efficient algorithm produces an optimal ranking in such
graphs: If $H$ is connected, then we assign the color $\omega(H)$ to a
dominating vertex $w$. Clearly, $\chi(H[V']) = \chi(H) - 1$ for $V' = V(H) \setminus \{w\}$,
and it is easily seen that $\chi_r(H[V']) = \chi_r(H) - 1$; thus, induction can
be applied. On the other hand, if $H$ is disconnected, then an optimal ranking
can be generated in each of its components separately. □
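The recursive procedure from this proof can be sketched as follows. The sketch assumes the input graph is indeed P4-free and C4-free, so that every connected induced subgraph has a dominating vertex (the code will fail otherwise); the data-structure choices are illustrative, not the paper's.
```python
# Sketch: optimal vertex ranking of a trivially perfect (P4- and C4-free) graph.
def rank_trivially_perfect(vertices, edges):
    vertices = set(vertices)
    adj = {v: set() for v in vertices}
    for u, w in edges:
        if u in vertices and w in vertices:
            adj[u].add(w)
            adj[w].add(u)

    def components(vs):
        comps, seen = [], set()
        for v in vs:
            if v in seen:
                continue
            comp, stack = set(), [v]
            seen.add(v)
            while stack:
                x = stack.pop()
                comp.add(x)
                for y in adj[x] & vs:
                    if y not in seen:
                        seen.add(y)
                        stack.append(y)
            comps.append(comp)
        return comps

    def rank(vs):
        if not vs:
            return {}
        comps = components(vs)
        if len(comps) > 1:                        # disconnected: rank components separately
            coloring = {}
            for comp in comps:
                coloring.update(rank(comp))
            return coloring
        # connected: a dominating vertex exists because the graph is P4- and C4-free
        w = next(v for v in vs if adj[v] >= vs - {v})
        coloring = rank(vs - {w})
        coloring[w] = 1 + max(coloring.values(), default=0)   # w gets the largest color
        return coloring

    return rank(vertices)
```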
7 Edge rankings of complete graphs
While obviously $\chi_r(K_n) = n$, it is not easy to give a closed formula for the edge
ranking number of the complete graph. The most convenient way to determine
$\chi'_r(K_n)$ seems to be to introduce an auxiliary function $g(n)$, defined recursively.
In terms of this $g(n)$, the following statement can be proved.
Theorem 25 For every positive integer n,
r (K n
Proof: The assertion is obviously true for 3. For larger values of n
we are going to apply induction.
Similarly to vertex $t$-rankings, the following property holds for every edge
$t$-ranking of a graph $G$: if $i$ is the largest color occurring more than
once, then the edges with colors larger than $i$ form an edge separator of $G$.
Moreover, doing an appropriate relabeling of these colors, we
get a new edge $t$-ranking of $G$ with the property that there is a color $j > i$ such
that all edges with colors at least $j$ form an edge separator of $G$ which is
minimal under inclusion.
We have to show that the best way to choose this edge separator R with
respect to an edge ranking in a complete graph is by making the two components
of $G[E \setminus R]$ as equal-sized as possible. Let us consider a $K_n$, $n \ge 2$, and let
$n_1, n_2$ be the numbers of vertices in the two components, hence $n_1 + n_2 = n$, and the
corresponding edge separator has size $n_1 n_2$. Every edge ranking starting with
this separator has at least
$$n_1 n_2 + \max\{\chi'_r(K_{n_1}), \chi'_r(K_{n_2})\} = n_1 n_2 + \chi'_r(K_{\max\{n_1, n_2\}})$$
colors, and there is indeed one using exactly that many colors. Defining $a_1 :=
\min\{n_1, n_2\}$, repeating the same argument for $n' := \max\{n_1, n_2\}$, and so on, we
eventually get a sequence of positive integers $a_1, \ldots, a_s$ with $\sum_{i=1}^{s} a_i = n$, such that
$$a_i \le \sum_{i < j \le s} a_j \quad \text{for all } i < s. \qquad (1)$$
Notice that at least the last two terms of any such sequence are equal to 1.
It is easy to see that the number of colors of any edge ranking represented by
$a_1, \ldots, a_s$ is equal to $\sum_{1 \le i < j \le s} a_i a_j$; consequently
$$\chi'_r(K_n) = \min \Big\{ \sum_{1 \le i < j \le s} a_i a_j \;:\; s \ge 1,\ \sum_{i=1}^{s} a_i = n \Big\}$$
subject to the condition (1). Since a decreasing sort of the sequence maintains
(1), we may assume $a_1 \ge a_2 \ge \cdots \ge a_s$. Thus, for each value of $s$, the minimum of
$\sum_{1 \le i < j \le s} a_i a_j$ is attained precisely by the unique sequence satisfying
$a_i = \big\lfloor \tfrac{1}{2} \sum_{i \le j \le s} a_j \big\rfloor$ for all $i < s$. In particular, we obtain
$$\chi'_r(K_n) = \Big\lceil \frac{n}{2} \Big\rceil \Big\lfloor \frac{n}{2} \Big\rfloor + \chi'_r(K_{\lceil n/2 \rceil}).$$
Applying this recursion, it is not difficult to verify that, indeed, $\chi'_r(K_n)$ can be
written in the stated form, where $g$ is the function defined above. □
Observing the values $g(2^n)$, we obtain the following interesting
result.
Corollary 26 $\chi'_r(K_{2^n}) = \frac{4^n - 1}{3}$.
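The recursion derived in the proof of Theorem 25 is easy to evaluate. The sketch below uses it to tabulate $\chi'_r(K_n)$ and to check the power-of-two pattern stated above; the closed form in terms of $g(n)$ is not reproduced here, and the function name is ours.
```python
# Sketch: chi'_r(K_1) = 0 and chi'_r(K_n) = floor(n/2)*ceil(n/2) + chi'_r(K_ceil(n/2)).
from functools import lru_cache

@lru_cache(maxsize=None)
def edge_ranking_number_complete(n):
    if n <= 1:
        return 0
    half_up = (n + 1) // 2                       # ceil(n/2)
    return (n // 2) * half_up + edge_ranking_number_complete(half_up)

if __name__ == "__main__":
    print([edge_ranking_number_complete(n) for n in range(1, 11)])
    for m in range(1, 6):                        # power-of-two pattern
        assert edge_ranking_number_complete(2 ** m) == (4 ** m - 1) // 3
```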
Conclusions
We studied algorithmic and graph-theoretic properties of rankings of graphs.
For many special classes of graphs, the algorithmic complexity of vertex ranking
is now known. However the algorithmic complexity of vertex ranking
when restricted to chordal graphs or circle graphs is still unknown. Furthermore
it is not even known whether the edge ranking problem is NP-complete.
We started a graph-theoretic study of vertex ranking and edge ranking as
a particular kind of proper (vertex) coloring and proper edge coloring, respec-
tively. Much research has to be done in this direction. It is of particular interest
which of the well-known problems in the theory of vertex colorings and edge
colorings are also worth studying for vertex rankings and edge rankings.
--R
A linear time algorithm for finding tree-decompositions of small treewidth
A tourist guide through treewidth.
Approximating treewidth
Graph Theory with Applications.
Edge ranking of trees.
The multifrontal solution of indefinite sparse symmetric linear equations.
Computers and Intractability: A Guide to the Theory of NP-completeness
Algorithmic Graph Theory and Perfect Graphs.
Optimal node ranking of trees.
Parallel assembly of modular products-an analysis
On an edge ranking problem of trees and graphs.
Ordered colourings.
Area efficient graph layouts for VLSI.
The role of elimination trees in sparse factorization.
Concurrent Design of Products and Processes.
The complexity of optimal elimination trees.
Graph minors.
Graph minors.
Graph minors.
Node ranking and searching on graphs (Abstract).
On a graph partition problem with application to VLSI layout.
The comparability graph of a tree.
Finding optimal edge-rankings of trees
An efficient algorithm for edge-ranking trees
--TR
--CTR
Tak Wah Lam , Fung Ling Yue, Optimal edge ranking of trees in linear time, Proceedings of the ninth annual ACM-SIAM symposium on Discrete algorithms, p.436-445, January 25-27, 1998, San Francisco, California, United States
Kazuhisa Makino , Yushi Uno , Toshihide Ibaraki, Minimum edge ranking spanning trees of split graphs, Discrete Applied Mathematics, v.154 n.16, p.2373-2386, 1 November 2006
Shin-ichi Nakayama , Shigeru Masuyama, A polynomial time algorithm for obtaining minimum edge ranking on two-connected outerplanar graphs, Information Processing Letters, v.103 n.6, p.216-221, September, 2007
Keizo Miyata , Shigeru Masuyama , Shin-ichi Nakayama , Liang Zhao, Np-hardness proof and an approximation algorithm for the minimum vertex ranking spanning tree problem, Discrete Applied Mathematics, v.154 n.16, p.2402-2410, 1 November 2006
Md. Abul Kashem , M. Ziaur Rahman, An optimal parallel algorithm for c-vertex-ranking of trees, Information Processing Letters, v.92 n.4, p.179-184, November 2004
Dariusz Dereniowski , Adam Nadolski, Vertex rankings of chordal graphs and weighted trees, Information Processing Letters, v.98 n.3, p.96-100, 16 May 2006
Chung-Hsien Hsu , Sheng-Lung Peng , Chong-Hui Shi, Constructing a minimum height elimination tree of a tree in linear time, Information Sciences: an International Journal, v.177 n.12, p.2473-2479, June, 2007 | edge ranking;treewidth;vertex ranking;graph algorithms;ranking of graphs;graph coloring |
288976 | Directions of Motion Fields are Hardly Ever Ambiguous. | If instead of the full motion field, we consider only the direction of the motion field due to a rigid motion, what can we say about the three-dimensional motion information contained in it? This paper provides a geometric analysis of this question based solely on the constraint that the depth of the surfaces in view is positive. The motivation behind this analysis is to provide a theoretical foundation for image constraints employing only the sign of flow in various directions and justify their utilization for addressing 3D dynamic vision problems.It is shown that, considering as the imaging surface the whole sphere, independently of the scene in view, two different rigid motions cannot give rise to the same directional motion field. If we restrict the image to half of a sphere (or an infinitely large image plane) two different rigid motions with instantaneous translational and rotational velocities(t<math>_1 cannot give rise to the same directional motion field unless the plane through t_1 and t_2 is perpendicular to the plane through _1 and _2 (i.e., (t_1 t_2) (_1 In addition, in order to give practical significance to these uniqueness results for the case of a limited field of view, we also characterize the locations on the image where the motion vectors due to the different motions must have different directions.If (_1 _2) (t_1 t_2) = 0 and certain additional constraints are met, then the two rigid motions could produce motion fields with the same direction. For this to happen the depth of each corresponding surface has to be within a certain range, defined by a second and a third order surface. Similar more restrictive constraints are obtained for the case of multiple motions. Consequently, directions of motion fields are hardly ever ambiguous. A byproduct of the analysis is that full motion fields are never ambiguous with a half sphere as the imaging surface. | addition, in order to give practical significance to these uniqueness results for the case of
a limited field of view, we also characterize the locations on the image where the motion
vectors due to the different motions must have different directions.
If $(\omega_1 \times \omega_2) \cdot (t_1 \times t_2) = 0$ and certain additional constraints are met, then the two rigid
motions could produce motion fields with the same direction. For this to happen the depth
of each corresponding surface has to be within a certain range, defined by a second and a
third order surface. Finally, as a byproduct of the analysis it is shown that if we also consider
the constraint of positive depth the full motion field on a half sphere uniquely constrains 3D
motion independently of the scene in view.
The support of the Advanced Research Projects Agency (ARPA Order No. 8459) and the U.S. Army
Topographic Engineering Center under Contract DACA76-92-C-0009, the Office of Naval Research under
Contract N00014-93-1-0257, the National Science Foundation under Grant IRI-90-57934, and the Austrian
"Fonds zur Förderung der wissenschaftlichen Forschung", project No. S 7003, is gratefully acknowledged.
1 Introduction and Motivation
The basis of the majority of visual motion studies has been the motion field, i.e., the projection
of the velocities of 3D scene points on the image. Classical results on the uniqueness of
motion fields [6, 9, 10] as well as displacement fields [8, 12, 14] have formed the foundation
of most research on rigid motion analysis that addressed the 3D motion problem by first
approximating the motion field through the optical flow and then interpreting the optical
flow to obtain 3D motion and structure [2, 7, 13, 15].
The difficulties involved in the estimation of optical flow have recently given rise to a
small number of studies considering as input to the visual motion interpretation process
some partial optical flow information. In particular the projection of the optical flow on
the gradient direction, the so-called normal flow [5, 11], and the projections of the flow on
different directions [1, 3] have been utilized. In [3] constraints on the sign of the projection
of the flow on various directions were presented. These constraints on the sign of the flow
were derived using only the rigid motion model, with the only constraint on the scene being
that the depth in view has to be positive at every point-the so-called "depth-positivity"
constraint. In the sequel we are led naturally to the question of what these constraints,
or more generally any constraint on the sign of the flow, can possibly tell us about three-dimensional
motion and the structure of the scene in view. Thus we would like to investigate
the amount of information in the sign of the projection of the flow. Since knowing the sign
of the projection of a motion vector in all directions is equivalent to knowing the direction of
the motion vector, our question amounts to studying the relationship between the directions
of 2D motion vectors and 3D rigid motion.
We next state the well-known equation for rigid motion for the case of a spherical imaging
surface. We describe the constraints and discuss the information exploited when using the
full flow as opposed to the information employed when using only the direction of flow. As
will be shown, whereas full flow allows for derivation of the direction of translation and the
complete rotation, from the orientation of the flow only the direction of translation and the
direction of rotation can be obtained.
The 2D motion field on the imaging surface is the projection of the 3D motion field of
the scene points moving relative to that surface. Suppose the observer is moving rigidly
with instantaneous translation $t$ and instantaneous rotation $\omega$ (see Figure 1); then each scene point $R$, measured with respect to a coordinate system
$OXYZ$ fixed to the camera, moves relative to the camera with velocity $\dot R$, where $\dot R = -t - \omega \times R$.
If the center of projection is at the origin and the image is formed on a sphere with radius
1, the relationship between the image point $r$ and the scene point $R$ under perspective
projection is
$$r = \frac{R}{|R|},$$
with $|R|$ being the norm of the vector $R$.
Figure 1: Image formation on a spherical retina under perspective projection.
If we now differentiate r with respect to time and substitute for -
$\dot R$, we obtain the following equation for $\dot r$:
$$\dot r \;=\; \frac{1}{|R|}\big((t \cdot r)\, r - t\big) \;-\; \omega \times r \;=\; v_{tr}(r) + v_{rot}(r).$$
The first term v tr (r) corresponds to the translational component which depends on the depth
the distance of R to the center of projection. The direction of v tr
(r) is along great
circles (longitudes) pointing away from the Focus of Expansion (t) and towards the Focus of
Contraction (\Gammat). The second term v rot (r) corresponds to the rotational component which
is independent of depth. Its direction is along latitudes around the axis of rotation (coun-
around ! and clockwise around \Gamma!). See Figure 2a, b and c for translational,
rotational, and general motion fields on the sphere.
Figure 2: Example of (a) a rotational, (b) a translational, (c) a general motion field on a sphere.
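For experimentation, the motion field on the unit sphere can be evaluated numerically. The sketch below follows the sign conventions reconstructed above ($\dot R = -t - \omega \times R$, translational part scaled by $1/|R|$); if the original uses the opposite convention, the field is simply negated. All names are illustrative.
```python
# Numerical sketch of the spherical motion field (sign conventions as reconstructed above).
import numpy as np

def motion_field(r, t, omega, depth):
    """Image motion at unit image direction r for scene depth |R| = depth."""
    r = r / np.linalg.norm(r)
    v_tr = (np.dot(t, r) * r - t) / depth      # along the great circle away from t
    v_rot = -np.cross(omega, r)                # along latitudes around the omega axis
    return v_tr + v_rot

if __name__ == "__main__":
    t = np.array([0.0, 0.0, 1.0])
    omega = 0.1 * np.array([0.0, 1.0, 0.0])
    r = np.array([1.0, 0.0, 0.0])
    v = motion_field(r, t, omega, depth=5.0)
    print(v, np.dot(v, r))                     # the flow is tangent: v . r ~ 0
```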
As can be seen, without additional constraints there is an ambiguity in the computation
of shape and translation. It is not possible to disentangle the effects of t and jRj, and thus
we can only derive the direction of translation. If all we have is the direction of the flow we
can project -
r on any unit vector n i on the image and obtain an inequality constraint:
r
From this inequality we certainly cannot recover the magnitude of translation, since the
optical flow already does not allow us to compute it.
In addition we are also restricted in the computation of the rotational parameters. If
we multiply ! by a positive constant, leave t fixed, but multiply 1
by the same positive
constant, the sign of the flow is not affected. Thus from the direction of the flow we can at
most compute the axis of rotation and, as discussed before, the axis of translation. Hereafter,
for the sake of brevity, we will refer to the motion field also as the flow field or simply flow,
and to the direction of the motion field as the directional flow field or simply directional
flow.
Between the Orientation of the Flow and the Depth-positivity
Constraint
If we have the flow -
r, we know the value of the projection of -
r on any direction and we set all
the possible information by choosing two directions
and
(usually orthogonal). Thus
we have
r
We can solve equation (1) for the depth,
r
Knowing the value in both directions n 1
and
we know that the inverse depth has to
be the same, and also has to be positive; thus
r
r
If on the other hand we do not use the value of the flow but only its direction and thus the
sign of the projection of the flow on n i , then the only constraint that can be utilized is the
inequality, which comes from the fact that the depth is positive. Using only the orientation
of the flow we obtain for every direction
r
This inequality provides inequality constraints on the rotational and translational compo-
nents, which are independent of the scene: If we consider the sign of the translational
component $\big((t \cdot r)\, r - t\big) \cdot n_i$ and the sign of the rotational component $(\omega \times r) \cdot n_i$ and assume
that each of them is either positive or negative, there are $2 \times 2 = 4$ combinations of signs.
But once we know the sign of the flow $\dot r \cdot n_i$, one of these four combinations is no longer
possible. This observation has been used in the development of global constraints for 3D
motion estimation. Choosing directions n i in particular ways the signs of ( -
r
patterns of positive and negative areas on the image [3-5]. These patterns, whose location
and form encodes information about 3D motion, were successfully used in the recovery of
egomotion. In this paper, by pursuing a theoretical investigation of the amount of information
present in directional flow fields, we demonstrate the power of the qualitative image
measurements already used empirically, and justify their utilization in global constraints for
three-dimensional dynamic vision problems.
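The exclusion of one sign combination can be made concrete in a few lines. The helper below treats the flow projection as $(1/|R|)A + B$ with $1/|R| \ge 0$, where $A$ and $B$ are the translational and rotational projections in the convention used above; names and conventions are ours.
```python
# Sketch: with flow projection (1/|R|)*A + B and 1/|R| >= 0, exactly one of the
# four (sign A, sign B) combinations is incompatible with the observed flow sign.
import numpy as np

def consistent_sign_combinations(flow_sign):
    """All (sign A, sign B) pairs compatible with sgn(flow . n) = flow_sign."""
    return [(sA, sB) for sA in (+1, -1) for sB in (+1, -1)
            if not (sA == -flow_sign and sB == -flow_sign)]

def projections(r, n, t, omega):
    """A (translational, to be scaled by 1/|R|) and B (rotational) for a motion."""
    r = r / np.linalg.norm(r)
    A = np.dot(np.dot(t, r) * r - t, n)
    B = np.dot(-np.cross(omega, r), n)
    return A, B

if __name__ == "__main__":
    print(consistent_sign_combinations(+1))     # (-1, -1) does not appear
    r = np.array([1.0, 0.0, 0.0])
    n = np.array([0.0, 0.0, 1.0])
    print(projections(r, n, t=np.array([0.0, 0.0, 1.0]), omega=np.array([0.0, 0.2, 0.0])))
```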
The organization of this paper is as follows: In Section 3 we develop the preliminaries-
constraints that will be used in the uniqueness analysis. Given two rigid motions, we study
what the constraints are on the surfaces in view for the two motion fields to have the same
direction at every point. From these constraints, we investigate for which points of the image
one of the surfaces must have negative depth. The locations where negative depth occurs
are described implicitly in the form of constraints on the signs of functions depending on the
image coordinates and the two three-dimensional motions. The existence of image points
whose associated depth is negative ensures that the two rigid motions cannot produce motion
fields with the same direction. In Section 4, which contains the main uniqueness proof, we
study conditions under which two rigid flow fields could have the same direction at every
point on a half sphere (i.e., conditions under which there do not exist points of negative
depth), and we visualize the locations of negative depth on the sphere. Section 5 is devoted
to the treatment of special cases. As a byproduct of the analysis, in Section 6 we investigate
the ambiguity of rigid motion for full flow assuming that depth has to be positive, and show
that any two different motions can be distinguished on a hemispherical image from full flow.
Section 7 summarizes the results. Appendix A studies whether more than two rigid motions
could produce the same directional flow field, and the rest of the Appendices (B-F) describe
and prove a number of geometric properties used in the main part of the paper.
3 Critical Surface Constraints
Let us assume that two different rigid motions yield the same direction of flow at every point
in the image. Let t 1 and ! 1 be translational and rotational velocities of the first motion,
and let t 2
be translational and rotational velocities of the second motion. Since from
the direction of flow we can only recover the directions of the translation and rotation axes,
we assume all four vectors t 1
to be of unit length. Let Z 1
(r) and Z 2
(r) be the
mapping points r on the image into the real numbers, that represent the depths
of the surfaces in view corresponding to the two motions. In the future we will refer to Z 1
and Z 2
as the two depth maps. In this section we investigate the constraints that must be
satisfied by Z 1
and Z 2
in order for the two flow fields to have the same direction.
We assume that the two depths are positive, and allow Z 1
or Z 2
to be infinitely large.
Thus we assume 1=Z 1
3.1 Notation
We start by defining some notation:
$$f_\omega(r) = (\omega_1 \times \omega_2) \cdot r, \qquad f_t(r) = (t_1 \times t_2) \cdot r, \qquad g_{ij}(r) = (\omega_i \times r) \cdot (t_j \times r), \quad i,j \in \{1,2\}, \qquad (2)$$
where $(a \times b) \cdot c$ denotes the triple product of vectors $a$, $b$ and $c$.
These functions have a simple geometric meaning. If ! 1
any r. If ! 1
is zero for points r lying on a geodesic passing through
. In this case f ! defines the locus of points r where v rot 1
(r), the rotational
component of the first motion, is parallel to v rot 2
(r), the rotational component of the second
motion. Similarly f t (r) is either zero everywhere, or it is zero for points lying on a geodesic
passing through t 1
and t 2
. In this case f t is the locus of points r where v tr 1
(r), the
translational component of the first motion, is parallel to v tr 2
(r), the translational component
of the second motion.
r. If they are non-zero, then g ij (r) is
zero at points lying on a second order curve consisting of two closed curves on the sphere,
the so-called zero motion contour of motion defines the locus
of points where v rot i
(r) is parallel to v tr j
(r) (see
Appendix
B). Throughout the paper the
functions and the curves defined by their zero crossings will play very
important roles.
To simplify the notation we will usually drop $r$ and write only $f_i$ and $g_{ij}$, where the index
$i$ in $f_i$ can take the values $t$ and $\omega$. There is a simple relationship between the $f_i$ and the $g_{ij}$. Note that
$$(r \times (t_j \times r)) \times (\omega_i \times r) = -g_{ij}\, r, \qquad (3)$$
and that, since $r$ is assumed to be a unit vector,
$$(\omega_1 \times r) \times (\omega_2 \times r) = f_\omega\, r, \qquad (t_1 \times r) \times (t_2 \times r) = f_t\, r. \qquad (4)$$
From equations (3) and (4) we get
$$f_\omega f_t = g_{11}\, g_{22} - g_{12}\, g_{21}. \qquad (5)$$
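Since the definitions of $f_\omega$, $f_t$ and $g_{ij}$ above are reconstructed, a quick numerical check of identity (5) is reassuring; the sketch below verifies it at random configurations (it is a verification aid, not part of the original analysis).
```python
# Numerical check of identity (5): f_omega * f_t = g11*g22 - g12*g21.
import numpy as np

rng = np.random.default_rng(0)

def f_and_g(r, t1, t2, w1, w2):
    f_w = np.dot(np.cross(w1, w2), r)
    f_t = np.dot(np.cross(t1, t2), r)
    g = {(i, j): np.dot(np.cross(w, r), np.cross(t, r))
         for i, w in ((1, w1), (2, w2))
         for j, t in ((1, t1), (2, t2))}
    return f_w, f_t, g

for _ in range(5):
    t1, t2, w1, w2 = (rng.normal(size=3) for _ in range(4))
    r = rng.normal(size=3)
    r /= np.linalg.norm(r)                      # image points are unit vectors
    f_w, f_t, g = f_and_g(r, t1, t2, w1, w2)
    lhs = f_w * f_t
    rhs = g[(1, 1)] * g[(2, 2)] - g[(1, 2)] * g[(2, 1)]
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```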
3.2 Conditions for ambiguity
Assume that motion $(t_1, \omega_1)$ and motion $(t_2, \omega_2)$ give rise to flow fields with the same direction at every point; then at each point $r$ there exists $\lambda > 0$ such
that
$$\frac{1}{Z_1}\,(r \times (t_1 \times r)) + \omega_1 \times r \;=\; \lambda \left( \frac{1}{Z_2}\,(r \times (t_2 \times r)) + \omega_2 \times r \right). \qquad (6)$$
By projecting the vector equation (6) on the directions $t_2 \times r$ and $r \times (\omega_2 \times r)$ we obtain two
scalar equations:
$$\frac{1}{Z_1}\,(r \times (t_1 \times r)) \cdot (t_2 \times r) + (\omega_1 \times r) \cdot (t_2 \times r) = \lambda\, (\omega_2 \times r) \cdot (t_2 \times r), \quad\text{i.e.,}\quad \frac{f_t}{Z_1} + g_{12} = \lambda\, g_{22}, \qquad (7)$$
$$\frac{1}{Z_1}\,(r \times (t_1 \times r)) \cdot (r \times (\omega_2 \times r)) + (\omega_1 \times r) \cdot (r \times (\omega_2 \times r)) = \lambda\,\frac{1}{Z_2}\,(r \times (t_2 \times r)) \cdot (r \times (\omega_2 \times r)), \quad\text{i.e.,}\quad \frac{g_{21}}{Z_1} - f_\omega = \lambda\,\frac{g_{22}}{Z_2}. \qquad (8)$$
Since $\lambda$ is positive, from (7) and (8) we get constraints on $1/Z_1$:
$$\mathrm{sgn}\!\left(\frac{f_t}{Z_1} + g_{12}\right) = \mathrm{sgn}(g_{22}), \qquad (9) \qquad\qquad \mathrm{sgn}\!\left(\frac{g_{21}}{Z_1} - f_\omega\right) = \mathrm{sgn}\!\left(\frac{g_{22}}{Z_2}\right), \qquad (10)$$
where $\mathrm{sgn}(\cdot)$ denotes the sign function.
Let us define $s_1 = -g_{12}/f_t$ and $s_0' = f_\omega/g_{21}$. At any point, $f_i$ and $g_{ij}$ are constant, so
equations (9) and (10) are simple constraints on $1/Z_1$. We call them the $s_1$-constraint and
the $s_0'$-constraint, respectively.
the s 0-constraint respectively.
Similarly we can project equation (6) on the vectors $t_1 \times r$ and $r \times (\omega_1 \times r)$ and obtain
analogous constraints (11) and (12) on $1/Z_2$.
We define $s_2 = g_{21}/f_t$ and $s_0 = -f_\omega/g_{12}$. Equations (11) and (12) provide constraints on $1/Z_2$,
and we thus call them the $s_2$-constraint and the $s_0$-constraint.
Let us now interpret these constraints: 1
is always non-negative; thus, if the two motions
corresponding depth maps Z 1
and Z 2
produce flow with the same
direction, the depth Z 1
must satisfy
either 1
or 1
Thus Z 1
has a relationship to the surfaces:
and
Equations (13) and (14) provide hybrid definitions of scene surfaces. To express the surfaces
in scene coordinates R, we substitute in the above equations Dividing (13) by
Z(r) and replacing Z(r) 2 by R 2 in (14) we obtain
\Theta t 2
\Theta
and (! 2
Thus we see that Z 1 is constrained through (15) by a second order surface and through (16)
by a third order surface. At some points it has to be inside the first surface and at some
points it has to be outside the first surface. In addition, at some points it has to be inside the
(a) (b)
Figure
3: Two rigid motions (t 1
constrain the possible depth Z 1
of the first
surface by a second and a third order surface. The particular surfaces shown in the coordinate
system of the imaging sphere, projected stereographically, correspond to the motion
configuration of Figure 7.
second surface and at some points it has to be outside the second surface. Figure 3 provides
a pictorial description of the two surfaces constraining Z 1
Analogous to the above derivation, from equations (11) and (12) we obtain a further
second and third order surface pair, which constrain the depth map Z 2
3.3 Interpretation of surface constraints
We next describe the s 1 - and s 0-constraints in detail. For convenience, we express these
constraints for 1
then the s 1
-constraint is sgn(g 12
does not depend on Z 1
Thus it is either satisfied by any Z 1
, or it cannot be satisfied by any Z 1
If f t 6= 0, then we get the s 1
-constraint
constraint is
, if sgn(g 22
, if sgn(g 22
, if sgn(g 22
If g 21
0, then the s 0-constraint does not depend on Z 1
22
Since we assume 1
22
is either 0, or sgn(g 22
). So the s 0-constraint is
At each point we have the additional constraint 1
- 0. If all three constraints can
be satisfied simultaneously at a point in the image, then there is an interval (bounded or
unbounded) of values of Z 1
satisfying them. If the constraints cannot be satisfied, this means
that the two flows at this point cannot have the same direction and we say that we have a
contradictory point.
In the following table we summarize the three constraints on 1
. According to the
inequality relationships from the s 1
-constraint and the s 0-constraint we classify the image
points into four categories (type I-IV). The table analyzes the general case, at a point where
Type   s_1-constraint   s_0-constraint   1/Z_1 solution interval   Solution exists if
I
If some of f i are zero at a point, we may obtain constraints that do not depend on
, or equality constraints.
In the table above each image point is assigned to one of four categories (see Figure 4 for
an example). Whether, for a given image point, there actually exists a value for 1
satisfying
the constraints depends on whether the solution interval at that point is empty or not. Thus
we classify all image points on the sphere into three categories, A, B, and C, depending on
the kind of solution interval that exists for Z_1: (A) there exists no solution for Z_1; otherwise there
exists a solution, and the interval for Z_1
is (B) bounded, or (C) unbounded. In the latter
two cases, we can also check whether the interval has a lower bound greater than 0.
Figure
4: Classification of image points by s 1
-constraint and s 0-constraint.
The classification of a point into one of the categories (I-IV) depends on the signs of f t ,
, and g 22
. The existence of a solution interval at a point also depends on the signs of f !
and g 12
at that point and also on the relative values of s 1
and s 0, i.e., on the sign of s 1
are polynomial functions of r. To find out where they change
sign, it is enough to find points where they are zero. The sign of s 1
since s 1
(r) and s 0(r) do not have to be continuous. However their discontinuities occur at
points where f t
can change at those points and at
points where s 1
Using (5), we can write
22
So we see that sgn(s 1
change only at points where at least one of f i , g ij is zero.
At points where g 22
we have the s 1
-constraint 1
and the s 0-constraint 1
thus at these points the depth Z 1 is uniquely defined.
Let us consider the implicit curves f_i = 0 and g_ij = 0. In the general case, these
equations describe two geodesics and four zero motion contours. Each of the curves divides
the sphere into areas where the solution interval for Z 1
could be different (areas of class A,
B, C). However, not every point on the curves separates different areas. Inside any of the
areas, all the points have the same classification (for example an infinite solution interval for
with a positive lower bound). Figure 5 shows an example of this classification, although
the derivation of how to actually obtain the areas where there does not exist a solution is
deferred to the next subsection.
Figure 5: Classification of image points according to the solution interval. (The corresponding motion configuration is displayed in Figure 7.)
Up to this point we have been discussing only the constraints for Z 1 . Similarly, from (11)
and (12) we have at any point the s 2
-constraint
, or 1
), and the
We obtain the same curves dividing the sphere
into areas such that inside any of the areas, the type of solution interval for Z 2 is the same.
Now we can summarize the results. The curves f_i = 0 and g_ij = 0 separate the
sphere into a number of areas. Each of the areas is either contradictory (i.e., containing only
contradictory points), or ambiguous (i.e., containing points where the two motion vectors
can have the same direction). Two different rigid motions can produce ambiguous directions
of flow if the image contains only points from ambiguous areas. There are also two scene
surfaces constraining depth Z 1
and two surfaces constraining depth Z 2
. If the depths do not
satisfy the constraints, the two flows are not ambiguous.
3.4 Contradictory points
In this section we investigate conditions that must be satisfied when a point is contradictory.
Since the type of solution for Z 1
and Z 2
depends on the signs of f i and g ij , we want to
describe sign combinations that yield a contradiction. We investigate the general case, i.e.,
we assume f i 6= 0, and g ij 6= 0 and use the resulting constraints in Section 4. Special cases
are treated separately in Section 5.
There are two simple conditions yielding contradiction for Z 1
, one for the s 1
-constraint
and one for the s 0-constraint. There is no solution for Z 1
and s 1
- 0. This
happens under the following condition
which is derived from equation (9). Similarly, from (10) we get a contradiction if 1
We get similar conditions for Z 2
. There is no solution for Z 2
and s 2
- 0, or ifZ 2
This happens under conditions C 3
and C 4
and
We call these four constraints (C 1
Contradictory Point conditions, or CP-conditions for
short. Next we show that a point (where f i 6= 0 and g ij 6= 0) is contradictory if and only if
at least one of the four conditions is satisfied.
Let us assume that conditions (18) and (19) are not satisfied at some point, but we have
a contradiction for Z 1 . Then the point must be of type II or III, since there is always a
solution for points of type I, and a point of type IV is contradictory only if (18) or (19)
holds.
For a point of type II, 1
s 0, but (19) is not satisfied, so we have s 0- 0. A contradiction
is possible only if s 1
happens when sgn(g 22
sgn(g 22
and s 1
), from (17) we obtain
Thus in this case condition (20) holds.
We obtain the same result for points of type III. Since (18) is not satisfied, we have
a contradiction is possible only if s happens when
sgn(g 22
and
and again condition (20) holds.
Thus if there is no solution for Z 1 , at least one of conditions (18), (19) and (20) must
hold. Similarly if there is no solution for Z 2
, at least one of conditions (18), (20) and (21)
must hold.
By examination of all the possibilities, we can show that at any point, either none of
the CP-conditions holds (and the point is ambiguous), or exactly two of the conditions hold
(and the point is contradictory).
3.5 Antipodal pairs of points
In this section we investigate constraints for a point r and its antipodal point \Gammar to be both
ambiguous or to be both contradictory.
Again we describe a general case, i.e., assume f i 6= 0 and
holds either at r, or at \Gammar. We get similar results for the remaining three
CP-conditions. Thus both point r and point \Gammar are ambiguous only if
Point r and point \Gammar can also both be contradictory. As shown in Appendix F, this
happens when
4 The Geometry of the Depth-Positivity Constraint
In the last section we found that if the CP-conditions hold at a point on the imaging surface,
then one of the depth values has to be negative and thus the point is contradictory. In this
section we investigate these constraints further; in particular we would like to know under
what conditions two rigid motions cannot be distinguished if our imaging surface is a half
sphere or an image plane, and we are interested in studying and visualizing the locations of
areas where the CP-conditions are met.
Considering as imaging surface the whole sphere, two different rigid motions cannot
produce flow of the same direction everywhere. As shown in Section 3.5, two antipodal
points r and −r are ambiguous only if (22) holds. Thus, for any point on a curve g_ij = 0 where
the sign of g_ij is positive on one side of the curve and negative on the other, there must exist
a neighborhood either around r or around −r where there is a contradiction.
We are now ready, using the machinery already developed, to study uniqueness properties.
As in the previous section, we assume that the vectors t_1, t_2, ω_1, and ω_2 are of unit length.
4.1 Half sphere image: The general case
Let us assume that the image is a half of the sphere. Let us also assume that
\Theta t 2
We show that under this condition the two rigid motions cannot produce motion fields with
the same direction everywhere in the image.
Let us consider the projections of ! 1
on a geodesic n connecting t 1
and t 2
Projection onto the geodesic is well defined for all points r such that r \Theta (t 1 \Theta t 2
we assume (24), the projections of both
are well defined. The proof is given in
parts A and B.
A: Let us first assume that one of
does not lie on geodesic n. Without loss of
generality, let it be ! 1
Figure 6: Possible sign combinations of g_11 and g_12 in the neighborhood of r_1.
The projection of ! 1
onto n is
\Theta t 2
\Theta t 2
where the sign is chosen so that r 1
is in the image. Then
\Theta t 2
\Theta t 2
\Theta t 2
and g 11
the s 0-constraint is sgn( 1
Clearly, this constraint cannot be satisfied, so r 1
is a contradictory point.
We can also show that at least one of the areas around point r 1
is contradictory. Point r 1
lies on zero motion contours g 11
If the two contours cross at this point
Appendix
C shows that g 11 cannot be tangent), we obtain four
areas in the neighborhood of r 1
, and all four possible sign combinations of g 11
and g 12
. If we
look at points close enough to r 1
(so that f ! does not change sign), then condition (21) is
satisfied in one of the areas, and that area is contradictory. For an illustration see Figure 6.
B: Now we need to consider the situation where both
lie on geodesic n, i.e.,
us consider point
. We know f t (! 1
is parallel to (t i \Theta ! 1
) is zero. However, from (24) we
have
\Theta t 2
or g 22
is non-zero at
If g 21
cannot be satisfied and ! 1
is a contradictory point.
Again, it is not a singular point. The line tangent to g 11
at
has direction ! 1
(and
6= 0, since g 21
is perpendicular to n at this point. Since f t is
identical to n, curves
with all possible sign
combinations. Thus in one of the areas, condition (20) holds, and we obtain a contradictory
area.
If g 22
cannot be satisfied at
. Again, at least one area
is contradictory, since contour g 12
is perpendicular to n at this point. This
concludes the proof that if (24) is satisfied there exist contradictory areas on the half sphere.
Section 4.2 discusses the case when (24) is not satisfied.
The rest of this section describes properties of the contradictory areas in order to provide
a geometric intuition.
Just as we projected ! 1
on geodesic n connecting t 1
and t 2
to obtain r 1
, we project ! 2
on n to obtain r 2
, and we project t 1
and t 2
on geodesic l, connecting
, to obtain
and r 4
(see
Figure
7). Point r 2
is at the intersection of f
22
is
at the intersection of f
is at the intersection of f
22
By the same argumentation as before, at each of the points we can
choose two of the contours f passing through the point and we obtain four
areas of different sign combinations in the corresponding terms f i and g ij around the point;
it can be shown that one of these areas is contradictory because one of the CP-conditions is
met.
The CP-conditions are constraints on the signs of the terms f i and g ij . Thus the boundaries
of the contradictory areas are formed by the curves f As we have
shown the contradictory area and its boundaries must contain the points r 1
, and r 4
For some motion configurations the boundaries also might contain t 1
. It
can, however, be verified that no neighborhood around t 1
needs to be con-
tradictory. It can also be verified, by examining all the possibilities for the signs of terms f i
ij in the CP-conditions, that points t 1 , lie inside a contradictory
area, since at least one of their neighboring areas is ambiguous. Figures 8 and 9 show the
contradictory areas for both halves of the sphere for two different motion configurations.
Figure 7: Separation of the sphere through the curves f_i = 0 and g_ij = 0. Each of t_1, t_2, ω_1, ω_2, r_1, r_2, r_3, and r_4 lies at the intersection of three curves.
Figure 8: Contradictory areas for both halves of the sphere for the two motions shown in Figure 7.
Finally, let us consider the boundaries of the contradictory areas. As defined in Section 3,
we allow the depths of the surfaces in view to take any value greater than zero (including
infinity). Thus at any point r the motion vector -
r could be in the direction of v rot (r), but not
in the direction of v tr
(r). This allows us to describe the depth values at possible boundaries
of a contradictory area: At points on curve f
and Z 2
can be infinite, thus
boundary points on this curve are not elements of the contradictory area. Boundary points
on all other curves (f are contradictory, since one of the depths Z 1
and Z 2
Figure 9: (a) Motion configuration. (b) and (c) Contradictory areas for both halves of the sphere.
would have to be zero.
4.2 Half sphere image: The case when (t 1
\Theta t 2
is perpendicular to (! 1
In this section it is shown that there could exist (t 1
\Theta t 2
to (! 1
), such that there exist no contradictory areas in one hemisphere.
First we investigate possible positions of points t 1
on the hemisphere,
bounded by equator q. Then we describe additional conditions on the orientation of vectors
respect to the hemisphere.
As shown in Section 3.5, two antipodal points r and −r can be ambiguous only if (22)
holds. Thus if the border of the area defined by (22) intersects q, there will be a contradiction
in the image.
If curve intersects q at point p, at least one of the areas around p does not satisfy
condition (22). Unless all are on the boundary of the hemisphere (and then
the motions are not ambiguous), there is a contradictory area in the image (either around
p or around −p).
be the normal to the plane of q. By intersecting the zero motion contour
with the border q of the hemisphere (see Appendix D), we find that real solutions for the
intersection point are obtained only if
A half sphere contains, for each of the translation vectors t_i and the rotation vectors ω_i,
exactly one of the vectors +t_i or −t_i and +ω_i or −ω_i. Let us refer to the vectors
in the considered hemisphere as t̃_i and ω̃_i.
From equation (27), taking into account that
we see that l ? 0, either if for any ~
(i.e., ~
an angle greater than 90 ffi ), or ( ~
are such
that
which means that ~
must be close to the
border.
is perpendicular to f the projections of ! 1
on f and the
projections of t 1
and t 2
on
. Point r 1
lies at the intersection of all six curves f
Any three curves f intersect only in r 1
and one of
the points ~ t 1
, or ~
. Furthermore, since all the zero motion contours have to be closed
curves on the hemisphere, we conclude that if there exists a contradictory area, it also has to
be in a neighborhood of r 1
. It thus suffices to consider all possible sign combinations of terms
. It can be verified that, for a hemisphere to contain only ambiguous
areas, the two translations have to have the same sign, that is sgn(t 1
Also
the two rotations have to have the same sign, i.e., sgn(! 1
Furthermore,
the relative positions of t 1 , have to be such that
\Theta t 2
Intuitively this means, when rotating in the orientation given by the rotations in
order to make f then the order of points t 1
and t 2
on f
opposite to the order of points ~
and ~
on (moving along the same direction along
\Gamma1, the order of points
and \Gammat 2
on f must be the same as the order of points ~
and ~
on
In summary, we have shown that two rigid motions could be ambiguous on one hemi-
sphere, if (t 1
\Theta t 2
is perpendicular to (! 1
but only if certain sign and certain distance
conditions on t 1 , are met. In addition, as shown in Section 3, the two surfaces
in view are constrained by a second and a third order surface (as shown in equations (15)
and (16)).
Figure 10 gives an example of such a configuration.
Figure 10: Both halves of the sphere showing two rigid motions for which there do not exist contradictory areas in one hemisphere. (a) Hemisphere containing only ambiguous areas. (b) Contradictory areas for the other hemisphere.
In the next section we discuss the special cases and show that they do not allow for
ambiguity. Thus the case of (t 1 \Theta t 2 ) being perpendicular to (! 1 \Theta ! 2 ) is the only case where
two motions can produce the same direction of the motion field on a hemisphere. An analysis
concerned with ambiguities due to more than two rigid motions is given in Appendix A.
5 Special Cases
In previous sections, we assumed that t 1
\Theta t 2
Here we show that if
these conditions do not hold, then the two motions are not ambiguous.
In Section 3 we assumed all four vectors t 1
to be of unit length. Here the
four vectors can also be zero. Thus we have two different motions (i.e. t 1
such that t 1
\Theta t 2
To cover all possible cases we are required to make a minor assumption about the depth
and Z 2
for the case where ! 1
. Then we have t 1
, and f From (10) we obtain the
constraint
22
). So at points where g 21
and g 22
have different signs, the
only possible solution is 1
Infinite values for both depths in these areas would
result in pure rotational flow fields in these areas and thus in an ambiguity. The same kind
of ambiguity would occur if we considered the full flow. Therefore it seems reasonable to
assume that at least at one point in the areas where g 11 g 22 ! 0, depths Z 1
and Z 2
are not
both infinite. Under this assumption there does not exist ambiguity for the case of ! 1
In the following we thus assume
Next we provide a lemma that will be of use in the following proofs concerned with special
cases as well as in the proof for full flow in Section 6.
As in the previous section, let the image be a half sphere
with equator q, let n 0
be a unit vector normal to the plane of q. Then equation
Z
can be satisfied everywhere in the image only if t j \Theta n 0
are non-zero, there are points in the image where
\Theta t 2
must be non-zero and geodesic n connecting t 1
and t 2
is well defined. Equation (28)
can be satisfied only if the zero motion contour is degenerate, i.e., ! i \Deltat (as in Figure 14b).
Then the contour consists of two great circles. One of the circles must be identical to the
geodesic n, and the other circle must be identical to q, the border of the image. This is
possible only if t j \Theta n 0
Figure 11: If the sign condition on 1/Z holds everywhere, the zero motion contour consists of two great circles, one identical to the border of the hemisphere, the other identical to f_t = 0.
We now consider two special cases in parts A and B.
A: Let us assume that all t i and ! i are non-zero.
everywhere. Thus from condition (9) we obtain sgn(g 12
sgn(g 22
four vectors are non-zero, this is possible only if ! 1
So we only need to consider the case ! 1
0, and since we also assume ! 1
, we
have
. Then at any point in the image, g 2j
\Gammag 1j
(r). Thus (9) can be satisfied
only if sgn( 1
). According to the lemma, this is possible only if t 2
\Theta n 0
Similarly (11) can be satisfied only if sgn( 1
). So from the lemma we obtain
\Theta n 0
Therefore we have t 1
\Theta t 2
everywhere, and the motions
are contradictory.
B: If one of the motion parameters is zero, we obtain either a pure translational or a pure
rotational flow field. By considering all the possible cases, it can be verified that the two
motions are not ambiguous. Here we just consider one of the more difficult cases.
Then at any point, g 11
So from (9) we obtain sgn( 1
), from (11) we have
). From the lemma, this is possible only if t 2
\Theta n 0
\Theta n 0
thus again we obtain t 1
\Theta t 2
and the motions are contradictory.
If two of the motions are zero, that is if either t equivalently
either two rotational, or one translational and one rotational field,
which obviously cannot have the same direction.
6 Ambiguities of the Full Flow
Next we investigate the question whether there can be any ambiguities at all if we consider
the complete flow. Horn has shown in [6] that two motions can produce ambiguous flow
fields only if the observed surfaces are certain hyperboloids of one sheet. We show that if we
also consider the depth positivity constraint and if the image is a half of the sphere, then
any two different motions can be distinguished.
Let the image be a hemisphere bounded by equator q. Let n 0
be a unit vector normal to
the plane of q. As in [6], let us assume that a motion (t 1
along with a depth map Z 1
and a motion
along with a depth map Z 2
, yield the same flow field. At each point
we obtain a vector equation
(r \Theta (t 1
\Theta
\Theta
(r \Theta (t 2
\Theta
\Theta r (29)
Projecting on directions t 1
\Theta r and t 2
\Theta r, we obtain equations for the two critical
\Theta
\Theta
then according to the lemma in the previous section, these
equations can be satisfied everywhere in the image only if ffi!
and t 2
\Theta n 0
Thus we obtain t_1
\Theta t 2
case corresponds to Section 4.5 in [6],
that is, to the case when both critical surfaces consist of intersecting planes.) Therefore we
are left only with special cases: Ambiguity can occur only if t 1
\Theta t 2
\Theta t 2
0, from constraint (30) we get for any r
(ffi! \Theta r) \Delta (t 2
\Theta
must be zero. Similarly from constraint (31) we get t Thus we
have a pair of rigid motions with different rotations and zero translations. Clearly these two
motions are not ambiguous.
There is one special case left, At each point we get a vector equation
(r \Theta (t 1
\Theta
(r \Theta (t 2
\Theta r)) (33)
Since we have two different motions and
. So the equation can be
satisfied only when 1
all points not lying on geodesic n passing through t 1
and t 2
. If we do not allow infinite depth, the motions are not ambiguous.
Conclusions
In this paper we have analyzed the amount of information inherent in the directions of rigid
flow fields. We have shown that in almost all cases there is enough information to determine
up to a multiplicative constant both the 3D-rotational and 3D-translational motion from
a hemispherical image. Ambiguities can result only if the surfaces in view satisfy certain
inequality and equality constraints. Furthermore, for two 3D motions to be compatible the
two translation vectors must lie on a geodesic perpendicular to the geodesic through the two
rotation vectors. With this analysis we have also shown that visual motion analysis does not
necessarily require the intermediate computation of optical flow or exact correspondence.
Instead, many dynamic vision problems might be solved with the use of more qualitative
flow estimates if appropriate global constraints are found.
Appendix
Appendix
A Ambiguity due to more than two motions
In this appendix we investigate whether three or more different rigid motions and their
corresponding surfaces could possibly produce the same direction of the motion field on a
hemisphere. We present proofs contradicting the ambiguity of almost all combinations of
three rigid motions.
Let us consider any three different rigid motions
), such that
any two of the directional motion fields produced are the same, i.e.
j. In the following proofs it will be shown that in
general there exist areas in the image where the corresponding depth Z 3
cannot at the same
time allow motions (t 1
to produce the
same directional flow.
Let us consider the intersections of the zero motion contours g ii = 0. In the sequel we
consider separately in part A the general case where two of the zero motion contours intersect
in at least two points, and in part B the case where any two zero motion contours are tangent
to each other. (Appendix E describes the conditions on the motion parameters for two zero
motion contours to be tangential.)
A: Let us assume that two of the zero motion contours are not tangential; let these be
22
be the intersection point where
\Theta t 2
are parallel. Let p 12
be another intersection point where g 11
and g 22
cross. Vectors v rot 1
(p 12
(p 12
are
not parallel, and v tr 1
(p 12
(p 12
(p 12
(p 12
positive - 1
or - 2
were negative, point p 12
would be contradictory). Unless g 33
(p 12
we have v tr 3
(p 12
(p 12
Figure 12a shows a possible configuration of the motion vectors at p_12.
We next consider the directions of v rot i
(r) and v tr i
(r) for points r in the neighborhood
of p 12
. Let n 0
be a unit vector in the direction v tr 3
(p 11
(p 12
the
sign of (v tr i
(r) \Theta v rot i
changes from inside g ii = 0 to outside g ii = 0 (that is, for
example, the angle between v tr 1
(r) and v rot 1
(r) is greater than 180 ffi inside g 11
smaller than 180 ffi outside g 11
vice versa). The sign of (v tr 3
(r) \Theta v rot 3
is
the same in a sufficiently small neighborhood around p 12
. Since g 11
22
at
there are four neighborhoods around p 12
with all four possible sign combinations of
(v tr 1
\Theta v rot 1
and (v tr 2
\Theta v rot 2
. Thus for points r in one of the neighborhoods, in order
for
(r) to have the same direction as v tr 3
must lie in an interval
[a; b], and for v tr 2
(r) to have the same direction as v tr 3
must lie
in an interval [c; d], where the intersection of [a; b] and [c; d] is empty. Therefore the three
motions cannot give rise to the same direction at r. For an example see Figure 12b.
B: We next consider the case where all three zero motion contours are tangent to each
other. For the case where not all three are tangential at the same point, using arguments
similar to those used before, we prove that there cannot be an ambiguity.
For at least two of the zero motion contours, say g 11
22
0, we have that
at the intersection point r 12
the two curvature vectors - g 11
of g 11
22
of g 22
sign. Also, the translational and rotational components are such
that v tr 1
Figure 13a and b for an
illustration). Let n 0 be a unit vector in the direction v tr 3
(r 12 ). At points r in
the neighborhood of r 12
we obtain three of the four possible sign combinations for the signs
of (v rot 1
(r) \Theta v tr 1
and (v tr 2
(r) \Theta v rot 2
. In both areas outside g 11
outside
(r) \Theta v tr 1
(r) \Theta v rot 2
in one of
Figure 12: (a) Possible motion configuration at point p_12. (b) There must exist a neighborhood around p_12 with points r such that, for the flow at r to have the direction of v_tr3, the depth Z_3 would have to take values in the interval (0, b] in one sector and in the interval [c, ∞) in the other, with b < c.
the areas (v tr 2
(r) \Theta v rot 1
and in the other (v tr 2
(r) \Theta v rot 1
(v tr 3
(r) \Theta v rot 3
doesn't change sign in the neighborhood of r 12
, in one of the two areas
the depth Z 3 of the third surface cannot be compatible with both the first and the second
motion (see Figure 13c).
Figure 13: (a) Intersection of the zero motion contours g_11 = 0 and g_22 = 0. (b) Possible motion configuration at point r_12. (c) At a point r in one of the areas outside g_11 = 0 and g_22 = 0, the depth Z_3 cannot be compatible with both (v_tr1(r), v_rot1(r)) and (v_tr2(r), v_rot2(r)).
Thus, in summary we have shown that more than two different rigid motions can hardly
ever give rise to the same direction of flow at every point on a hemisphere. The only possible
configurations of motions that may be contradictory, provided the surfaces in view satisfy
the constraints described in Section 3, are:
a: three or more motions such that the corresponding zero motion contours
in the same point
b: three or more motions, such that all corresponding zero motion contours g ii = 0 are
tangential at the same point r 12
, which as described in Appendix E, can occur only if
tan
tan
Appendix
B Zero motion contours
Let us consider the following question: What is the locus of points where the flow due to
the given rigid motion can possibly be zero? As in [4] we can show that such points are
constrained to lie on a second order curve on the sphere.
The flow at point r can be zero only if the rotational and translational components at
r are parallel to each other. Let t and ! be translational and rotational velocity of the
observer. Then the flow at point r can be zero only if
(r \Theta (t \Theta r)) \Theta (! \Theta
By simple vector manipulation, from (34) we obtain
Since r 6= 0, the flow at point r can be zero only if
(! \Theta r) \Delta (t \Theta
Equation (36) describes a second order curve on the sphere, which we will call the zero
motion contour of the rigid motion (t; !). The zero-motion contour consists of two closed
curves on the sphere. As shown in Figure 14, if (! one of the curves contains t
and one contains \Gammat 0 and \Gamma! the two curves become great
circles, one orthogonal to t, the other orthogonal to !; if (! \Delta one of the two curves
and \Gamma! 0
and the other through \Gammat 0
Figure 14: The zero motion contour (the locus of points r where the flow at r could be zero) consists of two closed curves on the sphere; panels (a)-(c) correspond to the three cases for the sign of ω · t described above.
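As a concrete illustration of the condition in equation (36), the short sketch below evaluates (ω × r) · (t × r) on a grid of viewing directions and counts its sign changes; the particular t and ω used here are illustrative assumptions, not values taken from the figures.

import numpy as np

def zero_motion_value(r, t, w):
    # g(r) = (w x r) . (t x r); the zero motion contour is the set g(r) = 0.
    return np.dot(np.cross(w, r), np.cross(t, r))

# Illustrative (assumed) translational and rotational velocities of the observer.
t = np.array([1.0, 0.0, 0.0])
w = np.array([0.3, 0.9, 0.2])

# Sample unit directions r on the sphere and record the sign of g(r).
phis = np.linspace(0.0, 2.0 * np.pi, 90)
thetas = np.linspace(0.05, np.pi - 0.05, 45)
signs = np.zeros((len(thetas), len(phis)))
for i, th in enumerate(thetas):
    for j, ph in enumerate(phis):
        r = np.array([np.sin(th) * np.cos(ph),
                      np.sin(th) * np.sin(ph),
                      np.cos(th)])
        signs[i, j] = np.sign(zero_motion_value(r, t, w))

# Sign changes along circles of latitude straddle the zero motion contour.
print("sign changes detected:", int(np.count_nonzero(np.diff(signs, axis=1))))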
Appendix
C Zero motion contours are not tangent
To show that zero motion contours g 11 and g 12 are not tangent at r 1 (see Section 4.1, part A),
let us compute tangent lines to the contours at r 1
. Let the direction of the line tangent to
at r 1
be
. The line lies in the plane tangent to the sphere, so
Directional derivative of g 11
along
must be zero. Let r
dg 11
d"
\Theta u 1
\Theta r 1
\Theta r 1
\Theta u 1
\Theta u 1
\Theta u 1
must also satisfy
Thus we obtain
We can compute the tangent direction from (40) and (37) as
\Theta ((! 1
Similarly, the direction tangent to g 12
is
\Theta ((! 1
lies on geodesic n, we get
\Theta u 2
\Theta t 2
\Theta t 2
\Theta (! 1
\Theta r 1
Also
\Theta t 2
and
\Theta t 2
\Theta (! 1
\Theta r 1
\Theta t 2
\Theta t 2
\Theta u 2
is not zero, and the two zero motion contours cross at point r 1
Appendix
D Zero motion contour crossing the border of the image
Let the half sphere image be bounded by equator q, let n 0
be a unit vector normal to the
plane of q. We would like to know whether the zero motion contour of motion (t; !) intersects
equator q.
Let us choose a Cartesian coordinate system such that n 0
Points on equator q can be written as [cos OE; sin OE; 0]. Thus the zero
motion contour (! \Theta r) \Delta (t \Theta r) = 0 intersects q if equation
has a solution.
Writing tan OE, we obtain a quadratic equation
This equation has a real solution if
After some manipulation, we obtain
Appendix
Intersections of zero motion contours
) be two ambiguous motions. Let us investigate possible intersection
points of the zero motion contours of the two motions.
Since ambiguity is possible only when
\Theta t 2
\Theta t 2
we can choose a Cartesian coordinate system such that
\Theta t 2
\Theta t 2
Y
In this coordinate system, we can write t 1
Clearly, both zero motion contours pass through point [0; 0; 1]. We would like to know
whether this is the only intersection point.
were zero, we would have t 1
thus the zero motion contour g 11
would be
degenerate. This is not possible if the motions are ambiguous. Thus W 1
6= 0, and similarly
are non-zero.
Since the zero motion contour does not depend on the size (and direction) of vectors t
and !, we can re-scale vectors t multiplying by - i 6= 0 such that
Let us consider point z] such that (! \Theta r) \Delta (t \Theta r) = 0. If z 6= 0, point
the equation. Thus it is enough to consider two possible
sets of points: points of the form points lying in the plane tangent to the
sphere at [0; 0; 1]), and points (these points correspond to points at infinity on
the tangent plane).
A: To obtain the possible intersection points we express the zero motion
contours as
\Theta r) \Delta
\Theta r) \Delta
We can compute y from the difference of the two equations as
Substituting (54) into (52), we obtain a polynomial equation of degree 4 in x. One
solution is (both zero motion contours pass through point [0; 0; 1]). The remaining
equation of degree 3 has at least one real solution.
another solution Otherwise the two
contours intersect in two different points. Since ! 1
6= 0, we know -
the two zero motion contours are tangent only if -
. If this is the case, we obtain
an equation of degree 2 in x. Its discriminant is
Thus if j -
2, the two zero motion contours are tangent at [0; 0; 1], but
intersect at two other points.
B: Now we compute intersection points
we obtain equations xy -
so such intersection point exists only if
In the previous part we have shown that if -
, the two zero motion contours
have more than one intersection point. So it is enough to check the tangential case here.
If the two contours are tangent at [0; 0; 1], from -
j. Since t 1
\Theta t 2
6= 0, this is possible only if -
and -
Writing sin OE, we obtain equation
so again there is an intersection point if j -
2.
Therefore the two zero motion contours have only one intersection point if -
2. If we denote the intersection point of g 11
and g 22
as r 12
, this can
be written as
tan
tan
tan
tan
where
6 (\Delta; \Delta) denotes the angle between two vectors.
This relationship can also be expressed as
Furthermore from (55) we obtain the constraint
\Theta t 2
Appendix
F Antipodal contradictory points
Here for the purpose of providing a description of the areas where two motion fields are
ambiguous, the conditions are developed for point r and its antipodal point \Gammar both to be
contradictory.
Clearly, if one of the CP-conditions holds for r, it cannot be true for \Gammar. So if both r
and \Gammar are contradictory, two of the conditions must hold at r and the other two at \Gammar.
If (18) and (19) hold at r and (20), (21) at \Gammar, we get
at r and
at \Gammar. Thus
If (18) and (20) hold at r and (19), (21) at \Gammar, we get sgn(g 12
at r and
22 ) at \Gammar, so this case cannot occur.
If (18) and (21) hold at r and (19), (20) at \Gammar, we get
at r and
at \Gammar. So we get
Thus point r and point \Gammar are both contradictory if and only if
--R
Optical flow from 1-D correlation: Application to a simple time-to-crash detector
Motion and structure from motion from point and line matches.
Passive navigation as a pattern recognition problem.
On the geometry of visual correspondence.
Qualitative egomotion.
Motion fields are hardly ever ambiguous.
Relative orientation.
A computer algorithm for reconstruction of a scene from two projections.
Theory of Reconstruction from Image Motion.
Critical surface pairs and triplets.
Direct passive navigation.
Structure from motion using line correspondences.
Dynamic aspects in active vision.
Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces
Robust and fast computation of edge characteristics in image sequences.
--CTR
Tom Svoboda , Tom Pajdla, Epipolar Geometry for Central Catadioptric Cameras, International Journal of Computer Vision, v.49 n.1, p.23-37, August 2002
Fermller , Yiannis Aloimonos, Ambiguity in Structure from Motion: Sphere versus Plane, International Journal of Computer Vision, v.28 n.2, p.137-154, June 1998
Tom Brodsk , Cornelia Fermller , Yiannis Aloimonos, Structure from Motion: Beyond the Epipolar Constraint, International Journal of Computer Vision, v.37 n.3, p.231-258, June 2000 | motion field;egomotion;qualitative vision;optic flow |
289045 | A Nested FGMRES Method for Parallel Calculation of Nuclear Reactor Transients. | A semi-iterative method based on a nested application of Flexible Generalized Minimum Residual (FGMRES) was developed to solve the linear systems resulting from the application of the discretized two-phase hydrodynamics equations to nuclear reactor transient problems. The complex three-dimensional reactor problem is decomposed into simpler, more manageable problems which are then recombined sequentially by GMRES algorithms. Mathematically, the method consists of using an inner level GMRES to solve the preconditioner equation for an outer level GMRES. Applications were performed on practical, three-dimensional models of operating Pressurized Water Reactors (PWR). Serial and parallel applications were performed for a reactor model with two different levels of detail in the core representation. When appropriately tight convergence was enforced at each GMRES level, the results of the semi-iterative solver were in agreement with existing direct solution methods. For the larger model tested, the serial performance of GMRES was about a factor of 3 better than the direct solver and the parallel speedups were about 4 using 13 processors of the INTEL Paragon. Thus, for the larger problem over an order of magnitude reduction in the execution time was achieved, indicating that the use of semi-iterative solvers and parallel computing can considerably reduce the computational load for practical PWR transient calculations. | Introduction.
The analysis of nuclear reactor transient behavior has always
been one of the most difficult computational problems in nuclear engineering. Because
the computational load to calculate detailed three-dimensional solutions of the field
equations is prohibitive, variations of the power, flow, and temperature distributions
are treated approximately in the reactor calculation resulting in considerable conservatism
in reactor operation. Some researchers have estimated that using existing
methods, the computational load to calculate three-dimensional distributions would
exceed a Teraflop per time step. [16] [6]
Over the last several years computer speed and memory have increased dramatically
and has motivated a rethinking of the limitations of the existing reactor transient
analysis codes. Researchers have begun to adapt three-dimensional hydrodynamics
and neutron kinetics codes to advanced computer architectures and have begun to investigate
advanced numerical methods which can take full advantage of the potential
of high performance computing.
The overall goal of the work reported here was to reduce the computational burden
for three-dimensional reactor core models, thereby enabling high fidelity reactor
system modeling. The specific objective was to investigate Krylov Sub-Space methods
for the parallel solution of the linear systems resulting from the reactor hydrodynamics
equations.
The following section provides a brief description of the hydrodynamic model and
reactor problem used in the work here. The nested GMRES method and preconditioner
developed in the work here are described in section 3 and serial and parallel
applications are presented in sections 4 and 5, respectively.
This work was supported by the Electric Power Research Institute
† School of Nuclear Engineering, Purdue University, West Lafayette, IN 47907
2. Hydrodynamic Model and Reactor Problem. The nuclear reactor analysis
problem involves the solution of the coupled neutron kinetics, heat conduction,
and hydrodynamics equations. Of these, two-phase flow hydrodynamics is generally
the most computationally demanding and was the focus of the work here.
The hydrodynamic method used here is consistent with the reactor systems code
RETRAN-03 [7] which is widely used in the nuclear industry for the analysis of
reactor transient behavior. The method is based on a semi-implicit solution of the
mass and momentum equations for each phase and the energy equation for the fluid
mixture. The solution scheme uses finite difference representations for all of the
fluid-flow balance equations. All convective quantities and most source terms are
linearized, i.e. expanded using a first order Taylor series. As a result, the coupling
between the finite difference equations is implicitly included in the system of coupled
equations. The method is considered only semi-implicit since linearized equations
are used rather than the original nonlinear partial differential equations. A standard
Newton-Raphson technique is used to solve the nonlinear equations.
2.1. Hydrodynamic Model. The application of spatial finite differencing to
the set of governing partial differential equations is performed using the concept of
volumes and connecting junctions. This results in a system of ordinary differential-
difference equations which may be expressed as

    dY/dt = F(Y)     (1)

where Y is a column vector of nodal variables, F is a column vector of functions, and N
is the number of dependent/solution variables. The solution
vector Y consists of N_j junction mass flow rates (W) and slip velocities (V_SL), and
volume total mass (M), total energy (U), and vapor mass (M_g) inventories:

    Y = [ W, V_SL, M, U, M_g ]^T     (2)
The vector F(Y) is linearized and a first order time difference approximation is
used, resulting in the following linear system:

    ( I/Δt − J ) ΔY = F(Y^n)     (3)

where I is the identity matrix, J is the matrix Jacobian, ΔY = Y^(n+1) − Y^n, and Δt is the
time step size; Y^(n+1) and Y^n are the values of Y at time levels t^(n+1)
and t^n respectively.
Because of the semi-implicit nature of the formulation, the linear system that
arises after linearizing and discretizing is tightly coupled. Also, the high degree of
stability of this formulation imposes a less stringent limit on the size of the time step.
As indicated in Equation 3, larger time steps further reduce diagonal dominance and
result in a more ill-conditioned linear system. Several authors have noted [5], [3] that
such ill-conditioned linear systems with no well-defined structure are very difficult to
solve efficiently and in many cases special handling is required to obtain an acceptable
solution.
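As a rough, self-contained illustration of this effect, the sketch below assembles I/Δt − J for a small random stand-in Jacobian (an assumption; the real J comes from linearizing the mass, energy, and momentum balances) and reports the condition number for two time step sizes.

import numpy as np

rng = np.random.default_rng(0)
n = 200
# Assumed stand-in for the hydrodynamics Jacobian J.
J = rng.standard_normal((n, n))

for dt in (1.0e-3, 1.0e-1):
    A = np.eye(n) / dt - J            # coefficient matrix of equation (3)
    print("dt = %.0e  cond(A) = %.3e" % (dt, np.linalg.cond(A)))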
The present linear solver in RETRAN-03 is a direct method based on Gaussian
elimination, which has a Θ(N^3) execution time complexity, where N is the number of
unknowns. For models dominated by one-dimensional flow, direct solutions complemented
by some type of matrix reduction technique can be very efficient. However,
in the case of a high fidelity model of a reactor with a three dimensional core rep-
resentation, direct methods become inefficient. This can be demonstrated using the
model of a standard 4-loop Pressurized Water Reactor.
Fig. 1. Nodalization for the PWR Model (system model with side and top views of the 3-D core channel layout, bypass, and upper plenum).
2.2. Pressurized Water Reactor Problem. The reactor system model used
in this work consists of a 4-loop pressurized water reactor model with a detailed three
dimensional representation of the reactor core. A schematic is shown in Figure 1.
Each loop of the system contains a steam generator in which heat is transferred from
the pressurized primary loop to a secondary loop containing the steam turbines. The
core model consists of several volumes stacked one upon the other along each channel.
Assuming a layout of C by C channels and A axial volumes, the number of volumes
is AC^2, the number of cross flow junctions is 2AC(C − 1), and the number of vertical
junctions is C^2(A + 1). The first problem used in the work here had a core model with
9 channels and 12 axial levels per channel as shown in the Figure. We constructed
two versions of the core model, one with cross flow (horizontal flow) between volumes
and the other without cross flow. Cross flow is important for reactor transients such
as the break of a steam line in one of the loops which results in the horizontal mixing
of hot and cold water in the core.
The model with cross flow consists of 173 volumes and 339 junctions which results
in a linear system of size 1173. The core of the model with cross flow contains 108
volumes and 261 junctions and forms nearly 70% of the system. The model without
cross flow contains the same number of volumes but has only 183 junctions and results
in a linear system of size 885. The coefficient matrix from the problem with cross flow
is shown schematically in Figure 2. The structure shown in the figure corresponds to
a sequential ordering of the junction and volume variables which is more convenient
for the reduction/elimination method currently used in the code. A reordering that
is more suitable for preconditioning will be discussed in the following section.
Fig. 2. PWR Model with 3-D Core: Sparsity Pattern of the Coefficient Matrix
For an initial test problem a simple core reactivity event was modeled by simulating
the insertion and withdrawal of a control rod for 20 seconds. The rod was
inserted into the core at a rate of 0.08 dollars of reactivity per second for 5 seconds
and then immediately withdrawn at the same rate. The computational performance
for this problem on a single processor of the INTEL Paragon is summarized in Table
1 for the models with and without cross flow. The direct linear system solution is
performed in the SOLVE module indicated in the Table.
Table 1
Computational Summary for PWR Example Problem with Direct Linear Solver

              w/o Cross Flow           w/ Cross Flow
Module      CPU Time   Percent      CPU Time   Percent
OTHER         155.89      46.7        196.17       4.9
TOTAL         333.88     100.0       4931.71     100.0
It is immediately apparent from Table 1 that the use of cross flow in the core
increases the computational time by an order of magnitude. This is primarily because
the problem without cross flow results in a linear system containing predominantly
tridiagonal submatrices which lends itself very efficiently to reduction/elimination
methods. This large increase in the execution time discourages the modeling of cross
flow and, in general, high fidelity reactor simulation.
Although a reduction in the operation count required for a direct solution was the
primary motivation for the work here, there were other reasons to consider iterative
linear solution methods. First, the actual problem being solved in RETRAN-03 is
nonlinear and the direct solution of the resultant linearized equations can be a waste
of floating point operations since usually an accuracy considerably less than machine
precision is adequate. Secondly, unlike direct methods, semi-iterative solution methods
can be accelerated with information from the previous time steps, such as an
initial guess or a preconditioner. Finally, direct methods do not lend themselves easily
to parallel computing on distributed memory, MIMD multicomputers, whereas,
many of the new semi-iterative linear solvers can be efficiently parallelized.
3. Semi-Iterative Linear Solvers for 3-D Hydrodynamics. During the last
several years, considerable research has been performed on a non-stationary class of
techniques, collectively known as Krylov subspace methods. These include the classical
Conjugate Gradient (CG) method which has been shown to be very efficient for
symmetric positive definite systems of equations. These methods are called Krylov
methods because they are based on building a solution vector from the Krylov sub-
space span{r_0, A r_0, A^2 r_0, ...}, where r_0 is the residual of the initial solution and
A is the coefficient matrix.
CG method are based on the minimization of the energy norm of the error. In gen-
eral, the linear systems encountered in hydrodynamics problems are not symmetric
positive definite and therefore can not be solved using the CG method.
Numerous Krylov methods for solving the non-symmetric problem have been
proposed over the years and several were considered in this work, including the Gen-
eralized Minimal Residual (GMRES) [10] method, the BiConjugate Gradient (BiCG)
method [1], the Conjugate Gradient Squared (CGS) method [12], and the BiConjugate
Gradient Stabilized (BiCGS) method [15]. Of particular interest in the work
here is the GMRES method which solves a minimization problem at each iteration
and therefore guarantees a monotonically decreasing residual.
It is well known that the convergence rate of the Krylov methods depends on
the spectral properties of the coefficient matrix and that proper preconditioning can
considerably improve the rate of convergence. Because preconditioning involves some
additional cost, both initially and per iteration, there is a trade-off between the cost
of implementing the preconditioner and the gain in convergence speed. Since many
of the traditional preconditioners have a large sequential component, there is a further
trade-off between the serial performance of a preconditioner and its parallel efficiency.
Several alternative preconditioners were examined in the work here.
3.1. Application of Krylov Methods to Reactor Problem. Previous work
on the application of semi-iterative solvers to reactor core hydrodynamic calculations
[13] [8] has focused primarily on the solution of the linearized pressure equation resulting
from single phase, one-dimensional flow problems. Some of the Krylov methods
were found to perform very well on the resulting tridiagonal system of equations.
In particular, Turner achieved excellent convergence with the Conjugate Gradient
Squared method and an ILU preconditioner. While this work provided useful insight,
the linear systems resulting from the two-phase, three-dimensional flow problems are
significantly different and the performance of the various Krylov methods was very
different.
Figure
3 shows the behavior of the absolute residual (L2 norm) for the application
to the PWR problem of various Krylov methods with no preconditioner. The
performance of Bi-CGSTAB, BiCG, and CGS is very irregular (CGS is not shown
because its residual increased by several orders of magnitude in the first few iterations
and then continued to increase). Only the GMRES method demonstrated acceptable
behavior, with a monotonically decreasing residual, which is expected because of its
basis in a minimization principle. Linear systems from other time steps were examined
and behavior similar to that shown in Figure 3 was observed. Although the floating
point operation count per iteration was higher for GMRES, particularly for larger
numbers of iterations, it was attractive for applications here because of its inherent
robustness. Preconditioning techniques were then examined that could improve the
convergence behavior of GMRES.
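The comparison in Figure 3 can be reproduced in spirit with off-the-shelf Krylov solvers. The sketch below (a hedged illustration, not the RETRAN-03 implementation) runs GMRES, Bi-CGSTAB, and CGS without a preconditioner on an assumed nonsymmetric stand-in matrix of the same size as the cross-flow problem; keyword names vary slightly across SciPy versions.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

rng = np.random.default_rng(1)
n = 1173                                    # size of the cross-flow linear system
# Assumed stand-in matrix; the actual RETRAN-03 matrices are not reproduced here.
A = sp.random(n, n, density=0.01, random_state=1, format="csr") \
    + sp.diags(rng.uniform(2.0, 4.0, n))
b = rng.standard_normal(n)

for name, solver in (("GMRES", spla.gmres),
                     ("Bi-CGSTAB", spla.bicgstab),
                     ("CGS", spla.cgs)):
    x, info = solver(A, b)
    resid = np.linalg.norm(b - A @ x) / np.linalg.norm(b)
    print("%-10s info=%4d  relative residual=%.2e" % (name, info, resid))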
3.2. Domain Decomposition Preconditioning. Knowledge of the physical
characteristics of the system is invaluable in choosing a good preconditioner. It is
evident from Figure 1 that the core and the ex-core systems interact only at the
upper and lower plenum. This suggests a natural way to decompose the problem
domain. If the system were to be reordered such that all the solution variables
belonging to the core are placed contiguously, then the structure of the resultant
matrix is given by

    A = [ A_C   U  ]
        [  L   A_E ]     (4)

where the blocks L and U represent the interactions between the core and the ex-core vari-
ables. Because the core and the ex-core interact only at the upper and lower plenum,
these matrices have very few non-zero elements. Several options exist for using Equation 4 as a preconditioner.
4 as a preconditioner.
One possible preconditioner is to neglect the L and U blocks entirely, resulting in
a block Jacobi preconditioner given by
AC
GMRES
Iterations
Residual
Fig. 3.
Figure
Performance of Krylov Solvers on PWR 3-D Core Problem
The block AE represents the interactions between variables in the ex-core region
and the block AC represents the interactions between variables in the core region.
Because the preconditioner is Jacobi, the two blocks may be solved individually which
is attractive for parallel computing. As noted earlier, the size of the core block is
considerably larger than the ex-core block and in a later section methods are discussed
for solving the core problem.
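A minimal sketch of this block Jacobi preconditioner, written against SciPy, is given below. It assumes the unknowns are ordered core-first as in equation (4), uses an assumed stand-in matrix and an arbitrary core-block size, and factors each diagonal block once so the factorizations can be reused at every GMRES iteration.

import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def block_jacobi_preconditioner(A, n_core):
    # Returns M^-1 as a LinearOperator for M = diag(A_C, A_E), equation (5).
    # Core unknowns are assumed to come first, as in equation (4), so the
    # coupling blocks L and U are simply dropped.
    n = A.shape[0]
    A = A.tocsc()
    lu_C = spla.splu(A[:n_core, :n_core])
    lu_E = spla.splu(A[n_core:, n_core:])

    def apply(r):
        z = np.empty_like(r)
        z[:n_core] = lu_C.solve(r[:n_core])   # core block
        z[n_core:] = lu_E.solve(r[n_core:])   # ex-core block (independent)
        return z

    return spla.LinearOperator((n, n), matvec=apply)

# Illustrative use with an assumed stand-in matrix and an arbitrary core size.
n, n_core = 1173, 820
A = (sp.random(n, n, density=0.01, random_state=2) + 3.0 * sp.eye(n)).tocsc()
b = np.ones(n)
M = block_jacobi_preconditioner(A, n_core)
x, info = spla.gmres(A, b, M=M)
print("GMRES with block Jacobi preconditioner, info =", info)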
The block Jacobi domain decomposition preconditioner was applied to the 3-D
PWR example problem first using a direct solution of the preconditioner equation.
Table
2 contains the results for the linear systems at the first two time steps of the
transient described in section 2.2. The first case corresponds to the problem for which
results were shown in Figure 3. In addition to the number of iterations, two other
measures of the effectiveness of the preconditioner are shown in the Table. First,
the condition of the original and preconditioned matrix is shown in the second and
third columns. In the fourth column is shown another measure of effectiveness of
preconditioning suggested by Dutto [4], which uses the ratio of the Frobenius norm
of the remainder matrix, defined as R = A − M, to the Frobenius norm of the original
matrix. The Frobenius norm is defined as ||A||_F = ( Σ_i Σ_j a_ij^2 )^(1/2).
As shown in the Table, both measures indicate the Jacobi preconditioner is very
effective. For a tolerance of 10^-6 (relative residual), convergence is achieved in less
Table 2
Results of Application of Domain Decomposition to PWR Problem
(Columns: Preconditioner; Condition # of A; Condition # of the preconditioned matrix; ||R||_F/||A||_F; Iterations)

Block Jacobi              2.19 ...
Gauss-Seidel              8.30 ...   2.19 ...
Modified Gauss-Seidel     2.19 ...
In the block Jacobi preconditioner the impact of the submatrices L and U is
completely neglected while solving the preconditioner equation. Another possibility
is to employ a Gauss-Seidel like technique in which the preconditioner equations are
solved sequentially:

    A_C z_C = r_C − U z̄_E
    A_E z_E = r_E − L z̄_C     (7)

where z̄_C and z̄_E denote approximations to z_C and z_E respectively. Two approaches were
examined for solving these equations as a preconditioner.
In one approach, the first equation is solved neglecting the coupling to the other
(i.e., z̄_E = 0 or z̄_C = 0). Because of symmetry, the sequence of solution does not matter
and the preconditioner has the form:

    M_GS = [ A_C   0  ]
           [  L   A_E ]     (8)
Results of applying this technique are shown as the Gauss-Seidel preconditioner
in Table 2. As indicated, there is only a minor change in the measures of effectiveness
of the preconditioning; however, convergence is achieved in fewer iterations.
In the second approach an estimate of the ex-core solution, z̄_E, is formed using
the decoupled ex-core equation A_E z̄_E = r_E, and it is then used to solve sequentially the
core and ex-core equations given by Equation 7. The preconditioner for this approach
can also be written in closed block form.
This approach is shown in Table 2 as the Modified Gauss-Seidel preconditioner
and, as indicated, little difference is observed in the effectiveness of the precondi-
tioning. Because the block Jacobi method is naturally parallel, it was used in the
work here even though the Gauss-Seidel methods showed slightly better numerical
performance.
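For comparison, a sketch of the Gauss-Seidel-like preconditioner of equation (8) is shown below; it reuses two block factorizations but sweeps the core block first and feeds its result into the ex-core right-hand side through the coupling block L. The ordering and the stand-in names are assumptions, as in the previous sketch.

import numpy as np
import scipy.sparse.linalg as spla

def block_gauss_seidel_preconditioner(A, n_core):
    # Returns M^-1 for the lower block triangular M of equation (8),
    # M = [[A_C, 0], [L, A_E]], assuming core-first ordering as in equation (4).
    n = A.shape[0]
    A = A.tocsc()
    L = A[n_core:, :n_core]
    lu_C = spla.splu(A[:n_core, :n_core])
    lu_E = spla.splu(A[n_core:, n_core:])

    def apply(r):
        z = np.empty_like(r)
        z[:n_core] = lu_C.solve(r[:n_core])                     # core sweep
        z[n_core:] = lu_E.solve(r[n_core:] - L @ z[:n_core])    # ex-core sweep
        return z

    return spla.LinearOperator((n, n), matvec=apply)

Compared with the block Jacobi operator, the only extra work per application is one sparse matrix-vector product with the coupling block L.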
For the results shown above the preconditioner equation was solved directly using
Gaussian Elimination. Because the ex-core problem is predominantly one-dimensional
flow, a direct solution using reduction/elimination proves to be very efficient. Con-
versely, the three-dimensional core model does not lend itself to direct solution and
the following section examines the use of a second level GMRES to solve the core
problem.
3.3. Preconditioning the Core Problem. The original matrix ordering shown
in Figure 2 is not conducive to preconditioning the core problem, A_C z_C = r_C, so
several alternate orderings were examined. Because the junctions are physically be-
volumes, reordering the solution vector such that it bears some resemblance to
the physical layout would help in decreasing the profile of the matrix. The goal is
to increase the density of the matrix in the regions around the diagonal (i.e., reduce
bandwidth) and then use that portion of the matrix for preconditioning purposes (e.g.,
block Jacobi, etc.).
The structure existing in the core was exploited by defining a supernode that
consisted of both volumes and junctions. The physical domain was discretized into
several supernodes that introduced homogeneity in the structure of the matrix. The
supernode could be considered a fractal for representing the smallest unit of structure
present in the system. As shown in Figure 4, the supernode for the core problem
consists of a volume and 3 junctions. One of the junctions is the vertical junction
upstream to the volume and the other two are crossflow junctions leading out of the
volume, each in a different direction. It should be noted that the use of supernodes
leads to a problem of extra junctions at some of the exterior channels. These junctions
are dummy junctions and are represented in the matrix but do not appear in the
solution. Hence the size of the system increases but its condition number remains the
same.
Fig. 4. Structure of a Supernode (a volume, its upstream vertical junction, and two crossflow junctions).
Several orderings of the supernodes were considered (e.g., channel-wise, plane-
wise, Cuthill-McKee, etc.) and ordering plane-wise was found to be the best for the
purpose of preconditioning. Each plane represents a two-dimensional grid of the
supernodes, and the planes themselves form a one-dimensional structure in which each
plane is linked to only two neighbors, resulting in a block tridiagonal matrix as shown in
Figure 5 and Equation 10. As in the case of the outer level preconditioner, several
options exist for using Equation 10 as a preconditioner for the inner level GMRES.
Fig. 5. Structure of Coefficient Matrix: Plane-Wise Ordering
A block diagonal preconditioner which neglects planar coupling was considered
first.
Table
3 shows the results of solving the second level GMRES during the first
outer iteration. Four different block sizes were examined with a convergence criterion
of 10 \Gamma6 . The two cases shown, A 1
C and A 2
C are taken from different time steps.
Table
Results of the Application of Jacobi Preconditioning on the Inner Level
Preconditioner
block size Condition # kRkF
kAkF
Iterations Condition # kRkF
kAkF
Iterations
The results for the other outer iterations were slightly different since the source
and hence the initial residual of the second level GMRES was different. However, the
results from other cases were consistent with the general trend shown in Table 3. As
expected, larger block sizes reduce the number of iterations. However, the cost of solving
each block directly would increase as N 3 which offsets the reduction in iterations.
Also, smaller blocks have the advantage of scalability for parallel computing.
A domain decomposition scheme incorporating the interactions between the diagonal
blocks was also examined. An approximate solution, -
z C , was computed by
solving the block Jacobi system which neglects the coupling between adjacent planes:
z 0
The prime notation is used to distinguish the z C and r C vectors here from those which
occur in the outer iteration, Equation 7. The inner GMRES preconditioner equation
is then solved:
where the preconditioner, MD , is:
As indicated in Table 4, this improved preconditioning does reduce the number
of iterations, but solving the preconditioner becomes more expensive. The number of
iterations for a block size of 81 is reduced by half but more than twice the number of
floating point operations are required to form MD .
Table
Results of the Application of Domain Decomposition Preconditioning on the Inner Level
kAkF Iterations
block size
Preconditioners other than Block Jacobi were investigated for the core problem.
Primarily because of the ill-conditioned nature of the coefficient matrix, popular
schemes such as SSOR and ILU were ineffective. For example, the incomplete LU
factorization scheme was tested on a linear system arising from the 3-D Core prob-
lem. Banded
were constructed and the Frobenius norm of the
ILU preconditioner was compared to the norm of the exact inverse:
As shown, the Frobenius norm of the approximate preconditioned system, M \Gamma1 A
is over three orders of magnitude larger than the Frobenius norm of A \Gamma1 A, which
suggests an ILU preconditioner would not be very effective.
3.4. Nested GMRES. The methods described in the previous sections were
implemented in a nested GMRES algorithm which consists of using an "inner" level
GMRES to solve the preconditioner equation for the "outer" level GMRES. Such a
strategy was suggested by Van der Vorst and demonstrated successfully for several
model problems [14]. The preconditioner for the inner level GMRES itself could
be solved using a third level GMRES, but in the applications here a direct solver
proved most efficient. A schematic of the nested GMRES algorithm is shown in
Figure
6. A more physical interpretation of the algorithm is to view the overall
problem as being decomposed into three simpler, more manageable problems which
are then recombined sequentially by GMRES algorithms. At the highest level we take
advantage of the naturally loose coupling between the core and the ex-core components
and solve separately the linear systems of the core and ex-core regions. These solutions
are then recombined using the highest or "outer" level GMRES. At the second or
"inner" level, GMRES is used to solve the 3-D core flow problem where focus is on
the coupling between the vertical flow channels in the core. And finally at the third
or lowest level, GMRES or a direct solver is used to restore coupling between nodes
in a plane.
CORE PROBLEM
SYSTEM PROBLEM
(GMRES I)
(GMRES II)
(GMRES III
or Direct)
A e z
A c z
Fig. 6. Nested GMRES Algorithm for RETRAN-03 Linear System Solution
3.5. Flexible General Minimum Residual (FGMRES). The solution of
the preconditioning equation in the GMRES method with another GMRES algorithm
poses a potential problem due to the finite precision of the inner level solution. In
the preconditioned GMRES algorithm the solution is first built in the preconditioned
subspace and then transformed into the solution space:
where the matrix is the set of orthonormal vectors. In the case of
an iterative solution for the preconditioner, the transformation is only approximate,
the extent of which is determined by the convergence criterion. In the case of ill-conditioned
matrices the approximations could be especially troublesome and a very
tight convergence is required to minimize error propagation.
The problem of the inexact transformation was alleviated in the work here by
using a slight variant of the GMRES algorithm in which an extra set of vectors is
stored and used to update the solution. This modification of the GMRES algorithm
was suggested by Saad and is called Flexible General Minimum Residual(FGMRES).
[9] This algorithm allows for the complete variation of the preconditioner from one
iteration to the next by storing the result of preconditioning each of the basis vectors,
while they are being used to further the Krylov subspace. Instead of using Equation
15, the final transformation to the solution subspace is then performed using .
where the matrix . The FGMRES algorithm is
given in Appendix A and was implemented in the nested GMRES method.
4. Serial Applications.
4.1. Static Problem. The nested GMRES algorithm was first applied to the
linear systems arising from the first few time steps of the rod withdrawal/insertion
transient. Parametrics were performed on the convergence criterion and number of
iterations for each of the levels. A maximum iteration limit was set on the inner
GMRES since in some cases the rate of convergence was very slow (sometimes termed
"critical slowing down").
The variation of the outer (highest) level residual during the iterations is shown
in
Figure
7 for different maximum number of inner iterations (miter). The results
indicate that the rate of decrease in the residual for
substantially greater than for miter = 20. However, as shown in Figure 8, the results
of subsequent timesteps indicate that the difference in the rate of decrease in the
residual between miter = gradually diminishes.
The residual achieved on the inner (second) level GMRES for the timestep corresponding
to Figure 8 are shown in Figure 9. For each of the maximum iteration
limits, the residual increases after the first few iterations. One possible explanation is
that the most desirable search directions for the outer level GMRES become harder
to resolve, leading to a degradation of the performance of the inner level. However,
the diminished quality of the preconditioning from the inner level GMRES does not
appear to have a deleterious effect on the convergence of the outer iteration.
Based on these convergence results and those from the analysis of other time
steps, a strategy was formulated for the implementation of the nested GMRES in the
transient analysis code RETRAN-03. The following section discusses the results of
applying GMRES to the transient problem.
Outer Iteration Number
Outer
Residual
Fig. 7. Reduction of Outer Residual: First Time Step
Outer Iteration Number
Outer
Residual
Fig. 8. Reduction of Outer Residual: Subsequent Time Step
Outer Iteration Number
Inner
Residual
Fig. 9. Performance of Inner GMRES: Subsequent Time Step
4.2. Transient Problem. In the time-dependent iterative solution of the hydrodynamics
equations, error from incomplete convergence at one time step can be
propagated into the coefficient matrix of subsequent time-steps. A preliminary assessment
of the effect of convergence criteria on the quality of the solution was performed
using a "null" transient in which the steady-state condition is continued for several
seconds with no disturbance to the system. Several outer level convergence criteria
were investigated and acceptable performance was achieved with a tolerance of 1.0E-
09 in the relative residual. At higher tolerances some minor deviation was observed
(e.g. less than 1% relative error) in performance parameters such as the core power
level. This provided initial guidance in setting tolerance and iteration limits for the
transient problems.
The PWR rod withdrawal/insertion transient described in section 2.2 was then
analyzed using RETRAN-03 and nested FGMRES. The model with cross flow in the
core was studied using the iterative solver for a transient time of 20 seconds which
required steps. The performance of the iterative algorithm was analyzed by
first varying the number of outer iterations using a direct solver for the inner (e.g.
problem, and then by varying both the number of outer and inner GMRES
iterations. The results are shown in Table 5 as CPU seconds per time step. The
maximum relative residual error during any of the outer iterations is shown in the
fourth column of the table.
For purposes of comparison, the RETRAN-03 solution from Table 1 which uses a
direct solution of the linear system is repeated as case A.1 in Table 5. In the first two
cases (A.2 and A.3) the inner two GMRES levels (see Figure 6) have been replaced
with a direct solver and GMRES is used only for the outer or highest level iteration.
The objective here was to isolate the impact of the number of outer iterations on the
Table
Serial Performance of RETRAN with GMRES: 9 Channel/ 12 Axial Case
Iterations Number of CPU secs/Time Step
Case Outer Inner max( krk
A.1 Direct Direct \Gamma\Gamma 318 14.89 15.65 -
A.3 12 Direct 4
A.4
A.6
quality of the solution. It can be seen from Table 5 that the maximum residual error
decreases as the number of outer iterations increases. However, in both cases A.2 and
A.3, no significant error was observed in the important physical parameters. When
the number of outer iterations was reduced to four, minor deviations began to occur
in the solution after 15 seconds of the transient. It should be noted that the execution
times for cases A.2 and A.3 are generally comparable to the direct solver.
The GMRES algorithm was then employed for both the outer and inner iterations,
keeping the direct solution for the innermost (third) level. In order to gain some insight
on the relation between convergence of the inner and outer GMRES algorithms, the
number of outer and inner iterations were varied as shown in Table 5 for cases A.4, A.5,
A.6 and A.7. The accuracy of these solutions was examined for important physical
parameters in the solution. The normalized core power and the pressurizer pressure
(volume 58 in Figure 1) are plotted versus time in Figure 10 for the Direct solution
(A.1) and for GMRES solutions (A.4 and A.7). Some minor deviation is observed
in the solution with iterations. The execution times are greater than the
direct (A.1) and GMRES outer with direct inner (A.3) solutions. However, as will be
discussed in the next section, the algorithm with inner level GMRES is more attractive
for larger problems and for parallel computing.
5. Parallel Applications. One of the attractive features of preconditioned Krylov
methods is their potential for parallel computing. The emphasis in this work was on
the use of a distributed memory parallel architecture and applications were performed
on the INTEL Paragon. This section describes the mapping of the nested GMRES
onto the Paragon and the execution time reductions achievable for the PWR sample
problem.
The most natural mapping of processors for the PWR model was one processor
to the ex-core and one to each of the 12 planes in the core model. The matrices were
striped row wise, implying that L and AE of Equation 4 were stored on the processor
that is assigned the ex-core. Ideally, AC and U should be partitioned among the 12
processors but the repartitioning of the data was found to be expensive and hence a
copy of both AC and U were maintained on each PE and only the operations were
distributed among the processors.
One of the primary concerns in distributing data and computation for parallel
processing is the communication overhead incurred in transferring data between pro-
cessors. The time necessary to perform a transfer consists of two parts, the time
necessary to initiate a transfer which is referred to as the latency and the time necessary
to actually transfer the data which depends on the amount of data and the
CORE
POWER
RETRAN SOLUTION
20 inner 20 outer
(psia) RETRAN SOLUTION
20 inner 20 outer
Fig. 10. Results of the RETRAN-03 with GMRES for PWR Transients (GMRES Inner): 40 sec
machine bandwidth. The following sections discuss the parallelization of the inner and
outer levels of the nested algorithm with special emphasis given to the communication
issues.
5.1. Parallelization of the Outer Iteration. One of the dominant operations
for the outer iteration is the matrix vector product. At each iteration the product is
formed of the matrix A in Eq.1 with the residual vector. This operation can be broken
into four parts. The first part involves the product AE v E and involves no communication
since all the data resides on the same processor. The next two parts involve
U v C and L v E . These involve transfer of parts of the vector between processor
0, which is assigned the ex-core, and the processors 1 and 12 which are assigned the
bottommost and the topmost planes in the core, respectively. The fourth part of the
matrix vector product involves AC v C and requires communication of processors 1
through 12 with at most two processors. In general the matrix vector product requires
communication between selected processors and has a specific pattern.
The vector inner products, on the other hand, require global communication which
means that every processor requires some information from all the other processors.
This is because it is necessary to sum the product of each element of the first vector
with the corresponding element from the second vector. The elements of the vectors
are first multiplied and each processor forms a partial sum with the elements that
reside in its domain. Because only the sum of the products is required and not the
individual products, the number of transfers required can be considerably reduced
by employing the well known method of tree-summing for which the communication
costs vary as dlog 2 (N )e (as opposed to N in the case of all to all communication).
The Gram-Schimdt orthogonalization process used in GMRES involves several inner
products during each iteration. However, the result of any of these inner products is
not dependent on any of the others. Therefore, the partial sums of all of the inner
products can be performed at one time, further reducing the number of transfers
required [2].
The least square solution in GMRES involves a reduced linear system and does
not involve sufficient operations to merit parallelization [11]. Communication related
to the least squares problem was avoided entirely by performing the least square
solution simultaneously on all the processors. Also, because the estimate of the residue
in GMRES is a consequence of the least squares solution, the termination criterion
could be evaluated without the need for additional communication.
5.2. Parallelization of the Inner Iteration. Several of the operations required
to parallelize the inner iteration were similar to the outer iteration. The
matrix AC is striped row wise and the row corresponding to each plane is stored
in the corresponding processor. Because the planes are linked only to their immediate
neighbors, the matrix vector product requires two sets of data transfer. The vector
inner products and the least square solution are treated in the same manner as in the
outer level. Since the preconditioner for the second level is a block Jacobi, the preconditioner
equation, could be implemented without any data transfers, irrespective
of whether it is solved using a direct solver or an iterative solver.
5.3. Results: PWR Model with 9 Channel/ 12 Axial Core. The parallelized
version of RETRAN-03 was executed on the Paragon with 13 processors. This
section presents the result of the rod withdrawal case for the PWR model with the
9 channel/12 axial volume core. The transient was analyzed for 20 seconds and the
nested FGMRES solution was performed using a tolerance of 10 \Gamma11 and an iteration
limit of 20 on both the inner and outer iterations.
Table
Serial Performance of the Second Level: 9 Channel/12 Axial
Time step Outer Inner Serial Execution time
Precond Other Total
Table
Parallel Performance of the Second Level: 9 Channel/12 Axial
Time step Outer Inner Parallel Execution time
Precond Comm Other Total Speedup
Tables
6 and 7 show the results of serial and parallel implementation, respectively,
of the inner or second level GMRES. The fourth column in both Tables headed "Pre-
cond" indicates the time required for the preconditioning of the second level (GMRES
III or Direct Solve in Figure 6) which was performed in this application using a direct
solver by means of LU factorization. The first timestep involves additional initialization
costs and consistent comparisons between the parallel and serial versions of
RETRAN were not possible because of differences in code structure for initialization.
The high speedup (6.22) in the first iteration of the second timestep is due to
LU factorization which is performed concurrently in the parallel implementation. It
should be noted that the bulk of the time is spent on the third level and since this part
of the code is naturally parallelizable, one would expect high efficiencies. However,
since the problem size is relatively small, the communicationoverhead in implementing
the remainder of GMRES significantly reduces the efficiency. This would not be the
case for larger problems as will be seen in the next section.
Tables
8 and 9 show the results of serial and parallel implementation of the outer
level GMRES. Again the bulk of the time is spent in the preconditioning. Similar
execution times were obtained for other timesteps. Speedup of slightly greater than
two were obtained for most of the outer iterations.
The execution time for the first four time steps is summarized in Table 10. Again,
a speedup on the order of two was achieved.
Serial Performance of the Outer Level: 9 Channel/12 Axial
Timestep Outer Serial Execution time
Core Ex-core Other Total
Table
Parallel Performance of the Outer Level: 9 Channel/12 Axial
Timestep Outer Parallel Execution time
Core Ex-core Comm Other Total Speedup
5.4. Results: PWR Model with 25 Channel/ 12 Axial Core. In order
to examine the scalability of these results, the same PWR model was used but with
instead of 9 channels in the core. The linear system is nearly three times the size
of that arising form the 9 Channel/ 12 Axial case. As anticipated, and as shown in
Table
11 the time required to solve this linear system using the direct solver was an
order of magnitude larger than that for the 9 Channel/ 12 Axial case. (Compare with
case A.1 in Table 5.)
The problem was first executed using RETRAN with nested FGMRES solver on
a single processor on the Paragon. Because the problem was too large to be executed
on an ordinary node which has 32MB of RAM, a special "fat" node of the Paragon
was used which has additional memory (128 MB) to support larger applications. As
shown in Table 11, the results on a single processor were encouraging since more than
a factor of 2 improvement was achieved with GMRES compared to the direct solver.
The problem was then executed using the parallel version of the nested GMRES
solver. The domain decomposition method was exactly the same as the 9 Channel /12
Axial case with 12 processors assigned to the core and 1 to the ex-core. As expected,
the parallel efficiency was much better and the speedups were about a factor of 4 with
respect to the serial version of GMRES. Table 11 shows the comparison of results for
the first two time steps.
As indicated in the Table 11, the parallel execution of nested GMRES provides
over an order of magnitude reduction in the execution time compared to the serial
execution of the direct solver. Furthermore, since the memory requirement per node
was less than 32MB, the parallel version could be executed using standard sized
nodes of the Paragon. Thus parallelization can be the key to not just execution time
reductions but also alleviating the memory constraints for larger problems.
6. Summary and Conclusions. A nested FGMRES method was developed to
solve the linear systems resulting from three-dimensional hydrodynamics equations.
Applications were performed on practical Pressurized Water Reactor problems with
three-dimensional core models. Serial and parallel applications were performed for
Overall Performance of the Nested FGMRES: 9 Channel/12 Axial
Timestep Number of Execution time Speedup
Outers Serial Parallel
Table
Execution time for the 25 Channel/ 12 Axial case
Case Timestep Execution time per Iteration
Outer Total per
Inner other Timestep
Core Ex-core
Precond other
Direct Solver 1 - 343.310
both a 9 channel and a 25 channel version of the reactor core.
When appropriately tight convergence was enforced at each GMRES level, the
semi-iterative solver performed satisfactorily for the duration of a typical transient
problem. The serial execution time for the 9 channel model was comparable to the
direct solver and the parallel speedup on the INTEL Paragon was a factor of 2-3
when using 13 processors. For the 25 channel model, the serial performance of nested
GMRES was about a factor of 3 better than the direct solver and the parallel speedups
were in the vicinity of 4, again using 13 processors. Thus, for the 25 channel problem
over an order of magnitude reduction in the execution time was achieved.
The results here indicate that the use of semi-iterative solvers and parallel computing
can considerably reduce the computational load for practical PWR transient
calculations. Furthermore, the results here indicate that distributed memory parallel
computing can help alleviate constraints on the size of the problem that can be exe-
cuted. Finally, the methods developed here are scalable and suggest that it is within
reach to model a PWR core where all 193 flow channels are explicitly represented.
7.
Acknowledgements
. The authors appreciate the work of Mr. Jen-Ying Wu
in generating the transient results reported in this paper.
--R
Marching Algorithms for Elliptic Boundary Value Problems.
Reducing the Effect of Global Communication in GMRES(m) and CG on Parallel Distributed Memory Computers
Direct methods for sparse matrices
The Effect of Ordering on Preconditioned GMRES Algorithm
Numerical Methods for Engineers and Scientists
Supercomputing Applied to Nuclear Reactors
An Assessment of Advanced Numerical Methods for Two-Phase Fluid Flow
A Flexible InnerOuter Preconditioned GMRES Algorithm
GMRES: A Generalized Minimal Residual Algorithm for Solving Nonsymmetric Linear Systems
A Comparison of Preconditioned Nonsymmetric Krylov Methods on a Large-Scale MIMD Machine
Performance of Conjugate Gradient-like Algorithms in Transient Two-Phase Subchannel Analysis
GMRESR: A Family of Nested GMRES Methods
Some Computational Challenges of Developing Efficient Parallel Algorithms for Data-Dependent Computation in Thermal-Hydraulics Supercomputer Applications
--TR | fluid dynamics;Preconditioned GMRES;parallel computing;nuclear reactor simulation |
289829 | Stochastic Integration Rules for Infinite Regions. | Stochastic integration rules are derived for infinite integration intervals, generalizing rules developed by Siegel and O'Brien [ SIAM J. Sci. Statist. Comput., 6 (1985), pp. 169--181] for finite intervals. Then random orthogonal transformations of rules for integrals over the surface of the unit m-sphere are used to produce stochastic rules for these integrals. The two types of rules are combined to produce stochastic rules for multidimensional integrals over infinite regions with Normal or Student-t weights. Example results are presented to illustrate the effectiveness of the new rules. | Introduction
A common problem in applied science and statistics is to numerically compute integrals in the form
with . For statistics applications the function p(') may be an unnormalized unimodal
posterior density function and g(') is some function for which an approximate expected value is needed.
We are interested in problems where p(') is approximately multivariate normal (' - Nm (-; \Sigma) or
multivariate Student-t (' - t m (-; \Sigma)). In these cases, a standardizing transformation in the form
can be determined (possibly using numerical optimization), where - is the point where
log(p(')) is maximized, \Sigma is the inverse of the negative of the Hessian matrix for log(p(')) at -, and C
is the lower triangular Cholesky factor for \Sigma The transformed integrals then take the form
w(jjxjj)f(x)dx;
or
Student-t). If the approximation to p(') is good, then f(x)
Accepted for publication in SIAM Journal on Scientific Computing.
y Partially supported by NSF grant DMS-9211640.
can be accurately approximated by a low degree polynomial in x, and this motivates our construction
of stochastic multidimensional polynomial integrating rules for integrals I(f).
This type of integration problem has traditionally been handled using Monte-Carlo algorithms (see
the book by Davis and Rabinowitz, 1984, and the more recent paper by Evans and Swartz, 1992). A
simple Monte-Carlo algorithm for estimating I(f) might use
with the points x i randomly chosen with probability density proportional to w(jjxjj). This Monte-Carlo
algorithm, which is an importance sampling algorithm for the original problem of estimating E(g), is
often effective, but in cases where the resulting f(x) is not approximately constant, the algorithm
can have low accuracy and slow convergence. However, an important feature of simple Monte-Carlo
algorithms is the availability of practical and robust error estimates. If we let oe E denote the standard
error for the sample, then
and
ff
R
\Gammaff
dt.
The new methods that we will describe can be considered a refinement of this Monte-Carlo with importance
sampling algorithm. Simple Monte-Carlo with importance sampling results are exact whenever
the importance modified integrand is constant, but our methods will be exact whenever the importance
modified integrand is a low degree polynomial. Our methods will also provide a robust error estimate
from the sample standard error. The new one-dimensional integration rules that we develop are generalizations
of the rules derived for the interval [-1,1], with weight by Siegel and O'Brien
(1985). Their work extends earlier work by Hammersley and Handscomb (1964), who also considered
the construction of stochastic integration rules for finite intervals. Our work is also partly based on
work by Haber (1969), who introduced the word "stochastic" for generalized Monte-Carlo rules.
Our development of stochastic multidimensional integration rules requires an additional change of
variables to a radial-spherical coordinate system. We let
Z
z t z=1
Z 1w(r)r
z t z=1
w(r)jrj
The numerical approximations to I(f) that we propose to use will be products of stochastic integration
rules for the radial interval (\Gamma1; 1) with weight w(r)jrj m\Gamma1 , and stochastic rules ( of the same polynomial
degree ) for the surface of the unit m-sphere. Averages of properly chosen samples of these rules
will provide unbiased estimates for I(f), and standard errors for the samples can be used to provide
robust error estimates for the I(f) estimates. Our development was partially motivated by the work
of De'ak (1990), who used a transformation to a spherical coordinate system combined with random
orthogonal transformations to develop a method for computing multivariate normal probabilities, but
he did not consider using higher degree rules.
The basic radial integration rules that we use are combinations of the symmetric sums
h(\Gammaae))=2. A radial rule R(h) takes the form
Given points fae i g, the weights fw i g will be determined so that R has polynomial degree 2n + 1. The
points fae i g will be randomly chosen so that R is an unbiased estimate for T
For fixed points fae i g, the selection of the weights is a standard integration rule construction problem.
If we want a degree d rule, it is sufficient that
When k is an odd integer, the equation is automatically satisfied because both R and the integration
operator are symmetric. Define P (h; r) by
Y
Now P (h; r) is a even degree Lagrange interpolating polynomial for h, so it follows from standard
interpolation theory, that P (h;
r)), and the weights fw i g that we need to make R degree are just integrals of the even
degree Lagrange basis functions. We have the following theorem:
Theorem 1 If the points fae i g are distinct non-negative real numbers and the weights fw i g are defined
by
Y
is a degree 2n
We now describe how to choose the points fae i g so that R is an unbiased estimate for T (h). In order
to accomplish this, we need to find a joint probability density function p(ae
Z 1Z 1:::
for any integrable h. We will explicitly show how to do this when and conjecture the
general form for p for n - 3. We will let T use the fact thatZ 1r
The case randomly with density
2ae . Then we have
Z 1C(ae)2ae
Z 1ae
For degree three rule for T (h) is
If we choose ae - 0 randomly with density 2ae m+1 w(ae)
Z 1R 3 (ae) 2ae m+1 w(ae)
dae
Z 1ae
Z 1ae
Z 1ae m+1 w(ae)
Z 1ae
A degree five rule for T (h) is
We will choose ae - 0 and randomly with joint density
where K is determined by the condition
1. We now need to show that EfR 5
(h). There are three terms in R 5 to consider, so we start with the first one, and we find
Z 1Z 1(aeffi)
0:
For the second term we find
Z 1Z 1ae
Z 1Z 1ae
Z 1ae
Z 1ae
Now
Z 1Z 1ae
Z 1Z 1ae
so
Because R 5 is symmetric in ae and ffi , the last term in R 5 also has expected value T (h)=2, so we have
shown that EfR 5 our results in this section with Proposition 1.
Proposition 1 If and the points fae i g, the rules R 2n+1 given by
(1) with weights given by (2), are chosen with probability density proportional to
Y
ae m+1
then R is an unbiased degree 2n
We have proved this for 2. The form for the probability density for n ? 2 is a
conjectured natural generalization of the Siegel and O'Brien Theorem 5.1 (1985). Because of practical
problems associated with generating random ae's from this density when n ? 2 we focus on the
cases.
3 Stochastic Spherical Integration Rules
The spherical surface integrals will be approximated by averages of random rotations of appropriately
chosen rules for the spherical surface. Let
~
with z t
be an integration rule that approximates an integral of a function s(z) over the
surface Um of the unit m-sphere defined by z t z = 1. If Q is an m \Theta m orthogonal matrix then
~
is also an integration rule for s over Um , because Furthermore, if S has polynomial degree
d, then so does SQ , because s(Qz) has the same degree as s(z). If Q is chosen uniformly (see Stewart,
1980) and S has polynomial degree d, then SQ is an unbiased random degree d rule for Um .
There are many choices that could be used for S. We consider rules given in the book by Stroud
(1971, pages 294-296) and the review paper by Mysovskikh (1980, pages 236-237). The rules that we
will combine with radial rules have degree 1, 3 or 5, and we now list them. A simple degree 1 rule is
is the surface content of Um , and z is any point on Um . A simple degree
3 rule is
with the "1" in the j th position. This rule uses 2m values of s(z). A
different degree 3 rule (Mysovskikh, 1980) is
is the j th vertex of a regular m-simplex with vertices on Um . The degree 3 rule -
S 3 is slightly
more expensive to use than S 3 , but it leads to an efficient general degree 5 rule (Mysovskikh, 1980)
(s(\Gammay
The points y j are determined by taking the midpoints of edges of the m-simplex with vertices
projecting those midpoints onto the surface of Um . -
values of s(z). A
degree five rule which extends S 3 (Stroud, 1971, page 294) is
where u j is one of the points in the fully symmetric set that is determined by all possible
permutations and sign changes of the coordinates of the point (r;
2.
4 Stochastic Spherical-Radial Integration Rules
In this section we combine stochastic radial rules with stochastic spherical rules to produce random
rules for I(f). There are many ways that this could be done. A natural approach is to form a stochastic
product rule SR Q;ae (f) from a spherical surface rule S and a radial rule R. Such a rule takes the form
~
If S and R both have degree d, then SR Q;ae (f) will also have degree d (Stroud, 1971, Theorem 2.3-1).
If Q is a uniformly random orthogonal matrix and ae is random chosen with the correct density for R,
then SR Q;ae (f) will be an unbiased estimate for I(f). We have the following theorem:
Theorem 2 If ae is random with density given by Proposition 1, S has degree 2n+1 and Q is an m \Theta m
uniform random orthogonal matrix, then
w(jjxjj)f(x)dx
whenever f is a degree 2n
w(jjxjj)f(x)dx
for any integrable f .
We give three examples of SR rules. A degree one rule constructed from S 1 and R 1 is
ae
Here Q is unnecessary, because uniform random vectors z from Um give unbiased rules. A degree three
rule constructed from S 3 and R 3 is
A degree five rule constructed from -
S 5 and R 5 is
(w
(w
with ~
and ~
ae (f), SR 3
Q;ae and -
respectively. A
sample of one of these rules can be generated, and the sample average used to estimate I(f). The
standard error for the sample can be used to provide an error estimate. For comparison purposes with
the examples in Section 6, we will use SR 0 (f) to denote the one point rule f(z), with the components
of z chosen from Normal(0,1). SR 0 (f) is just the simple Monte-Carlo rule for I(f) with multivariate
normal weight.
5 Implementation Details and Algorithms
In this section we focus on integrals of the form
w(jjxjj)f(x)dx;
. For integrals of this type, we have determined explicit formulas for
the radial rule weights, along with explicit methods for generating the random radial rule points. We
will also discuss the multivariate Student-t weight
We first consider the rule SR 1
ae . In the case
so 2. Therefore
ae
The probability density for ae is proportional to ae degrees of freedom.
It is a standard statistical procedure to generate a random ae with this density (Monahan, 1987). A
standard procedure for generating uniformly random vectors z from Um , consists of first generating
x with components x i random from Normal(0,1) and setting z = x=jjxjj. However, this combined
procedure for generating random vectors aez must be equivalent to just generating random z from
w(jjzjj). Therefore, all we need to do is generate the components z i random
from Normal(0,1), and this is a simpler procedure. We propose the following algorithm for random
degree one rules:
Degree One Spherical-Radial Rule Integration Algorithm
1. Input ffl, m, f and Nmax .
2.
3. Repeat
(a)
(b) Generate a random x with x i - Normal(0,1).
(c) I +D and
4. Output I - I(f), oe
V and N .
The input ffl is an error tolerance, the input Nmax provides a limit on the time for the algorithm, and the
output oe E is the standard error for the integral estimate I. The algorithm computes I and V using a
modified version of a stable one-pass algorithm (Chan and Lewis, 1979). The unscaled sample standard
error oe E will usually be an error bound with approximately 68% certainty. Users of this algorithm
who desire a higher degree of confidence can scale oe E appropriately. For example, a scale factor of 2
increases the certainty level to approximately 95%.
The error estimates obtained by scaling oe E with this algorithm (and the other algorithms in this
section) should be used with caution for low N values. These error estimates are based on the use of
the Central Limit Theorem to infer that the sample averages SR are approximately Normal. A careful
implementation of the algorithms in this section could include an Nmin parameter and/or use a larger
scale factor for oe E for small N values. For large N , a scaled oe E should provide a robust, statistically
sound error estimate, as long as the multivariate normal model adequately represents the tails in the
posterior density. Posterior densities with thicker tails are often more efficiently and reliably handled
using a multivariate Student-t model. One technique for monitoring this is discussed by Monahan and
Genz (1996).
If we consider the Student-t weight, we can see that the density for ae is proportional to r
a change of variable shows this to be proportional to a Beta( m
(see Devroye, 1986, for generating methods), so the random ae's and the uniformly random vectors z
from Um , needed for SR 1 can easily be generated. We can also show jU m jT so the formula for
is the same as the formula for the multivariate Normal case. By making appropriate changes
to line 3(b) and 3(c) of the previous algorithm, a modified algorithm could be produced.
Next, we consider the rule SR 3
Q;ae . Integration by parts with
and therefore
The probability density for ae is proportional to ae m+1 e \Gammaae 2 =2 , a Chi density with m+2 degrees of freedom.
We propose the following algorithm for stochastic degree three rules:
Degree Three Spherical-Radial Rule Integration Algorithm
1. Input ffl, m, f and Nmax .
2.
3. Repeat
(a)
(b) Generate a uniformly random orthogonal m \Theta m matrix Q.
(c) Generate a random ae - Chi(m
(d) For
I +D and
4. Output I - I(f), oe
V and N .
The random orthogonal matrices Q can be generated using a product of appropriately chosen random
reflections (see Stewart, 1980). Other methods are discussed by Devroye (1986, p. 607).
If we consider the Student-t weight case, then integration by parts shows that T
therefore require - ? 2. In this case, SR 3 becomes
Further analysis shows that r m+1
proportional to a Beta( m+2
density, so the random ae's for these SR 3 can easily be generated, and by making appropriate changes
to lines 3(c) and 3(e) of the previous algorithm, a modified algorithm could be produced.
Finally, we consider the rule -
Q;ae . For the weight
and a little algebra shows
In order to develop an algorithm for -
Q;ae , we need a set of regular m-simplex unit vertices g.
We use the set given in Stroud (1971, page 345, correcting a minor misprint), where v
The joint probability density for (ae; ffi ) is proportional to (aeffi) m+1 e \Gamma(ae 2
is not a standard probability density, but there is a transformation to standard densities. Consider the
integral
and make the change of variables
R -=2
Finally, let
sin
sin
Z 1p
The function q m+1 is proportional to a standard Beta(m density. The first
inner integral has the resulting ae ! ffi and the second has ae ? ffi . Because these cases are both
equally likely and -
SR 5 is symmetric in ae and ffi , there is no loss of generality in always using ae ! ffi .
Therefore, we choose r from a Chi(2m and q from a Beta(m density, and then
will be distributed with joint probability density proportional
to (aeffi) m+1 e \Gamma(ae 2 We note here that the same changes of variables could also be
used to provide a practical method for generating the corresponding ae and ffi for the Siegel and O'Brien
(1985) finite interval rules. This question was not addressed in their paper.
We propose the following algorithm for stochastic degree five rules:
Degree Five Spherical-Radial Rule Integration Algorithm
1. Input ffl, m, f and Nmax .
2. compute the m-simplex vertices fv j g.
3. Repeat
(a)
(b) Generate a uniformly random orthogonal m \Theta m matrix Q and set f~v j g.
(c) Generate a random r - Chi(2m
and set
(d) For
ffl For
4. Output I - I(f), oe
V and N .
If we consider the Student-t weight, then it can be shown that T
(- \Gamma2)(- \Gamma4) T 0 , and we must have
4. In this case, we could also produce a formula for -
SR 5 . However, we have not found any easy
method for generating the random ae's and ffi 's needed for R 5 , and so we do not consider this further.
Anyway, for large -, \Gamma( -+m
so the rules that we have already
developed for the multivariate Normal weight should be effective.
A possibly significant overhead cost for the SR 3 and -
SR 5 rules is the generation of the random
orthogonal matrices. Using the algorithm given by Stewart (1980), it can be shown that the cost for
generating one such matrix Q is approximately 4m 3 =3 floating point operations (flops) plus the cost
of generating m 2 =2 Normal(0,1) random numbers. For SR 3 rules the columns of Q are used for the
evaluation points for 2m integrand values, so the overhead cost per integrand value is 2m 2 =3 flops
plus the cost of generating m=4 Normal(0,1) random numbers. Once an integrand evaluation point is
available, we expect the cost for the evaluation of the integrand to be at least O(m), because there
are m components for the input variable for the integrand. However, with application problems in
statistics, the posterior density is often a complicated expression made up of a combination of standard
elementary functions evaluated using the input variable components combined with the problem data
(see the second example in the next section). Therefore, if the O(m) integrand evaluation cost is
measured in flops, we expect the constant in O(m) to be very large, so that the 2m 2 =3 flops for the
generation of the evaluation point for that integrand evaluation should not be significant unless m
is very large. For -
SR 5 rules the Q overhead cost per evaluation point drops to approximately 2m=3
flops (plus the cost of m=4 Normal variates), and this is not significant compared to the integrand
evaluation cost for typical statistics integration problems. We also note here that we need m=2 and
respectively, for the rules SR 1 and SR 0 , per integrand evaluation, so the Normal
variate overhead is higher for the two lowest degree rules. Overall, except for very simple integrands or
large m values, we do not expect the overhead costs for the four rules to be significant compared to the
integrand evaluation cost.
6 Examples
We begin with a simple example, where
The following table of results we obtained using the SR rules:
Table
Test Results from SR Rules
Values I oe E I oe E I oe E I oe E
1000
These results are as expected, with much smaller standard errors for the higher degree rules.
For our second example we use a seven dimensional proportional hazards model problem discussed
by Dellaportas and Wright (1991, 1992). The posterior density is given by
aet
. After we first transform ae using x log(ae), we model p(') with a
multivariate normal approximation. So we use
after computing the mode - and C for log(ae) log(p). We added a scaling constant
e 207:19 , to prevent problems with underflow. In the following table we show results from the use of SR
rules to approximate I(f 2 ), and expected vales for each of the integration variables. The constant S in
the table is a normalizing constant. For each of the respective SR rules we used the computed value of
Table
2: Test Results from SR Rules with 120,000 f 2 Values
Integrand I oe E I oe E I oe E I oe E
For this example, the -
SR 3 and -
SR 5 rule results have standard errors that are smaller than the SR 0
and SR 1 rule standard errors by factors that are on average about one half. Because the decrease in
standard errors is inversely proportional to the square root of the number of samples, approximately
four times as much integrand evaluation work would be needed for this problem when using the SR 0
and SR 1 rules to obtain errors comparable to the errors for the -
SR 3 and -
SR 5 rules. These results are
not as good as those for the previous problem, but the higher degree SR rules are still approximately
four times more efficient than the lower degree rules. The degree five rule was not better than the degree
three rule for this problem. After the standardizing transformation, the problem is apparently close
enough to multivariate normal, so that a rule with degree higher than three does not produce better
results. We did not find any significant difference in running times needed by the four algorithms for
the results in Table 2, and this supports our analysis of the relative importance of overhead costs for
the different rules.
The two examples in this section are meant to illustrate the use of the algorithms given in this
paper. Much more extensive testing is needed in order to carefully compare these algorithms with other
methods available for numerical integration problems in applied statistics. For some of the testing work
that has been recently done with these methods we refer the interested reader to the paper by Monahan
and Genz (1996). Further testing work is still in progress.
7 Concluding Remarks
We have shown how to derive low degree stochastic integration rules for radial integrals with normal
and Student-t weight functions. We have also shown how these new rules can be combined with
stochastic rules for the surface of the sphere, to provide stochastic rules for infinite multivariate regions
with multivariate normal and Student-t weight functions. Results from the examples suggest that
averages of samples of these rules can provide more accurate integral estimates than simpler Monte-Carlo
importance sampling methods. The standard errors from the samples provide robust error estimates
for the new rules.
--R
Computing Standard Deviations: Accuracy
Methods of Numerical Integration
Positive Imbedded Integration in Bayesian Analysis
A Numerical Integration Strategy in Bayesian Analysis
Numerical Quadrature and Cubature
Some Integration Strategies for Problems in Statistical Inference.
Computing Science and Statistics 24
Monte Carlo Methods
Stochastic Quadrature Formulas
Continuous Univariate Distributions-I
An Algorithm for Generating Chi Random Variables
A Comparison of Omnibus Methods for Bayesian Computation
The Approximation of Multiple Integrals by using Interpolatory Cubature Formulae
Unbiased Monte Carlo Integration Methods with Exactness for Low Order Polynomials
The Efficient Generation of Random Orthogonal Matrices with An Application to Condition Estimation
The Approximate Calculation of Multiple Integrals
--TR | multiple integrals;numerical integration;monte carlo;statistical computation |
289834 | Multigrid Method for Ill-Conditioned Symmetric Toeplitz Systems. | In this paper, we consider solutions of Toeplitz systems where the Toeplitz matrices An are generated by nonnegative functions with zeros. Since the matrices An are ill conditioned, the convergence factor of classical iterative methods, such as the damped Jacobi method, will approach one as the size n of the matrices becomes large. Here we propose to solve the systems by the multigrid method. The cost per iteration for the method is of O(n log n) operations. For a class of Toeplitz matrices which includes weakly diagonally dominant Toeplitz matrices, we show that the convergence factor of the two-grid method is uniformly bounded below one independent of n, and the full multigrid method has convergence factor depending only on the number of levels. Numerical results are given to illustrate the rate of convergence. | Introduction
In this paper we discuss the solutions of ill-conditioned symmetric Toeplitz systems A n by the
multigrid method. The n-by-n matrices A n are Toeplitz matrices with generating functions f that
are nonnegative even functions. More precisely, the matrices A n are constant along their diagonals
with their diagonal entries given by the Fourier coefficients of f :
\Gamma-
Since f are even functions, we have [A n are symmetric.
In [10, pp.64-65], it is shown that the eigenvalues - j lie in the range of f('), i.e.
min
Moreover, we also have
lim
f(') and lim
Department of Mathematics, Chinese University of Hong Kong, Shatin, Hong Kong. Research supported by
HKRGC grants no. CUHK 178/93E and CUHK 316/94E.
y Institute of Applied Mathematics, Chinese Academy of Science, Beijing, People's Republic of China.
z Department of Mathematics, Chinese University of Hong Kong, Shatin, Hong Kong.
Consequently, if f(') is nonnegative and vanishes at some points ' 0 2 [\Gamma-], then the condition
number is unbounded as n tends to infinity, i.e. A n is ill-conditioned. In fact, if the
zeros of f are of order -, then instance [4].
Superfast direct methods for Toeplitz matrices have been developed around 1980. They can
solve n-by-n Toeplitz systems in O(n log 2 n) operations, see for instance [1]. However, their stability
properties for ill-conditioned Toeplitz matrices are still unclear. Iterative methods based on
the preconditioned conjugate gradient method were proposed in 1985, see [11, 13]. With circulant
matrices as preconditioners, the methods require O(n log n) operations per iteration. For Toeplitz
systems generated by positive functions, these methods have shown to converge superlinearly. How-
ever, circulant preconditioners in general cannot handle Toeplitz matrices generated by functions
with zeros, see the numerical results in x6. The band-Toeplitz preconditioners proposed in [4, 5]
can handle functions with zeros, but are restricted to the cases where the order of the zeros are
even numbers. Thus they are not applicable for functions like
Classical iterative methods such as the Jacobi or Gauss-Seidel methods are also not applicable
when the generating functions have zeros. Since lim n!1 the convergence factor is
expected to approach 1 for large n. In [8, 9], Fiorentino and Serra proposed to use multigrid method
coupled with Richardson method as smoother for solving Toeplitz systems. Their numerical results
show that the multigrid method gives very good convergence rate for Toeplitz systems generated by
nonnegative functions. The cost per iteration of the multigrid method is of O(n log n) operations
which is the same as the preconditioned conjugate gradient methods.
However, in [8, 9], the convergence of the two-grid method (TGM) on first level is only proved for
the so-called band - matrices. These are band matrices that can be diagonalized by sine transform
matrices. A typical example is the 1-dimensional discrete Laplacian matrix diag[\Gamma1; 2; \Gamma1]. In
general, - matrices are not Toeplitz matrices and vice versa. The proof of convergence of the TGM
for Toeplitz matrices was not given there.
From the computational point of view, the matrix on the coarser grid in TGM is still too
expensive to invert. One therefore usually does not use TGM alone but instead applies the idea of
TGM recursively on the coarser grid to get down to the coarsest grid. The resulting method is the
full multigrid method (MGM). We remark that the convergence of MGM for Toeplitz matrices or
for - matrices was not discussed in [8, 9].
In this paper, we consider the use of MGM for solving ill-conditioned Toeplitz systems. Our
interpolation operator is constructed according to the position of the first non-zero entry on the
first row of the given Toeplitz matrix and is different from the one proposed by Fiorentino and
Serra [8, 9]. We show that for a class of ill-conditioned Toeplitz matrices which includes weakly
diagonally dominant Toeplitz matrices, the convergence factor of TGM with our interpolation
operator is uniformly bounded below 1 independent of n. We also prove that for this class of
Toeplitz matrices, the convergence factor of MGM with V -cycles will be level-dependent. One
standard way of removing the level-dependence is to use "better" cycles such as the F - or the
W -cycles, see [12]. We remark however that our numerical results show that MGM with V -cycles
already gives level-independent convergence. Since the cost per iteration is of O(n log n) operations,
the total cost of solving the system is therefore of O(n log n) operations.
We note that the class of functions that we can handle includes functions with zeros of order 2 or
less and also functions such as which cannot be handled by band-Toeplitz preconditioners
proposed in [4, 5]. We will also give examples of functions that can be handled by multigrid method
with our interpolation operator but not with the interpolation operator proposed in [8].
The paper is organized as follows. In x2, we introduce the two-grid method and the full multigrid
method. In x3, we analyze the convergence rate of two-grid method. We first establish in x3.1 the
convergence of two-grid method on the first level for the class of weakly diagonally dominant
Toeplitz matrices. The interpolation operator for these matrices can easily be identified. Then in
x3.2, we consider a larger class of Toeplitz matrices which are not necessarily diagonally dominant.
The convergence of full multigrid method is studied in x4 by establishing the convergence of the two-
grid method on the coarser levels. In x5, we give the computational cost of our method. Numerical
results are given in x6 to illustrate the effectiveness of our method. Finally, concluding remarks are
given in x7.
Given a Toeplitz system A n we define a sequence of sub-systems on different
levels:
Here q is the total number of levels with being the finest level. Thus for
and are just the size of the matrix A m . We denote the interpolation and
restriction operators by I m
respectively. We will
choose
I m+1
The coarse grid operators are defined by the Galerkin algorithm, i.e.
Thus, if A m is symmetric and positive definite, so is A m+1 . The smoothing operator is denoted
by Typical smoothing operators are the Jacobi, Gauss-Seidel and Richardson
iterations, see for instance [3]. Once the above components are fixed, a multigrid cycling procedure
can be set up. Here we concentrate on the V -cycle scheme which is given as follows, see [3, p.48].
procedure
then u q :=
begin do i := 1 to - 1
d m+1 := I m+1
e m+1
do
Here I nm is the nm -by-n m identity matrix. If we set 2, the resulting multigrid method is the
two-grid method (TGM).
3 Convergence of TGM for Toeplitz Matrices
In this section, we discuss the convergence of TGM for Toeplitz matrices. We first give an estimate
of the convergence factor for Toeplitz matrices that are weakly diagonally dominant. Then we
extend the results to a larger class of Toeplitz matrices.
Let us begin by introducing the following notations. We say A
positive (respectively semi-positive) definite matrix. In particular, A ? 0 means that A
is positive definite. The spectral radius of A is denoted by ae(A). For A ? 0, we define the following
inner products which are useful in the convergence analysis of multigrid methods, see [12, p.77-78]:
Here h\Delta; \Deltai is the Euclidean inner product. Their respective norms are denoted by k 2.
Throughout this section, we denote the fine and coarse grid levels of the TGM as the h- and
H-levels respectively. For smoothing operator, we consider the damped-Jacobi iteration, which is
given by
see [3, p.10]. The following theorem shows that kG h k 1 - 1 if ! is properly chosen.
Theorem 1 ([12, p.84]) Suppose A ? 0. Let ff be such thatff
Then
satisfies
Inequality (6) is called the smoothing condition. We see from the theorem that the damped-Jacobi
method (4) with
For a Toeplitz matrix A generated by an even function f , we see from (1) that ae(A) -
Moreover, diag(A) is just a constant multiple of the identity matrix. Thus it
is easy to find an ff that satisfies (5). In applications where f is not known a priori, we can estimate
ae(A) by the Frobenius norm or matrix 1-norm of A. The estimate can be computed in O(n)
operations.
For TGM, the correction operator is given by
with the convergence factor given by k(G h are the
numbers of pre- and post-smoothing steps in the MGM algorithm in x2. For simplicity, we will
consider only The other cases can be established similarly as we have kG h k 1 - 1.
Thus the convergence factor of our TGM is given by kG h T h k 1 . The following theorem gives a general
estimate on this quantity.
Theorem 2 ([12, p.89]) Let chosen such that G h satisfies the
smoothing condition (6), i.e.
Suppose that the interpolation operator I h
H has full rank and that there exists a fi ? 0 such that
min
and the convergence factor of the h-H two-level TGM satisfies
r
Inequality (7) is called the correcting condition. From Theorems 1 and 2, we see that if ff is
chosen according to (5) and that the damped-Jacobi method is used as the smoother, then we only
have to establish (7) in order to get the convergence results. We start with the following class of
matrices.
3.1 Weakly Diagonally Dominant Toeplitz Matrices
In the following, we write the n-by-n Toeplitz matrix A generated by f as T_n[f] and denote its j-th diagonal by a_j, i.e. [A]_{i,i+j} = a_j, the j-th Fourier coefficient of f. Let ID be the class of Toeplitz matrices generated by functions f that are even, nonnegative and satisfy the weak diagonal dominance condition (9).
Given a matrix A ∈ ID, let l be the first non-zero index such that a_l ≠ 0. If a_l < 0, we define the interpolation operator I_H^h as the block matrix given in (10), built from l-by-l identity blocks I_l. If a_l > 0, we define the interpolation operator as the block matrix given in (11), which has the same block structure with the signs of the off-center blocks reversed. Here I_l is the l-by-l identity matrix.
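For illustration only, the sketch below constructs a block interpolation operator of this kind under the assumption that (10) is the block analogue of standard linear interpolation, with weights 1/2, 1, 1/2 per l-by-l block, and that (11) flips the sign of the off-center blocks; the exact scaling in (10)-(11) should be taken from the original definitions, and all names here are ours.

```python
import numpy as np

def block_interpolation(n_h, l, sign):
    """Block interpolation operator of size n_h x n_H, assuming n_h = (2k+1)l.

    Each coarse block column is mapped to three consecutive fine blocks with
    weights (sign*1/2, 1, sign*1/2); sign = +1 for the a_l < 0 case
    (block linear interpolation), sign = -1 for the a_l > 0 case.
    This stencil is an assumption made for illustration.
    """
    assert n_h % l == 0 and (n_h // l) % 2 == 1
    k = (n_h // l - 1) // 2
    n_H = k * l
    I_l = np.eye(l)
    P = np.zeros((n_h, n_H))
    for j in range(k):                    # j-th coarse block
        c = (2 * j + 1) * l               # row offset of the center fine block
        P[c:c + l, j * l:(j + 1) * l] = I_l
        P[c - l:c, j * l:(j + 1) * l] = 0.5 * sign * I_l
        P[c + l:c + 2 * l, j * l:(j + 1) * l] = 0.5 * sign * I_l
    return P
```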
Theorem 3 Let A ∈ ID and let l be the first non-zero index where a_l ≠ 0. Let the interpolation operator be chosen as in (10) or (11) according to the sign of a_l. Then there exists a β > 0
independent of n such that (7) holds. In particular, the convergence factor of TGM is bounded
uniformly below 1 independent of n.
Proof: We will prove the theorem for the case a_l < 0. The proof for the case a_l > 0 is similar and is sketched at the end of this proof. We first assume that n_h is of the form (2k+1)l. With the interpolation operator defined according to (10), we then have n_H = kl. For any e^h,
where
For ease of indexing, we set e
We note that with I h
H as defined in (10) and the norm k \Delta k 0 in (3), we have
l
Thus (7) is proved if we can bound the right hand side above by fihe h ; Ae h i for some fi independent
of e h . To do so, we observe that for the right hand side above, we have
a 0
l
l
\Gammae 2il+j e (2i\Gamma1)l+j
- a 0
l
\Gammae 2il+j e (2i\Gamma1)l+j
- a 0
l
is the n h -by-n h Toeplitz matrix generated by 1 \Gamma cos l'. Thus
min
Hence to establish (7), we only have to prove that
for some fi independent of e h . To this end, we note that the n h -by-n h matrix A is generated by
a j cos j':
But by (9),
a
In particular, by (1)
Thus, by (12), we then have
2a l
2a l
Hence (7) holds with
Next we consider the case where n h is not of the form (2k+1)l. In this case, we let
We then embed the vectors e h and e H into longer vectors
e ~ h and e ~
H of size n ~ h and n ~
H by zeros. Then since
~
he ~
we see that the conclusion still holds.
We remark that the case where a l ? 0 can be proved similarly. We only have to replace the
function above by (1 Since in this case, f n h (') - 2a l (1 we then have
From this, we get (15) and hence (7) with fi defined as in (16).
3.2 More General Toeplitz Matrices
The condition on ID class matrices is too strong. For example, it excludes the matrix
However, from (12) and (13), we see that (7) can be proved if we can find a positive number β independent of n and an integer l such that (18) holds. Since by (14) and (17) inequality (18) holds for any matrix B in ID, we immediately have the following corollary.
Corollary 1 Let A be a symmetric positive definite Toeplitz matrix. If there exists a matrix B ∈ ID such that A ≥ B, then (7) holds provided that the interpolation operator for A is chosen to be the same as that for B.
More generally, we see by (1) that if the generating function f of A satisfies
min
for some l, then (18) holds. Thus we have the following theorem.
Theorem 4 Let A be generated by an even function f that satisfies (19) for some l. Let the
interpolation operator be chosen as in (10) or (11) according to the sign of a l . Then (7) holds. In
particular, the convergence factor of TGM is uniformly bounded below 1 independent of the matrix
size.
It is easy to prove that (19) holds for any even, nonnegative function with zeros of order 2 or less. As an example, consider f(θ) = θ². Since θ² ≥ c(1 − cos θ) on [−π, π] for some constant c > 0 and T_n[1 − cos θ] ∈ ID, it follows from Theorem 4 that if the interpolation operator for A is chosen to be the same as that for T_n[1 − cos θ], the convergence factor of the resulting TGM will be bounded uniformly below 1. We note that T_n[1 − cos θ] is, up to a scaling factor, just the 1-dimensional discrete Laplacian tridiag[−1, 2, −1]. Our interpolation operator here is the same as the usual linear interpolation operator used for such matrices, see [3, p.38]. However, we remark that the matrix T_n[θ²] is a dense matrix.
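To see the connection concretely, one can assemble T_n[f] directly from the Fourier coefficients of the generating function; the following sketch (ours, for illustration, using SciPy) does this for f(θ) = 1 − cos θ and recovers the tridiagonal matrix mentioned above.

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_from_coeffs(a):
    """Symmetric Toeplitz matrix T_n[f] from the Fourier coefficients
    a = [a_0, a_1, ..., a_{n-1}] of an even generating function f."""
    return toeplitz(a)

n = 8
# Fourier coefficients of f(theta) = 1 - cos(theta): a_0 = 1, a_1 = -1/2
a = np.zeros(n)
a[0], a[1] = 1.0, -0.5
T = toeplitz_from_coeffs(a)   # equals 0.5 * tridiag[-1, 2, -1]
```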
As another example, consider the dense matrix T_n[|θ|]. Since π|θ| ≥ θ² on [−π, π], we have by (1) that T_n[|θ|] ≥ π^{-1} T_n[θ²]. Hence T_n[|θ|] can also be handled by TGM with the same linear interpolation operator used for T_n[θ²].
4 Convergence Results for Full Multigrid Method
In TGM, the matrix A H on the coarse grid is inverted exactly. From the computational point of
view, it will be too expensive. Usually, A H is not solved exactly, but is approximated using the
TGM idea recursively on each coarser grid until we get to the coarsest grid. There the operator
is inverted exactly. The resulting algorithm is the full multigrid method (MGM). In x3, we have
proved the convergence of TGM for the first level. To establish convergence of MGM, we need to
prove the convergence of TGM on coarser levels.
Recall that on the coarser grid, the operator A_H is defined by the Galerkin algorithm (2), i.e. A_H = I_h^H A_h I_H^h. We note that if n_h is of the form (2k+1)l, then A_H will be a block-Toeplitz-Toeplitz-block matrix whose blocks are l-by-l Toeplitz matrices. In particular, if l = 1, then A_H is still a Toeplitz matrix. However, if n_h is not of the form (2k+1)l, then A_H will be a sum of a block-Toeplitz-Toeplitz-block matrix and a low rank matrix (with rank less than or equal to 2l).
We will only consider the case where n_m is of the form (2k+1)l on every level. Then the main diagonals of the coarse-grid operators A_m, 1 < m ≤ q, will still be constant. Recall from the proof of Theorem 3 that (18) implies (7).
We now prove that if (18) holds on a finer level, it holds on the next coarser level when the same
interpolation operator is used.
Theorem 5 Let a_0^h and a_0^H be the main diagonal entries of A_h and A_H respectively. Let the interpolation operator I_H^h be defined as in (10) or (11). Suppose that
A_h ≥ (a_0^h / β_h) T_{n_h}[1 − cos lθ]   (21)
for some β_h > 0 independent of n. Then
A_H ≥ (a_0^H / β_H) T_{n_H}[1 − cos lθ]   (22)
with β_H given by (23).
Proof: We first note that if we define the (n H
I l I l
I l I lC A ; (24)
then there exists a permutation matrix P such that
I h
I nH
(cf (10) and (11)). Moreover, for the same permutation matrix P , we have
I nH \UpsilonK t
I nH+l
By (2) and (21), we have
h A h I h
a hfi h I H
But by (25) and (26), we have
a hfi h I H
I nH \UpsilonK t
I nH+l
!/
I nH
By the definition of K in (24), we have
a hfi h
Combining this with (28), we get
a hfi h I H
Hence (27) implies (22) with (23).
Recall by (5) that we can choose ff h such that
Notice that K t K - I n h and therefore
h A h I h
I h
Thus on the coarser level, we can choose ff H as
According to (8), (30) and (23), we see that
s
s
ff h a H
s
Recursively, we can extend this result from the next coarser-level to the q-th level and hence obtain
the level-dependent convergence of the MGM:
s
s
We remark that this level-dependent result is the same as that for most MGM, see for instance [12, 2]. One standard way to overcome level-dependent convergence is to use "better" cycles such as the F- or W-cycles, see [12]. We note, however, that our numerical results in §6 show that MGM with V-cycles already gives level-independent convergence.
We remark that we can prove the level-independent convergence of MGM in a special case.
Theorem 6 Let f(θ) be such that
c_1 (1 − cos lθ) ≤ f(θ) ≤ c_2 (1 − cos lθ)   (31)
for some integer l and positive constants c_1 and c_2. Then for any 1 ≤ m ≤ q, the m-th level convergence factor is bounded uniformly below 1, independently of m and of the matrix size.
Proof: From (31), we have
Recalling the Galerkin algorithm (2) and using (29) recursively, we then have
By the right-hand inequality and (18), we see that
c 1 a m:
and hence by the left hand side of (32)
Therefore by the definition of ff in (5), we see that
c 2 a m:
According to (8), we then conclude that
s
r
As an example, we see that MGM can be applied to T n [' 2 ] with the usual linear interpolation
operator and the resulting method will be level-independent.
5 Computational Cost
Let us first consider the case where n_m is of the form (2k+1)l on each level. From the MGM algorithm in §2, we see that if we are using the damped-Jacobi method (4), the pre-smoothing and post-smoothing steps are simple relaxation sweeps of the form u_m := u_m + (ω / a_0^m)(d_m − A_m u_m).
Thus the main cost on each level depends on the matrix-vector multiplication A m y for some vector
y. If we are using one pre-smoothing step and one post-smoothing step, then we require two such
matrix vector multiplications - one from the post-smoothing and one from the computation of the
residual. We do not need the multiplication in the pre-smoothing step since the initial guess u m is
the zero vector.
On the finest level, A is a Toeplitz matrix. Hence Ay can be computed in two 2n-length FFTs,
see for instance [13]. If l = 1, then on each coarser level, A m will still be a Toeplitz matrix. Hence
A m y can be computed in two 2nm -length FFTs. When l ? 1, then on the coarser levels, A m will
be a block-Toeplitz-Toeplitz-block matrix with l-by-l Toeplitz sub-blocks. Therefore A m y can also
be computed in roughly the same amount of time by using 2-dimensional FFTs. Thus the total
cost per MGM iteration is about eight 2n-length FFTs.
In comparison, the circulant-preconditioned conjugate gradient methods require two 2n-length
FFTs and two n-length FFTs per iteration for the multiplication of Ay and C \Gamma1 y respectively. Here
C is the circulant preconditioner, see [13]. The band-Toeplitz preconditioned conjugate gradient
methods require two 2n-length FFTs and one band-solver where the band-width depends on the
order of the zeros, see [4]. Thus the cost per iteration of MGM is about 8/3 times that required by the circulant-preconditioned conjugate gradient methods and 4 times that required by the band-Toeplitz preconditioned conjugate gradient methods.
Next we consider the case when n is not of the form (2k+1)l. In that case, on the coarser level,
A m will no longer be a block-Toeplitz-Toeplitz-block matrix. Instead it will be a sum of such a
matrix and a low rank matrix (with rank less than 2l). Thus the cost of multiplying A m y will be
increased by O(ln).
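A minimal sketch of the FFT-based multiplication referred to above, embedding the n-by-n Toeplitz matrix into a 2n-by-2n circulant (our own illustration; variable names are ours):

```python
import numpy as np

def toeplitz_matvec(col, row, y):
    """Compute A @ y for the Toeplitz matrix A with first column `col` and
    first row `row`, via circulant embedding and FFTs of length 2n
    (cost O(n log n))."""
    n = len(y)
    # first column of the 2n x 2n circulant that embeds A
    c = np.concatenate([col, [0.0], row[1:][::-1]])
    fc = np.fft.fft(c)
    fy = np.fft.fft(np.concatenate([y, np.zeros(n)]))
    return np.real(np.fft.ifft(fc * fy)[:n])
```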
6 Numerical Results
In this section, we apply the MGM algorithm in §2 to ill-conditioned real symmetric Toeplitz systems A_n u = b. We choose as solution a random vector u with entries 0 ≤ u_i ≤ 1. The right hand side vector b is obtained accordingly. As smoother, we use the damped-Jacobi method (4) with the damping parameter chosen as a_0 / max_θ f(θ) for both the pre-smoother and the post-smoother. We use one pre-smoothing and one post-smoothing step on each level. The zero vector is used as the initial guess, and the stopping criterion is a fixed tolerance on the relative residual norm ‖r_j‖_2 / ‖r_0‖_2, where r_j is the residual vector after j iterations. In the following tables, we give the number
of iterations required for convergence using our method, see column under M . For comparison,
we also give the number of iterations required by the conjugate gradient method with no preconditioner (I), with the Strang (S) circulant preconditioner, with the T. Chan (C) circulant preconditioner, and with the band (B) preconditioners, see [6, 7, 4]. A double asterisk (**) signifies that more than 200 iterations are needed.
For the first example, we consider functions with single zero at the point The functions
we tried are We note that T n
ID. However, we have Therefore, according
to Corollary 1, we can use the interpolation operator (10) with
Table 1: Number of Iterations for Functions with Single Zero.
Next we consider two functions with jumps and a single zero at the point
and
We note that both matrices T n [J j (')], are not in ID. However, since J j (') -
for all ', we can still use the interpolation operator defined by 1 \Gamma cos ' for both T n [J j (')]. We
remark that for J 2 ('), since the zero is not of even order, band circulant preconditioners cannot be
constructed.
Table 2: Number of Iterations for Functions with Jumps.
Finally, we consider two functions with multiple zeros. They are
and
ID. But we
note that ' 2 (- both matrices can use the interpolation operator
in (10) with l = 2. In particular, our interpolation operator will be different from that proposed
in [8], which in this case will use the interpolation operator in (10) with l = 1. Their resulting
MGM converges very slowly, with a convergence factor very close to 1 (about 0.98 for both functions when n = 64). For comparison, we list in Table 3 the number of MGM iterations required by such an interpolation operator under column F.
Table 3: Number of Iterations for Functions with Multiple Zeros.
7 Concluding Remarks
We have shown that MGM can be used to solve a class of ill-conditioned Toeplitz systems. The resulting convergence rate is linear. The interpolation operator depends on the location of the first non-zero diagonal of the matrix and on its sign.
Here we have only proved the convergence of the multigrid method with damped Jacobi as the smoothing operator. However, our numerical results show that the multigrid method with some other smoothing operators, such as the red-black Jacobi, block-Jacobi and Gauss-Seidel methods, gives better convergence rates. As an example, for one of the test functions above, the convergence factors of MGM with the point- and block-Jacobi methods as smoothing operators are found to be about 0.71 and 0.32 respectively for 64 ≤ n ≤ 1024.
Acknowledgment
We would like to thank Prof. Tony Chan and Dr. J. Zou for their valuable comments.
--R
Superfast Solution of Real Positive Definite Toeplitz Systems
Convergence Estimates for Multigrid Algorithms without Regularity Assumptions
Toeplitz Preconditioners for Toeplitz Systems with Nonnegative Generating Functions
Fast Toeplitz Solvers Based on Band-Toeplitz Preconditioner
Toeplitz Equations by Conjugate Gradients with Circulant Precondi- tioner
An Optimal Circulant Preconditioner for Toeplitz Systems
Multigrid Methods for Toeplitz Matrices
Multigrid Methods for Symmetric Positive Definite Block Toeplitz Matrices with Nonnegative Generating Functions
Linear and Nonlinear Deconvolution Problems
in Multigrid Methods
A Proposal for Toeplitz Matrix Calculations
--TR
--CTR
S. Serra Capizzano , E. Tyrtyshnikov, How to prove that a preconditioner cannot be superlinear, Mathematics of Computation, v.72 n.243, p.1305-1316, July | toeplitz matrices;multigrid method |
289843 | Accelerated Inexact Newton Schemes for Large Systems of Nonlinear Equations. | Classical iteration methods for linear systems, such as Jacobi iteration, can be accelerated considerably by Krylov subspace methods like GMRES. In this paper, we describe how inexact Newton methods for nonlinear problems can be accelerated in a similar way and how this leads to a general framework that includes many well-known techniques for solving linear and nonlinear systems, as well as new ones. Inexact Newton methods are frequently used in practice to avoid the expensive exact solution of the large linear system arising in the (possibly also inexact) linearization step of Newton's process. Our framework includes acceleration techniques for the "linear steps" as well as for the "nonlinear steps" in Newton's process. The described class of methods, the accelerated inexact Newton (AIN) methods, contains methods like GMRES and GMRESR for linear systems, Arnoldi and Jacobi-Davidson for linear eigenproblems, and many variants of Newton's method, like damped Newton, for general nonlinear problems. As numerical experiments suggest, the AIN approach may be useful for the construction of efficient schemes for solving nonlinear problems. | Introduction
. Our goal in this paper is twofold. A number of iterative solvers
for linear systems of equations, such as FOM [23], GMRES [26], GCR [31], Flexible
GMRES [25], GMRESR [29] and GCRO [7], are in structure very similar to iterative
methods for linear eigenproblems, like shift and invert Arnoldi [1, 24], Davidson [6, 24],
and Jacobi-Davidson [28]. We will show that all these algorithms can be viewed as
instances of an Accelerated Inexact Newton (AIN) scheme (cf. Alg. 3), when applied
to either linear equations or to linear eigenproblems. This observation may help us
in the design and analysis of algorithms by "transporting" algorithmic approaches
from one application area to another. Moreover, our aim is to identify efficient AIN
schemes for nonlinear problems as well, and we will show how we can learn from the
algorithms for linear problems.
To be more specific, we will be interested in the numerical approximation of the
solution u of the nonlinear equation
(1)
where F is some smooth (nonlinear) map from a domain in R n (or C n ) that contains
the solution u, into R n (or C n ), where n is typically large.
Some special types of systems of equations will play an important motivating role
in this paper.
The first type is the linear system of equations
(2)
Mathematical Institute, Utrecht University, P. O. Box 80.010, NL-3508 Utrecht, The Nether-
lands. E-mail: [email protected], [email protected], [email protected].
where A is a nonsingular matrix and b, x are vectors of appropriate size; A and b are
given, x is unknown. The dimension n of the problem is typically large and A is often
sparse. With F(x) := b − Ax, equation (2) is equivalent to (1). This type will serve
as the main source of inspiration for our ideas.
The second type concerns the generalized linear eigenproblem
With u := (v, λ), we have that, for normalized v and F(u) := Av − λBv, equation (3)
is equivalent to (1). This type is an example of a mildly nonlinear system and will
serve as an illustration for the similarity between various algorithms, seen as instances
of AIN (see Section 6).
However, the AIN schemes, that we will discuss, will be applicable to more general
nonlinear problems, like, for instance, equations that arise from discretizing nonlinear
partial differential equations of the form
where\Omega is a domain in R 2 or R 3 ,
and u satisfies suitable boundary conditions. An example of (4) is, for instance
where Ω is some domain in R² and appropriate boundary conditions are imposed on ∂Ω (see also Section 8).
Guided by the known approaches for the linear system (cf. [25, 29, 7]) and the
eigenproblem (cf. [28, 27]) we will define accelerated Inexact Newton schemes for more
general nonlinear systems. This leads to a combination of Krylov subspace methods
for Inexact Newton (cf. [16, 4] and also [8]) with acceleration techniques (as in [2]) and
offers us an overwhelming choice of techniques for further improving the efficiency of
Newton type methods. As a side-effect this leads to a surprisingly simple framework
for the identification of many well-known methods for linear-, eigen-, and nonlinear
problems.
Our numerical experiments for nonlinear problems, like problem (5), serve as an
illustration for the usefulness of our approach.
The rest of this paper is organized as follows. In Section 2 we briefly review the
ideas behind the Inexact Newton method. In Section 3 we introduce the Accelerated
Inexact Newton methods (AIN). We will examine how iterative methods for linear
problems are accelerated and we will distinguish between a Galerkin approach and a
Minimal Residual approach. These concepts are then extended to the nonlinear case.
In Section 4 we make some comments on the implementation of AIN schemes. In
Section 5 we show how many well-known iterative methods for linear problems fit in
the AIN framework. In Section 6 and Section 7 we consider instances of AIN for the
mildly nonlinear generalized eigenproblem and for more general nonlinear problems.
In Section 8 we present our numerical results and some concluding remarks are in
Section 9.
1. Choose an initial approximation u_0.
2. Repeat until u_k is accurate:
 (b) Compute the residual r_k := F(u_k).
 (c) Compute an approximation J_k for the Jacobian F'(u_k).
 (d) Solve the correction equation (approximately). Compute an (approximate) solution p_k of the correction equation J_k p = −r_k.
 (e) Update. Compute the new approximation: u_{k+1} := u_k + p_k.
Algorithm 1: Inexact Newton
2. Inexact Newton methods. Newton type methods are very popular for solving
systems of nonlinear equations as, for instance, represented by (4). If u k is the
approximate solution at iteration number k, Newton's method requires for the next
approximate solution of (1), the evaluation of the Jacobian J k := F 0 and the
solution of the correction equation
Unfortunately, it may be very expensive, or even practically impossible to determine
the Jacobian and/or to solve the correction equation exactly, especially for larger
systems.
In such situations one aims for an approximate solution of the correction equa-
tion, possibly with an approximation for the Jacobian (see, e.g., [8]). Alg. 1 is an
algorithmical representation of the resulting Inexact Newton scheme.
If, for instance, Krylov subspace methods are used for the approximate solution of
the correction equation (6), then only directional derivatives are required (cf. [8, 4]);
there is no need to evaluate the Jacobian explicitly. If v is a given vector then the
vector F 0 (u)v can be approximated using the fact that
The combination of a Krylov subspace method with directional derivatives combines
the steps 2c and 2d in Alg. 1.
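A sketch of the resulting matrix-free product (our own illustration in Python/NumPy); the scaling of the difference parameter below is a common heuristic and is not prescribed by the paper:

```python
import numpy as np

def jacobian_vector_product(F, u, v, eps=None):
    """Approximate F'(u) v by a forward finite difference,
    F'(u) v ~ (F(u + eps*v) - F(u)) / eps, so that a Krylov solver for the
    correction equation never needs the Jacobian explicitly."""
    if eps is None:
        # heuristic choice of the difference parameter (an assumption of ours)
        eps = np.sqrt(np.finfo(float).eps) * (1.0 + np.linalg.norm(u)) \
              / max(np.linalg.norm(v), 1e-30)
    return (F(u + eps * v) - F(u)) / eps
```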
For an initial guess u 0 sufficiently close to a solution, Newton's method has asymptotically
at least quadratic convergence behavior. However, this quadratic convergence
is usually lost if one uses inexact variants and often the convergence is not much faster
than linear. In the next section we make suggestions how this (linear) speed of convergence
may be improved.
Note: It is our aim to restore, as much as possible, the asymptotic convergence
behavior of exact Newton and we do not address the question of global convergence.
1. Choose an initial approximation x_0.
2. Repeat until x_k is accurate:
 (b) Compute the residual r_k := b − A x_k.
 (c) Compute p_k := D^{-1} r_k, with D := diag(A).
 (d) Update: x_{k+1} := x_k + p_k.
Algorithm 2: Jacobi Iteration
3. Accelerating Inexact Newton methods. Newton's method is a one step
method, that is, in each step, Newton's method updates the approximate solution
with information from the previous step only. However, in the computational process,
subspaces have been built gradually, that contain useful information concerning the
problem. This information may be exploited to improve the current approximate
solution, and this is what we propose to do. More precisely, we will consider alternative
update strategies for step 2e of the Inexact Newton algorithm Alg. 1.
3.1. Acceleration in the linear case. The linear system (2) can be written as F(x) := b − Ax = 0, and −A is the Jacobian of F. When the approximate solution p_k is computed as p_k = M^{-1} r_k, with M a preconditioning matrix (approximating A), then the Inexact Newton algorithm, Alg. 1, reduces to a standard Richardson-type iteration process for the splitting A = M − R. For instance, the choice M = diag(A) leads to Jacobi iteration (see Alg. 2).
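Written out for the linear case, the inexact Newton step with p_k = M^{-1} r_k is just the following stationary iteration (a sketch of ours, with M taken as the Jacobi choice diag(A)):

```python
import numpy as np

def stationary_iteration(A, b, x0, steps):
    """Richardson-type iteration for A x = b with the Jacobi preconditioner
    M = diag(A): each step computes the residual, an approximate correction
    p = M^{-1} r, and updates x."""
    x = x0.copy()
    d = np.diag(A)
    for _ in range(steps):
        r = b - A @ x      # residual (= F(x) for F(x) := b - A x)
        p = r / d          # approximate Newton correction
        x = x + p          # update
    return x
```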
One may improve the convergence behavior of standard iterations schemes by
ffl using more sophisticated preconditioners M , and/or
ffl applying acceleration techniques in the update step.
Different preconditioners and different acceleration techniques lead to different algo-
rithms, some of which are well-known.
Examples of iteration schemes that use more sophisticated preconditioners are, for instance, Gauss-Seidel iteration, where M = D + L with L the strict lower triangular part of A and D = diag(A), and SOR, where M = ω^{-1} D + L and ω is a relaxation parameter.
Examples of iteration schemes that use acceleration techniques are algorithms
that take their updates to the approximate solution as a linear combination of previous
directions p j . Preferable updates ~
are those for which b \Gamma Ax k+1 ,
where x
k , is minimal in some sense: e.g., kb \Gamma Ax k+1 k 2 is minimal, as in
GMRES [26] and GCR [31], or b \Gamma Ax k+1 is orthogonal to the p j for j - k, as in FOM
or GENCG [23], or b \Gamma Ax k+1 is "quasi-minimal", as in Bi-CG [17], and QMR [11].
Of course the distinction between preconditioning and acceleration is not a clear
one. Acceleration techniques with a limited number of steps can be seen as a kind
of dynamic preconditioning as opposed to the static preconditioning with fixed M .
In this view one is again free to choose an acceleration technique. Examples of such
iteration schemes are Flexible GMRES [25], GMRESR [29] and GCRO [7].
All these accelerated iteration schemes for linear problems construct approximations x_{k+1} = x_0 + V_k y_k, where y_k is the solution of a smaller or an easier projected problem. For example, GMRES computes y_k such that ‖b − A(x_0 + V_k y_k)‖_2 is minimal, or equivalently (AV_k)^*(b − A(x_0 + V_k y_k)) = 0; FOM computes y_k such that V_k^*(b − A(x_0 + V_k y_k)) = 0; Bi-CG and QMR compute y_k as the solution of a larger tridiagonal problem obtained with oblique projections.
For stability (and efficiency) reasons one usually constructs another basis for the
span of V k with certain orthogonality properties, depending on the selected approach.
3.2. Acceleration in the nonlinear case. We are interested in methods for
finding a zero of a general nonlinear mapping F . For the linear case, the methods
mentioned above are, apart from the computation of the residual, essentially a mix of
two components: (1) the computation of a new search direction (which involves the
residual), and (2) the update of the approximation (which involves the current search
direction and possibly previous search directions, and the solution y k of a projected
problem). The first component may be interpreted as preconditioning, while the
second component is the acceleration.
Looking more carefully on how y k in the linear case is computed, we can distinguish
between two approaches based on two different conditions. With G k (y) := F
the FOM and the other "oblique" approaches lead to methods that compute y such
that (for appropriate W k
the GMRES
approach leads to methods that compute y such that kG k (y)k 2 is minimal (a Minimal
Residual condition).
From these observations for the linear case we now can formulate iteration schemes
for the nonlinear case.
The Inexact Newton iteration can be accelerated in a similar way as the standard
linear iteration. This acceleration can be accomplished by updating the solution by a
correction ~
k in the subspace spanned by all correction directions p j (j - k).
To be more precise, the update ~p_k for the approximate solution is given by ~p_k = V_k y, where the columns of V_k form a basis of the search space spanned by p_0, ..., p_k. Furthermore, with G_k(y) := F(u_k + V_k y), we propose to determine y by
• a Galerkin condition on G_k(y): y is a solution of
  W_k^* G_k(y) = 0,   (7)
  where W_k is some matrix of the same dimensions as V_k,
• or a Minimal Residual (MR) condition on G_k(y): y is a solution of
  min_y ‖G_k(y)‖_2,   (8)
• or a mix of both, a Restricted Minimal Residual (RMR) condition on G_k(y): y is a solution of
  min_y ‖W_k^* G_k(y)‖_2.   (9)
Equation (7) generalizes the FOM approach, while equation (8) generalizes the GMRES
approach.
Solving (7) means that the component of the residual r_{k+1} in the subspace W_k (spanned by the columns of W_k) vanishes. For W_k one may choose, for instance, W_k = V_k (as in FOM), or W_k = [w_0, ..., w_k], where w_k is the component of J_k p_k orthogonal to W_{k−1} (as in GMRES). For linear equations the Minimal Residual and the Galerkin approach coincide for the last choice.
As is known from the linear case, a complication of the Galerkin approach is
that equation (7) may have no solution, which means that this approach may lead to
breakdown of the method. In order to circumvent this shortcoming to some extent, we have formulated the Restricted Minimal Residual approach (9). Compared to (7)
this formulation is also attractive for another reason: one can apply standard Gauss-Newton
[9] schemes for solving general nonlinear least squares problems to it. One
might argue that a drawback of a Gauss-Newton scheme is that it may converge slowly
(or not at all). However, for least squares problems with zero residual solutions,
the asymptotic speed of convergence of a Gauss-Newton method is that of Newton's
method. This means that if the Galerkin problem (7) has a solution, a Gauss-Newton scheme applied to (9) will find it quickly and efficiently (see also Section 8).
Note that equations (7)-(9) represent nonlinear problems in only k variables,
which may be much easier to solve than the original problem. If these smaller non-linear
problems can be formulated cheaply, then the costs for an update step may be
considered as being relatively small.
Note also that since equations (7)-(9) are nonlinear, they may have more than one
solution. This fact may be exploited to steer the computational process to a specific
preferable solution of the original problem.
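As an illustration of how such a small projected problem might be attacked, the sketch below applies a plain Gauss-Newton iteration to the Minimal Residual condition (8), using finite differences for the k-dimensional Jacobian of G_k. This is our own sketch, not the specific scheme used later in the experiments; all names and tolerances are assumptions.

```python
import numpy as np

def solve_projected_mr(F, u, V, y0, iters=10, eps=1e-7):
    """Gauss-Newton for min_y ||G(y)||_2 with G(y) = F(u + V y).
    V has only a few columns, so the least-squares steps are cheap."""
    y = y0.copy()
    for _ in range(iters):
        g = F(u + V @ y)
        if np.linalg.norm(g) < 1e-12:
            break
        # finite-difference Jacobian of G with respect to y (small problem)
        J = np.empty((g.size, y.size))
        for j in range(y.size):
            e = np.zeros_like(y); e[j] = eps
            J[:, j] = (F(u + V @ (y + e)) - g) / eps
        dy, *_ = np.linalg.lstsq(J, -g, rcond=None)
        y = y + dy
    return y
```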
Accelerated Inexact Newton. For the Galerkin approach, step 2e in the Inexact
Newton's algorithm, Alg. 1, is replaced by four steps in which
ffl the search subspace V k\Gamma1 is expanded by an approximate "Newton correction"
and a suitable basis is constructed for this subspace,
ffl a shadow space W k is selected on which we project the original problem,
ffl the projected problem (7) is solved,
ffl and the solution is updated.
This is represented, by the steps 3e-3h in Alg. 3. The Minimal Residual approach
and the Restricted Minimal Residual approach can be represented in a similar way.
4. Computational considerations. In this section we make some comments on
implementation details that mainly focus on limiting computational work and memory
space.
4.1. Restart. For small k, problems (7)-(9) are of small dimension and may
often be solved at relatively low computational costs (e.g., by some variant of Newton's
method).
For larger k they may become a serious problem in itself. In such a situation,
one may wish to restrict the subspaces V and W to subspaces of smaller dimension
(see Alg. 3, step 3i). Such an approach limits the computational costs per iteration,
but it may also have a negative effect on the speed of convergence.
For example, the simplest choice, restricting the search subspace to a 1-dimensional subspace, leads to Damped Inexact Newton methods, where, for instance, the damping parameter α is the solution of min_α ‖G_k(α)‖_2, with G_k(α) := F(u_k + α p_k).
1. Choose an initial approximation u_0.
2. Set k := 0 and let V_{−1} and W_{−1} be empty matrices.
3. Repeat until u_k is accurate:
 (b) Compute the residual r_k := F(u_k).
 (c) Compute an approximation J_k for the Jacobian F'(u_k).
 (d) Solve the correction equation (approximately). Compute an (approximate) solution p_k for the correction equation J_k p = −r_k.
 (e) Expand the search space. Select a v_k in span(V_{k−1}, p_k) that is linearly independent of V_{k−1} and update V_k := [V_{k−1}, v_k].
 (f) Expand the shadow space. Select a w_k that is linearly independent of W_{k−1} and update W_k := [W_{k−1}, w_k].
 (g) Solve the projected problem. Compute nontrivial solutions y of the projected system (7), (8) or (9).
 (h) Update. Select a y_k (from the set of solutions y) and update the approximation: u_{k+1} := u_k + V_k y_k.
 (i) Restart. If the dimension ℓ of the search space has become too large, select an ℓ' < ℓ, select ℓ × ℓ' matrices R_V and R_W, and compute suitable combinations of the columns of V_k and W_k: V_k := V_k R_V, W_k := W_k R_W.
Algorithm 3: Accelerated Inexact Newton
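Putting the pieces together, a bare-bones AIN driver might look as follows. This is a sketch under obvious simplifications (no restart, the Minimal Residual condition for the update, user-supplied routines for the correction equation, the projected problem and the Jacobian-vector product); it is not the algorithm as implemented for the experiments, and all names are ours.

```python
import numpy as np

def _orthonormalize(w, V, tol=1e-12):
    """Orthonormalize w against the columns of V (modified Gram-Schmidt)."""
    w = w.astype(float).copy()
    for j in range(V.shape[1]):
        w -= (V[:, j] @ w) * V[:, j]
    nrm = np.linalg.norm(w)
    return None if nrm < tol else w / nrm

def ain(F, u0, solve_correction, solve_projected, jvp, tol=1e-8, maxit=50):
    """Accelerated Inexact Newton (Minimal Residual variant), cf. Alg. 3.

    solve_correction(F, u, r): approximate solution p of the correction equation;
    solve_projected(F, u, V): coefficients y of the update u_new = u + V y;
    jvp(F, u, v): approximation of F'(u) v (used as a fallback expansion).
    """
    u = u0.astype(float).copy()
    V = np.empty((u.size, 0))
    for _ in range(maxit):
        r = F(u)                              # step 3b: residual
        if np.linalg.norm(r) < tol:
            break
        p = solve_correction(F, u, r)         # step 3d: inexact Newton step
        v = _orthonormalize(p, V)             # step 3e: expand the search space
        if v is None and V.shape[1] > 0:      # p in span(V): expand with J_k v_last
            v = _orthonormalize(jvp(F, u, V[:, -1]), V)
        if v is None:                         # no new direction available
            break
        V = np.hstack([V, v[:, None]])
        y = solve_projected(F, u, V)          # steps 3g/3h: accelerated update
        u = u + V @ y
    return u
```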
Of course, a complete restart is also feasible, say after each mth step (cf. step 3i
of Alg. 3):
The disadvantage of a complete restart is that we have to rebuild subspace information
again. Usually it leads to a slower speed of convergence.
It seems like an open door to suggest that parts of subspaces better be retained at
restart, but in practical situations it is very difficult to predict what those parts should
be. A meaningful choice would depend on spectral properties of the Jacobian as well
as on the current approximation. When solving linear equations with GMRESR [29],
good results have been reported in [30] when selecting a number of the first and the
last columns (cf. step 3i of Alg. 3); e.g.,
In [7], a variant of GMRESR, called GCRO, is proposed, which implements another
choice. For subspaces of dimension l + m, the first l columns are retained, together
with a combination of the last m columns. This combination is taken such that
the approximate solution, induced by a minimal residual condition, is the same for both
the subspaces of dimension l +m and l + 1. To be more specific, if u
where y k solves min y replaced by [V k l ; V km y km (denoting
4.2. Update. In the update step (step 3h of Alg. 3), a solution y k of the projected
problem has to be selected from the set of solutions y. Selection may be necessary
since many nonlinear problems have more than one solution. Sometimes, this may
be the reason for poor convergence of (Inexact) Newton: the sequence of approximate
solutions "wavers" between different exact solutions. For larger search subspaces, the
search subspace may contain good approximations for more than one solution. This
may be exploited to steer the sequence of approximate solution to the wanted solution
may help to avoid wavering convergence behavior.
The selection of y k should be based on additional properties of the solution u k+1 .
For instance, we may look for the solution largest in norm, or as in the case of eigenvalue
problems, for a solution of which one component is close to some specific value
(for instance, if one is interested in eigenvalues close to, say, 0, the Ritz vector with
Ritz value closest to 0 will be chosen).
4.3. The projected problem. Even though problems of small dimension can
be solved with relatively low computational costs, step 3g in Alg. 3 is not necessarily
inexpensive. The projected problem is embedded in the large subspace and it may
require quite some computational effort to represent the problem in a small subspace
(to which y belongs) of dimension ℓ := dim(span(V_k)). In the case of linear equations (or linear eigenvalue problems) the computation of an ℓ × ℓ matrix such as W_k^* A V_k already requires of the order of ℓ² inner products. For this type of problems, and for many others as well, one
may save on the computational costs by re-using information from previous iterations.
4.4. Expanding the search subspace. The AIN algorithm breaks down if
the search subspace is not expanded. This happens when p k belongs to the span of
(or, in finite precision arithmetic, if the angle between p k and this subspace is
very small). Similar as for GMRES, one may then replace p k by J k v ' , where v ' is the
last column vector of the matrix V k\Gamma1 .
With an approximate solution of the correction equation, a breakdown will also occur if the new residual r_k is equal to the previous residual r_{k−1}. We will have such a situation if y_{k−1} = 0. Instead of modifying the expansion process in iteration number k, one may also take measures in iteration number k − 1 in order to avoid y_{k−1} = 0. In [29] a few steps of LSQR are suggested when the linear solver is a Krylov subspace method; this may already cure the stagnation.
5. How linear solvers fit in the AIN framework. In this section we will
show how some well-known iterative methods for the solution of linear systems fit
in the AIN framework. The methods that follow from specific choices in AIN are
equivalent to well-known methods only in the sense that, at least in exact arithmetic,
they produce the same basis vectors for the search spaces, the same approximate
solutions, and the same Newton corrections (in the same sense as in which GMRES
and ORTHODIR are equivalent).
With the linear equation (2) is equivalent to the one in (1)
and J \GammaA. In this section, M denotes a preconditioning matrix for A (i.e., for a
vector v, M \Gamma1 v is easy to compute and approximates A \Gamma1 v).
5.1. GCR. With the choice,
Alg. 3 (without restart) is equivalent to preconditioned
GCR [31].
5.2. FOM and GMRES. The choice
algorithms that are related to FOM and GMRES [26]. With the
additional choice w Alg. 3 is just FOM, while the choice
gives an algorithm that is equivalent to GMRES.
5.3. GMRESR. Taking w k such that w k is perpendicular to
GCR and taking p k as an approximate solution of the equation
Alg. 3 is equivalent to the GMRESR algorithms [29]. One
might compute p k by a few steps of GMRES, for instance.
6. AIN schemes for mildly nonlinear problems. In this section we will discuss
numerical methods for the iterative solution of the generalized eigenproblem (3).
We will show that they also fit in the general framework of the AIN Alg. 3 methods.
As already mentioned these AIN methods consist of two parts. In one part
an approximate solution of the correction equation (cf. step 3d in Alg. 3) is used
to extend the search space. In the other part a solution of the projected problem
(cf. step 3g in Alg. 3) is used to construct an update for the approximate solution.
We will start with the derivation of a more suitable form for the (Newton) correction
equation for the generalized eigenproblem. After that, we will make some
comments on how to solve the projected problem.
The correction equation. In order to avoid some of the complications that go
with complex differentiation, we will mainly focus on the numerical computation of
eigenvectors with a fixed component in some given direction (rather then on the computation
of eigenvectors with a fixed norm).
First, let ~
u be a fixed vector with a nontrivial component in the direction of the
desired eigenvector x. We want to compute approximations u k for x with a normalized
component in the ~u-direction: (u_k, ~u) = 1. We will select θ_k such that the residual r_k := Au_k − θ_k Bu_k is orthogonal to w, where w is another fixed nontrivial vector, i.e., the approximate eigenvalue θ_k is given by θ_k := w^* A u_k / w^* B u_k.
Consider the map F given by
F (u) :=
and u belongs to the hyper-plane {y ∈ C^n : (y, ~u) = 1}. The Jacobian J is
then given by
and the correction equation reads as
u; such that
equation is equivalent to
~ u 0
\Gammar k#
that is, p is the solution of (10) if and only if p is the solution of (11).
The projected problem. For the Generalized eigenvalue problem we are in the
fortunate position that all the solutions of problems of moderate size can be computed
by standard methods such as for instance the QZ [19] method. However, before we
can apply these methods we have to reformulate the projected problem because of the
exceptional position of u k in W F
The key to this reformulation is the observation that, in the methods we consider, the affine subspace u_k + span(V_k) is equal to span(V_k), because span(V_k) contains u_k itself. Now, as an alternative to step 3g in Alg. 3, we may also compute all the solutions y of the projected pencil W_k^* A V_k y = θ W_k^* B V_k y. This problem can now be solved by, for instance, the QZ method, and after selecting y_k a new approximation u_{k+1} is given by u_{k+1} = V_k y_k.
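For the generalized eigenproblem, step 3g therefore amounts to a small dense pencil that can be handed to a standard solver; a sketch (ours, using SciPy and selecting, as one possible strategy, the Ritz pair with value closest to a target τ):

```python
import numpy as np
from scipy.linalg import eig

def eig_update(A, B, V, W, target):
    """Solve the projected pencil (W* A V) y = theta (W* B V) y with a dense
    solver and select the Ritz pair whose value is closest to `target`."""
    WAV = W.conj().T @ (A @ V)
    WBV = W.conj().T @ (B @ V)
    thetas, Y = eig(WAV, WBV)
    idx = int(np.argmin(np.abs(thetas - target)))
    y = Y[:, idx]
    return V @ y, thetas[idx]      # new approximate eigenvector and eigenvalue
```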
6.1. Arnoldi's method. We consider the simplified case where I , i.e., the
standard eigenproblem. If we do only one step of a Krylov subspace method (Krylov
dimension 1) for the solution of the correction equation (10), then we obtain for the
correction p_k a multiple of the residual r_k. Note that this may be a (very) poor approximation because, in general, r_k is not orthogonal to ~u. The approximate eigenvector u_k belongs to the search subspace span(V_{k−1}); hence expanding the search subspace by the component of p_k orthogonal to span(V_{k−1}) is equivalent to expanding this space with the orthogonal component of Au_k, which would be the "expansion" vector in Arnoldi's method. Hence,
the search subspace is precisely the Krylov subspace generated by A and u_0. Apparently, Arnoldi's method is an AIN method (with a "very inexact Newton step")
without restart.
The choice W corresponds to the standard one in Arnoldi and produces
#'s that are called Ritz values, while the choice W leads to Harmonic Ritz
values [22].
6.2. Davidson's method. As in Arnoldi's method, Davidson's method [6] also
carries out only one step of a Krylov subspace method for the solution of the correction
equation. However, in contrast to Arnoldi's method, Davidson also incorporates a
preconditioner.
He suggests to solve (10) approximately by p k with
where M is the inverse of the diagonal of A \Gamma # k B. Other choices have been suggested
as well (cf. e.g., [5, 21]). Because of the preconditioner, even if I , the search
space is not simply the Krylov subspace generated by A and u 0 . This may lead to an
advantage of Davidson's method over Arnoldi's method.
For none of the choices of the preconditioner, proper care has been taken of the
projections (see (10)): the preconditioner should approximate the inverse of the projected
matrix (see (10)) as a map from ~
rather than of A \Gamma # k B.
However, if M is the diagonal of A \Gamma #B, and we choose ~ u and w equal to the
same, arbitrary standard basis vector (as Davidson does [6]) then
. Note that p ? because M is diagonal
and r k ? w. Therefore, for this particular choice of w (and ~
u), the diagonal M may
be expected to be a good preconditioner for the correction equation (including the
projections) in the cases where M is a good preconditioner for A \Gamma # k B. Observe that
this argument does not hold for non-diagonal preconditioners M .
6.3. Jacobi-Davidson. Davidson methods with a non-diagonal preconditioner
do not take care properly of the projections in the correction equation (10). This
observation was made in [28], and a new algorithm was proposed for eigenproblems by
including the projections in the Davidson scheme. In addition, these modified schemes
allow for more general approximate solutions p k than . For instance, the
use of ' steps of a preconditioned Krylov subspace method for the correction equation
is suggested, leading to Arnoldi type of methods in which the variable polynomial pre-conditioning
is determined efficiently and the projections have been included correctly.
The new methods have been called Jacobi-Davidson methods (Jacobi took proper care
of the projections, but did not build a search subspace as Davidson did (see [28] for
details and further references)).
The analysis and results in [3, 27] show that these Jacobi-Davidson methods can
also be effective for solving generalized eigenproblems, even without any matrix inversion
The Jacobi-Davidson methods allow for a variety of choices that may improve efficiency
of the steps and speed of convergence and are good examples of AIN methods
in which the projected problem (7) is used to steer the computation.
For an extensive discussion, we refer to [27].
7. AIN schemes for general nonlinear problems. In this section we summarize
some iterative methods for the solution of nonlinear problems that have been
proposed by different authors, and we show how these methods fit in the AIN frame-work
Brown and Saad [4] describe a family of methods for solving nonlinear problems.
They refer to these methods as nonlinear Krylov subspace projection methods. Their
modifications to Newton's method are intended to enhance robustness and are heavily
influenced by ideas presented in [9]. One of their methods is a variant of Damped
Inexact Newton, in which they approximate the solution of the correction equation
by a few steps of Arnoldi or GMRES and determine the damping parameter ff by
a "linesearch backtracking technique". So this is just another AIN scheme, with a
special 1-dimensional subspace acceleration. They also propose a model trust region
approach, where they take their update to the approximation from the Krylov subspace
e
Vm generated by m steps of (preconditioned) Arnoldi or GMRES as
Vm ~
~ y k is the point on the dogleg curve for which k~y k k the trust region size: ~ y k is an
approximation for min y
Vm ~
This could be considered as a block version
of the previous method.
In [2] Axelsson and Chronopoulos propose two nonlinear versions of a (truncated)
Generalized Conjugate Gradient type of method. Both methods fit in the AIN frame-
work. The first method, NGCG, is a Minimal Residual AIN method with p_k = r_k and V_k orthonormal; in other words, the correction equation is not solved. The second method, NNGCG, differs from NGCG in that p_k is now computed as an approximate solution (by some method) of the correction equation (6), where the accuracy is controlled such that the residual norms form a non-increasing sequence (cf. [8]). So the method NNGCG is a Minimal Residual AIN method. It can be viewed as a generalization of GMRESR [29]. Under certain conditions
on the map F they prove global convergence.
In [15], Kaporin and Axelsson propose a class of nonlinear equation solvers, GNKS,
in which the ideas presented in [4] and [2] are combined. There, the direction vectors
are obtained as linear combinations of the columns of e
Vm and V k . To be more
precise,
This problem
is then solved by a special Gauss-Newton iteration scheme, which avoids excessive
computational work, by taking into account the acute angle between r k and J k p k , and
the rate of convergence. The method generalizes GCRO [7].
8. Numerical Experiments. In this section we test several AIN schemes and
present results of numerical experiments on three different nonlinear problems. For
tests and test results with methods for linear- and eigenproblems we refer to their
references. The purpose of this presentation is to show that acceleration may be
useful also in the nonlinear case. By useful, we mean that additional computational
cost is compensated for by faster convergence.
Different AIN schemes distinguish themselves by the way they (approximately)
solve the correction equation and the projected problem (cf. Section 3.2 and 7). Out
of the overwhelming variety of choices we have selected a few possible combinations,
some of which lead to AIN schemes that are equivalent to already proposed methods
and some of which lead to new methods. We compare the following (existing) Minimal
Residual AIN schemes:
• linesearch, the backtracking linesearch technique [4, pp. 458];
• dogleg, the model trust region approach as proposed in [4, pp. 462];
• nngcg, a variant of the method proposed in [2], solving (8) by the Levenberg-Marquardt algorithm [20];
• gnks, the method proposed in [15];
and the (new) Restricted Minimal Residual AIN schemes:
• rmr a, choosing W_k = V_k;
• rmr b, choosing W_k = [w_0, ..., w_k], where w_k is the component of J_k p_k orthogonal to W_{k−1}.
For these last two schemes, the minimization problem (9) is solved by the Gauss-Newton variant described in [15]. The necessary subspaces for the direction p_k or the
projected problem were obtained by 10 steps of GMRES, or (in the third example)
also by at most 50 iterations of the generalized CGS variant CGS2 [10].
In all cases the exact Jacobian was used. Furthermore, we used orthonormal
matrices V k and W k , obtained from a modified Gram-Schmidt process and restricted
to the last 10 columns in an attempt to save computational work. The computations
were done on a Sun Sparc 20 in double precision, and the iterations were stopped when the residual norm ‖F(u_k)‖_2 dropped below a fixed tolerance. A method failed either when the convergence was too slow or when the number of nonlinear iterations (per step) exceeded 200.
Since the computational cost of the methods is approximately proportional to the
costs of the number of function evaluations and matrix multiplications, the following
counters are given in the tables:
• ni, the number of nonlinear iterations;
• fe, the number of function evaluations;
• mv, the number of multiplications by the Jacobian;
• pre, the number of applications of the preconditioner;
• total, the sum of fe, mv and pre.
8.1. A 1D Burgers' Equation. As a first test problem we consider the following
1D Burgers' equation [14],
u_t + u u_x = μ u_xx.   (12)
We discretized the spatial variable x with finite differences on 64 grid points, and for the time derivative we used the backward difference (u^{n+1} − u^n)/Δt, where u^n denotes the solution at time t = nΔt. For this test the solution u^n was computed for a number of consecutive time steps, and as an initial guess for u^{n+1} we took u^n. No preconditioning was used.
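The nonlinear system solved at each time step then has the following shape (a sketch with our own choices: convective form u u_x, central differences on a uniform grid, homogeneous Dirichlet boundary values; the paper's exact discretization and parameter values are not reproduced here).

```python
import numpy as np

def burgers_residual(u_new, u_old, dt, dx, mu):
    """Residual F(u_new) = 0 of one implicit (backward Euler) step for
    u_t + u u_x = mu u_xx, discretized with central differences on the
    interior points of a uniform grid with homogeneous Dirichlet BCs
    (these discretization choices are assumptions for illustration)."""
    up = np.concatenate([[0.0], u_new, [0.0]])      # pad with boundary values
    ux  = (up[2:] - up[:-2]) / (2.0 * dx)
    uxx = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / dx**2
    return (u_new - u_old) / dt + u_new * ux - mu * uxx
```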
In Tab. 1 we show the results for problem (12).
Method ni fe mv total
linesearch 594 1818 4853 6671
dogleg 644 3559 6140 9699
nngcg 229 4426 11769 16195
gnks 187 852 7067 7919
rmr a 230 926 4471 5397
Table 1: Results for Burgers' Equation.
A plot of the solutions is given in Fig. 1. The table shows the cumulative
value of the counters for each method after completing the computation of u
If we look at the number of nonlinear iterations (ni), we see that acceleration
indeed reduces this number. However, in the case of gnks this does not result in less
work, because the number of matrix multiplications (mv) increases too much. Here
both the Galerkin approaches rmr a and rmr b are less expensive than all the other
methods. rmr a being the winner.
Figure 1: Solution of Burgers' Equation.
8.2. The Bratu problem. As a second test problem we consider the Bratu
problem [12, 4]. We seek a solution (u, λ) of the nonlinear boundary value problem
Δu + λ e^u = 0 in Ω,  u = 0 on ∂Ω.   (13)
For Ω we took the unit square and we discretized with finite differences on a 31 × 31
regular grid. It is known, cf. [12], that there exists a critical value λ* such that for λ < λ* problem (13) has two solutions and for λ > λ* problem (13) has no solutions. In order to locate this critical value we use the arc length continuation method as described in [12, sections 2.3 and 2.4]. Problem (13) is replaced by a problem
Method ni fe mv pre total
linesearch 391 1013 3732 3421 8166
dogleg 381 2664 3010 3010 8684
nngcg 361 1297 4243 3091 8631
gnks 358 1056 6896 2780 10732
rmr a 389 539 4005 3399 7943
Table 2: Results for the Bratu Problem, solved by the arc length continuation method.
Method ni fe mv pre total
linesearch 29 85 336 308 729
dogleg
gnks 38 119 1806 370 2295
rmr a 6 13 77 55 145
Table 3: Single solve of the Bratu Problem.
of an augmented form, where φ, a scalar-valued function, is chosen such that s is some arc length on the solution branch and u_s is the solution of (13) for λ(s). We preconditioned GMRES with ILU(0) [18] of the discretized Laplace operator Δ.
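For reference, the discrete residual of (13) on the interior grid points looks as follows (a sketch with a uniform 5-point Laplacian; the paper uses a 31 × 31 regular grid, and the routine name is ours).

```python
import numpy as np

def bratu_residual(u, lam, h):
    """Residual of the discrete Bratu problem Delta u + lam * exp(u) = 0 on
    the unit square, with a uniform 5-point stencil (an assumption made for
    illustration) and u = 0 on the boundary.  `u` is an (m, m) array of
    interior values, h is the mesh width."""
    up = np.pad(u, 1)                                # zero Dirichlet boundary
    lap = (up[:-2, 1:-1] + up[2:, 1:-1] +
           up[1:-1, :-2] + up[1:-1, 2:] - 4.0 * u) / h**2
    return lap + lam * np.exp(u)
```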
Tab. 2 shows the results after a full continuation run: starting from the smallest solution (u, λ), the solution branch is followed along the (discretized) arc. We see that acceleration may be useful, in spite of the fact that there is little room for it, because on average only approximately 4.5 Newton iterations were necessary to compute the solution per continuation step. In this example rmr b performs better than rmr a.
Tab. 3 shows the results for the case where we solve (13) for a fixed λ near the critical value. In this case Galerkin acceleration is even more useful and the differences are more pronounced.
The sup norm of the solution for the different values of - are plotted in Fig. 2.
The two solutions at - 4 along the diagonal of the unit square are shown in Fig. 3.
8.3. The Driven Cavity Problem. In this Section we present test results for
the classical driven cavity problem from incompressible fluid flow. We follow closely
Figure 2: Sup norms of the solution u along the arc.
Figure 3: The two solutions u along the diagonal of the domain.
the presentations in [12, 4]. In the stream function-vorticity formulation the equations are posed in terms of the stream function ψ and the vorticity, with boundary conditions on ψ and on the normal derivative ∂ψ/∂n, where Ω is the unit square and the viscosity ν is the reciprocal of the Reynolds number Re. In terms of ψ alone this can be written as a single fourth-order equation, subject to the same boundary conditions. This equation was discretized with finite
differences on a 25 × 25 grid, see Fig. 4. The grid lines are distributed as the roots of the Chebyshev polynomial of degree 25. As preconditioner we used the Modified ILU [13] decomposition of the biharmonic operator. Starting from the solution for Re = 0, we computed several solutions using the arc length continuation method (cf. the previous example, and [12]) with step sizes Δs = 100 in the Re-direction.
Tab. 4 shows the results of this test when using 10 steps of GMRES and CGS2 [10]
for the correction equation. In the case of CGS2 we approximately solved the correction
equation to a relative residual norm precision of 2 \Gammak , where k is the current
Newton step [8], with a maximum of 50 steps. Clearly, the methods using (the basis produced by) 10 steps of GMRES perform very poorly for this example. Only gnks is able to complete the full continuation run, but requires a large number of Newton
steps. If we look at the results for the AIN schemes for which CGS2 is used, we see
that, except for the linesearch method, these methods perform much better. The
Restricted Minimal Residual methods are again the most efficient ones.
GMRES
Method ni fe mv pre total
linesearch fails at Re = 400 after a total of 545.
dogleg fails at Re = 100 after a total of 113.
nngcg fails at Re = 2200 after a total of 19875
gnks 641 2315 30206 6210 38731
rmr a fails at Re = 2000 after a total of 13078
rmr b fails at Re = 800 after a total of 7728
Method ni fe mv pre total
linesearch fails at Re = 1300 after a total of 4342.
rmr a 137 297 6266 5969 12532
Table 4: Results for the Driven Cavity problem, solved by the arc length continuation method.
Figure 4: Grid for the Driven Cavity problem (25 × 25).
Figure 5: Stream lines of the Driven Cavity problem, Re = 100.
This test also reveals a possible practical drawback of methods like dogleg and
gnks. These methods exploit an affine subspace to find a suitable update for the
approximation. This may fail, when the problem is hard or when the preconditioner
is not good enough. In that case the dimension of the affine subspace must be large,
which may be, because of storage requirements and computational overhead, not fea-
sible. For the schemes that use approximate solutions of the correction equation,
delivered by some arbitrary, iterative method, e.g., CGS2, one can easily adapt the
precision, which leaves more freedom.
Figure 6: Stream lines of the Driven Cavity problem, Re = 400.
Figure 7: Stream lines of the Driven Cavity problem, Re = 1600.
Figure 8: Stream lines of the Driven Cavity problem, Re = 2000.
Figure 9: Stream lines of the Driven Cavity problem, Re = 3000.
Plots of the stream lines for the values
0:0; 0:0025; 0:001; 0:0005; 0:0001; 0:00005
(cf. [12]) are given in Fig. 5-9. The plots show virtually the same solutions as in [12].
9. Conclusions. We have shown how the classical Newton iteration scheme for
nonlinear problems can be accelerated in a similar way as standard Richardson-type
iteration schemes for linear equations. This leads to the AIN framework in which
many well-known iterative methods for linear-, eigen-, and general nonlinear problems
fit. From this framework an overwhelming number of possible iteration schemes can be formulated. We have selected a few and shown by numerical experiments that
especially the Restricted Minimal Residual methods can be very useful for further
reducing computational costs.
--R
The principle of minimized iterations in the solution of the matrix eigenvalue problem
On nonlinear generalized conjugate gradient methods
te Riele
Hybrid Krylov methods for nonlinear systems of equations
The Davidson method
The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real symmetric matrices
Nested Krylov methods and preserving the orthogonality
Generalized conjugate gradient squared
QMR: A quasi minimal residual method for non-Hermitian linear systems
A class of first order factorizations methods
The partial differential equation u t
On a class of nonlinear equation solvers based on the residual norm reduction over a sequence of affine subspaces
Acceleration techniques for decoupling algorithms in semiconductor simulation
Solution of systems of linear equations by minimized iteration
An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
An algorithm for generalized matrix eigenvalue problems
Generalizations of Davidson's method for computing eigenvalues of large non-symmetric matrices
Approximate solutions and eigenvalue bounds from Krylov subspaces
Krylov subspace method for solving large unsymmetric linear systems
GMRES: A generalized minimum residual algorithm for solving nonsymmetric linear systems
A Jacobi-Davidson iteration method for linear eigenvalue problems
GMRESR: A family of nested GMRES methods
Further experiences with GMRESR
Generalized conjugate-gradient acceleration of nonsymmetrizable iterative methods
--TR
--CTR
Keith Miller, Nonlinear Krylov and moving nodes in the method of lines, Journal of Computational and Applied Mathematics, v.183 n.2, p.275-287, 15 November 2005
P. R. Graves-Morris, BiCGStab, VPAStab and an adaptation to mildly nonlinear systems, Journal of Computational and Applied Mathematics, v.201 n.1, p.284-299, April, 2007
Heng-Bin An , Ze-Yao Mo , Xing-Ping Liu, A choice of forcing terms in inexact Newton method, Journal of Computational and Applied Mathematics, v.200 n.1, p.47-60, March, 2007
Heng-Bin An , Zhong-Zhi Bai, A globally convergent Newton-GMRES method for large sparse systems of nonlinear equations, Applied Numerical Mathematics, v.57 n.3, p.235-252, March, 2007
Monica Bianchini , Stefano Fanelli , Marco Gori, Optimal Algorithms for Well-Conditioned Nonlinear Systems of Equations, IEEE Transactions on Computers, v.50 n.7, p.689-698, July 2001 | nonlinear problems;iterative methods;newton's method;inexact Newton |
289882 | Quantitative Evaluation of Register Pressure on Software Pipelined Loops. | Software Pipelining is a loop scheduling technique that extracts loop parallelism by overlapping the execution of several consecutive iterations. One of the drawbacks of software pipelining is its high register requirements, which increase with the number of functional units and their degree of pipelining. This paper analyzes the register requirements of software pipelined loops. It also evaluates the effects on performance of the addition of spill code. Spill code is needed when the number of registers required by the software pipelined loop is larger than the number of registers of the target machine. This spill code increases memory traffic and can reduce performance. Finally, compilers can apply transformations in order to reduce the number of memory accesses and increase functional unit utilization. The paper also evaluates the negative effect on register requirements that some of these transformations might produce on loops. | Introduction
Current high-performance floating-point microprocessors try to maximize the exploitable
parallelism by: heavily pipelining functional units (1;2) , making aggressive
use of parallelism (3;4) , or a combination of both (5) which is the trend in current
and future high-performance microprocessors. To exploit effectively this amount of
available parallelism, aggressive scheduling techniques such as software pipelining are
required.
Software pipelining (6) is an instruction scheduling technique that exploits the
Instruction Level Parallelism (ILP) of loops by overlapping the execution of successive
iterations. There are different approaches to generate a software pipelined schedule
for a loop (7) . Modulo scheduling is a class of software pipelining algorithms that
relies on generating a schedule for an iteration of the loop such that when this same
schedule is repeated at regular intervals, no dependence is violated and no resource
usage conflict arises. Modulo scheduling was proposed at the beginning of the 80s (8) .
Since then, many research papers have appeared on the topic (9;10;11;12;13;14;15;16) , and
it has been incorporated into some production compilers (17;18) .
The drawback of aggressive scheduling techniques such as software pipelining is
that they increase register requirements. In addition, increasing either the stages of
functional units or the number of functional units, which are the current trends in
microprocessor design, tends to increase the number of registers required by software
pipelined loops (19) .
The register requirements of a schedule are of extreme importance for compilers
since any valid schedule must fit in the available number of registers of the target ma-
chine. In this way there has been a research effort to produce optimal/near-optimal
modulo schedules with minimum/reduced register requirements. The optimal methods
are mainly based on linear programming approaches (20;21) . Unfortunately, optimal
techniques have a prohibitive computational cost which make them impractical
for product compilers. Some practical modulo scheduling approaches use heuristics to
produce near-optimal schedules with reduced register requirements (11;16;22) . Other
approaches try to reduce the register requirements of the schedules by applying a
post-pass process (23;24) .
There have been proposals to perform register allocation for software pipelined
loops (25;26;27) . If the number of registers required is larger than the available number
of registers, spill code has to be introduced to reduce register usage. Different alternatives
to generate spill code for software pipelined loops have been proposed and
evaluated in (28) . This spill code can also reduce performance. In this paper we show
that the performance and memory traffic of aggressive (in terms of ILP) machines
are heavily degraded due to a lack of registers.
To avoid this performance degradation big register files are required. In addition,
the number of read and write ports increases with the number of functional units.
The number of registers and the number of ports have a negative effect on the area
required by the register file (29) and on the access time to the register file (30) . Some
new register file organizations have been proposed to have big register files (in terms
of registers and access ports) without degrading either area or access time (31;32;33;34) .
Some of these organizations have been used, combined with register-sensitive software
pipelining techniques (35), resulting in better performance. This shows that most of
the techniques for reducing the effects of register pressure are complementary.
In this paper, register requirements of pipelined floating-point intensive loops are
evaluated. Several studies are performed. First, the register requirements of loop
invariants, which are a machine-independent characteristic of the loops, are studied
in Section 3. Section 4 carries out a study of the register requirements of loop variants
as a function of the latency and the number of functional units. Section 5 studies
the cumulative register requirements of both (loop variants and invariants) and shows
that loops with high register requirements represent a high percentage of the execution
time of the Perfect Club. Section 6 considers the effects of a limited size register file
on performance and memory traffic; for this purpose spill code has been added to
those loops that require more registers than are available. Finally, Section 7 analyzes
the effects on the register requirements of some optimizations that try to improve
performance by reusing data and increasing functional unit usage.
2. Related Concepts and Experimental Framework
2.1. Overview of Modulo Scheduling
In a software pipelined loop, the schedule for an iteration is divided into stages so
that the execution of consecutive iterations that are in distinct stages is overlapped.
The number of stages in one iteration is termed stage count (SC). The number of
cycles per stage is the initiation interval (II).
The execution of a loop can be divided into three phases: a ramp up phase that
fills the software pipeline, a steady state phase where the software pipeline achieves
maximum overlap of iterations, and a ramp down phase that drains the software
pipeline. During the steady state phase of the execution, the same pattern of operations
is executed in each stage. This is achieved by iterating on a piece of code,
termed the kernel, that corresponds to one stage of the steady state phase.
The initiation interval II between two successive iterations is bounded either by
loop-carried dependences in the graph (RecMII) or by resource constraints of the
architecture (ResMII). This lower bound on the II is termed the Minimum Initiation
Interval (MII= max (RecMII, ResMII)). The reader is referred to (18;13) for an
extensive dissertation on how to calculate ResMII and RecMII.
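To make the resource bound concrete, the sketch below computes ResMII as the most constrained resource class and takes the maximum with a recurrence bound; the operation counts, unit counts and RecMII value are invented for illustration, and the full calculation is described in the references cited above.

```c
#include <stdio.h>

/* Minimal sketch of the MII calculation: ResMII is bounded below by the
 * most heavily used resource class, RecMII by the recurrence circuits.
 * The operation counts, unit counts and RecMII below are made-up examples. */
static int ceil_div(int a, int b) { return (a + b - 1) / b; }

int main(void) {
    /* ops[r] = operations of the loop body that use resource class r,
       units[r] = number of functional units of that class */
    int ops[]   = { 3, 2, 4 };   /* e.g. additions, multiplications, memory ops */
    int units[] = { 2, 2, 2 };
    int res_mii = 0;
    for (int r = 0; r < 3; r++) {
        int bound = ceil_div(ops[r], units[r]);
        if (bound > res_mii) res_mii = bound;
    }
    int rec_mii = 1;             /* assumed recurrence bound */
    int mii = res_mii > rec_mii ? res_mii : rec_mii;
    printf("ResMII=%d RecMII=%d MII=%d\n", res_mii, rec_mii, mii);
    return 0;
}
```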
2.2. Register Requirements
Values used in a loop correspond either to loop-invariant variables or to loop-
variant variables. Loop invariants are repeatedly used but never defined during loop
execution. Loop invariants have a single value for all the iterations of the loop;
each invariant requires one register regardless of the scheduling and the machine
configuration.
For loop variants, a value is generated in each iteration of the loop and, therefore,
there is a different value corresponding to each iteration. Because of the nature
of software pipelining, lifetimes of values defined in an iteration can overlap with
lifetimes of values defined in subsequent iterations. Lifetimes of loop variants can be
measured in different ways depending on the execution model of the machine. We
assume that a variable is alive from the beginning of the producer operation, until
the start of the last consumer operation.
By overlapping the lifetimes of the different iterations, a pattern of length II cycles
that is indefinitely repeated is obtained. This pattern indicates the number of values
that are live at any given cycle. As it is shown in (26) , the maximum number of
simultaneously live values (MaxLive) is an accurate approximation of the number of
registers required by the schedule.
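A minimal sketch of how MaxLive might be computed from the lifetimes of a single iteration, assuming each lifetime is an interval [start, end) measured in cycles and the kernel repeats every II cycles; the lifetimes and II used in the example are hypothetical.

```c
#include <stdio.h>

/* Sketch of a MaxLive computation: in steady state a copy of a value with
 * lifetime [start, end) is live at kernel cycle c for every integer j such
 * that start <= c + j*II < end.  MaxLive is the maximum, over the II kernel
 * cycles, of the number of simultaneously live copies. */
typedef struct { int start, end; } Lifetime;

static int max_live(const Lifetime *lt, int n, int ii) {
    int best = 0;
    for (int c = 0; c < ii; c++) {
        int live = 0;
        for (int v = 0; v < n; v++)
            for (int t = lt[v].start; t < lt[v].end; t++)
                if (t % ii == c) live++;   /* one overlapped copy per match */
        if (live > best) best = live;
    }
    return best;
}

int main(void) {
    Lifetime lt[] = { {0, 7}, {1, 4}, {2, 9} };   /* hypothetical lifetimes */
    printf("MaxLive = %d\n", max_live(lt, 3, 3)); /* II = 3 */
    return 0;
}
```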
Values with a lifetime greater than II pose an additional difficulty since new values
are generated before previous ones are used. One approach to fix this problem is to
provide some form of register renaming so that successive definitions of a value use
distinct registers. Renaming can be performed at compile time by using modulo
variable expansion (MVE) (36) , i.e., unrolling the kernel and renaming at compile
time the multiple definitions of each variable that exist in the unrolled kernel. A
rotating register file can be used to solve this problem without replicating code by
renaming different instantiations of a loop variant at execution time (37) . In this paper
we assume the presence of rotating register files.
2.3. Experimental Framework
The experimental evaluation has been done using all the innermost loops of the
Perfect Club Benchmark Suite (38) that are suitable for software pipelining. These
loops have been obtained with the ICTINEO compiler (39) . ICTINEO is a source-to-
source restructurer developed on top of Polaris (40) with an internal representation that
combines both high and low-level information. It includes some basic transformations
that allow us to obtain optimized data dependence graphs.
A total of 1258 loops suitable for software pipelining have been used. This set
includes all the innermost loops that do not have subroutine calls or conditional exits.
Loops with conditional structures in their bodies have been IF-converted (41) , with
the result that the loop now looks like a single basic block. The loops represent 78% of
the total execution time of the Perfect Club measured on a HP-PA 7100. In addition,
to show the effects of aggressive optimizations in individual loops, we have also used
some Livermore Loops (42) .
The loops have been scheduled for a wide range of VLIW-like target configura-
tions, with different number of functional units and latencies. Table 1 shows the
different configurations used along the paper. All functional units are fully pipelined,
except the divider, which is not pipelined at all. We labeled the different configurations
by PxLy where x is the number of functional units of each kind and y is the
latency (number of stages) of the most heavily used functional units, that is, the adder
and the multiplier. We considered a constant latency for loads, stores, divisions and
square roots independently of the configuration.
Different metrics are used along the paper to evaluate performance. On the one
side, the register requirements are evaluated computing the maximum number of
simultaneously live values MaxLive. It has been shown (26) that some allocation
strategies almost always achieve the MaxLive lower bound. In particular, the wands-
only strategy using end-fit with adjacency ordering almost never requires more than
MaxLive registers. Second, execution times are approximated as the II obtained
after modulo scheduling times the trip count of the loop. Finally, memory traffic is
approximated evaluating the number of memory accesses that are needed to execute
a loop, considering both the memory accesses defined in the graph and the spill code
introduced due to having a finite register file.
3. Register Requirements of Loop Invariants
Loop invariants are values that are repeatedly used by a loop at each iteration,
but never written by it. That is, they are defined before entering the loop, are used
by it, and are not redefined at least until the loop has finished.
Loop-invariant variables can be either stored in registers or in memory but, since
memory bandwidth is without doubt the most performance limiting factor in current
processors, even the most simple optimizing compilers try to hold them in registers
during the execution of the loop. For this purpose loop-invariant variables are loaded
from memory to a register before entering the loop that uses them. Even assuming no
memory bandwidth restrictions loading these variables to registers before entering the
loop saves instruction bandwidth in load/store architectures, i.e. almost all current
processors.
Another source of loop invariants are the invariant computations (i.e. computations
that produce the same result during all the iterations of the loops). These
computations can be extracted out of the loop -where they are performed at every
iteration- and computed only once before entering the loop. Also because of memory
bandwidth and instruction bandwidth, the partial result of these computations is held
in registers.
This optimization is termed loop-invariant removal and is one of the optimizations
performed by the ICTINEO compiler. In fact, if we consider load and store
operations as computations, they also can be extracted during the loop-invariant
removal optimization. Figure 1a shows an example. Figure 1b shows a low-level
representation of the loops before removing loop-invariant computations. Figure 1c
shows the same loops after removing loop-invariant computations. Notice that q and
are loop-invariant variables for both loops, so they can be loaded to registers before
entering them. C(j) is also a loop-invariant variable for the innermost loop, so it can
be extracted from it, but not from the outermost loop. Because v1 and v2 are loop
invariants (which are assumed to be allocated in registers), the computation v3 is a
loop-invariant computation, and therefore can be extracted from the innermost loop.
The extraction of loop invariants has lead to a smaller innermost loop -the one
that is executed most of the time- with less operations. In the original one there
were: 1 addition, 2 multiplications and 5 memory accesses, while in the optimized
one there are only 1 addition, 1 multiplication and 2 memory accesses.
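Figure 1 itself is not reproduced here, so the following C fragment sketches the same kind of transformation on an invented loop nest: the load of C[j] and the product q*C[j], which do not change inside the innermost loop, are hoisted into a register before entering it.

```c
/* Hypothetical illustration of loop-invariant removal (not the paper's
 * Figure 1): q and C[j] do not change in the innermost loop, so the load
 * of C[j] and the product q*C[j] are hoisted out of it. */
void before(double *A, const double *B, const double *C,
            double q, int n, int m) {
    for (int j = 0; j < m; j++)
        for (int i = 0; i < n; i++)
            A[j*n + i] = B[j*n + i] + q * C[j];   /* q*C[j] recomputed each iteration */
}

void after(double *A, const double *B, const double *C,
           double q, int n, int m) {
    for (int j = 0; j < m; j++) {
        double v = q * C[j];          /* loop invariant of the innermost loop */
        for (int i = 0; i < n; i++)
            A[j*n + i] = B[j*n + i] + v;
    }
}
```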
The ICTINEO compiler performs -among other optimizations- an aggressive extraction
of loop-invariant computations. We have used the optimized dependence
graph to evaluate the register requirements of loops due to loop invariants. Figure 2
shows the cumulative distribution of the requirements for loop invariants for all 1258
loops. In general loops have very few loop invariants. For instance 25% of the loops
have no loop invariants and 95% of the loops have 8 or less invariants. Nevertheless
a few loops have a high number of loop invariants. For instance 9 loops have more
than invariants and 1 of them requires 68 loop invariants.
4. Register Requirements of Loop Variants
Unlike loop invariants, the number of registers required by loop variants is a schedule
dependent characteristic. So, the register requirements depend on the scheduling
technique, and the target machine configuration for which the scheduling is performed,
as well as the topology of the loop.
In this section we are interested in the effects of the machine configuration on the
register requirements. The main characteristics that influence the final schedule, and
therefore the register requirements, are the number of stages of functional units and
the number of functional units. That is, the degree of pipelining and the degree of
parallelism. For this purpose we have generated the software pipelined schedule for
all the Perfect Club loops for the target configurations shown in Table 1.
Figure
3 shows the cumulative distribution of registers for each configuration.
Notice that when the product of the latency of functional units and the number of
functional units goes up, the number of registers needed by the loops also increases.
Note that the number of functional units has a slightly bigger effect on the number
of registers required than the degree of pipelining has. This is mainly due to the fact
that the latency of some functional units is kept constant for all the configurations.
There is a small number of loops (6%) that do not require any register for loop
variants. The loop bodies of these loops have no loop variants because they only have
store operations. These loops are typically used to initialize data structures.
There is another region (6% to 30%) where the loops have few register requirements
and where the register requirements of the loops are not influenced by the
number of stages. This is due to the fact that these loops have no arithmetic opera-
tions, that is, their bodies only have load and store operations. These loops have few
register requirements to hold the values between loads and stores. They are basically
used to copy data structures.
5. Combined Register Requirements
Although some special architectures, such as the Cydra-5 (43) , have separated register
files for loop-invariant variables (global register file) and loop-variant variables
(rotating register file), all the current superscalar microprocessors, for which software
pipelining can also be applied, have a single register file to store both sets of variables.
Therefore it is of great interest to know the combined register pressure of loop-variant
variables plus loop-invariant variables. Figure 4 shows the cumulative distribution
of the register requirements of the loops of the Perfect Club when software pipelined
for the configurations of Table 1.
It is interesting to notice that 96% of the loops can be scheduled with 32 registers
and without adding spill code for the less aggressive configuration (P1L2). On
the contrary, only 85% of the loops can be scheduled with 32 registers for the more
aggressive configuration (P2L6). If 64 registers were available we would be able to
schedule 99.5% and 96% of the loops respectively. Also, approximately 60% of the
loops (the exact figure varies depending on the configuration) require a small number of
registers (8 or fewer).
From these figures, one can conclude that register requirements are not extremely
high, and that 64 registers might seem enough for the configurations used. Even
so, we have observed that small loops represent a small percentage of the execution
time and that most of the time is spent in big loops which, in general, have
higher register requirements.
Figure
5 shows the dynamic register requirements, where each loop has been
weighted by its execution time. The graph of Figure 5 is similar to the one of Figure
4 but instead of representing the percentage of loops that require a certain amount
of registers, it represents the percentage of time spent on loops that require a certain
amount of registers.
Data gathered from this figure shows that small loops requiring less than 8 registers
represent only a small portion of the execution time (about 15%). If only 32 registers are
available, the loops that can be scheduled without adding spill code represent only
between 67% and 52% of the execution time. And even for a machine with 64 registers,
the loops that can be scheduled without adding spill code represent only between 78%
and 69% of the execution time. Even more, a big percentage of the execution time of
the Perfect Club is spent on a few loops that require more than 100 registers.
6. Effects of a Limited Register File
In the previous sections an infinite number of registers has been assumed. In this
section we study the effects of having a limited amount of registers on performance.
When there is a limited number of registers and the register allocator fails to find a
solution with the number of registers available, some additional action must be taken.
In (28) we have proposed several alternatives to schedule software pipelined loops with
register constraints. For the purposes of our experiments, we use the best option, in
terms of performance for the loops, to add spill code.
Current microprocessors have only 32 floating-point registers and 32 integer reg-
isters. We think that future generations of microprocessors will enlarge the register
file to 64 registers (at least for the floating-point register file). Since this study is targeted
to floating point intensive applications, we study the effects of having register
files with 32 registers and with 64 registers compared to the ideal case of having an
infinite number of registers.
Adding spill code to generate a valid schedule with a given number of registers
produces two negative effects. One of them can hurt performance indirectly, since
spill code adds new load and store operations, which might interfere in the memory
subsystem or cause additional cache misses. The other effect, that affects performance
directly, is that if new operations are added it might be necessary to increase the II
of the loop, reducing the throughput even in the hypothetical case of having a perfect
memory system.
Figure
6 shows the number of memory accesses required to execute all the loops for
the six configurations we use. Notice that the number of memory accesses is the same
for all the configurations if no spill code is added. Also predictable is the fact that
the number of memory accesses increases as the number of registers is reduced. It can
also be observed that the growth of memory accesses is more dramatic for aggressive
configurations. For instance the configuration P2L6 with 64 registers requires 52%
more accesses than an ideal machine with an infinite register file and 176% more
memory accesses if it only has 32 registers. In any case, it is difficult to predict the
performance degradation that these additional accesses can have without simulating
the memory subsystem, which is out of the scope of this paper.
In any case, we can easily predict the direct effect on performance that the spill
code has on the execution time of the loops because of larger IIs. Figure 7 shows the
number of cycles required to execute all the loops for the six configurations, with an
infinite number of registers, with 64 registers, and with 32 registers. Notice that, as
expected, fewer registers mean lower performance (more cycles are required to execute
all the loops). In general, the more aggressive the machine configuration is, the bigger
the performance degradation. For instance, a P1L6 machine can have a speed-up of
1.19 if the register file is doubled from 32 registers to 64 registers. Instead, a P2L6
machine will have a speed-up of 1.29 by doubling the register file.
Through the data gathered from this experiment it can be concluded that simply
adding functional units without caring about the number of registers results in
performance figures lower than expected (due to the negative effects of the additional
spill code). For instance, doubling the number of functional units (i.e. going from
P1L6 to P2L6) produces a speed-up of 1.59 if the number of registers is not limited.
For machines with 64 and 32 registers, the speed-up is 1.56 and 1.43, respectively.
However, if the number of registers is doubled together with the number of functional
units (i.e. from P1L6 with 32 registers to P2L6 with 64 registers), the speed-up is
1.85.
7. Optimizations and Register Requirements
In addition to the effect of latency and the number of functional units, register
requirements of loops can also increase when certain optimizations are applied to the
loops. In order to see how some 'advanced' optimizations affect the register requirements
of loops we have hand-optimized some of the Livermore Loops and a few loops
from the Perfect Club, where the optimizations are applicable. We have not considered
common optimizations such as loop-invariant removal, common subexpression
elimination, redundant load and store removal, or dead code removal which we assume
are applied by any compiler. In the following subsections, we present a brief
description of the optimizations studied and the effect that applying the optimizations
has on both performance and register requirements.
7.1. Loop Unrolling
Loop unrolling (44) is an optimization used by the compilers of current micro-
processors. It allows a better usage of resources because several iterations can be
scheduled together, increasing the number of instructions available to the scheduler.
It also reduces the loop overhead caused by branching and index update.
In our case we achieve efficient iteration overlapping through Software Pipelining.
Unrolling is required to match the number of resources required by the loop with the
resources of the processor and also to schedule loops with fractional MII (45) . As an
example, assume a loop with 5 additions and a processor with 2 adders. If there are
no recurrences and additional resources do not limit the scheduling, this loop can be
scheduled with an II of 3 cycles. The loop will be executed at 83.3% of the peak
machine performance. If the loop is unrolled once before scheduling, we obtain a new
loop with 10 additions. This new loop can be scheduled with an II of 5 cycles,
but each iteration of the new loop corresponds to two iterations of the original loop, so
on average an iteration is completed in 2.5 cycles. The unrolled loop can be executed
at 100% of the peak machine performance. Unrolling increases the register requirements
because bigger loops have more temporal variables to store, which usually requires
more registers.
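As a hypothetical illustration, the fragment below unrolls a simple loop once; the unrolled body exposes twice as many operations to the modulo scheduler, which is what allows the better resource usage described above, at the cost of more live values.

```c
/* Hypothetical unrolling example: two original iterations per pass of the
 * unrolled loop.  n is assumed to be even to keep the sketch short. */
void rolled(double *a, const double *b, const double *c, int n) {
    for (int i = 0; i < n; i++)
        a[i] = b[i] + c[i];
}

void unrolled_by_2(double *a, const double *b, const double *c, int n) {
    for (int i = 0; i < n; i += 2) {   /* two original iterations per pass */
        a[i]     = b[i]     + c[i];
        a[i + 1] = b[i + 1] + c[i + 1];
    }
}
```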
Table
2 shows the effects of unrolling some loops on their register requirements.
For the loops used, unrolling them once results in a better usage of the available
resources. Unfortunately it also produces an increase in the register requirements
which, as we have seen in Section 6, can degrade performance if the number of available
registers is less than the registers required.
7.2. Common Subexpression Elimination Across Iterations
Common subexpression elimination across iterations (CSEAI) is an extension for
inner loops of the common subexpression elimination optimization applied by almost
all optimizing compilers. It consists of the reuse of values generated in previous
iterations. The reused data has to be stored in a register and, since the value is used
across iterations, more than one physical register is required in order not to overwrite
a live value with a new instantiation of the same variable. This is an extension to loops
of the 'common subexpression elimination' optimization. It is more sophisticated in
the sense that it can only be applied to loops and that it requires a dependence
analysis to know which values of the current iteration are used in further iterations.
The objective of this optimization will be to reduce the number of operations of the
loop body which can lead to a lower II. Even if after applying the optimization the
final II is not lowered, it is worthwhile applying it if the number of memory accesses
is reduced.
As an example consider the loop of Figure 8a, whose unoptimized body is shown in
Figure 8b. We have added to each of the loop variants generated by the operations a
subindex associated to the iteration where it is generated. For instance, V1_i represents
the value generated by the first operation at iteration i.
A conventional common subexpression elimination step will recognize that both
V2_i and V4_i load the same value Z(i+1). Therefore, operation V4_i can
be removed and all subsequent uses of V4_i can be substituted by uses of V2_i. This
optimization produces as output the loop shown in Figure 8c. Notice that because of
this optimization the loop requires only 7 operations instead of 8.
The common subexpression elimination across iterations optimization can detect that
the value of V3 at the previous iteration (V3_{i-1}) is equivalent to the value of V6_i.
Therefore the operations required to calculate V6_i can be removed, and all uses
of V6_i substituted by V3_{i-1}. The same can be done with other operations of the
loop, producing the loop body of Figure 8d. Notice that the resulting
loop body has only 4 operations.
Table
3 shows the II and the memory traffic of some loops with and without
applying CSEAI. Notice that in all cases there are improvements in memory traffic,
in the II or in both. It is also interesting to notice that, even though the optimization
reduces the number of operations of the loops and therefore the number of lifetimes
per iteration, the register requirements increase in most cases. This is mainly due to
the fact that some loop variants must be preserved across several iterations.
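Since Figure 8 is not reproduced here, the following invented fragment shows the flavour of CSEAI: the element loaded as Z[i+1] in one iteration is the Z[i] of the next, so it can be carried across iterations in a register instead of being reloaded.

```c
/* Hypothetical sketch of common subexpression elimination across
 * iterations: Z[i+1] loaded in iteration i is reused as Z[i] in iteration
 * i+1, removing one load per iteration at the cost of one more register. */
void before(double *Y, const double *Z, int n) {
    for (int i = 0; i < n; i++)
        Y[i] = Z[i] + Z[i + 1];
}

void after(double *Y, const double *Z, int n) {
    double z_cur = Z[0];                 /* carried across iterations */
    for (int i = 0; i < n; i++) {
        double z_next = Z[i + 1];        /* the only load left in the body */
        Y[i] = z_cur + z_next;
        z_cur = z_next;                  /* becomes Z[i] of the next iteration */
    }
}
```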
7.3. Back-substitution
Back-substitution (37) is a technique that increases the parallelism of loops that
have single recurrences and therefore limited parallelism, but it increases the number
of operations executed per iteration. For instance, consider the Livermore Loop 11
shown in Figure 9a. There is a recurrence of weight 1 that limits the MII . If we
have an adder with 6 cycles of latency, this loop can be scheduled with an II of 6
cycles. Using back-substitution once, the loop becomes the one shown in Figure 9b.
Now the recurrence that limits the schedule of the loop has a weight of 2. Obviously
in this case we have to do two additions instead of one, but this transformed loop
can be executed with an II of 3 cycles doubling the performance. To avoid part
of this increase in operations, other optimizations can be applied such as common
subexpression elimination across iterations. In this example it reduces the number of
loads, but not the number of additions.
A way of further reducing the increment of operations is to perform unroll as well
as back-substitution. If we apply unroll to Livermore Loop 11 together with back-substitution
we can obtain the loop of Figure 9c. Notice that in this loop we require
3 additions to compute two iterations of the original loop. The loop of Figure 9c can
be executed with an II of 6 cycles but it performs two iterations, so the performance
is the same as in the loop of Figure 9b and requires fewer operations.
When back-substitution is applied the number of operations per iteration is in-
creased, and the II is reduced. In general, bigger loops require more registers because
they have more temporal values to store; in addition a reduced II requires more registers
for the same temporal values, because new values are created in each iteration.
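A hypothetical sketch of back-substitution on a first-order partial-sum recurrence (in the spirit of Livermore Loop 11, but not its exact code): substituting the recurrence into itself once doubles the dependence distance, at the cost of an extra addition per iteration.

```c
/* Hypothetical back-substitution sketch.  In the original loop x[i] depends
 * on x[i-1], so the recurrence limits II to the adder latency.  After one
 * substitution x[i] depends on x[i-2], so two iterations can be in flight
 * and the recurrence-imposed II is halved. */
void original(double *x, const double *y, int n) {
    for (int i = 1; i < n; i++)
        x[i] = x[i - 1] + y[i];
}

void back_substituted(double *x, const double *y, int n) {
    x[1] = x[0] + y[1];                       /* peel the first iteration */
    for (int i = 2; i < n; i++)
        x[i] = x[i - 2] + y[i - 1] + y[i];    /* dependence distance is now 2 */
}
```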
Table
4 shows the effect on both performance and register requirements when applying
back-substitution to Livermore Loops 11 and 5. Notice that for loop 5 in some
cases the II is 13. This is because, in those cases, the scheduler used fails to find a
schedule in 12 cycles, even though it exists.
7.4. Blocking
Blocking at the register level is a well-known optimization in the context of dense
matrix linear algebra (46) . Blocking is a transformation applied to multiple nested
loops that finds opportunities for reuse of subscripted variables and replaces the
memory references involved by references to temporary scalar variables allocated in
registers.
Blocking basically consists of unrolling the outer loops and restructuring the body
of the innermost loop so that the memory references that are reused in the same
iteration are held in registers for further uses. Blocking reduces the number of memory
accesses, so if the loop is memory-bound, a reduction of the number of memory
references improves the maximum performance. Also, a reduction of the memory
references can improve performance due to less interference in the memory subsystem.
Finally, if the innermost loop is bound by recurrences, doing several iterations of the
outermost loops, in one iteration of the innermost loop, can improve the resource
usage.
Unfortunately, blocking increases the size of the loop, which in general increases
the register requirements. It also enlarges the lifetimes of some loop variants, which
must be stored in registers for a longer time because they are reused later. This
enlargement of lifetimes also contributes to an increase of the register requirements
of the transformed loop versus the original one. Table 5 shows the II , the effective II
per iteration, the memory traffic and the registers required when blocking is applied
to a basic matrix by matrix kernel.
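The following invented fragment sketches register blocking (unroll-and-jam) of a matrix-by-matrix kernel: two rows of the result are computed together so that each element of B loaded in the inner loop is reused from a register, which reduces memory traffic but increases the number of simultaneously live values.

```c
/* Hypothetical register-blocking (unroll-and-jam) sketch of a matrix
 * multiply kernel.  Two rows of C are computed together, so each b value
 * loaded in the inner loop is used twice from a register.  n is assumed
 * to be even to keep the sketch short. */
void mxm_blocked(double *C, const double *A, const double *B, int n) {
    for (int i = 0; i < n; i += 2) {
        for (int j = 0; j < n; j++) {
            double c0 = 0.0, c1 = 0.0;        /* accumulators held in registers */
            for (int k = 0; k < n; k++) {
                double b = B[k*n + j];        /* loaded once, used twice */
                c0 += A[i*n + k]     * b;
                c1 += A[(i+1)*n + k] * b;
            }
            C[i*n + j]     = c0;
            C[(i+1)*n + j] = c1;
        }
    }
}
```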
8. Conclusions
In this paper we have evaluated the register requirements of software pipelined
loops. We have evaluated the register requirements of loop invariants and loop variants
of the loops of the Perfect Club. We have shown empirically that the register
requirements of loop variants increase with the latency and the number of functional
units. These results corroborate the theoretical study done in (19) . We have also
shown that loops with high register requirements take up an important proportion of
the execution time of some representative numerical applications.
We have also evaluated the effect of register file size on the number of memory
accesses and on the performance. We have shown that the memory traffic can grow
substantially if the register file is small and the machine configuration is very ag-
gressive. Also, performance (even under the hypothesis of a perfect memory system)
is degraded with small register files.
Finally, we have done a limited evaluation of the effects of some advanced opti-
mizations. We have shown that these advanced optimizations increase performance
and reduce memory traffic at the expense of an increase in the register requirements.
This suggests that, in practice, the optimizations must be performed carefully. If a
loop is excessively optimized, the high register requirements can offset the benefits of
the optimization applied, and even produce worse results.
As future work, we will integrate those advanced optimizations in the ICTINEO
compiler in order to perform an extensive evaluation of the effect of those optimizations
on performance and the tradeoffs with the performance degradation due to the
higher register requirements.
Acknowledgements
This work has been supported by the Ministry of Education of Spain under contract
TIC 429/95, and by CEPBA (European Center for Parallelism of Barcelona).
--R
The Mips R4000 processor.
Next generation of the RISC System/6000 family.
Superscalar instruction execution in the 21164 Alpha microprocessor.
An approach to scientific array processing: The architectural design of the AP120B/FPS-164 family
Software pipelining.
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing.
Software pipelining: An effective scheduling technique for VLIW ma- chines
Circular scheduling: A new technique to perform software pipelining.
Parallelisation of loops with exits on pipelined architectures.
Iterative modulo scheduling: An algorithm for software pipelining loops.
A realistic resource-constrained software pipelining algo- rithm
Modulo scheduling with multiple initiation inter- vals
Hypernode reduction modulo scheduling.
Software pipelining in PA-RISC compilers
Compiling for the Cydra 5.
Register requirements of pipelined processors.
Minimal register requirements under resource-constrained software pipelining
Optimum modulo schedules for minimum register requirements.
Swing modulo scheduling: A lifetime
Stage scheduling: A technique to reduce the register requirements of a modulo schedule.
RESIS: A new methodology for register optimization in software pipelining.
Register allocation using cyclic interval graphs: A new approach to an old problem.
Register allocation for software pipelined loops.
The meeting a new model for loop cyclic register allocation.
Heuristics for register-constrained software pipelining
Principles of CMOS VLSI Design: A systems Per- spective
Partitioned register files for VLIWs: A preliminary analysis of tradeoffs.
Using Sacks to organize register files in VLIW machines.
Digital 21264 sets new standard.
Reducing the Impact of Register Pressure on Software Pipelining.
A Systolic Array Optimizing Compiler.
Overlapped loop support in the Cydra
The Perfect Club benchmarks: Effective performance evaluation of supercomputers.
A uniform representation for high-level and instruction-level transformations
POLARIS: The next generation in parallelizing compilers.
Conversion of control dependence to data dependence.
The Livermore FORTRAN kernels: A computer test of the numerical performance range.
The Cydra 5 departmental super- computer: design philosophies
Unrolling loops in FORTRAN.
Software pipelining: A comparison and improvement.
Improving register allocation for subscripted variables.
--TR
Principles of CMOS VLSI design: a systems perspective
Software pipelining: an effective scheduling technique for VLIW machines
The Cydra 5 Departmental Supercomputer
Overlapped loop support in the Cydra 5
Improving register allocation for subscripted variables
Parallelization of loops with exits on pipelined architectures
Circular scheduling
Register allocation for software pipelined loops
Register requirements of pipelined processors
Partitioned register files for VLIWs
Lifetime-sensitive modulo scheduling
Compiling for the Cydra 5
Designing the TFP Microprocessor
Iterative modulo scheduling
Minimizing register requirements under resource-constrained rate-optimal software pipelining
Software pipelining
Optimum modulo schedules for minimum register requirements
Modulo scheduling with multiple initiation intervals
Stage scheduling
Hypernode reduction modulo scheduling
Heuristics for register-constrained software pipelining
Software pipelining
A Systolic Array Optimizing Compiler
Conversion of control dependence to data dependence
The Mips R4000 Processor
Superscalar Instruction Execution in the 21164 Alpha Microprocessor
RESIS
Using Sacks to Organize Registers in VLIW Machines
Some scheduling techniques and an easily schedulable horizontal architecture for high performance scientific computing
Non-Consistent Dual Register Files to Reduce Register Pressure
Swing Modulo Scheduling
--CTR
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Software and hardware techniques to optimize register file utilization in VLIW architectures, International Journal of Parallel Programming, v.32 n.6, p.447-474, December 2004
David Lpez , Josep Llosa , Mateo Valero , Eduard Ayguad, Widening resources: a cost-effective technique for aggressive ILP architectures, Proceedings of the 31st annual ACM/IEEE international symposium on Microarchitecture, p.237-246, November 1998, Dallas, Texas, United States
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Two-level hierarchical register file organization for VLIW processors, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.137-146, December 2000, Monterey, California, United States
David Lpez , Josep Llosa , Mateo Valero , Eduard Ayguad, Cost-Conscious Strategies to Increase Performance of Numerical Programs on Aggressive VLIW Architectures, IEEE Transactions on Computers, v.50 n.10, p.1033-1051, October 2001
Javier Zalamea , Josep Llosa , Eduard Ayguad , Mateo Valero, Improved spill code generation for software pipelined loops, ACM SIGPLAN Notices, v.35 n.5, p.134-144, May 2000 | SOFTWARE PIPELINING;REGISTER REQUIREMENTS;SPILL CODE;LOOP TRANSFORMATIONS;PERFORMANCE EVALUATION |
290063 | A Feasibility Decision Algorithm for Rate Monotonic and Deadline Monotonic Scheduling. | Rate monotonic and deadline monotonic scheduling are commonly used for periodic real-time task systems. This paper discusses a feasibility decision for a given real-time task system when the system is scheduled by rate monotonic and deadline monotonic scheduling. The time complexity of existing feasibility decision algorithms depends on both the number of tasks and maximum periods or deadlines when the periods and deadlines are integers. This paper presents a new necessary and sufficient condition for a given task system to be feasible and proposes a new feasibility decision algorithm based on that condition. The time complexity of this algorithm depends solely on the number of tasks. This condition can also be applied as a sufficient condition for a task system using priority inheritance protocols to be feasible with rate monotonic and deadline monotonic scheduling. | INTRODUCTION
Hard real-time systems have been defined as those containing processes that have
deadlines that cannot be missed [Bur89a]. Such deadlines have been termed hard: they
must be met under all circumstances, otherwise catastrophic system failure may result.
To meet hard deadlines implies constraints upon the way in which system resources
are allocated at runtime. This includes both physical and logical resources.
Conventionally, resource allocation is performed by scheduling algorithms whose purpose
is to interleave the executions of processes in the system to achieve a pre-determined goal.
For hard real-time systems the obvious goal is that no deadline is missed.
One scheduling method that has been proposed for hard real-time systems is the rate
monotonic algorithm [Liu73a]. This is a static priority based algorithm for periodic
processes in which the priority of a process is related to its period. Whilst this algorithm
has several useful properties, including a schedulability test that is sufficient and necessary
[Leh89a], the constraints that it imposes on the process system are severe: processes must
be periodic, independent and have deadline equal to period.
Many papers have successively weakened the constraints imposed by the rate-monotonic
algorithm and have provided associated schedulability tests. Reported work
includes a test to allow aperiodic processes to be scheduled [Sha89a], and a test to
schedule processes that synchronise using semaphores [Sha88a]. One constraint that has
remained is that the deadline and period of a process must be equal.
The weakening of this latter constraint would benefit the application designer by
providing a more flexible process model for implementing the system. The increased
flexibility is seen by observation: processes with deadline = period are expressible within a
process model permitting deadline - period . For example, process systems whose timing
characteristics are suitable for rate-monotonic scheduling would also be accepted by a
scheduling scheme permitting deadlines and periods of a process to differ.
This paper relaxes this constraint and so transforms the rate-monotonic algorithm into
the deadline-monotonic algorithm. Schedulability tests are developed which guarantee the
deadlines of periodic processes. This approach is then shown to be applicable for
guaranteeing the deadlines for arbitrary mixtures of periodic and sporadic processes.
The following sub-section gives a brief description of the symbols and terminology
used in the remainder of the paper. Section 2 gives an overview of the rate-monotonic
scheduling algorithm and associated schedulability tests. Section 3 introduces the
deadline-monotonic scheduling algorithm. New schedulability constraints for the algorithm
are developed. Section 4 outlines some previously proposed methods of guaranteeing
sporadic process deadlines within the context of the rate-monotonic algorithm. The
section then proposes a simpler method guaranteeing the deadlines of arbitrary mixtures of
sporadic and periodic processes using the deadline-monotonic scheduling algorithm.
1.1. Notation
A process is periodic if it is released for execution in a periodic manner. When this is
not the case, and a maximum release frequency can be defined, the process is termed
sporadic. If no such maximum can be defined the process is termed aperiodic [Aud90a].
A process is given by t i , where i identifies the process. The subscript i is defined to
be the priority of that process, where priorities are unique. Priorities are assigned
numerically, taken from the interval [1, n], where 1 is the highest priority and n (the
number of processes in the system) the lowest.
The process t i has timing characteristics T i , C i and D i . These refer to the value of the
period, computation time and deadline of t i .
2. THE RATE-MONOTONIC SCHEDULING ALGORITHM
Rate-monotonic scheduling is a static priority based mechanism [Liu73a]. Priorities
assigned to processes are inversely proportional to the length of the period. That is, the
process with the shortest period is assigned the highest priority. Processes are executed in
preemptive manner: at any time, the highest priority process with outstanding computation
requirement is executed.
Amongst the class of static priority scheduling schemes, it has been shown that rate-monotonic
priority assignment is optimal [Liu73a]. This implies that if a given static
priority scheduling algorithm can schedule a process system, the rate-monotonic algorithm
is also able to schedule that process system. In the case of the rate-monotonic scheduling
algorithm, optimality implies the imposition of constraints upon the process system. These
include:
# fixed set of processes;
# all processes are periodic;
# all processes have deadline equal to period;
# one instance of a process must be complete before subsequent instances are run;
# all processes have known worst-case execution times;
# no synchronisation is permitted between processes;
# all processes have an initial release at time 0.
The last of these constraints is fundamental in determining the schedulability of a
given process system. When all processes are released simultaneously, we have the worst-case
demand for the processor. The times at which all processes are released
simultaneously are termed critical instants[Liu73a] (thus the first critical instant occurs at
time 0). This leads to the observation that if all processes can meet their deadline in the
executions starting at the critical instant, then all process deadlines will be met during the
lifetime of the system.
Schedulability tests for the rate-monotonic algorithm are based upon the critical
instant concept. In [Liu73a] the concept is developed into a schedulability test based upon
process utilisations (the ratio of a process's required computation time to its period; summed over all processes these ratios give the total processor utilisation). The test is given by:

$$\sum_{i=1}^{n} U_i \le n\left(2^{1/n} - 1\right) \qquad (1)$$

where the utilisation $U_i$ of process $t_i$ is given by $U_i = C_i / T_i$.
The bound converges on 69% for large n. Thus if a process set has utilisation of
less than 69% it is guaranteed to be scheduled by the rate-monotonic algorithm. That is,
all deadlines are guaranteed to be met.
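A minimal sketch of this sufficient test, assuming the computation times and periods are available as arrays; the three processes used in main are invented for illustration.

```c
#include <math.h>
#include <stdio.h>

/* Sufficient (not necessary) rate-monotonic test of equation (1):
 * the total utilisation must not exceed n(2^(1/n) - 1). */
static int rm_utilisation_test(const double *C, const double *T, int n) {
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += C[i] / T[i];
    double bound = n * (pow(2.0, 1.0 / n) - 1.0);
    return u <= bound;
}

int main(void) {
    double C[] = { 1.0, 2.0, 3.0 };   /* hypothetical computation times */
    double T[] = { 5.0, 10.0, 20.0 }; /* hypothetical periods */
    printf("schedulable by eq.(1): %s\n",
           rm_utilisation_test(C, T, 3) ? "yes" : "maybe not");
    return 0;
}
```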
Whilst test (1) is sufficient, it is also not necessary. That is, the test may indicate
falsely that a process system is not schedulable. For example, consider two processes with
the following periods and computational requirements:
For these processes, equation (1) evaluates to false, as the utilisation of two processes is
100%, greater than the allowable bound of 83%. However, when run, neither process will
ever miss a deadline. Hence, the test is sufficient but not necessary.
A necessary and sufficient schedulability constraint has been found [Sha88a, Leh89a]. For a set of n processes, the schedulability test is given by:

$$\forall i,\ 1 \le i \le n:\quad \min_{(k,l) \in R_i} \frac{1}{l T_k} \sum_{j=1}^{i} C_j \left\lceil \frac{l T_k}{T_j} \right\rceil \;\le\; 1 \qquad (2)$$

where $R_i = \{(k,l) \mid 1 \le k \le i,\ l = 1, \ldots, \lfloor T_i / T_k \rfloor\}$, $\lceil x \rceil$ evaluates to the smallest integer $\ge x$, and $\lfloor x \rfloor$ evaluates to the largest integer $\le x$. The equations take into account all possible process phasings.
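One way the exact test might be coded, assuming integer timing parameters and processes indexed in rate-monotonic priority order; the check follows the reconstruction of equation (2) above, and the three-process parameters in main are invented.

```c
#include <stdio.h>

/* Exact rate-monotonic test of equation (2): process i is schedulable if
 * some scheduling point l*T[k] (k <= i, l = 1..T[i]/T[k]) can accommodate
 * the demand of processes 0..i (equivalently, the minimum normalised
 * demand over those points does not exceed 1).  Processes are indexed
 * from 0 in priority order (T[0] smallest). */
static long ceil_div(long a, long b) { return (a + b - 1) / b; }

static int rm_exact_test(const long *C, const long *T, int n) {
    for (int i = 0; i < n; i++) {
        int ok = 0;
        for (int k = 0; k <= i && !ok; k++) {
            for (long l = 1; l <= T[i] / T[k] && !ok; l++) {
                long point = l * T[k];
                long demand = 0;
                for (int j = 0; j <= i; j++)
                    demand += C[j] * ceil_div(point, T[j]);
                if (demand <= point) ok = 1;   /* a feasible scheduling point */
            }
        }
        if (!ok) return 0;                      /* process i misses its deadline */
    }
    return 1;
}

int main(void) {
    long C[] = { 20, 40, 100 };   /* hypothetical computation times */
    long T[] = { 100, 150, 350 }; /* hypothetical periods */
    printf("schedulable by eq.(2): %s\n", rm_exact_test(C, T, 3) ? "yes" : "no");
    return 0;
}
```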
2.1.
Summary
In summary, schedulability tests are available for an optimal static priority scheduling
scheme, the rate-monotonic scheduling algorithm, with processes limited by the following
fundamental constraints:
# all processes are periodic;
# all processes have period equal to deadline;
# no synchronisation is permitted between processes.
The first schedulability test (equation (1)) is sufficient and not necessary; the second
test (equation (2)) is sufficient and necessary. One difference between the two
schedulability tests lies in their computational complexities. The first test is of O (n ) in the
number of processes. The second test is far more complicated: its complexity is data
dependent. This is because the number of calculations required is entirely dependent on
the values of the process periods. In the worst-case, the test can involve enumeration of the
schedule for each process in the system, upto the period of that process. Hence, a trade-off
exists between accuracy and computational complexity for these schedulability tests.
The following section removes the second of the three constraints (i.e. that deadline
and period must be equal); Section 4 then allows processes to be sporadic, thus relaxing the first of
the above constraints. The third constraint is beyond the scope of this paper but as was
noted earlier, Sha et al have considered this [Sha88a, Sha90a, Sha87a].
3. DEADLINE MONOTONIC SCHEDULING
We begin by observing that the processes we wish to schedule are characterised by
the following relationship:
computation time $\le$ deadline $\le$ period
Leung et al[Leu82a] have defined a priority assignment scheme that caters for
processes with the above relationship. This is termed inverse-deadline or deadline
monotonic priority assignment.
Deadline monotonic priority ordering is similar in concept to rate monotonic priority
ordering. Priorities assigned to processes are inversely proportional to the length of the
deadline [Leu82a]. Thus, the process with the shortest deadline is assigned the highest
priority, the longest deadline process the lowest priority. This priority ordering defaults to
a rate monotonic ordering when period =deadline .
Deadline monotonic priority assignment is an optimal static priority scheme for
processes that share a critical instant. This is stated as Theorem 2.4 in [Leu82a]:
"The inverse-deadline priority assignment is an optimal priority assignment for
one processor."
To generate a schedulability constraint for deadline monotonic scheduling the behaviour of
processes released at a critical instant is fundamental: if all processes are proved to meet
their deadlines during executions beginning at a critical instant these processes will always
meet their deadlines [Liu73a, Leu82a].
Using the results of Leung et al stated above as a foundation, new schedulability tests
are now developed. Initially, two processes are considered, then we generalise to allow any
number of processes.
3.1. Schedulability Of Two Processes
Consider two processes: $t_1$ and $t_2$. Process $t_1$ has a higher priority than process $t_2$
and so, by deadline monotonic priority assignment, $D_1 < D_2$.
Consider the following case.
Case (i): both processes are always released simultaneously.
This occurs if the following holds: $T_1 = T_2$.
This is illustrated in Figure 1.
Figure 1.
Since $t_1$ has the highest priority, it claims the processor whenever it has an
outstanding computational requirement. This will occur for the first C 1 units of each
period T 1 . The schedulability of this system is given by:
(a) check schedulability of $t_1$: the deadline must be sufficiently large to contain the computation demand,
i.e. $C_1 \le D_1$.
(b) check schedulability of $t_2$: all higher priority processes (i.e. $t_1$) have prior claim on the processor.
Hence, in any interval $[0, D_2)$ following a release, $t_1$ can utilise the processor in the
interval $[0, C_1)$. Therefore, $t_2$ can have a maximum
computation time defined by $C_2 = D_2 - C_1$;
that is, $t_2$ is schedulable if

$$\frac{C_2}{D_2} + \frac{C_1}{D_2} \le 1 \qquad (3)$$
The second term of equation (3) relates to the maximum time that t 2 is prevented from
executing by higher priority processes, in this case t 1 . This time is termed the interference
time, I.
Definition 1: I i is the interference that is encountered by t i between the release
and deadline of any instance of t i . The interference is due to the execution
demands of higher priority processes. The maximum interference on process t i
occurs during a release of t i beginning at a critical instant (by definition of critical
instant [Liu73a] ).
Considering the processes in Case (i), I 2 equates to the time that t 1 executes whilst t 2 has
outstanding computational requirement. Thus I 2 is equal to one computation of t 1 . That is:
$I_2 = C_1$
In Case (i) I 1 is zero, as t 1 is the highest priority process:
$I_1 = 0$
The schedulability for Case (i) can be restated:

$$\forall i,\ 1 \le i \le 2:\quad \frac{C_i}{D_i} + \frac{I_i}{D_i} \le 1 \qquad (4)$$

where $I_1 = 0$ and $I_2 = C_1$.
We now consider cases where the periods of the two processes are not equal.
Case (ii): $t_2$ is released many times before the second release of $t_1$.
Figure 2.
In
Figure
2, the maximum interference I 2 is equal to one computation time of t 1 . The
schedulability equations (4) will hold for this case.
Case (iii): $t_1$ is released many times before the second release of $t_2$.
Consider Figure 3.
Figure 3.
Clearly, t 2 is prevented from running by releases of process t 1 . The number of
releases of $t_1$ within the interval $[0, D_2)$ is given by $\lceil D_2 / T_1 \rceil$.
Therefore, schedulability is expressed by:

$$\forall i,\ 1 \le i \le 2:\quad \frac{C_i}{D_i} + \frac{I_i}{D_i} \le 1 \qquad (5)$$

where $I_1 = 0$ and $I_2 = \left\lceil \frac{D_2}{T_1} \right\rceil C_1$.
In equation (5), the value of $I_2$ above could be larger than the exact maximum
interference. This is because $I_2$ includes computation time required by $t_1$ for its final
release within $[0, D_2)$, some of which may occur after $D_2$. However, since the value of $I_2$ is at least as great
as the maximum interference, the test remains sufficient: if it holds for this overestimate,
it must also hold for the exact maximum interference.
An example using schedulability equation (5) is now given.
Example
Consider the following process system.
The schedulability of the process system can be determined by equation (5).
(a) check process $t_1$: since $I_1 = 0$, the test reduces to $C_1 / D_1 \le 1$.
Hence $t_1$ is schedulable.
(b) check process $t_2$: $I_2 = \lceil D_2 / T_1 \rceil C_1$; substituting the process parameters gives
$C_2/D_2 + I_2/D_2 \le 1$.
Hence $t_2$ is schedulable. An example run of the system is given in Figure 4.
Figure 4.
The construction of I 2 is sufficient but not necessary as the following example shows.
Consider the effect of increasing D 2 to 11. This should not affect the schedulability of the
system.
(c) recheck process $t_2$: substituting $D_2 = 11$ into equation (5) now gives
$C_2/D_2 + I_2/D_2 > 1$.
Hence $t_2$ is now unschedulable by equation (5). However, the process
system is schedulable (Figure 4 above).
Simulation diagrams are discussed in Appendix 1.
The schedulability constraint in equation (5) is too strong due to the value of I 2 . An
exact expression for I 2 is now developed. Consider Figure 5.
Figure 5.
A critical instant has occurred (at time $iT_1$) with the interference on $t_2$ a maximum. We note
that the interference consists of executions of $t_1$ that have deadlines before $D_2$, and the
execution of $t_1$ that has a release before $D_2$ and a deadline after $D_2$. We can restate $I_2$ as

$$I_2 = b + k \qquad (6)$$

where $b$ represents the interference due to complete executions of $t_1$ and $k$ the incomplete
executions.
The number of complete executions in the interval $[0, D_2)$ is equal to the
number of deadlines $t_1$ has in this interval. The number is given by $\lfloor D_2 / T_1 \rfloor$.
Hence, the interference due to complete executions is given by:

$$b = \left\lfloor \frac{D_2}{T_1} \right\rfloor C_1$$

The number of incomplete executions of $t_1$ is given by the number of releases of $t_1$
minus the number of deadlines of $t_1$ in $[0, D_2)$. This evaluates to either 0 or 1. The
number of releases is given by $\lceil D_2 / T_1 \rceil$.
Note that if a release of $t_1$ coincides with $D_2$, then it is deemed to occur fractionally after
$D_2$. Hence the number of incomplete executions in $[0, D_2)$ is given by:

$$\left\lceil \frac{D_2}{T_1} \right\rceil - \left\lfloor \frac{D_2}{T_1} \right\rfloor$$

The start of the incomplete execution is given by $\lfloor D_2 / T_1 \rfloor T_1$.
Hence, the length of the interval utilised by the incomplete execution before $D_2$ is
$D_2 - \lfloor D_2 / T_1 \rfloor T_1$.
The maximum time $t_1$ can use during the interval is given by the length of the
interval. However, the interval may be longer than $C_1$. Therefore the maximum
interference due to incomplete executions is given by:

$$k = \min\!\left(C_1,\; D_2 - \left\lfloor \frac{D_2}{T_1} \right\rfloor T_1\right)$$

Substituting $b$ and $k$ into equation (6) gives the following schedulability constraint:

$$\forall i,\ 1 \le i \le 2:\quad \frac{C_i}{D_i} + \frac{I_i}{D_i} \le 1 \qquad (7)$$

where $I_1 = 0$ and

$$I_2 = \left\lfloor \frac{D_2}{T_1} \right\rfloor C_1 + \min\!\left(C_1,\; D_2 - \left\lfloor \frac{D_2}{T_1} \right\rfloor T_1\right)$$
Consider the following theorems which relate to the sufficient and necessary properties of
equation (7).
Theorem 1: the schedulability test given in equation (7) is sufficient for two
processes.
The proof is by contradiction. We assume there is a process system that passes
the test but is not schedulable and show that if the system is not schedulable
then it must fail the test.
Consider a process system containing $t_1$ and $t_2$. Let process $t_1$ pass the test.
Let $t_2$ pass the test, but not be schedulable. To pass the test, the following
must hold:

$$\frac{C_2}{D_2} + \frac{I_2}{D_2} \le 1 \qquad (a)$$

For $t_2$ not to be schedulable, it must miss its deadline during an instance of the
process starting at the critical instant of all processes. At this point $t_2$ suffers its
maximum interference, $I_2$, due to the higher priority process. Therefore for $t_2$ to
miss its deadline and not be schedulable we have:

$$C_2 + I_2 > D_2$$

This gives

$$\frac{C_2}{D_2} + \frac{I_2}{D_2} > 1 \qquad (b)$$
A clear contradiction exists between (a) and (b). Therefore, if t 2 passes the test,
it is schedulable.
The proof for Theorem 1 relies upon I 2 being exact (this is given by Theorem 2).
Theorem 1 will still hold if I 2 is greater than the exact value. This merely represents a
worse than worse-case. Therefore, by implication of Theorem 1, the schedulability test
given by equation (5) is also sufficient.
Theorem 2: the schedulability test is necessary if values of I i are exact.
For process $t_2$ to pass the schedulability test requires:

$$\frac{C_2}{D_2} + \frac{\bar{I}_2}{D_2} \le 1$$

where $\bar{I}_2$ represents the exact value of $I_2$. When comparing $\bar{I}_2$ and $I_2$ we have
three cases:
(i) $\bar{I}_2 > I_2$ — this is clearly impossible as we know that $I_2$ is at least $\bar{I}_2$ from the
above discussion.
(ii) $\bar{I}_2 < I_2$ — occurs when we have made a pessimistic calculation for $I_2$.
As $I_2$ increases, the computation time that could be guaranteed for $t_2$ decreases
since $C_2 \le D_2 - I_2$.
(iii) $\bar{I}_2 = I_2$ — occurs when the calculation of $I_2$ is precise. The allowable
computation time for $t_2$ is maximised (by the above inequality).
In summary, we have the greatest amount of time for t 2 if I 2 is exact. There-
fore, the schedulability test is necessary if I 2 is exact.
Therefore, the schedulability test given by equation (7) is necessary as the values for
I i are exact. By implication, the schedulability test given by equation (5) is also necessary
if I i is exact. However, I i values in equation (7) are exact in more instances than in
equation (5): the former will declare more process systems schedulable than the latter.
The following example illustrates this point.
Example
We return to the process system that failed equation (5) but was illustrated to meet all
deadlines.
The schedulability of the system can be determined by equation (7).
(a) check process $t_1$: $I_1 = 0$, hence $t_1$ is schedulable.
(b) check process $t_2$: $I_2 = \lfloor D_2/T_1 \rfloor C_1 + \min(C_1,\, D_2 - \lfloor D_2/T_1 \rfloor T_1)$;
substituting the process parameters gives $C_2/D_2 + I_2/D_2 \le 1$.
Hence $t_2$ is schedulable.
The system is schedulable by equation (7). A simulated run of the system was given in
Figure 4 previously.
3.1.1.
Summary
Noting the results stated in [Leu82a] that deadline monotonic priority assignment is
optimal, two schedulability tests for two-process systems have been developed. The test in
equation (5) is sufficient but not necessary, whilst the test in equation (7) is sufficient and
necessary (and hence optimal).
One difference between the tests is that the former is of computational complexity O(n) and the latter O(n^2). A trade-off again exists between accuracy and computational complexity.
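The trade-off can be seen directly by restating the two interference terms side by side (a summary of the formulas above, assuming the \lceil\cdot\rceil-based form of equation (5) introduced earlier; nothing new is added here):
I_2^{(5)} = \left\lceil \frac{D_2}{T_1} \right\rceil C_1
\qquad\text{versus}\qquad
I_2^{(7)} = \left\lfloor \frac{D_2}{T_1} \right\rfloor C_1 + \min\!\left(C_1,\; D_2 - \left\lfloor \frac{D_2}{T_1} \right\rfloor T_1\right)
The first term charges the full C_1 for a release of t_1 whose execution may partly fall after D_2; the second charges only the part of that release that can actually fit before D_2, at the price of the extra floor and min computations.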
3.2. Schedulability Of Many Processes
The schedulability test given by equations (5) and (7) are now generalised for
systems with arbitrary numbers of processes. Firstly, equation (5) is expanded. Consider
Figure
6.
Figure 6.
The interference I i that is inflicted upon process t i by all higher priority processes
corresponds to the computation demands by those processes in the interval of time from
the critical instant to the first deadline of t i .
The interference on t_i by t_j can be given by:
\lceil D_i / T_j \rceil C_j
This may include part of an execution of t_j that occurs after D_i. The total interference on t_i can be expressed by:
I_i = \sum_{j=1}^{i-1} \lceil D_i / T_j \rceil C_j
Therefore to feasibly schedule all processes:
\forall i: 1 \le i \le n: \quad C_i + I_i \le D_i    (8)
where
I_i = \sum_{j=1}^{i-1} \lceil D_i / T_j \rceil C_j
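To make the structure of this test concrete, the following Python sketch (hypothetical helper names, not code from the original work) implements equation (8). Processes are given as (C, T, D) tuples listed in deadline monotonic priority order (shortest deadline first), and integer timing parameters are assumed.

def interference_eq8(procs, i):
    # Upper bound on the interference suffered by process i (0-based index)
    # from all higher priority processes, as in equation (8).
    C_i, T_i, D_i = procs[i]
    total = 0
    for (C_j, T_j, D_j) in procs[:i]:
        releases = (D_i + T_j - 1) // T_j   # integer ceiling of D_i / T_j
        total += releases * C_j
    return total

def schedulable_eq8(procs):
    # Sufficient (but not necessary) test: C_i + I_i <= D_i for every process.
    return all(C_i + interference_eq8(procs, i) <= D_i
               for i, (C_i, T_i, D_i) in enumerate(procs))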
Equation (8), like equation (5), is sufficient but not necessary. This is illustrated by the
following example.
Example
Consider the following process system.
The schedulability of the process system can be determined by equation (8).
(a) check process t 1
I 1
Hence t 1 is schedulable.
(b) check process t 2
I 2
I 2
where
I
substituting2
Hence t 2 is schedulable.
(c) check process t 3
I 3
I 3
where
I
I
substituting4
Hence t 3 is schedulable.
Consider the effect of increasing D 3 to 11. This should not affect the schedulability of the
system.
(d) recheck process t 3
where
I
I
substituting4
Hence t 3 is unschedulable by equation (8).
The process system is unschedulable by equation (8). However, when the system is
run all deadlines are met (see Figure 7).
Figure
7.
In the above example, the process system is not schedulable by equation (8) because the
values of I i are greater than exact values. Each I i can contain parts of executions that
occur after D i . This is similar to the drawback of equation (5) when a two-process system
was being considered. To surmount this problem we generalise equation (7) for many
processes. Consider Figure 8.
Figure 8.
From Figure 8, it can be seen that in the general case with n processes, I_i is equal to the interference of all the processes t_1 to t_{i-1} in the interval [0, D_i). Thus, equation (7) can be rewritten to provide a schedulability test for an n process system:
\forall i: 1 \le i \le n: \quad C_i + I_i \le D_i    (9)
where
I_i = \sum_{j=1}^{i-1} \left( \left\lfloor \frac{D_i}{T_j} \right\rfloor C_j + \min\!\left(C_j,\; D_i - \left\lfloor \frac{D_i}{T_j} \right\rfloor T_j\right) \right)
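In the same spirit as the sketch for equation (8), the refined interference bound of equation (9) can be written as follows (again a hypothetical illustration with integer timing parameters, not code from the original work):

def interference_eq9(procs, i):
    # Interference bound of equation (9): complete releases before D_i plus the
    # part of the single incomplete release that can fit before D_i.
    C_i, T_i, D_i = procs[i]
    total = 0
    for (C_j, T_j, D_j) in procs[:i]:
        complete = D_i // T_j               # number of complete executions in [0, D_i)
        total += complete * C_j + min(C_j, D_i - complete * T_j)
    return total

def schedulable_eq9(procs):
    # Still sufficient but not necessary; passes more systems than equation (8).
    return all(C_i + interference_eq9(procs, i) <= D_i
               for i, (C_i, T_i, D_i) in enumerate(procs))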
To show that the above constraint is more accurate than equation (8) consider the
following example.
Example
We return to the process system that failed equation (8) but was shown to meet all
deadlines (see Figure 7).
(a) check process t 1
Hence t 1 is schedulable.
(b) check process t 2
I 2
I 2
where
I
I
##
substituting2
Hence t 2 is schedulable.
(c) check process t 3
I 3
I 3
where
I
I
##
##
I
substituting3
Hence t 3 is schedulable. An example run was given in Figure 7.
The expression for I i in equation (9) is not exact. This is because the interference on t i due
to incomplete executions of t 1 to t i-1 given by (9) is greater than or equal to the exact
interference. Consider the interference on t_i by incomplete executions of t_1 and t_{i-1} (see Figure 8). Within I_i, allowance is made for t_1 using all of [\lfloor D_i/T_1 \rfloor T_1, D_i), and for t_{i-1} using all of [\lfloor D_i/T_{i-1} \rfloor T_{i-1}, D_i). Since only one of these processes can execute at a time, I_i is greater than a precise value for the interference.
Consider the following theorems.
Theorem 3: the schedulability test given by equation (9) is sufficient.
The proof follows from Theorem (1).
Theorem 4: the schedulability test given by equation (9) is necessary if values
of I i are exact.
The proof follows from Theorem (2).
Theorems (3) and (4) show that both equations (9) and (8) (by implication) are
sufficient and not necessary. When no executions of higher priority processes overlap the
deadline of t_i then I_i will be exact with both tests, (8) and (9) being necessary. Indeed, if the I_i values in both equations (8) and (9) are exact, the two equations are equivalent.
However, when executions do overlap the deadline of t i test (9) will pass more process
systems than test (8) as it contains a more precise measurement of I i .
To obtain an exact value for I_i under all cases requires the exact interleaving of all higher priority processes to be considered up to the deadline D_i. This could involve the enumeration of the schedule up to D_i with obvious computational expense. The following
section outlines an alternative strategy for improving the schedulability constraint.
3.3. Unschedulability of Many Processes
The previous section developed a sufficient and not necessary test for the schedulability of
a process system. We note that whilst this test identifies some of the schedulable process
systems, a sufficient and not necessary unschedulability test will identify some of the
unschedulable systems. This approach is illustrated by Figure 9.
Figure 9. (The domain of process systems is divided into schedulable and unschedulable systems; the exact division is given by a sufficient and necessary schedulability or unschedulability test. The sufficient and not necessary schedulability test finds some of the schedulable systems, and the sufficient and not necessary unschedulability test finds some of the unschedulable systems.)
A sufficient and not necessary unschedulability test identifies some unschedulable
process systems in the same manner as the test in the previous sub-section identifies
schedulable systems. The combination of the two tests enables the identification of many
schedulable and unschedulable process systems without resorting to a computationally
expensive sufficient and necessary test. A sufficient and not necessary unschedulability
test is now presented.
Consider the interference of higher priority processes upon t i . This is at a minimum
when any incomplete executions of higher priority processes occur as late as possible. This
maximises the time utilised by higher priority processes after D i and minimises the time
utilised before D i .
Theorem 5 : I i is at a minimum when incomplete executions of higher priority
processes perform their execution as late as possible.
Figure 10.
Consider Figure 10. The execution of t_j in [\lfloor D_i/T_j \rfloor T_j, D_i) is decreased by moving the execution towards the deadline of t_j (which is after D_i). This movement decreases I_i. I_i will be at a minimum when the execution has been moved as close as possible to the deadline of t_j.
Consider the schedulability of t i . When I i is a minimum, we have the best possible
scenario for scheduling t i . If t i cannot be scheduled with I i a minimum, it cannot be
scheduled with an exact I i since this value is as least as large as the minimum value.
Therefore, to show the unschedulability of a process system, it is sufficient to show the
unschedulability of the system with minimum values of I i .
An unschedulability test is now developed using minimum interference. In Figure
10, the interference on t_i is the sum of complete executions of higher priority processes, and the parts of incomplete executions that must occur before D_i. Let b be the complete executions and k the incomplete executions. The total interference is stated as:
I_i = b + k    (10)
Complete executions occur in the interval [0, \lfloor D_i/T_j \rfloor T_j) and are given by:
b = \sum_{j=1}^{i-1} \left\lfloor \frac{D_i}{T_j} \right\rfloor C_j
The incomplete executions number either 0 or 1 for each of the processes with a higher priority than t_i. Hence, the interference due to incomplete executions can be stated as
k = \sum_{j=1}^{i-1} \max\!\left(0,\; \min\!\left(C_j,\; C_j + D_i - \left\lfloor \frac{D_i}{T_j} \right\rfloor T_j - D_j\right)\right)
Substituting into equation (10) we generate an unschedulability test:
\exists i: 1 \le i \le n: \quad C_i + I_i > D_i    (11)
where
I_i = \sum_{j=1}^{i-1} \left( \left\lfloor \frac{D_i}{T_j} \right\rfloor C_j + \max\!\left(0,\; \min\!\left(C_j,\; C_j + D_i - \left\lfloor \frac{D_i}{T_j} \right\rfloor T_j - D_j\right)\right) \right)
We note that only one process need pass the unschedulability test for the process
system to be unschedulable.
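The following Python sketch captures one reading of this minimum-interference derivation (hypothetical helper names and integer timing parameters assumed; the expression for the late part of an incomplete execution is an interpretation of the argument above, not text taken from the original):

def min_interference_eq11(procs, i):
    # Lower bound on the interference on process i: complete executions plus the
    # part of an incomplete execution that must still fall before D_i when it is
    # executed as late as possible, i.e. just before its own deadline.
    C_i, T_i, D_i = procs[i]
    total = 0
    for (C_j, T_j, D_j) in procs[:i]:
        complete = D_i // T_j
        late_part = C_j + D_i - complete * T_j - D_j   # portion forced before D_i
        total += complete * C_j + max(0, min(C_j, late_part))
    return total

def provably_unschedulable(procs):
    # Sufficient (but not necessary) unschedulability test: one failing process
    # is enough to make the whole process system unschedulable.
    return any(C_i + min_interference_eq11(procs, i) > D_i
               for i, (C_i, T_i, D_i) in enumerate(procs))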
The converse of Theorem 3 proves equation (11) to be a sufficient condition for
unschedulability. This follows from the observation that since I i is a minimum (by
Theorem 5), then by Theorem 5 if a process system cannot be scheduled with I i less than
an exact value, the process system cannot be scheduled with exact values of I i . By
Theorem 4, we note that equation (11) is a not necessary condition for unschedulability
since the values used for I i are less than or equal to the exact value for I i .
Equations (9) and (11) can be used together. Consider a process system that fails equation (9). Since this test is not necessary it does not prove the process system unschedulable. The same process system can be submitted to equation (11). If the system passes equation (11) we have determined the unschedulability of the process system. However, if the process system fails both the schedulability and unschedulability tests we note that it could still be schedulable.
We illustrate the combined use of equations (9) and (11) with the following
example.
Example
Consider the following process system.
We can show the unschedulability of the system by using equation (11).
(a) check process t 1
I 1
Hence t 1 fails the test and is therefore not unschedulable.
(b) check process t 2
I 2
I 2
where
I
I
substituting3
Hence t 2 fails the test and is therefore not unschedulable.
(c) check process t 3
I 3
I 3
where
I
I
substituting7
Therefore t 3 passes the unschedulability test. The process system is therefore
unschedulable. An example run of the system is given in Figure 11. Process t 3 misses its
deadline at time 13.
Figure
11.
We now reduce the computation time of process t 3 to 5:
By observation, we can see that t 3 now fails the unschedulability test:
I 3
Since the characteristics of the first two processes are identical, the process system as
a whole fails the unschedulability test. However, the system is not necessarily
unschedulable. Now we try to prove the process system schedulable using equation (9).
(a) check process t 1
I 1
Hence t 1 is schedulable.
(b) check process t 2
where
I
I
substituting3
Hence t 2 is schedulable.
(c) check process t 3
I 3
I 3
where
I
I
substituting5
Hence t 3 is not schedulable by equation (9).
In the above, we have shown the process system failing both the schedulability and
unschedulability tests. Since both tests are sufficient and not necessary we have not
decisively proved the process system schedulable or unschedulable.
The above example illustrated the combined use of unschedulability and
schedulability tests. The first part of the example utilised the unschedulability test to prove the system unschedulable. Then, by decreasing the computation time of t 3 , the system
fails the unschedulability test. However, after application of equation (9), the system was
shown to fail the schedulability test also. Indeed, by examining the example we can see
that when C 3 lies in #
# the system can be proved schedulable. When C 3 lies in #
# the
system can be proved unschedulable. When C 3 lies in #
# we can not prove the system
schedulable nor unschedulable. This requires a more powerful schedulability test. Such a
test is presented in the next sub-section.
3.4. Exact Schedulability of Many Processes
The schedulability and unschedulability constraints for systems containing many
processes, given by equations (9) and (11) respectively, are sufficient and not necessary in
the general case. To form a sufficient and necessary schedulability test requires exact
values for I i (by Theorems 2 and 4). To achieve this, the schedule has to be evaluated so
that the exact interleaving of higher priority process executions is known. This is costly if
the entire interval between the critical instant and the deadline of process t i is evaluated as
this would require the solution of D i equations.
The number of equations can be reduced by observing that if t_i meets its deadline at \bar{t}_i, which lies in [0, D_i], we need not evaluate the equations in (\bar{t}_i, D_i]. Further reductions in the number of equations requiring solution can be made by considering the behaviour of the processes in the interval [0, \bar{t}_i].
Consider the interaction of processes t_1 to t_{i-1} on process t_i in the interval [0, D_i).
For process t_i to meet its deadline at D_i we require the following condition to be met:
I_i + C_i \le D_i
We wish to consider only the points in [0, D_i] up to and including \bar{t}_i. Therefore, we need to refine the definition of interference on t_i so that we can reason about the interval [0, \bar{t}_i] rather than the single point in time D_i.
Definition 2: I_i^t is the interference that is encountered by t_i between the release of t_i and time t, where t lies in the interval [0, D_i]. This is equal to the quantity of work that is created by releases of higher priority processes in the interval between the release of t_i and time t.
At \bar{t}_i the outstanding work due to higher priority processes must be 0 since t_i can only execute if all higher priority processes have completed. Hence, the point in time at which t_i actually meets its deadline is given by:
\bar{t}_i = \min\{\, t \in [0, D_i] : I_i^t + C_i = t \,\}    (12)
Therefore, we can state the following condition for the schedulability of t_i:
\exists t \in [0, D_i] : \quad I_i^t + C_i \le t
where
I_i^t = \sum_{j=1}^{i-1} \left\lceil \frac{t}{T_j} \right\rceil C_j
We note that the definition of I_i^t includes parts of executions that may occur after t. However, since the outstanding workload of all processes is 0 at \bar{t}_i, when t = \bar{t}_i the expression I_i^t is exact.
The above equations require a maximum of D_i calculations to be made to determine the schedulability of t_i. For an n process system the maximum number of equations that need to be evaluated is:
\sum_{i=1}^{n} D_i
The number of equations that need to be evaluated can be reduced. This is achieved by limiting the points in [0, D_i] that are considered as possible solutions for \bar{t}_i. Consider the times within [0, D_i] that t_i could possibly meet its deadline. We note that I_i^t is monotonically increasing within the time interval [0, D_i]. The points in time that the interference increases occur when there is a release of a higher priority process. This is
illustrated by Figure 12.
Figure 12. (I_4^t plotted against t up to D_4; the steps occur at releases of the higher priority processes t_1, t_2 and t_3.)
In
Figure
12, there are three processes with higher priority than t 4 . We see that as the
higher priority processes are released, I 4
increases monotonically with respect to t . The
graph is stepped with plateaus representing intervals of time in which no higher priority
processes are released. It is obvious that only one equation need be evaluated for each
plateau as the interference does not change.
To maximise the time available for the execution of t_i we choose to evaluate at the right-most point on the plateau. Therefore, one possible reduction in the number of equations to evaluate schedulability occurs by testing t_i at all points in [0, D_i] that correspond to a higher priority process release. As soon as one equation identifies the process as schedulable we need test no further equations. Thus, the effect is to evaluate at most one equation per higher priority process release in [0, D_i].
The number of equations has been reduced in most cases. We note that no reduction will occur if at each point in time in [0, D_i] a higher priority process is released, with t_i meeting its deadline at D_i.
The number of equations is reduced further by considering the computation times of
the processes. Consider Figure 13.
Figure 13. (The total outstanding computation requirement C_s plotted against time, up to D_4. Annotated release points: time 0: release of t_1, t_2, t_3 and t_4; time 4: release of t_1; time 5: release of t_2; time 8: release of t_1.)
In
Figure
13 the total computation requirement of the system (C s ) is plotted against time.
At the first point in time when the outstanding computation is equal to the time elapsed, we
have found t - 4 (by equation (12)). In the above diagram this point in time coincides with
the deadline of t 4 .
Considering Figure 13, there is no point in testing the schedulability of t_i in the interval [0, \sum_{j=1}^{i} C_j). Also, since time 0 corresponds with a critical instant (a simultaneous release of all processes) the first point in time that t_i could possibly complete is:
t^1 = \sum_{j=1}^{i} C_j
This gives a schedulability constraint of:
I_i^{t^1} + C_i \le t^1
Since the value of t^1 assumes that only one release of each process occurs in [0, t^1), the constraint will fail if there have been any releases of higher priority processes within the interval (0, t^1). The exact amount of work created by higher priority processes in this interval is given by:
I_i^{t^1} = \sum_{j=1}^{i-1} \left\lceil \frac{t^1}{T_j} \right\rceil C_j
The next point in time at which t_i may complete execution is:
t^2 = I_i^{t^1} + C_i
This gives a schedulability constraint of:
I_i^{t^2} + C_i \le t^2
Again, the constraint will fail if releases have occurred in the interval (t^1, t^2). Thus, we can build a series of equations to express the schedulability of t_i:
I_i^{t^1} + C_i \le t^1
I_i^{t^2} + C_i \le t^2
\vdots
I_i^{t^k} + C_i \le t^k
where t^{m+1} = I_i^{t^m} + C_i.
If any of the equations hold, t i is schedulable. The series of equations above is
encapsulated by the following algorithm:
Algorithm
foreach t_i do
    t := \sum_{j=1}^{i} C_j
    continue := true
    while continue do
        if I_i^t + C_i \le t then
            continue := false    /* t_i is schedulable, meeting its deadline at time t */
        else
            t := I_i^t + C_i
        endif
        if t > D_i then
            exit    /* t_i is unschedulable */
        endif
    endwhile
endfor
The algorithm terminates as the following relation always holds: whenever the test fails, the new value of t, namely I_i^t + C_i, is strictly greater than the old value. When t is greater than D_i the algorithm terminates since t_i is unschedulable. Thus we have a maximum number of steps of D_i. This is a worst-case measure.
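A direct transcription of this algorithm into Python might look as follows (a hypothetical sketch with integer timing parameters and (C, T, D) tuples in deadline monotonic priority order; the helper names are not from the original):

def interference_upto(procs, i, t):
    # I_i^t: work released by the higher priority processes in the interval [0, t).
    total = 0
    for (C_j, T_j, D_j) in procs[:i]:
        releases = (t + T_j - 1) // T_j     # integer ceiling of t / T_j
        total += releases * C_j
    return total

def exact_schedulable(procs):
    # Sufficient and necessary test, following the algorithm above.
    for i, (C_i, T_i, D_i) in enumerate(procs):
        t = sum(C for (C, _, _) in procs[:i + 1])   # first candidate completion time
        while True:
            if interference_upto(procs, i, t) + C_i <= t:
                break                               # t_i meets its deadline at time t
            t = interference_upto(procs, i, t) + C_i
            if t > D_i:
                return False                        # t_i is unschedulable
    return True

# exact_schedulable([(1, 4, 3), (2, 6, 5), (3, 10, 10)])  ->  True (hypothetical values)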
The number of equations has been reduced from the method utilising plateaus in Figure 12. This is because we consider only the points in time where it is possible for t_i to complete, rather than points in time that correspond to higher priority process releases.
An example use of the above algorithm is now given:
Example
We return to the process system which could not be proved schedulable nor unschedulable:
proved schedulable in the previous example so we confine
attention to t 3 . We use the successive equations to show unschedulability.
I 3
where
where
I 3
substituting14
The process is unschedulable at time 12, so we proceed to the next equation.
I 3
where
Since we now have t 1 > D 3 we terminate with t 3 unschedulable.
We reduce the computation time of t 3 to 3:
We use the successive equations to show t 3 schedulable.
I 3
where
where
I 3
substituting7
Hence t 3 is schedulable, meeting its deadline at time 10. An example run of the
system is seen in Figure 14.
Figure
14.
The successive equations (13) have shown the process system to be schedulable. The
solution of a single equation was required.
3.5.
Summary
This section has introduced a number of schedulability and unschedulability tests for
the deadline monotonic algorithm:
- an O(n) schedulability test that is sufficient and not necessary;
- an O(n^2) schedulability test that is sufficient and not necessary;
- an O(n^2) unschedulability test that is sufficient and not necessary;
- a sufficient and necessary schedulability test that has data-dependent complexity.
The first test provides the coarsest level. The second and third tests combine to provide a
finer grain measure of process systems that are definitely schedulable or definitely
unschedulable. The sufficient and necessary test is able to differentiate schedulable and
unschedulable systems to provide the finest level of test.
One constraint on the process systems is that they must have a critical instant. This is
ensured as all processes have an initial release at time 0.
4. SCHEDULING SPORADIC PROCESSES
Non-periodic processes are those whose releases are not periodic in nature. Such
processes can be subdivided into two categories [Aud90a]: aperiodic and sporadic. The
difference between these categories lies in the nature of their release frequencies.
Aperiodic processes are those whose release frequency is unbounded. In the extreme, this
could lead to an arbitrarily large number of simultaneously active processes. Sporadic
processes are those that have a maximum frequency such that only one instance of a
particular sporadic process can be active at a time.
When a static scheduling algorithm is employed, it is difficult to introduce non-periodic
process executions into the schedule: it is not known before the system is run when non-periodic
processes will be released. More difficulties arise when attempting to guarantee
the deadlines of those processes. It is clearly impossible to guarantee the deadlines of
aperiodic processes as there could be an arbitrarily large number of them active at any
time. Sporadic process deadlines can be guaranteed since it is possible, by means of the
maximum release frequency, to define the maximum workload they place upon the system.
One approach is to use static periodic polling processes to provide sporadics with
executions time. This approach is reviewed in section 4.1. Section 4.2 illustrates how to
utilise the properties of the deadline monotonic scheduling algorithm to guarantee the
deadlines of sporadic processes without resorting to the introduction of polling processes.
4.1. Sporadic Processes: the Polling Approach
To allow sporadic processes to execute within the confines of a static schedule (such as
that generated by the rate-monotonic algorithm) computation time must be reserved within
that schedule. An intuitive solution is to set up a periodic process which polls for sporadic
processes [Leh87a]. Strict polling reduces the bandwidth of processing: processing time that is embodied in an execution of the polling process is wasted if no sporadic process is active when the polling process becomes runnable, and sporadic processes occurring after the polling process's computation time in one period has been exhausted, or just after it has passed, have to wait until the next period for service.
A number of bandwidth preserving algorithms have been proposed for use with the rate-monotonic
scheduling algorithm. One such algorithm is the deferrable server [Leh87a,
Sha89b, Sha89a]. The server is a periodic process that is allotted a number of units of
computation time per period. These units can be used by any sporadic process with
outstanding computational requirements. When the server is run with no outstanding
sporadic process requests, the server does not execute but defers its assigned computation
time. The server's time is preserved at its initial priority. When a sporadic request does
occur, the server has maintained its priority and can thus run and serve the sporadic
processes until its allotted computation time within the server period has been exhausted.
The computation time for the server is replenished at the start of its period.
Problems arise when sporadic processes require deadlines to be guaranteed. It is difficult to
accommodate these with a deferrable server due to the rigidly defined points in time at
which the server computation time is replenished. The sporadic server [Sha89a] provides
a solution to this problem. The replenishment times are related to when the sporadic uses
computation time rather than merely at the period of the server process.
The sporadic server is used by Sha et al [Sha89a] in conjunction with the rate-monotonic
scheduling algorithm to guarantee sporadic process deadlines. Since the rate-monotonic
algorithm is used, a method is required to map sporadic processes with timing
characteristics given by
computation time - deadline - period
onto periodic server processes that have timing characteristics given by
computation time - deadline = period
The method adopted in [Sha89a] lets the computation time, period and deadline of the
server be equal to the computation time, minimum inter-arrival time and deadline of the
sporadic process. The rate-monotonic scheduling algorithm is then used to test the
schedulability of the process system, with runtime priorities being assigned in a deadline
monotonic manner.
The next section details a simpler approach to guaranteeing sporadic deadlines based upon
the deadline monotonic scheduling algorithm.
4.2. Sporadic Processes: the Deadline Monotonic Scheduling Approach
Consider the timing characteristics of a sporadic process. The demand for
computation time is illustrated in Figure 15.
Process
Released
Deadline
Released
Deadline
Released
Figure
15.
The minimum time difference between successive releases of the sporadic process is
the minimum inter-arrival time m . This occurs between the first two releases of the
sporadic. At this point, the sporadic is behaving exactly like a periodic process with period m: the sporadic is being released at its maximum frequency and so is imposing its
maximum workload.
When the releases do not occur at the maximum rate (between the second and third
releases in Figure 15) the sporadic behaves like a periodic process that is intermittently
activated and then laid dormant. The workload imposed by the sporadic is at a maximum
when the process is released, but falls when the next release occurs after greater than m
time units have elapsed.
In the worst-case the sporadic process behaves exactly like a periodic process with period m and deadline D (D \le m). The characteristic of this behaviour is that a maximum of one release of the process can occur in any interval [t, t + m), where release time t is at least m units after the previous release of the process. This implies that to guarantee the deadline of the sporadic process the computation time must be available within the interval [t, t + D), noting that the deadline will be at least m after the previous deadline of
the sporadic. This is exactly the guarantee given by the deadline-monotonic schedulability
tests in section 3.
For schedulability purposes only, we can describe the sporadic process as a periodic
process whose period is equal to m . However, we note that since the process is sporadic,
the actual release times of the process will not be periodic, merely separated by at least m
time units.
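In terms of the tests of section 3, this amounts to no more than substituting the minimum inter-arrival time for the period. A tiny, hypothetical illustration in Python (values invented for the example, not taken from the text):

# A sporadic process with computation time C, deadline D and minimum
# inter-arrival time m is entered into the schedulability tests exactly as
# the periodic process (C, T, D) with T = m.
sporadic = {"C": 2, "D": 6, "m": 20}
as_periodic = (sporadic["C"], sporadic["m"], sporadic["D"])   # (C, T, D)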
For the schedulability tests given in section 3 to be effective for this process system,
all processes, both periodic and sporadic, have to be released simultaneously. We can
assume that all the processes are released simultaneously at time 0: a critical instant. This
forms the worst-case workload on the processor. If the deadline of the sporadic can be
guaranteed for the release at a critical instant then all subsequent deadlines are guaranteed.
An example is now given.
Example
Consider the following process system.
Processes t 1 and t 3 are periodic, whilst t 2 and t 4 are sporadic with minimum inter-arrival
times given by T 2 and T 4 respectively.
We check the schedulability of the system using the equations given in section 3. The
simplest test (equation (8)) is used.
(a) check process t 1
I 1
Hence t 1 is schedulable.
(b) check process t 2
I 2
I 2
where
I
substituting2
Hence t 2 is schedulable.
(c) check process t 3
I 3
I 3
where
I
substituting2
Hence t 3 is schedulable.
(d) check process t 4
I 4
I 3
where
I
I
substituting2
Hence t 4 is schedulable.
The process system is schedulable. An example run is given in Figure 16.
Figure
16.
In the example run (Figure 16) all deadlines are met. Each of the sporadic processes
are released at time 0. This forms a critical instant and thus the worst-possible scenario for
scheduling the process system. A combination of many periodic and many sporadic
processes was shown to be schedulable under this scheme without the need for server
processes which are required for scheduling sporadic processes with the rate-monotonic
scheduling algorithm (see section 4.1).
4.3.
Summary
The proposed method for guaranteeing the deadlines of sporadic processes using sporadic
servers within the rate-monotonic scheduling framework has two main drawbacks. Firstly,
one extra periodic server process is required for each sporadic process. Secondly, an extra
run-time overhead is created as the kernel is required to keep track of the exact amount of
time the server has left within any period.
The deadline-monotonic approach circumvents these problems since no extra
processes are required: the sporadic processes can be dealt with adequately within the
existing periodic framework.
5. CONCLUSIONS
The fundamental constraints of the rate-monotonic scheduling algorithm have been
weakened to permit processes that have deadlines less than period to be scheduled. The
resulting scheduling mechanism is the deadline-monotonic algorithm. Schedulability tests
have been presented for the deadline-monotonic algorithm.
Initially a simple sufficient and not necessary schedulability test was introduced.
This required a single equation per process to determine schedulability. However, to
achieve such simplicity meant the test was overly pessimistic.
The simplifications made to produce a single equation test were then partially
removed. This produced a sufficient and not necessary schedulability test which passed
more process systems than the simple test. The complexity of the second test was O (n 2 )
compared with O (n ) for the simple test. Again, the test was pessimistic.
To complement the second schedulability test, a similar unschedulability test was
developed. The combination of sufficient and not necessary schedulability and
unschedulability tests was shown to be useful for identifying some unschedulable systems.
However, it was still possible for a process system to fail both the schedulability and
unschedulability tests.
This problem was resolved with the development of a sufficient and necessary
schedulability test. This was the most complex of all the tests having a complexity related
to the periods and computation times of the processes in the system. The complexity was
reduced substantially when the number of equations required to determine the
schedulability of a process were minimised.
The problem of guaranteeing the deadlines of sporadic processes was then discussed.
Noting that schedulability tests proposed for sporadic processes and the rate-monotonic
algorithm require the introduction of special server processes, we then proposed a simple
method to guarantee the deadlines of sporadic processes within the confines of the
deadline-monotonic algorithm. The simplicity of the method is due to sporadic processes
being treated exactly as periodic processes for the purpose of determining the
schedulability. Using this scheme, any mixture of periodic and sporadic deadlines can be
scheduled subject to the process system passing the deadline-monotonic schedulability
constraint.
A number of issues raised by the work outlined in this paper require further
consideration. These include the effect of allowing processes to synchronise and vary their
timing characteristics. Another related issue is the effect of deadline-monotonic
scheduling upon system utilisation. These issues remain for further investigation.
ACKNOWLEDGEMENTS
The author thanks Mike Richardson, Alan Burns and Andy Wellings for their
valuable comments and diatribes.
--R
"Misconceptions About Real-Time Computing: A Serious Problem for Next Generati on Systems"
"Scheduling Real-Time Systems"
"Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment"
"The Rate-Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behaviour"
"Aperiodic Task Scheduling for Hard Real-Time Systems"
"Real-Time Scheduling Theory and Ada"
"Real-Time Scheduling Theory and Ada"
"Priority Inheritance Protocols: An Approach to Real-Time Synchronisation"
"On the Complexity of Fixed-Priority Scheduling of Periodic, Real-Time Tasks"
"Enhanced Aperiodic Responsiveness in Hard Real-Time Environments"
"An Analytical Approach to Real-Time Software Engineering"
--TR | deadline monotonic scheduling;priority inheritance protocol;rate monotonic scheduling;feasibility decision algorithm;periodic task |
290070 | The Synchronous Approach to Designing Reactive Systems. | Synchronous programming is available through several formally defined languages having very different characteristics: Esterel is imperative, while Lustre and Signal are declarative in style; Statecharts and Argos are graphical languages that allow one to program by constructing hierarchical automata. Our motivation for taking the synchronous design paradigm further, integrating imperative, declarative (or dataflow), and graphical programming styles, is that real systems typically have components that match each of these profiles. This paper motivates our interest in the mixed language programming of embedded software around a number of examples, and sketches the semantical foundation of the Synchronie toolset which ensures a coherent computational model. This toolset supports a design trajectory that incorporates rapid prototyping and systematic testing for early design validation, an object oriented development methodology for long term software management, and formal verification at the level of automatically generated object code. | Introduction
Reactive computer systems continuously respond to external stimuli generated by their en-
vironments. They are critical components of our technology dominated lives, be they in
control systems such as ABS for cars, fly-by-wire in aircraft, railway signalling, power gen-
eration, shopfloor automation, or in such mundane things as washing machines and video
recorders. Mastering the design of these systems, and reducing the time needed to bring
them to market, becomes of utmost economic importance in times of increasing market
dynamics. This paper advances the paradigm of synchronous programming as a means to
match these goals.
Several programming languages that originally emerged as engineering notations are now
defined in the IEC Standard 1131-3 [18]. These languages have been designed specifically
for embedded software applications, mainly in the process control industry but with increasing
influence in other sectors. Unfortunately the IEC 1131 languages have not been
designed with the benefit of formal semantics. Yet, to advance the state of the practice of
embedded software design it is important to provide tools that support high-level specification
and rapid prototyping, integrate testing and formal verification to achieve early design
validation, and encourage modular software development-to ease review, maintenance,
and certification. For this, languages having precise mathematical semantics are required.
Synchronous programming languages like those discussed in this paper have the potential
to introduce such rigour embedded software design. These languages have very distinctive
characteristics: ESTEREL [4] is imperative in style, while LUSTRE [13] and SIGNAL
are declarative; STATECHARTS [15] and ARGOS [26] are graphical notations that
enable one to program directly by constructing hierarchical automata. These languages
share a common communication metaphor, that of synchronously broadcast signals. Sections
2 and 3 introduce the key ideas behind the synchronous approach to embedded soft-
ware, and outline some of the programming constructs available in LUSTRE, ESTEREL, and
ARGOS through a few simple examples. A longer introduction to these languages can be
found in [12].
The specific profiles of these languages reflects the fact that they have been developed
in response to problems emerging in different application areas. LUSTRE and SIGNAL derive
from requirements of industries mainly aware of electrical and electronics engineering
methods, who wanted to manage the increasing complexity of their applications, and gain
greater flexibility in their design, by introducing software. These languages have therefore
been designed for the discrete handling of continuous phenomena. They invoke metaphors
commonly used by electrical engineers in control theory, and are thus most suited to signal-based
applications such as in navigation or digital signal processing where often a sampling
of different related frequencies can be found. In contrast, ESTEREL and graphical languages
like ARGOS are better suited for scheduling complex operating modes, handling intricate
patterns of events, and describing interrupt-driven behaviours. However, the difference in
their best-suited application profiles is only a qualitative assessment since these languages
have broadly similar expressive power.
Our motivation for taking the synchronous design paradigm further, with the wholesale
integration of imperative, declarative (or dataflow), and graphical programming styles, is
that real systems typically have components that match each of these profiles. As argued
by Gajski [11], the construction of embedded systems requires a combination of state based
and dataflow models which support hierarchical structuring of behaviour, concurrency, and
exception handling. We believe a semantical combination of the languages described here
will definitely satisfy these requirements. That is the unique feature of the SYNCHRONIE
workbench which is under active development in the Embedded Software Design group at
GMD. The success of this venture rests on the underlying computational model sketched
in Section 4, and with greater rigour in Section 5, which provides a coherent mathematical
framework, and yields compact, verifiable code. The main functions of the SYNCHRONIE
workbench are described in Section 6.
Synchronous programming languages are already being evaluated in some industries, particularly
aviation and power generation where the problems are real-time constrained and
safety critical. The commercial interest in synchronous languages lies not only in the style,
but in their seamless integration with existing software development practices through the
programming environments with which they are provided. A short overview of the marketed
environments that support each language, and some industrial applications, can be
found in [2, 19]. In addition to project management facilities, and editing facilities that mix
text and graphics, the commercial environments provide advanced features for design vali-
dation, and back-end compilation to various languages like C or VHDL. The SYNCHRONIE
workbench also provides such features, but offers system designers in addition much greater
freedom in the choice of programming language, with the unique option to freely mix the
various synchronous programming modes.
For the most part synchronous languages provide primitive datatypes and operations on
them only: the emphasis is on gaining intellectual control over the program's often intricate
control logic, rather than on data processing issues. Where compound datatypes need to be
used, the synchronous language is 'hosted' in a common language such as C, Fortran, or
Ada. Synchronous languages achieve good separation between concerns of data and control
logic; their novel fusion in SYNCHRONIE with object oriented construction techniques [7,
8] offers the right kind of design encapsulation and abstraction mechanisms to achieve much
needed transparency in the software development lifecycle.
2. Reactivity and Synchrony
In contrast to an interactive system (say a text editing program), a reactive system is fully
responsible for the synchronisation with its environment. A system is reactive when it is
fast enough to respond to every input event, and its reaction latency is short enough that the
environment is still receptive to its responses. Most control systems, and systems for digital
signal processing as used in industry or in telecommunications, are reactive according to
this characterisation. Common features of reactive systems are:
concurrency: they typically consist of several concurrent components that cooperate
to realize the intended overall behaviour.
real-time: they are supposed to meet strict constraints with regard to timing, such as
response time or availability.
determinism: a system's reaction is uniquely determined by the kind and timing of
external stimuli.
heterogeneity: they often consist of components implemented in quite different technologies
like software, hardware, or on distributed architectures.
reliability: requirements include functional correctness as well as temporal correctness
of behaviour; also robustness, and fault tolerance.
The fundamental idea of the synchronous approach is simple: reactive systems are idealised
by assuming that stimulation and reaction are simultaneous, or that reaction takes
zero time-meaning no observable amount of time. A system is stimulated by events from
the environment, but responds instantaneously. Physical time does not play any special role:
time will be considered as a sequence of a particular kind of external events. From this point
of view the statement
"the train must stop within 10 seconds,"
is not essentially different to the statement
"the train must stop within 100 metres."
In both cases something is said about the occurrence of events [12]:
"The event stop must precede the 10th (or 100th) next occurrence of the event second
(or metre)."
Events are manifest by signals which are broadcast throughout the system instantaneously.
A system reacts by emitting (that is, broadcasting) signals as well, so the statement above
could be modified to:
"The signal train stops must be emitted before the signal second has been
emitted
Abstracting physical time offers a number of advantages: the granularity of time may be
changed without affecting the sequence of events, and system components can be composed
and decomposed into subcomponents without changing the observable behaviour. This is
particularly beneficial for proving system properties. In practical terms, the synchrony hypothesis
states that a system reacts fast enough to record all external events in the proper
order. This property is realistic only insofar as it can be checked: it corresponds to the hardware
point of view that the time needed by an operation is of no importance as long as it
does not exceed the duration of a clock cycle, or, vice versa, that the clock cycle is determined
by the operation which consumes most time. The synchronous approach therefore
advances a two-stage design for reactive systems: physical time is first abstracted to focus
on the functions to be maintained, the validity of this abstraction being actually verified
when these systems are implemented.
3. Synchronous Languages
In this section we sketch some typical synchronous programming styles, focusing in the
subsections that follow on ESTEREL, ARGOS, and LUSTRE. A larger example is described
in Section 4.1 that uses these languages together to solve a programming problem in a natural
way. Here we wish to give the reader a feeling for synchronous programming in general,
so our examples are necessarily rather simple.
3.1. Imperative Programming in ESTEREL
Suppose the following informal specification has been given:
"If a second mouse click succeeds a first one within 5 milliseconds there is a double
click, otherwise there is a single click."
Assuming that time units are specified by the signal tick, the required behaviour is captured
by the ESTEREL program (not the shortest) in Figure 1. At first the program waits
until a click signal is produced by the environment. Then two programs are executed in
parallel. The upper subprogram broadcasts the single signal after five ticks provided that
loop
  trap done in
    await click;
    [
      await 5 tick;
      present double else
        emit single; exit done
      end present
    ||
      await click;
      emit double; exit done
    ]
  end trap
end loop
Figure
1. An ESTEREL mouse controller
double is not emitted simultaneously; it
then raises the exception (the trap signal)
done, and terminates. The conditional programming
construct present ... else ... end
should be interpreted as 'if the double signal
is present do nothing, otherwise emit
single (and exit)'. The other subprogram
emits the double signal if click occurs
a second time, and exits.
Trap signals are exceptions which, when
raised, abort all programs within the scope
of their declaration. The net effect of raising
done in either of the parallel branches
in the mouse program is that the body of
the loop is terminated; ESTEREL's semantics
cause the loop to restart immediately, so
the program returns to the await click
at the top of the loop. Note the priority given
to double over single if the second click should happen at the fifth tick of the clock.
In contrast to the trap statement, the construct
do halt watching click
(which is what await click actually abbreviates) involves a second, stronger kind of
preemption mechanism. The halt statement is the only one in ESTEREL to consume time;
in fact halt starts but never terminates. However, the body of the watching construct
will be preempted whenever the watchdog condition becomes true (that is, whenever the
signal click is present in this example).
All language constructs of ESTEREL are instantaneous apart from the halt statement,
or derived constructs like await. As we saw, the loop restarts immediately it terminates;
sequential composition is likewise reckoned to take zero time, as are the tests in conditional
statements. Thus, if a second click does coincide with the fifth tick, the present test
in the first subprogram in the example above will be executed in the same logical instant as
the click which aborts the await in the second subprogram. Of course, no assumptions
are made here about behaviour with respect to physical time which is represented explicitly
by the tick signal supplied be the environment.
3.2. Graphical Programming in ARGOS
To illustrate a graphical notation the program of the previous section is now coded in ARGOS
(see Figure 2). States are represented by rounded boxes. Automata are hierarchic in
that states can contain subautomata (for example in state two there are two subautomata).
Automata can run in parallel, which is indicated by a dashed line. Finally, the scope of signals
can be restricted-this is indicated by a square cornered box, instead of a rounded one,
[Figure 2 content: hierarchical automaton mouse with states one, two (containing two parallel subautomata with states three, four, Count5 and five, and local signal done); transition labels include click/, click/double,done, timeout.-double/single,done, and done.]
Figure 2. ARGOS program for the mouse controller
with a list of signals glued to it. A default arrow indicates the initial state of each subau-
tomaton (e.g., those labelled one, three, and Count5).
When started the mouse automaton enters the initial state (labelled one here). Occurrence
of the click signal causes a jump from there to state two, and this initialises both of the
parallel subautomata. A label on an arrow that is of the form
# indicates that the transition
should take place if signal
# arrives, and that this will cause the signal
# to be emitted simul-
taneously. Generally the guard (
# ) on a transition can be a list which specifies the presence
or absence of a number of signals, and the output action (
# ) is a list of signal names (omit-
ted, if the list is empty). So the (subsequent) transition from state Count5 to state five will
fire if the timeout signal is present and the double signal is not present; this causes both
single and done to be emitted.
We have to add that the refinement of state Count5 contains the subautomaton displayed in
Figure
3. At the fifth tick the timeout signal is emitted and the subautomaton comes to a
Figure 3. ARGOS 5 tick counter (i.e., await 5 tick; emit timeout)
halt (tick is implicit in the guard on every transition in ARGOS, but is mentioned explicitly
here if the guard would otherwise be empty). Meanwhile, if a second click occurs the
transition from state three to state four takes place causing the simultaneous emission of
double and done. However, if the second click occurs at the fifth tick the transition to
state five is not possible because double occurs negatively in the guard of the transition.
Hence, exactly one of single or double will be emitted from state two, and in either
case this will be accompanied by the (local) signal done. This signal causes state two to
be abandoned when it is emitted because it appears in the guard on the transition out of that
state-whatever individual states the subautomata are in, when this signal occurs they will
be abandoned. Such control flow is similar to the preemption mechanisms of ESTEREL.
The program therefore returns to state one, and waits for the next click.
3.3. Declarative Programming in LUSTRE
Figure 4. A first order digital filter
To illustrate the declarative synchronous programming
style we shall implement a recursive
digital filter in LUSTRE. A first order digital
filter may be specified by a signal flow
graph [32] such as that in Figure 4. Quantities
on incoming edges at the nodes of this
graph are summed, and their result is broadcast
along the outgoing edges. The labels z^{-1}, a_0 and b_1 on the arcs denote a delay by a shift register, and multiplications by the respective constants. In linear form we would have the equation
y_t = a_0 x_t + b_1 y_{t-1}
where t denotes the time index. The boundary condition is y_0 = a_0 x_0. This equation
with its boundary condition translates to the LUSTRE node:
node FILTER (x : real) returns (y : real);
let
  y = a0 * x -> a0 * x + b1 * pre(y);
tel
(one has to instantiate the constants to specific real values of course). The equation is evaluated
on every program cycle which is marked in LUSTRE by an implicit clock tick, rather
than the explicit tick seen earlier. Once a LUSTRE program is started it runs forever without
terminating, executing every equation once a cycle. In this case only at the first tick will
the term a0 * x be evaluated; at all later times the term a0 * x + b1 * pre(y) is evaluated. This gives the semantics of the followed-by operator ->. The
pre operator allows access to the previous value of the expression on which it operates.
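As an informal cross-check of this reading of -> and pre, the following Python sketch (a hypothetical helper, not part of LUSTRE or of the toolset described here) simulates the FILTER node over a finite list of input samples:

def filter_node(xs, a0, b1):
    # Simulates y = a0 * x -> a0 * x + b1 * pre(y) over the samples xs.
    ys = []
    for t, x in enumerate(xs):
        if t == 0:
            ys.append(a0 * x)                  # first cycle: left operand of ->
        else:
            ys.append(a0 * x + b1 * ys[-1])    # later cycles: pre(y) is the previous y
    return ys

# filter_node([1.0, 0.0, 0.0], a0=0.5, b1=0.5) == [0.5, 0.25, 0.125]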
LUSTRE has other operators for upsampling and downsampling. These operations are
illustrated in the timing diagram below. Downsampling is by means of the when operator,
and upsampling is by means of the current:
[Timing diagram omitted: sample traces of X, B, Z = X when B, and current(Z).]
Z = X when B takes the value of X only if the 'clock' signal B is true (it is undefined
otherwise), and current(Z) latches the value of Z up to the next sampling signal, i.e.,
the next instant B becomes true (the
# is explained in Section 5). This mechanism allows
one to easily define the digital filter with regard to a different base frequency:
node BFILTER (b : bool; x : real) returns (y : real);
let
  y = current(FILTER(x when b));
tel
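The sampling operators used here can likewise be mimicked in a few lines of Python (again a hypothetical sketch; it flattens LUSTRE's clock calculus by representing an undefined value as None, which is only an approximation of the real semantics):

def when(xs, bs):
    # X when B: the value of X at instants where B is true, undefined otherwise.
    return [x if b else None for x, b in zip(xs, bs)]

def current(zs):
    # current(Z): latch the most recent defined value of Z (undefined before the
    # first sampling instant).
    out, last = [], None
    for z in zs:
        if z is not None:
            last = z
        out.append(last)
    return out

# current(when([1, 2, 3, 4], [True, False, True, False])) == [1, 1, 3, 3]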
Other declarative synchronous languages include SIGNAL and SILAGE [22, 16]. The latter
is mainly used as the specification language of the CATHEDRAL [21] toolset for the synthesis
of DSP chips. These languages are quite similar to LUSTRE in style but use slightly
different mechanisms for upsampling and downsampling.
This concludes our survey of the synchronous programming styles. Most of the features
of ARGOS and LUSTRE have been mentioned, but ESTEREL is a slightly richer language
than the mouse controller example illustrates since, in particular, signals can carry data
which may be tested to modify the control flow. These languages share the communication
principle of synchronously broadcast signals, with the scoping mechanisms shown. How-
ever, the fact that signals can be tested and emitted simultaneously in the parallel branches
of a synchronous program can sometimes give rise to confusion over the cause and effect
of a signal's activation. Such causality cycles (they correspond to short circuits in sequential circuits) are programming errors that can be detected statically by the compilers; they
do not affect the semantics of the individual languages as such, only the class of acceptable
programs. The detection of causality cycles is intricate however, and falls outwith the scope
of the present paper (see [25, 35], for example).
4. Integration of Synchronous Languages
Since complex systems often have components which match each of the different profiles
sketched above, it is natural to wish to express each in the most appropriate language rather
than shoe-horning solutions from a single language. To illustrate the useful interaction between
these synchronous languages we reprogrammed the production cell case study [23]
that has been used for evaluating and comparing software development methodologies and
tools. The successful design and verification of the controller has been discussed independently
in the contexts of ESTEREL, SIGNAL and LUSTRE [23], so we do not dwell on the
details of the specification in Section 4.1 below, but rather on its overall organisation. Section
4.2 outlines some aspects of the underlying computational model which makes possible
the idea of integrating these languages into a single, coherent mathematical framework.
The formalities are drawn out in Section 5.
4.1. Multi-modal Programming
The production cell's input arrives via a feed belt that conveys metal plates to an elevating
rotary table; the table lifts each plate to a position where a robot picks it up with its first
arm, transferring it to a press. When the plate has been forged by the press it is removed
to a deposit belt by the second arm of the robot. The circuit completes with a crane that
unloads the forged plates at the end of the deposit belt. Although the circuit for one item
is quite simple, the design of the cell's control software should maximise the throughput
while meeting the various constraints of the devices.
The cell's controller has a short initialisation phase during which the various devices are
set to specific (safe) states, followed by an endlessly looping process that controls the actual
production cycle. The production phase divides naturally into six components that run
in parallel, one for each physical device. To represent the design at this highest level of
organisation we used ARGOS as illustrated in Figure 5. The graphical nature of this lan-
Production
Cell
done
Production
deposit
belt
crane
feed belt table
robot
press
Initialise
iCtrl
iData
done
Figure
5. The Production Cell in ARGOS
guage makes it very suitable for presenting high-level design choices, and the reader can
appreciate at a glance the overall structure of the control program. The graphical style eases
communication between partners in the development of the software, and others involved
in constructing the production cell including management, mechanical and electrical engineers
designing the physical components, system and safety analysts, and so on.
The initialisation phase sets both arms of the robot to a retracted state, ready to handle the
first plate to arrive (for safety since they might otherwise be damaged in the press when it
is switched on). This behaviour is implemented by first emitting signals Rretract1 and
Rretract2 to the actuators of the robot, and then waiting for the arms to reach the desired
positions. Reaching these positions is signalled by OutPress1 and OutPress2,
and when this occurs a stop command is emitted to the respective arm by means of the signals
Rstop1 and Rstop2. When both arms have reached the desired positions the signal
done is emitted, transferring control to the unending production phase.
If we were programming only in ARGOS (or STATECHARTS, for that matter), there would
be little choice but to implement this initialisation logic in a program similar to that in Figure
6(a). A drawback of the graphical formalism becomes apparent. The control flow is
iCtrl
OutPress1/
tick/
OutPress2/
tick/
Rstop2/ done
Rstop1/
done
Rstop1.
Rstop2/ done Rstop1
(a) ARGOS initialisation
module iCtrl:
input OutPress1,OutPress2;
output Rretract1,Rstop1,
Rretract2, Rstop2,
done;
emit Rretract1;
await OutPress1;
emit Rstop1
emit Rretract2;
await OutPress2;
emit Rstop2
emit done
module
(b) ESTEREL initialisation
Figure
6. Initialisation phase of the Production Cell
confused by the fact that one has to explicitly manage the synchronisation to emit done
when both arms have been retracted (via the process in the lower part of the figure). Also,
the reader may be forgiven for wondering whether some external process running in parallel
with Initialise can interfere by emitting an Rstop1 signal, say. (There is no such process,
but this cannot be inferred simply by looking at Figures 5 and 6.) At this level of detail
graphical programming becomes cumbersome, and therefore error prone, and one quickly
loses sight of the flow of information.
Instead, we can refine the state iCtrl in Figure 5 with the ESTEREL program shown in Figure
6(b). This makes the natural control flow explicit (with the semi-colon after the concurrent
initialisation of the two arms), and ESTEREL's powerful parallel operator handles the
synchronisation on our behalf to ensure that done is only emitted when both branches have
terminated. This program only handles pure signals (that is to say, those signifying events)
and no data-even though the position of the arms are in reality provided by a potentiometer
delivering a real value. In the next section we shall describe one method of handling
these data (refining state iData) to complete the initialisation program.
The rest of the program is implemented in LUSTRE as described by Holenderski [17],
and will not be discussed here. The steady state behaviour of the press, the robot, and the
other physical components is adequately expressed in LUSTRE since these are interlocked,
nonterminating parallel processes. However, specifying the sequential composition of the
initialisation and production phases in LUSTRE leads to an obscure program since the sequential
composition operator is simply not available. To avoid this in his implementation,
Holenderski programmed the sequence at the level of the C interface (to the simulator provided
for the case study), but this ad hoc approach is highly error prone in general, undermines
the formal definition of the synchronous language (LUSTRE, in this case), and pre-empts
our performing any formal verification of the full program.
4.2. Synchronous Automata
Performing the sequential composition in ARGOS is, in contrast, fully formal as long as the
meaning of the combination of the synchronous languages is clear. We are aided here by
their relative simplicity (e.g., when compared to Ada or C), and though they each have a
very different 'look and feel', ESTEREL, LUSTRE, and ARGOS can be interpreted in the
same computational model. This section introduces the main notions behind synchronous
automata, the more formal presentation being deferred until Section 5.
4.2.1. Boolean Automata
Boolean automata are easier to understand and are, in fact, a particular instance of synchronous
automata which capture the essence of the synchronous languages. Boolean automata
have two kinds of statements:

s = φ — the signal s is emitted if the condition φ is satisfied, and
h := ψ — the control register h is true (or 'set') in the next instant if the condition ψ is satisfied.
A synchronous program is represented by a collection of such statements defining signals
to represent transient information, or registers to represent persistent information. The operational
semantics of Boolean automata is defined by two successive phases: given a valuation ρ that assigns a truth value, tt or ff, to each of the registers and inputs (inputs are represented by free variables), a reaction is a solution of the system of signal equations; this solution extends ρ to cover all signals, and we then use it to evaluate the register assignments, which yield the next state of the machine. A solution to the signal equations for all
input patterns and (reachable) states must be proved to exist at compilation time to guarantee
that the program is reactive in that it may respond to every input stimulus. In addition,
this solution must be unique to guarantee that the program is deterministic. These issues
are common to all synchronous programming languages.
For an example, the behaviour of the ESTEREL statement await OutPress1 is defined
by the Boolean automaton:

h := σ ∨ (h ∧ ¬OutPress1)
ε = h ∧ OutPress1

The register h captures the pause in this await construct. All registers are initially false; h will be set when the Boolean automaton is initialised, that is, when the special start signal σ is present (it is only present, or true, in the very first clock cycle). Thereafter, h is set every program cycle until the signal OutPress1 occurs, at which point the automaton terminates (h becomes, and stays, false). Termination is signified by the special signal ε.
Note that OutPress1 will be ignored in the very first clock cycle; readers familiar
with ESTEREL will realise that an 'immediate' is needed to handle that event.
The Boolean automaton

Rstop1 = σ
ε = σ

captures the behaviour of the statement emit Rstop1: this automaton emits the signal Rstop1 and terminates immediately. The sequence await OutPress1; emit Rstop1 specifies that if the former statement terminates, control passes instantaneously to the latter. This control flow is tracked by the compiler, which substitutes, for the start condition (σ) of the Boolean automaton associated with the second command in the sequence, the termination condition of the first. Hence, for the upper parallel branch of Figure 6(b), we obtain:

Rretract1 = σ
h := σ ∨ (h ∧ ¬OutPress1)
Rstop1 = h ∧ OutPress1
ε = h ∧ OutPress1
We cannot sketch the translation of all the language constructs here, but have hopefully
provided some feeling for how the translation proceeds. ARGOS and Boolean LUSTRE also
have natural and compact interpretations in Boolean automata. The full translation of pure
ESTEREL, along with its proof of correctness with respect to the published semantics, is
given in [34].
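To make the two-phase semantics concrete, the following small Python fragment (our own illustration, not part of any compiler; the clause shapes follow the reconstruction above and the register name h_await is ours) executes the clauses obtained for the upper branch of Figure 6(b), printing the signals emitted at each instant.

def reaction(registers, inputs):
    # Phase 1: solve the signal equations under the current valuation of
    # registers and inputs (these equations are acyclic, so one pass suffices).
    sigma = inputs.get("start", False)              # the start signal
    h = registers.get("h_await", False)             # pause register of the await
    out = inputs.get("OutPress1", False)
    signals = {
        "Rretract1": sigma,                         # emitted in the starting instant
        "Rstop1": h and out,                        # emitted when the await terminates
        "terminated": h and out,                    # termination of the whole branch
    }
    # Phase 2: evaluate the register assignments to obtain the next state.
    next_registers = {"h_await": sigma or (h and not out)}
    return signals, next_registers

regs = {}
for t, inp in enumerate([{"start": True}, {}, {"OutPress1": True}, {}]):
    sigs, regs = reaction(regs, inp)
    print(t, sorted(s for s, present in sigs.items() if present))

Running this prints Rretract1 at instant 0 and Rstop1 together with the branch termination at the instant where OutPress1 arrives; an OutPress1 occurring in the very first instant would be ignored, exactly as remarked above.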
4.2.2. Synchronous Automata
Synchronous automata represent an enhancement to the model to handle data; Boolean automata
only capture the pure control of synchronous languages, focusing on synchronisation
issues and ignoring the data carried by signals, and actions upon them. Intuitively, this
enhancement is achieved by coupling the presence of a signal with a unique datum. The
earlier notion is refined thus:
s = f(...) @ γ — if the guard γ is satisfied, the signal s is emitted with the value returned by the function f, and
h := g(...) @ γ — if the guard γ is satisfied, the register h is set for the next instant with the result of the function g.

In this framework, Boolean automata are synchronous automata where the domain of values is restricted to a single-point set, denoted •. Such signals are referred to as pure signals (also pure, or control, registers) because the values are of little interest. So, the earlier Boolean equations (s = φ) and assignments (h := ψ) have now given way to conditional equations (s = f(...) @ γ) and guarded commands (h := g(...) @ γ); Boolean automata are thus pure synchronous automata.
To illustrate the idea let us specify the discretisation of the position of the arms of the
robot in the production cell in the initialisation phase thus:
node iData (Arm1, Arm2 : real)
returns (OutPress1, OutPress2 : bool);
let
  OutPress1 = (Arm1 <= SAFE1);  -- SAFE1, SAFE2: safe-distance constants
  OutPress2 = (Arm2 <= SAFE2);
tel

This LUSTRE node emits OutPress1 and OutPress2 whenever the sensors on the arms of the robot indicate that they are a safe distance from the press. This is defined by the synchronous automaton:

OutPress1 = (Arm1 <= SAFE1) @ (σ ∨ h_idata)
OutPress2 = (Arm2 <= SAFE2) @ (σ ∨ h_idata)
h_idata := σ ∨ h_idata
The first two statements translate the equations of the LUSTRE node; the latter represents
the implicit control in declarative programs. The register h_idata is initially inactive, gets set (or becomes active) when the program is started with σ, and remains set thereafter. This persistence is indicated by h_idata appearing in the statement guard, where it refers to the active status of the register, not its value. Since it never again changes state, h_idata might be thought redundant. Nevertheless, it captures the nonterminating property of declarative programs: once a declarative program gets started via σ, it retains control in that it is executed at every
later instant of time. We anticipate that it may lose control as well. The idea required to
integrate declarative and imperative styles is formally captured in synchronous automata
by the control axiom.
4.2.3. The Control Axiom
The control axiom states that no synchronous automaton can react if it is not in control: a synchronous automaton has control either if σ is present, or if some of the registers of the
process are set. By way of a non-example, the following statement alone does not define
a synchronous automaton since the presence of OutPress1 neither depends on
σ, nor on any
state (the latter dependency exists implicitly in a LUSTRE program's main loop):
OutPress1 = (Arm1 <= SAFE1)
The control axiom appears to contradict the notion that a reactive system must maintain an
ongoing relationship with its environment, and thus must always react. But even if this is
true for a complete reactive system (the whole program), some of its subcomponents may
be active for just a while; in particular, they may be preempted. In the production cell, while
the ESTEREL module iCtrl terminates by itself after emitting done, the LUSTRE node
iData never terminates. But, with respect to the semantics of ARGOS, the Initialise state of Figure 5 is preempted by done, which therefore aborts the LUSTRE node. This is performed (in the compiler) by redefining h_idata thus:

h_idata := ¬done ∧ (σ ∨ h_idata)
This guarding of register statements translates a weak preemption mechanism [3]. This
is similar to ESTEREL's trap construct, and is the only preemption mechanism of ARGOS.
The strong preemption mechanism is captured by guarding both kinds of statements. A
longer presentation of synchronous automata and their algebra is given in [24], whence the
summary of the formal semantics in the next section is drawn.
5. Semantics in a Nutshell
It should be evident by now that the synchronous programming paradigm has a dual nature.
On the one hand languages such as LUSTRE are descriptive in that they constrain possible
behaviours; on the other hand languages like ESTEREL and ARGOS foster a constructive
point of view in that an automaton is specified which prescribes the transformation of a
given state into another. We argued informally above that synchronous automata are a useful
intermediate representation. These structures are reasonably unsophisticated mathemat-
ically, and match both programming styles well enough to serve as a kind of synchronous
object code, though of course originating in the tradition of deterministic automata. While
not wishing to lose the light presentational style of the previous sections, or burden the
reader with undue formality, we shall try to shed some light on the formal semantics in a
slightly more rigorous manner. The subsections that follow first address the descriptive,
then the prescriptive aspects of synchronous programming, before merging 'dataflow' and
'controlflow' in their synthesis at the end.
5.1. The Declarative Aspect: Constraining Dataflow
Behaviour is manifest in what it is possible to observe. We classify observations by attributing
a name which, for the moment, will be referred to as a signal. A signal may be present
with a value taken from a set V, or it may be absent. If a particular signal is observed over time, a flow of values and 'misses' (when the signal is absent) is obtained. A flow is characterised by a subset T of the natural numbers (the sampling points of the flow) together with a valuation v defined on T: if t ∈ T, the signal is present at instant t and takes the value v(t). We refer to a set of flows as a synchronous
process, and at each instant of time require that at least one signal of a synchronous process
is present. This requirement allows us to identify 'observed' (or external) time with the
natural numbers.
Conceptually, synchronous dataflow programs deal with the specification of such pro-
cesses. For example, the LUSTRE equation X = Y specifies constraints upon the corresponding flows: the sampling rates are the same, and at each instant the values are the same. Taking a second example, the memoisation operator pre in Y = pre(X) introduces a delay so that the value of X observed at one sampling point is observed on Y at the next.
At the first sampling point no value of X can have been observed beforehand so we insert
a 'non-value' ⊥ which indicates that even if Y is present, its value is undefined. Of course,
a compiler must guarantee that no program's reaction ever depends on a non-value. This
phenomenon is similar to that of program variables (in other languages) which should not
be used at run-time before they have been initialised. LUSTRE's `followed-by' operator,
discussed in Section 3.3, allows one to initialise signals properly.
The formal definition (of pre) is quite elaborate since it depends on the sampling points.
Skipping the exact definition, let us use the notation t_n to refer to the n-th sampling point in sequence, counting from 0. Then:

T_Y = T_X,
Y(t_0) = ⊥, and
Y(t_{n+1}) = X(t_n), for all sampling points t_n.
Synchronous automata are introduced here as a more elementary language for specifying
constraints on dataflows. The statement

s = x @ c

specifies that, whenever the pure signal c is present, the signal s must be present and its value must be equal to that of x. In more formal terms: T_c ⊆ T_s, and for all t ∈ T_c, s(t) = x(t).
Notation is somewhat overloaded here. To be precise, each subset T ⊆ IN is in one-to-one correspondence with its characteristic function χ_T, such that χ_T(t) = tt iff t ∈ T. Hence the Boolean operators used in control expressions (the guards after the @) are well justified, and can be unambiguously interpreted as Boolean operators, or as operators on subsets of IN.
A point to note is that although the definition above forces s to have a value at instant t only if t ∈ T_c, s may still have values at other instants. Furthermore, we require the signal x to be present at the instants t ∈ T_c, otherwise the constraint cannot be satisfied.
The idea naturally generalises to more complex statements s = δ @ γ, where δ is a data expression and γ is a control expression.
Closely related to the pre operator is the second type of statement found in synchronous
automata, namely

h := x @ c.

This delays the observation on x by just one instant, but upsamples at the same time: whenever t ∈ T_c, whatever was observed on x in instant t will be observed on h at instant t + 1. The figure of the corresponding flows interprets the more formal definition: {t + 1 : t ∈ T_c} ⊆ T_h and, for all t ∈ T_c, h(t + 1) = x(t). However, it is not a totally trivial exercise to prove that, provided attention is restricted to the signals X and Y, the synchronous automaton

init h = ⊥
h := X @ X
h := h @ ¬X
Y = h @ X

is equivalent to the LUSTRE statement Y = pre(X). This introduces the third (and final) clause in synchronous automata, viz

init h = v,

which is used to initialise a dataflow that would not otherwise take a value 'at instant 0'. While the proof that the above automaton implements pre is a little tricky, the reader can easily verify that the semantics compute the flows in the adjacent diagram, which shows how the 'last' value of X is stored until the next sampling point. Note that h is defined at every instant from initialisation onwards, so it turns out that T_h = IN.
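As a small illustration of these definitions (our own, with flows represented naively as Python dictionaries from instants to values; none of the names below belong to the formal apparatus), the two kinds of statement can be read as operations on partial valuations:

X = {0: 5, 2: 7, 5: 1}             # a flow: present at instants 0, 2 and 5
C = {0, 2, 5}                      # T_C, the sampling points of the guard

def equation(x, c):
    # s = x @ c : s carries the value of x at every instant of c
    return {t: x[t] for t in c if t in x}

def register(x, c):
    # h := x @ c : whatever x carries at an instant t of c is observed on h at t + 1
    return {t + 1: x[t] for t in c if t in x}

print(equation(X, C))   # {0: 5, 2: 7, 5: 1}
print(register(X, C))   # {1: 5, 3: 7, 6: 1}

Chaining such register clauses with an init clause, as in the automaton above, is what holds the 'last' value of X until its next sampling point and hence recovers pre.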
5.2. The Imperative Aspect: Managing Control
The imperative synchronous languages are inherently based on the idea of a (distributed)
state. In ESTEREL, for instance, the halt statement is used to indicate local sequence control
within the parallel branches of a program. The halt's behaviour is defined by

h := σ ∨ h
ε = ff

where h is introduced as a specific control register for each such halt. Once this synchronous automaton has been activated by σ, the register h remains active forever, and the automaton never terminates. This is precisely reflected in the formal semantics, which yields that h is active at every instant after σ has occurred; this means that the termination signal ε is never emitted (T_ε = ∅).
In this discussion we revert to Boolean automata because control is specified in terms of
pure signals only. We apologise for any terminological confusion that may be troubling the
reader at this point: we stated earlier that flows represent observations made of certain at-
tributes, namely signals, but now speak in terms of registers. To reconcile the nomenclature
we shall also relate flows to registers, but stipulate that registers are 'unobservable' (as one
might expect). Registers may be active (and thus possessing a value, though it may be the trivial value • in the case of control registers), or inactive.
In order to model preemption in synchronous automata the simple idea is to guard 'reac-
tivation', in that the above automaton is modified to:

h := σ ∨ (h ∧ ¬s)
ε = h ∧ s

Once the control register h is activated, it remains activated only up to the instant that the signal s is present; the termination signal ε is emitted at that same instant. Note that the emission of ε requires that 'control' is with the automaton (i.e., h is active). These clauses capture precisely the semantics of ESTEREL's await statement (cf. Section 3.1), where
await click is defined as do halt watching click
This construct generalises to do P watching s, where P is an arbitrary program state-
ment. In addition to the preemption of halts, ESTEREL's semantics require that no signal is
emitted by P if the watchdog signal s is present; also, s is to be ignored by P in the instant that it commences execution [4]. The preemption effect can be achieved by guarding the emittance of signals. A typical clause in P that depends on a register h may be

o = (σ ∧ e) ∨ (h ∧ g),

meaning that o is emitted if e is present in the first (starting) instant, or if the control register h is active when g is present. The emission of o is preempted by the watchdog action thus:

o = (σ ∧ e) ∨ (h ∧ g ∧ ¬s).

Note that o is still emitted in the first instant if e is present.
In contrast to this, ARGOS only allows preemption of registers. With regard to Figure 2,
emittance of the signal done should preempt all the states inside state two, including state
two itself. The reader will note that preemption of signals must be avoided because otherwise
the signal done, being the cause of the preemption in this case, would not then be
emitted. Preemption of registers is achieved by the same guarding mechanism as before,
now with the guard ¬done. Working through the details, the reader may quickly check that

h_one := σ ∨ (h_one ∧ ¬click) ∨ (h_two ∧ done)
h_two := ¬done ∧ ((h_one ∧ click) ∨ h_two)
relates to the transitions from state one to state two, and conversely. Further, building on the
informal description in Section 3.2, it is easy to check that the behaviour of the subprocess
in state two is captured by the synchronous automaton

[clauses emitting single and double in terms of click, timeout, done, and the register h_three]
(omitting the register clauses refining state Count5, and the timeout signal). We have not
defined h_four or h_five here since these 'terminal' states in the respective ARGOS subautomata never become activated. Guarding all the registers of this automaton with ¬done therefore (the signal clauses do not change, so are not repeated), we obtain

[the register clauses above, each with its guard strengthened by ¬done, including a clause for the register h_Count5]
which is a fairly precise account of what was described informally in Section 3.2. Of course, a more structural, or compositional, treatment is needed for an ARGOS compiler to synthesise such automata in general. Thus, for example, the ARGOS compiler must bind h_four and h_five
since it is not until the context surrounding these subautomata is known that it becomes clear
that these registers never become activated.
A compositional semantics for ARGOS in terms of synchronous automata which allows
us to make such optimisations as that above is rather easy to establish, but the same cannot
be said for the compositional semantics of ESTEREL, which is a much more complex lan-
guage. However, to give a flavour of the approach let us return to the discussion of sequential
composition begun in Section 4.2. Assume that P and Q are translated to synchronous
automata A_P and A_Q respectively. We use the common trick when modelling P;Q within a parallel calculus (see, e.g., [29]) of placing A_P and A_Q in parallel, but preventing A_Q from starting until A_P signifies that it has terminated. The start signal of A_Q suspends (by the control axiom) the activation of A_Q until A_P emits its termination signal ε_P. The notation A_Q[ε_P/σ] means that the start signal σ of A_Q is given the fresh name ε_P in A_Q (the same name replaces the termination signal in A_P). The beauty of synchronous automata is that the composition itself is just textual juxtaposition (concatenation). Since ε_P is broadcast, A_Q starts in the same instant that A_P terminates.
Formally, termination (of A_P) requires that all control registers become inactive at the end of the instant in which A_P emits ε_P. This is a defining property of synchronous automata, like the control axiom discussed in Section 4.2.3. Fortunately, these turn out to be invariants of our compositional semantics of ESTEREL. Specifically, in [34] we prove that, given that A_P is the translation to synchronous automata of the ESTEREL statement P:
1. if A_P emits ε_P, then in the next instant no control register of A_P is active, and
2. if no control register of A_P is active, then no signals are emitted from A_P.
These results support the main theorem in [34] which states that our synchronous automata
semantics coincides with the official behavioural semantics of (pure) ESTEREL [4].
5.3. Merging Control and Dataflow
The basic question is: what is control in dataflow? It seems that calling a dataflow program
magically transfers 'control' to the program, just as killing the resulting (endless) process
withdraws control. To get a grasp of the issue, recall that "a system is reactive when it is
fast enough to respond to every input event, and its reaction latency is short enough that the
environment is still receptive to its responses." Implicit in this point of view is the notion
that without an input event, no 'response' is given-that is: if some signal is present then
some input signal must be present. Hence, it should be possible to control a synchronous
process by guarding the input.
Let X and Y be input signals to the LUSTRE program (node) consisting of the single equation
Z = X + Y (say). Then the behaviour of the LUSTRE program is equivalent to the synchronous automaton

h := σ ∨ h
Z = X + Y @ (σ ∨ h)

on the level of observable signals. Now preemption may be applied since, cheerfully abusing syntax, do (Z = X + Y) watching s is just:

h := (σ ∨ h) ∧ ¬s
Z = X + Y @ ((σ ∨ h) ∧ ¬s)
This simple trick of adding control reconciles the declarative and the imperative programming
styles, at least with regard to preemption of dataflows.
Another facet of the interaction between dataflow and controlflow emerges in the example
discussed in Section 4.2. Recall that a LUSTRE node (iData) was placed in parallel with
an ESTEREL module (iCtrl) in order to realise the initialisation phase of the production
cell. We have already seen how to terminate the LUSTRE node when done is emitted, but
how is the OutPress1 signal (say) to be handled by the ESTEREL control program? The
point is that the statement defining OutPress1 in Section 4.2.2 yields a Boolean flow which is (modulo preemption) always present with a Boolean value; it is not a pure signal as required by the clause

h := σ ∨ (h ∧ ¬OutPress1)

in Section 4.2.1. The difficulty is that LUSTRE does not distinguish between 'clocks' and Boolean flows (although SIGNAL and ESTEREL do, for example). For strict formality we are therefore obliged to sample the Boolean flows:

h := σ ∨ (h ∧ ¬true(OutPress1)),

where true is the natural projection: true(b) is present with value tt exactly at those instants at which b is present with the value tt. Ultimately, of course,
the control expressions will be implemented by Boolean functions in the target (program-
ming) language to which the synchronous automaton is compiled; true is thus a detail required
for the consistency of the model, rather than an implementation detail.
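For completeness, a tiny Python rendering (ours, not part of the formal development) of the natural projection true: it keeps exactly those instants at which a Boolean flow carries the value tt, turning the flow into a pure signal that may legitimately appear in a guard.

def true_proj(bool_flow):
    # bool_flow maps instants to Boolean values; the result is a set of instants,
    # i.e. a pure signal / clock in the sense used above
    return {t for t, v in bool_flow.items() if v}

OutPress1 = {0: False, 1: False, 2: True, 3: True}
print(true_proj(OutPress1))        # {2, 3}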
This closes our discussion of the mathematical underpinnings of the SYNCHRONIE work-
bench. Of course, many technical details (of the formal translations) have been omitted in
this semantics in a nutshell, but we hope that the reader will have grasped the basic, rather
simple ideas. However, one important issue in synchronous programming languages that
we have not dwelt upon, but which deserves a mention at least, is that of guaranteeing determinism
and reactivity. The formal translations may well generate clauses such as
s = ¬s @ γ or s = s @ γ,
which are inconsistent, or allow non-deterministic behaviour. Compilers can report such
mishaps and indicate what went wrong. If a synchronous automaton is free of such problems
(this comes under the name of causality analysis, or clock calculus), it is guaranteed
to specify a deterministic Mealy machine which may be implemented either in hardware
(for Boolean automata), or software.
6. The Architecture of SYNCHRONIE
The SYNCHRONIE project at GMD [33] is organised around the construction of a workbench
for mixed synchronous language programming which combines editors, compilers, code
generators and tools for simulation as well as tools for testing, verification and animation.
The integration of languages is based on the synchronous automata sketched above; these
play a pivotal role in the system architecture of SYNCHRONIE (see Figure 7)
since all back-end tools are based on this common representation of synchronous programs.
An essential tool that is intrinsic to the workbench is the linker which merges synchronous
automata from various sources; this approach guarantees a maximum of independence with
regard to specific language constructs, and improves modularity in translating programs to
synchronous automata. The other kernel function is a causality analyser which implements
sound heuristics that guarantee reactivity and determinism-the algorithm implemented is
similar to that of Shiple et al. [35].
[Figure 7 shows the SYNCHRONIE workbench: editors and translators for LUSTRE, ESTEREL, ARGOS and Embedded Eiffel feed the Synchronie Kernel (linker and causality analyser operating on synchronous automata), which in turn serves the validation tools (testing, VIS, verifiers, timing analysis), the compilation tools (netlist output, optimisers, code generators), the simulation tools (viewers, simulator, animator), and the project management layer.]

Figure 7. The SYNCHRONIE workbench
Project Management The increasing trend in electro-technical systems of hardware components
being replaced by software provides the advantage that a device can be easily adjusted
to an individual customer's needs. In this enterprise, synchronous languages themselves
address the more delicate parts of the design problem (managing the often convoluted
control flow, and synchronisations between distributed elements or control inputs), but this
has to be recognised as only a part of the software problem. Overall, the appropriate flexibility
in software design is best achieved using object oriented techniques. Although there
is no space to elaborate here, this is an important part of the SYNCHRONIE toolset and we
are developing an object oriented design environment for real-time applications based on
a fusion of ARGOS and ESTEREL (for programming the control flow), and the object oriented
language Eiffel (for the arithmetic, and other data manipulations). We call the hybrid
language Embedded Eiffel.
Thus, a project may have multiple components written in a variety of synchronous languages
(but 'hosted' only in C or Embedded Eiffel at present). This most visible layer of
the SYNCHRONIE workbench provides programmers with editing, browsing, and project
management facilities amongst components written in different languages, and a control
front-end to other functions of SYNCHRONIE.
Compilation The compilation functions provide the link towards executing synchronous
automata on various platforms. In particular, this function produces code for simulation and
analysis tools for rapid prototyping and design validation. This encompasses:
Optimisers may reorganise generated code variously, but the optimisations performed depend
on the ultimate destination of the code. For instance, optimisations that eliminate
internal signals should be inhibited when code for simulators and symbolic debuggers
is required. However, experience indicates that optimisation to minimise the number
of registers is desirable for formal verification tools.
Code Generators exploit the close relationship between Boolean automata and sequential
circuits which makes it straightforward to generate netlist formats to be exported to
hardware synthesis and analysis tools such as SIS [5]. Generating portable executable
code like C exploits a hierarchical representation of synchronous automata for runtime
efficiency of the resulting code.
Ultimately we aim to produce code for particular target architectures, like the PIC processors
for micro-controller applications, but optimisation has to be applied with care, particularly
in safety critical applications, as there is often a requirement for readable code
(meaning that requirements can be traced to the executable code).
Validation Analysis functions are intended to improve confidence throughout the steps
of the development chain, helping designers and programmers to reduce the cost of errors
by finding them as early as possible. Several types of analysis function are identified:
Verifiers support design validation through formal proof of logical specifications and re-
quirements. For the most part we provide links and interfaces to third party tools. For
instance, Boolean automata equate with sequential circuits so we can use (among many
adequate alternatives) the SMV model checker, or the NP-TOOLS propositional logic
verifier, on aspects of the design [28, 36].
Testing improves product confidence through simulation. The testing tool allows selection
between several criteria like path or boundary values testing. A prototype test specification
environment to support systematic testing in the sense of [30] has been designed
around the workbench's graphical ARGOS editor. This second type of verification tool
complements the formal verification tools in that it focuses more on behavioural specification
[31].
Timing Analysis Synchronous automata, and particularly Boolean automata, have a simple execution model
that supports fine grained timing analysis. This is important for verifying that a program
can meet strong timing constraints (i.e., to satisfy the synchrony hypothesis).
Simulation The simulation functions provide stepwise interpretation of synchronous au-
tomata, and a means to effect rapid prototyping. Two simulation tools are distinguished:
Simulators illustrate execution of synchronous programs at the source level by highlighting
the syntactical entities of programs which correspond to active signals and registers.
Source viewers, like editors, provide browsing facilities among the different components
of a program.
Animators support rapid prototyping of simple environments for executing programs. Animators
are based on a toolbox of basic components for generating inputs and displaying
outputs (via waveform diagrams, etc.).
The production cell control program described in Section 4 has been developed entirely
using prototype tools from the SYNCHRONIE toolset (the figures were created using the
ARGOS editor). The program itself is moderately complex, having several hundred signals
and 140 control registers, but is easily handled by the VIS (a CTL [9] model checker and
Verilog simulation and hardware synthesis environment [6]) for which we have a simple
back-end code generator.
7. Conclusions and Further Work
Embedded Eiffel has already been used to develop two small-scale industrial products, both
now marketed in Germany: the first is a Massflow Meter (a sensitive coriolis device for
measuring fluid flow [8]); the second is a small electronic lock system based on Radio Frequency
technology [7]. Both were developed using SYNCHRONIE tools, but
the high-level (ESTEREL) specification had to be hand translated into assembler in the latter
case since the target technology (a PIC16C86 micro-controller with a meager 2k ROM) was
not accessible through automatically generated (C) code. Automatically producing code for
such limited hardware from high-level specifications is a formidable challenge that represents
a next design goal of the SYNCHRONIE project.
The SYNCHRONIE project is a member of the (European) Eureka project SYNCHRON:
Bringing the Synchronous Technology for Real-Time Design [1]. This is dedicated to promoting
synchronous programming languages in industry. The SYNCHRONIE workbench is
not therefore being developed in isolation from the current generation of synchronous programming
environments (which promote the respective languages independently). Links
to export synchronous automata to existing tools, such as model checkers like LESAR [14],
and industrial strength verification tools like NP-TOOLS [36], will emerge in due course
since we plan to be fully compatible with the DC (for 'declarative code') format [10], principal
deliverable of the SYNCHRON project.
The idea of integrating synchronous programming languages is not entirely without precedent
therefore, although the efforts to date have mainly focused on providing a common
exchange format between downstream analysis tools (Halbwachs gives a good summary
of the 'common formats' predating DC [12]). Jourdan and others proposed a semantical
integration of ARGOS and LUSTRE, based on a translation of ARGOS into LUSTRE [20],
but that first attempt was flawed in that some causally correct ARGOS programs would produce
causally incorrect LUSTRE. More recently Maraninchi and Halbwachs have shown
how to encode ARGOS in DC [27], and this offers a robust method of merging these two
languages. The 'declarative code', as its name suggests, has been largely influenced by
the dataflow synchronous programming community, and there is as yet some doubt as to its
suitability for (full) ESTEREL. Synchronous automata, in contrast, supply a uniform mathematical
framework through which we are able to freely intermix declarative and imperative,
textual and graphical programming styles.
Acknowledgments
The work reported in this paper is not that of the named authors only. We should like to
take this opportunity to thank all our colleagues in the Embedded Software Design group
at GMD, and give them credit for their efforts in guaranteeing the success of SYNCHRONIE.
The group has been led throughout by Axel Poigné, with significant input from Reinhard
Budde and Leszek Holenderski. Karl-Heinz Sylla (jointly with Budde) designed Embedded
Eiffel as a paradigmatic example for the combination of object-oriented design techniques
with synchronous programming; their several industrial case studies in the design
of real-time systems provided insights from which the whole group has profited. Maciej
Kubiczek, working closely with Leszek Holenderski, very professionally wrote compilers
for ARGOS and LUSTRE, and all the supporting software in the present version of the work-
bench. Agathe Merceron did many case studies in verification using the prototype tools
being developed for the workbench. Monika Müllerburg takes responsibility for all activities
related to testing. Hans Nieters presented the first full version of the presentation layer
including an ARGOS editor in Spring 1996 which put all our ideas to work.
--R
SYNCHRON: a project proposal.
Synchronous languages provide safety in reactive system design.
Preemption in concurrent systems.
The ESTEREL synchronous programming language: Design
VIS: A system for verification and synthesis.
Eingebettete Echtzeitsysteme.
Design and verification of synchronizing skeletons using branching time temporal logic.
The common formats of synchronous languages: The declarative code DC.
Specification and Design of Embedded Systems.
Synchronous Programming of Reactive Systems.
The synchronous data flow programming language LUSTRE.
Programming and verifying real-time systems by means of the synchronous data-flow language LUSTRE
visual formalism for complex systems.
Production cell in LUSTRE.
Software for computers in the application of industrial safety-related systems
A formal approach to reactive systems software: A telecommunications application in ESTEREL.
A multiparadigm language for reactive systems.
Architectural synthesis for medium and high throughput processing with the New CATHEDRAL environment.
Programming real-time applications with SIG- NAL
Formal Development of Reactive Systems
Synchronous automata for reactive
Analysis of cyclic combinational circuits.
Operational and compositional semantics of synchronous automaton compositions.
Compiling argos into boolean equations.
Symbolic Model Checking.
Interpreting one concurrent calculus in another.
Systematic testing: a means for validating reactive systems.
Systematic testing and formal verification to validate reactive systems.
Specification of complex systems.
Boolean automata for implementing ESTEREL.
Constructive analysis of cyclic circuits.
Modelling and verifying systems and software in propositional logic.
--TR
--CTR
Per Bjurus , Axel Jantsch, Modeling of mixed control and dataflow system in MASCOT, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.9 n.5, p.690-703, October 2001
Axel Jantsch , Per Bjurus, Composite signal flow: a computational model combining events, sampled streams, and vectors, Proceedings of the conference on Design, automation and test in Europe, p.154-160, March 27-30, 2000, Paris, France
Klaus Winkelmann, Formal Methods in Designing Embedded Systemsthe SACRES Experience, Formal Methods in System Design, v.19 n.1, p.81-110, July 2001 | embedded software;synchronous automata;reactive systems;synchronous programming |
290092 | Filters, Random Fields and Maximum Entropy (FRAME). | This article presents a statistical theory for texture modeling. This theory combines filtering theory and Markov random field modeling through the maximum entropy principle, and interprets and clarifies many previous concepts and methods for texture analysis and synthesis from a unified point of view. Our theory characterizes the ensemble of images I with the same texture appearance by a probability distribution f(I) on a random field, and the objective of texture modeling is to make inference about f(I), given a set of observed texture examples.In our theory, texture modeling consists of two steps. (1) A set of filters is selected from a general filter bank to capture features of the texture, these filters are applied to observed texture images, and the histograms of the filtered images are extracted. These histograms are estimates of the marginal distributions of f( I). This step is called feature extraction. (2) The maximum entropy principle is employed to derive a distribution p(I), which is restricted to have the same marginal distributions as those in (1). This p(I) is considered as an estimate of f( I). This step is called feature fusion. A stepwise algorithm is proposed to choose filters from a general filter bank. The resulting model, called fields And Maximum Entropy), is a Markov random field (MRF) model, but with a much enriched vocabulary and hence much stronger descriptive ability than the previous MRF models used for texture modeling. Gibbs sampler is adopted to synthesize texture images by drawing typical samples from p(I), thus the model is verified by seeing whether the synthesized texture images have similar visual appearances to the texture images being modeled. Experiments on a variety of 1D and 2D textures are described to illustrate our theory and to show the performance of our algorithms. These experiments demonstrate that many textures which are previously considered as from different categories can be modeled and synthesized in a common framework. | Introduction
Texture is an important characteristic of the appearance of objects in natural scenes,
and is a powerful cue in visual perception. It plays an important role in computer vision,
graphics, and image encoding. Understanding texture is an essential part of understanding
human vision.
Texture analysis and synthesis has been an active research area during the past three
decades, and a large number of methods have been proposed, with different objectives
or assumptions about the underlying texture formation processes. For example, in computer
graphics, reaction-diffusion equations (Witkin and Kass 1991) have been adopted
to simulate some chemical processes that may generate textures on skin of animals. In
computer vision and psychology, however, instead of modeling specific texture formation
process, the goal is to search for a general model which should be able to describe a wide
variety of textures in a common framework, and which should also be consistent with the
psychophysical and physiological understanding of human texture perception.
The first general texture model was proposed by Julesz in the 1960's. Julesz suggested
that texture perception might be explained by extracting the so-called 'k-th order' statis-
tics, i.e., the co-occurrence statistics for intensities at k-tuples of pixels (Julesz 1962).
Indeed, early works on texture modeling were mainly driven by this conjecture (Haralick
1979). A key drawback for this model is that the amount of data contained in the k-th
order statistics is gigantic and thus very hard to handle when k ? 2. On the other hand,
psychophysical experiments show that the human visual system does extract at least some
statistics of order higher than two (Diaconis and Freeman 1981).
More recent work on texture mainly focus on the following two well-established areas.
One is filtering theory, which was inspired by the multi-channel filtering mechanism
discovered and generally accepted in neurophysiology (Silverman et al. 1989). This mechanism
suggests that visual system decomposes the retinal image into a set of sub-bands,
which are computed by convolving the image with a bank of linear filters followed by
some nonlinear procedures. The filtering theory developed along this direction includes
the Gabor filters (Gabor 1946, Daugman 1985) and wavelet pyramids (Mallat 1989, Simoncelli
etc. 1992, Coifman and Wickerhauser 1992, Donoho and Johnstone 1994). The
filtering methods show excellent performance in classification and segmentation (Jain and
Farrokhsia 1991).
The second area is statistical modeling, which characterizes texture images as arising
from probability distributions on random fields. These include time series models (Mc-
Cormick and Jayaramamurthy 1974), Markov chain models (Qian and Terrington 1991),
and Markov random field (MRF) models (Cross and Jain 1983, Mao and Jain 1992, Yuan
and Rao 1993). These modeling approaches involve only a small number of parameters,
thus provide concise representation for textures. More importantly, they pose texture
analysis as a well-defined statistical inference problem. The statistical theories enable
us not only to make inference about the parameters of the underlying probability models
based on observed texture images, but also to synthesize texture images by sampling
from these probability models. Therefore, it provides a rigorous way to test the model by
checking whether the synthesized images have similar visual appearances to the textures
being modeled (Cross and Jain 1983). But usually these models are of very limited forms,
hence suffer from the lack of expressive power.
This paper proposes a modeling methodology which is built on and directly combines
the above two important themes for texture modeling. Our theory characterizes the
ensemble of images I with the same texture appearances by a probability distribution
f(I) on a random field. Then given a set of observed texture examples, our goal is to infer
f(I). The derivation of our model consists of two steps.
(I) A set of filters is selected from a general filter bank to capture features of the
texture. The filters are designed to capture whatever features might be thought to be
characteristic of the given texture. They can be of any size, linear or nonlinear. These
filters are applied to the observed texture images, and histograms of the filtered images
are extracted. These histograms estimate the marginal distributions of f(I). This step is
called feature extraction.
(II) Then a maximum entropy distribution p(I) is constructed, which is restricted to
match the marginal distributions of f(I) estimated in step (I). This step is called feature
fusion.
A stepwise algorithm is proposed to select filters from a general filter bank, and at
each step k it chooses a filter F (k) so that the marginal distributions of f(I) and p(I)
with respect to F (k) have the biggest distance in terms of L 1 norm. The resulting model,
called fields And Maximum Entropy), is a Markov random
field (MRF) model 1 , but with a much more enriched vocabulary and hence much stronger
descriptive power compared with previous MRF models. The Gibbs sampler is adopted
to synthesize texture images by drawing samples from p(I), thus the model is tested by
synthesizing textures in both 1D and 2D experiments.
Our theory is motivated by two aspects. Firstly, a theorem proven in section (3.2)
shows that a distribution f(I) is uniquely determined by its marginals. Therefore if a model p(I) matches all the marginals of f(I), then p(I) = f(I). Secondly, recent
psychophysical research on human texture perception suggests that two 'homogeneous'
textures are often difficult to discriminate when they have similar marginal distributions
from a bank of filters (Bergen and Adelson 1991, Chubb and Landy 1991). Our method
is inspired by and bears some similarities to Heeger and Bergen's (1995) recent work
on texture synthesis, where many natural looking texture images were synthesized by
matching the histograms of filter responses organized in the form of a pyramid.
This paper is arranged as follows. First we set the scene by discussing filtering methods
and Markov random field models in section (2), where both the advantages and disadvantages
of these approaches are addressed. Then in section (3), we derive our FRAME
model and propose a feature matching algorithm for probability inference and stochastic
simulation. Section (4) is dedicated to the design and selection of filters. The texture
modeling experiments are divided into three parts. Firstly section (5) illustrates important
concepts of the FRAME model by designing three experiments for one dimensional
texture synthesis. Secondly a variety of 2D textures are studied in section (6). Then
section (7) discusses a special diffusion strategy for modeling some typical texton images.
Finally section (8) concludes with a discussion and the future work.
Among statisticians, MRF usually refers to those models where the Markov neighborhood is very
small, e.g. 2 or 3 pixels away. Here we use it for any size of neighborhood.
Filtering and MRF Modeling
2.1 Filtering theory
In the various stages along the visual pathway, from retina, to V1, to extra-striate cortex,
cells with increasing sophistication and abstraction have been discovered: center-surround
isotropic retinal ganglion cells, frequency and orientation selective simple cells, and complex
cells that perform nonlinear operations. Motivated by such physiological discoveries,
the filtering theory proposes that the visual system decomposes a retinal image into a set
of sub-band images by convolving it with a bank of frequency and orientation selective
linear filters. This linear filtering process is then followed by some nonlinear operations.
In the design of various filters, Gaussian function plays an important role due to its nice
low-pass frequency property. To fix notation, we define an elongated two-dimensional
Gaussian function as:

G(x, y | σ_x, σ_y) = exp{ −( x²/(2σ_x²) + y²/(2σ_y²) ) },

with scale parameters (σ_x, σ_y).
A simple model for the radially symmetric center-surround ganglion cells is the Laplacian of Gaussian with σ_x = σ_y = σ:

LoG(x, y | σ) ∝ (x² + y² − 2σ²) exp{ −(x² + y²)/(2σ²) }.

Similarly, a model for the simple cells is the Gabor filter (Daugman 1985), which is a pair of cosine and sine waves with frequency ω and amplitude modulated by the Gaussian:

Gabor(x, y | ω, σ_x, σ_y) = G(x, y | σ_x, σ_y) · e^{−iωx}.

By carefully choosing the frequency ω and rotating the filter in the x-y coordinate system, we obtain a bank of filters which cover the entire frequency domain. Such filters have been used successfully for image analysis and synthesis by (Jain and Farrokhsia 1991, Lee 1992).
Other filter banks have also been designed for image processing by (Simoncelli etc. 1992).
The filters mentioned above are linear. Some functions are further applied to these
linear filters to model the nonlinear functions of the complex cell. One way to model the
complex cell is to use the power of each pair of Gabor filters, |Gabor(x, y | ω) * I|². In fact, |Gabor(x, y | ω) * I|² is the local spectrum S(ω) of I at (x, y) smoothed by a Gaussian function.
Thus it serves as a spectrum analyzer.
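The following numpy sketch (our illustration only; the parameter values are arbitrary and scipy.signal.convolve2d is assumed to be available) builds an elongated Gaussian, derives a cosine/sine Gabor pair from it, and computes the power response used above as a smoothed local spectrum.

import numpy as np
from scipy.signal import convolve2d

def gabor_pair(size=15, sigma_x=2.0, sigma_y=4.0, omega=0.5, theta=0.0):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)      # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 / (2 * sigma_x**2) + yr**2 / (2 * sigma_y**2)))
    return g * np.cos(omega * xr), g * np.sin(omega * xr)

def gabor_power(image, **kwargs):
    c, s = gabor_pair(**kwargs)
    rc = convolve2d(image, c, mode='same', boundary='wrap')   # circulant boundary
    rs = convolve2d(image, s, mode='same', boundary='wrap')
    return rc**2 + rs**2            # |Gabor * I|^2, a local spectrum measure

I = np.random.rand(64, 64)          # stand-in for a texture image
S = gabor_power(I, omega=0.8, theta=np.pi / 4)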
Although these filters are very efficient in capturing local spatial features, some problems
are not well understood. For example, i) given a bank of filters, how to choose the
best set of filters? especially when some of the filters are linear while others are nonlinear,
or the filters are highly correlated to each other, ii) after selecting the filters, how to
fuse the features captured by them into a single texture model? These questions will be
answered in the rest of the paper.
2.2 MRF modeling
MRF models were popularized by Besag (Besag 1973) for modeling spatial interactions
on lattice systems and were used by (Cross and Jain 1983) for texture modeling. An
important characteristic of MRF modeling is that the global patterns are formed via
stochastic propagation of local interactions, which is particularly appropriate for modeling
textures since they are characterized by global but not predictable repetitions of similar
local structures.
In MRF models, a texture is considered as a realization from a random field I defined
over a spatial configuration D, for example, D can be an array or a lattice. We denote I(~v)
as the random variable at a location ~v ∈ D, and let N = { N_~v : ~v ∈ D } be a neighborhood system of D, which is a collection of subsets of D satisfying (1) ~v ∉ N_~v and (2) ~v ∈ N_~u if and only if ~u ∈ N_~v. The pixels in N_~v are called neighbors of ~v. A subset C of D is a clique if every pair of distinct pixels in C are neighbors of each other; C denotes the set of all cliques.
Definition. p(I) is an MRF distribution with respect to N if p(I(~v) | I(−~v)) = p(I(~v) | I(N_~v)), where I(−~v) denotes the values of all pixels other than ~v, and, for A ⊂ D, I(A) denotes the values of all pixels in A.
Definition. p(I) is a Gibbs distribution with respect to N if

p(I) = (1/Z) exp{ −Σ_{C ∈ C} λ_C( I(C) ) },

where Z is the normalizing constant (or partition function), and λ_C(·) is a function of the intensities of the pixels in clique C (called the potential of C). Some constraints can be imposed on the λ_C for them to be uniquely determined.
The Hammersley-Clifford theorem establishes the equivalence between MRF and the
Gibbs distribution (Besag 1973):
Theorem 1 For a given N, p(I) is an MRF distribution ⟺ p(I) is a Gibbs distribution.
This equivalence provides a general method for specifying an MRF on D, i.e., first
choose an N, and then specify λ_C. The MRF is stationary if for every C ∈ C, λ_C depends
only on the relative positions of its pixels. This is often assumed in texture modeling.
Existing MRF models for texture modeling are mostly auto-models (Besag 1973) with
pairwise potentials, i.e. λ_C ≡ 0 if |C| > 2, and p(I) has the following form

p(I) = (1/Z) exp{ Σ_{~v} α_{~v} I(~v) + Σ_{~u,~v} β_{~u,~v} I(~u) I(~v) },

where ~u and ~v are neighbors.
The above MRF model is usually specified through conditional distributions p(I(~v) | I(N_~v)), where the neighborhood is usually of order less than or equal to three pixels, and some further restrictions are usually imposed for p(I(~v) | I(−~v)) to be a linear regression
or the generalized linear model.
Two commonly used auto-models are the auto-binomial model and the auto-normal
model. The auto-binomial model is used for images with finite grey levels I(~v) ∈ {0, 1, ..., G − 1} (Cross and Jain 1983); the conditional distribution is a logistic regression,

I(~v) | I(N_~v) ~ Binomial(G − 1, p_~v),  where  log( p_~v / (1 − p_~v) ) = α + Σ_{~u ∈ N_~v} β_{~u−~v} I(~u).

It can be shown that the joint distribution is of the form

p(I) = (1/Z) exp{ Σ_{~v} [ log binom(G−1, I(~v)) + α I(~v) ] + Σ_{~u,~v} β_{~u−~v} I(~u) I(~v) }.

When G = 2, the auto-binomial model reduces to the auto-logistic model (i.e., the Ising
model), which is used to model binary images.
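As a sketch of how such auto-models are simulated through their conditional distributions (our illustration, for the binary auto-logistic special case with a first-order neighborhood and circulant boundary), a single-site Gibbs sampler repeatedly redraws each pixel from its logistic conditional:

import numpy as np

def gibbs_autologistic(N=64, alpha=0.0, beta=0.8, sweeps=50, seed=0):
    rng = np.random.default_rng(seed)
    I = rng.integers(0, 2, size=(N, N))
    for _ in range(sweeps):
        for x in range(N):
            for y in range(N):
                # sum over the four nearest neighbors (circulant boundary)
                s = (I[(x - 1) % N, y] + I[(x + 1) % N, y] +
                     I[x, (y - 1) % N] + I[x, (y + 1) % N])
                eta = alpha + beta * s                  # logit of p(I(v)=1 | neighbors)
                I[x, y] = rng.random() < 1.0 / (1.0 + np.exp(-eta))
    return I

sample = gibbs_autologistic()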
The auto-normal model is used for images with continuous grey levels (Yuan and Rao
1993). It is evident that if an MRF p(I) is a multivariate normal distribution, then p(I)
must be auto-normal, so the auto-normal model is also called a Gaussian MRF or GMRF.
The conditional distribution is a normal regression,

I(~v) | I(N_~v) ~ N( μ + Σ_{~u ∈ N_~v} β_{~u−~v} ( I(~u) − μ ), σ² ),

and p(I) is of the form

p(I) = (1/Z) exp{ −(1/(2σ²)) (I − μ)ᵀ B (I − μ) },

i.e., the multivariate normal distribution N(μ, σ² B⁻¹), where the diagonal elements of B are unity and the off-diagonal (~u, ~v) element of it is −β_{~u−~v}.
Another MRF model for texture is the OE-model (Geman and Graffigne 1986):
p(I) = (1/Z) exp{ −Σ_{~u,~v neighbors} φ( I(~u) − I(~v) ) },

where φ is a known even symmetric function, and the φ-model can be viewed as extended
from the Potts model (Winkler 1995).
The advantage of the auto-models is that the parameters in the models can be easily
inferred by auto-regression, but they are severely limited in the following two aspects: i)
the cliques are too small to capture features of texture, ii) the statistics on the cliques
specifies only the first-order and second order moments (e.g. means and covariances for
GMRF). However, many textures have local structures much larger than three or four pixels, and the covariance information, or equivalently the spectrum, cannot adequately characterize textures, as suggested by the existence of distinguishable texture pairs with identical
second-order or even third-order moments, as well as indistinguishable texture pairs with
different second-order moments (Diaconis and Freeman 1981). Moreover many textures
are strongly non-Gaussian, regardless of neighborhood size.
The underlying cause of these limitations is that equation 3 involves too many parameters
if we increase the neighborhood size or the order of the statistics, even for the
simplest auto-models. This suggests that we need carefully designed functional forms
for λ_C(·) to efficiently characterize local interactions as well as the statistics on the local
interactions.
3 From maximum entropy to FRAME model
3.1 The basics of maximum entropy
Maximum entropy (ME) is an important principle in statistics for constructing a probability
distribution p on a set of random variables X (Jaynes 1957). Suppose the available information is the expectations of some known functions φ_n(x), n = 1, ..., N:

E_f[φ_n(x)] = ∫ φ_n(x) f(x) dx = μ_n,  n = 1, ..., N.

Let Ω = { p(x) : E_p[φ_n(x)] = μ_n, n = 1, ..., N } be the set of all probability distributions p(x) which satisfy the constraints. The ME principle suggests that a good choice of the probability distribution is the one that has the maximum entropy, i.e.,

p*(x) = arg max { −∫ p(x) log p(x) dx },   (11)

subject to E_p[φ_n(x)] = ∫ φ_n(x) p(x) dx = μ_n, n = 1, ..., N, and ∫ p(x) dx = 1.
By Lagrange multipliers, the solution for p(x) is:

p(x; Λ) = (1/Z(Λ)) exp{ −Σ_{n=1}^{N} λ_n φ_n(x) },   (12)

where Λ = (λ_1, ..., λ_N) is the Lagrange parameter, and Z(Λ) = ∫ exp{ −Σ_n λ_n φ_n(x) } dx is the partition function that depends on Λ; it has the following properties:

∂ log Z(Λ) / ∂λ_n = −E_p[φ_n(x)],
∂² log Z(Λ) / ∂λ_n ∂λ_m = E_p[φ_n(x) φ_m(x)] − E_p[φ_n(x)] E_p[φ_m(x)].
In equation (12), Λ = (λ_1, ..., λ_N) is determined by the constraints in equation (11). But a closed-form solution for Λ is not available in general, especially when φ_n(·) gets complicated, so instead we seek numerical solutions by solving the following equations iteratively:

dλ_n/dt = E_{p(x; Λ)}[φ_n(x)] − μ_n,  n = 1, ..., N.   (13)
The second property of the partition function Z(Λ) tells us that the Hessian matrix of log Z(Λ) is the covariance matrix of the vector (φ_1(x), φ_2(x), ..., φ_N(x)), which is positive definite 2 ; hence log Z(Λ) is strictly convex with respect to (λ_1, ..., λ_N), and log p(x; Λ) is strictly concave. Therefore, given a set of consistent constraints, there is a unique solution for Λ in equation (13).
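A toy numerical illustration (ours) of solving equation (13): for a single constraint φ(x) = x on the discrete set {0, ..., 9}, the partition function can be evaluated exactly, and the gradient iteration drives E_p[φ] to the prescribed value μ.

import numpy as np

xs = np.arange(10.0)
phi = xs                        # phi_1(x) = x
mu = 3.0                        # prescribed expectation

lam, step = 0.0, 0.05
for _ in range(2000):
    w = np.exp(-lam * phi)      # unnormalised p(x; lambda); Z(lambda) = w.sum()
    p = w / w.sum()
    lam += step * (p @ phi - mu)    # d lambda / dt = E_p[phi] - mu
print(lam, p @ phi)             # the expectation converges to mu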
3.2 Deriving the FRAME model
Let image I be defined on a discrete domain D; D can be an N × N lattice. For each pixel ~v ∈ D, I(~v) ∈ L, where L is an interval of R or L ⊂ Z. For each texture, we assume that
there exists a "true" joint probability density f(I) over the image space L^{|D|}, and f(I) should concentrate on a subspace of L^{|D|} which corresponds to texture images that have
perceptually similar texture appearances. Before we derive the FRAME model, we first
fix the notation as below.
Given an image I and a filter F^(α), with α being an index of the filter, we let I^(α)(~v) = F^(α) * I(~v) be the filter response at location ~v, and I^(α) the filtered image. The marginal empirical distribution (histogram) of I^(α) is

H^(α)(z) = (1/|D|) Σ_{~v ∈ D} δ( z − I^(α)(~v) ),

where δ(·) is the Dirac delta function. The marginal distribution of f(I) with respect to F^(α) at location ~v is denoted by

f^(α)_~v(z) = E_f[ δ( z − I^(α)(~v) ) ] = ∫∫_{I^(α)(~v) = z} f(I) dI.
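In practice the marginal statistics above are collected as binned histograms of the filtered image. The sketch below (ours; scipy is again assumed to be available, and the gradient filter is only an example) computes such a histogram, optionally against a fixed set of bin edges so that observed and synthesized images can later be compared on the same bins.

import numpy as np
from scipy.signal import convolve2d

def filtered_histogram(image, filt, n_bins=16, edges=None):
    resp = convolve2d(image, filt, mode='same', boundary='wrap')   # I^(alpha)
    if edges is None:
        edges = np.linspace(resp.min(), resp.max(), n_bins + 1)
    H, _ = np.histogram(resp, bins=edges)
    return H / max(H.sum(), 1), edges      # normalised histogram H^(alpha)

I_obs = np.random.rand(128, 128)           # stand-in for the observed texture
nabla_x = np.array([[-1.0, 1.0]])          # an example gradient filter
H_obs, edges = filtered_histogram(I_obs, nabla_x)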
At first thought, it seems an intractable problem to estimate f(I) due to the overwhelming
dimensionality of image I. To reduce dimensions, we first introduce the following
theorem.
Here, it is reasonable to assume that φ_i(x) is independent of φ_j(x) if i ≠ j.
Theorem 2 Let f(I) be the |D|-dimensional continuous probability distribution of a texture; then f(I) is a linear combination of the f^(ξ), the latter being the marginal distributions on the linear filter responses F^(ξ) * I.
[Proof]. By inverse Fourier transform, we have

f(I) = ∫ ... ∫ \hat{f}(ξ) e^{i 2π ⟨ξ, I⟩} dξ,

where \hat{f}(ξ) is the characteristic function of f(I), and

\hat{f}(ξ) = ∫ ... ∫ e^{−i 2π ⟨ξ, I⟩} f(I) dI
          = ∫ e^{−i 2π z} ( ∫ ... ∫_{⟨ξ, I⟩ = z} f(I) dI ) dz
          = ∫ e^{−i 2π z} f^(ξ)(z) dz,

where ⟨ξ, I⟩ = Σ_{~v} ξ(~v) I(~v) is the inner product, and by definition f^(ξ)(z) = ∫ ... ∫_{⟨ξ, I⟩ = z} f(I) dI is the marginal distribution of F^(ξ) * I, where we define F^(ξ) as a linear filter. □
Theorem 2 transforms f(I) into a linear combination of its one dimensional marginal
distributions. 3 Thus it motivates a new method for inferring f(I): construct a distribution
p(I) so that p(I) has the same marginal distributions f^(ξ). If p(I) matches all marginal distributions of f(I), then p(I) = f(I). But this method will involve an uncountable number of filters, and each filter F^(ξ) is as big as the image I.
Our second motivation comes from recent psychophysical research on human texture
perception, and the latter suggests that two homogeneous textures are often difficult to
discriminate when they produce similar marginal distributions for responses from a bank
of filters (Bergen and Adelson 1991, Chubb and Landy 1991). This means that it is
plausible to ignore some statistical properties of f(I) which are not important for human
texture discrimination.
To make texture modeling a tractable problem, in the rest of this paper we make
the following assumptions to limit the number of filters and the window size of each
3 It may help understand the spirit of this theorem by comparing it to the slice-reconstruction of 3D
volume in tomography.
filter for computational reason, though these assumptions are not necessary conditions
for our theory to hold true. 1). We limit our model to homogeneous textures, thus f(I) is
stationary with respect to location ~v. 4 2). For a given texture, all features which concern
texture perception can be captured by "locally" supported filters. In other words, the
sizes of filters should be smaller than the size of the image. For example, the size of the image is 256 × 256 pixels, and the sizes of the filters we used are limited to be less than 33 × 33 pixels. These filters can be linear or non-linear as we discussed in section (2.1). 3). Only a finite set of filters is used to estimate f(I).
Assumptions 1) and 2) are made because we often have access to only one observed
(training) texture image. For a given observed image I^obs and a filter F^(α), we let I^{obs(α)} denote the filtered image, and H^{obs(α)}(z) the histogram of I^{obs(α)}. According to assumption 1), f^(α)_~v(z) = f^(α)(z) is independent of ~v. By ergodicity, H^{obs(α)}(z) is a consistent estimator of f^(α)(z). Assumption 2) ensures that the image size is large relative to the support of the filters, so that ergodicity takes effect for H^{obs(α)}(z) to be an accurate estimate of f^(α)(z).
Now for a specific texture, let S_K = { F^(α), α = 1, 2, ..., K } be a finite set of well selected filters, and let f^(α)(z), α = 1, ..., K, be the corresponding marginal distributions of f(I). We denote the probability distributions p(I) which match these marginal distributions as a set,

Ω = { p(I) : E_p[ δ( z − I^(α)(~v) ) ] = f^(α)(z),  ∀z, ∀~v ∈ D, α = 1, ..., K },

where E_p[ δ( z − I^(α)(~v) ) ] is the marginal distribution of p(I) with respect to filter F^(α) at location ~v. Thus according to assumption 3), any p(I) ∈ Ω is perceptually a good enough model for the texture, provided that we have enough well selected filters. Then we choose from Ω a distribution p(I) which has the maximum entropy,

p*(I) = arg max { −∫ p(I) log p(I) dI },   (15)

subject to E_p[ δ( z − I^(α)(~v) ) ] = f^(α)(z),  ∀z, ∀~v ∈ D, α = 1, ..., K,   (16)

and ∫ p(I) dI = 1.
4 Throughout this paper, we use circulant boundary conditions.
The reason for choosing the maximum entropy (ME) distribution is that while
p(I) satisfies the constraints along some dimensions, it is made as random as possible
in the other, unconstrained dimensions, since entropy is a measure of randomness. In other
words, p(I) should represent no more information than is available. Therefore an ME
distribution gives the simplest explanation for the constraints and thus the purest fusion
of the extracted features.
The constraints in equation (15) differ from the ones given in section (3.1) in that z
takes continuous real values; hence there is an uncountable number of constraints, and therefore
the Lagrange parameter λ takes the form of a function of z. Also, since the constraints are
the same for all locations v ∈ D, λ should be independent of v. Solving this maximization
problem gives the ME distribution:

    p(I; Λ, S) = (1/Z(Λ)) exp{ − Σ_{α=1}^{K} Σ_{v∈D} λ^(α)(I^(α)(v)) },   (16)

    Z(Λ) = ∫ exp{ − Σ_{α=1}^{K} Σ_{v∈D} λ^(α)(I^(α)(v)) } dI,   (17)

where S = {F^(1), ..., F^(K)} is the set of selected filters, and Λ = {λ^(1)(), ..., λ^(K)()}
is the Lagrange parameter. Note that in equations (16) and (17), for each filter F^(α), λ^(α)() takes
the form of a continuous function of the filter response I^(α)(v).
To proceed further, let us derive a discrete form of equation (16). Assume that the filter
response I^(α)(v) is quantized into L discrete grey levels, so that z takes values from the
set {z_1^(α), ..., z_L^(α)}. In general, the widths of these bins do not have to be equal, and
the number of grey levels L for each filter response may vary. As a result, the marginal
distributions and histograms are approximated by piecewise-constant functions of L
bins, and we denote these piecewise functions as vectors: H^(α) = (H_1^(α), ..., H_L^(α)) is
the histogram of I^(α), H^obs(α) denotes the histogram of I^obs(α), and the potential function
λ^(α)() is approximated by the vector λ^(α) = (λ_1^(α), ..., λ_L^(α)). Then, by changing the order
of summations, equation (16) is rewritten as

    p(I; Λ_K, S_K) = (1/Z(Λ_K)) exp{ − Σ_{α=1}^{K} ⟨λ^(α), H^(α)⟩ },   (18)

where ⟨·,·⟩ denotes the inner product of two vectors.
The virtue of equation (18) is that it provides us with a simple parametric model for
the probability distribution on I, and this model has the following properties:
• The model is specified by Λ_K = (λ^(1), ..., λ^(K)) and S_K = {F^(1), ..., F^(K)}.
• Given an image I, its histograms H^(1), ..., H^(K) are sufficient statistics, i.e.,
p(I; Λ_K, S_K) is a function of (H^(1), ..., H^(K)).
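In this discrete form the model is driven entirely by the inner products ⟨λ^(α), H^(α)⟩. A minimal Python sketch of the corresponding energy follows (it assumes histograms computed as in the earlier subband_histogram sketch):

    import numpy as np

    def frame_energy(histograms, lambdas):
        """U(I) = sum_alpha <lambda^(alpha), H^(alpha)(I)>, the exponent of equation (18).
        p(I; Lambda, S) is proportional to exp(-U(I)); Z(Lambda) is never computed."""
        return float(sum(np.dot(lam, h) for lam, h in zip(lambdas, histograms)))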
We plug equation (18) into the constraints of the ME distribution, and solve for Λ_K
iteratively by the following equation,

    dλ^(α)/dt = E_{p(I; Λ_K, S_K)}[H^(α)] − H^obs(α),   α = 1, ..., K.   (19)

In equation (19), we have substituted H^obs(α) for f^(α), and E_{p(I; Λ_K, S_K)}[H^(α)] is the expected
histogram of the filtered image I^(α), where I follows p(I; Λ_K, S_K) with the current
Λ_K. Equation (19) converges to the unique solution Λ*_K, as we discussed in
section (3.1), and Λ*_K is called the ME-estimator.
It is worth mentioning that this ME-estimator is equivalent to the maximum likelihood
estimator (MLE),

    Λ*_K = arg max_{Λ_K} log p(I^obs; Λ_K, S_K) = arg max_{Λ_K} { − Σ_{α=1}^{K} ⟨λ^(α), H^obs(α)⟩ − log Z(Λ_K) }.   (20)

By gradient ascent, maximizing the log-likelihood gives equation (19), following property
i) of the partition function Z(Λ_K).
In equation (19), at each step, given Λ_K and hence p(I; Λ_K, S_K), the analytic form of
E_{p(I; Λ_K, S_K)}[H^(α)] is not available; instead we propose the following method to estimate it,
as we did for f^(α) before. We draw a typical sample from p(I; Λ_K, S_K), and thus synthesize
a texture image I^syn. Then we use the histogram H^syn(α) of I^syn(α) to approximate
E_{p(I; Λ_K, S_K)}[H^(α)]. This requires that the size of the I^syn that we are synthesizing should be
large enough. 5
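With this Monte Carlo substitution, equation (19) reduces to a simple vector update; the Python sketch below is illustrative only, and the step size is an assumption:

    import numpy as np

    def update_lambdas(lambdas, obs_hists, syn_hists, step=0.1):
        """One discretized step of equation (19): move each lambda^(alpha) along
        E_p[H^(alpha)] - H^obs(alpha), with the expectation estimated by the
        histogram of the current synthesized image I^syn."""
        return [lam + step * (np.asarray(syn) - np.asarray(obs))
                for lam, obs, syn in zip(lambdas, obs_hists, syn_hists)]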
To draw a typical sample image from p(I; Λ_K, S_K), we use the Gibbs sampler (Geman
and Geman 1984), which simulates a Markov chain in the image space L^{|D|}. The Markov
chain starts from any random image, for example a white noise image, and it converges
to a stationary process with distribution p(I; Λ_K, S_K). Thus when the Gibbs sampler
converges, the images synthesized follow distribution p(I; Λ_K, S_K).
In summary, we propose the following algorithm for inferring the underlying probability
model p(I; Λ_K, S_K) and for synthesizing the texture according to p(I; Λ_K, S_K). The
algorithm stops when the subband histograms of the synthesized texture closely match
the corresponding histograms of the observed images. 6
5 Empirically, 128 × 128 or 256 × 256 seems to give a good estimation.
6 We assume the histogram of each subband I^(α) is normalized such that Σ_{i=1}^{L} H_i^(α) = 1; therefore all the
λ^(α) computed in this algorithm have one extra degree of freedom for each α, i.e., we can
increase λ^(α) by a constant without changing p(I; Λ_K, S_K). This constant will be absorbed
by the partition function Z(Λ_K).
Algorithm 1. The FRAME algorithm
  Input a texture image I^obs.
  Select a group of K filters S_K = {F^(1), F^(2), ..., F^(K)}.
  Compute H^obs(α), α = 1, ..., K.
  Initialize λ^(α) ← 0, α = 1, ..., K.
  Initialize I^syn as a uniform white noise texture. 7
  Repeat
    Calculate H^syn(α), α = 1, ..., K from I^syn; use it for E_{p(I; Λ_K, S_K)}[H^(α)].
    Update λ^(α), α = 1, ..., K by equation (19).
    Apply the Gibbs sampler to flip I^syn for w sweeps under p(I; Λ_K, S_K).
  Until (1/L) Σ_{i=1}^{L} |H_i^syn(α) − H_i^obs(α)| < ε for α = 1, ..., K.
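A schematic Python driver for Algorithm 1 might look as follows; it reuses the hypothetical helpers subband_histogram and update_lambdas sketched earlier and a gibbs_sweeps routine sketched after Algorithm 2 below, and the iteration counts, grey-level count, and stopping tolerance are assumptions:

    import numpy as np

    def frame(obs_image, kernels, bin_edges, n_updates=50, sweeps=4, tol=1e-2, n_grey=8):
        """Schematic FRAME loop: match synthesized subband histograms to observed ones."""
        obs_hists = [subband_histogram(obs_image, k, bin_edges) for k in kernels]
        lambdas = [np.zeros(len(bin_edges) - 1) for _ in kernels]
        syn = np.random.randint(0, n_grey, size=obs_image.shape)   # uniform white noise start
        for _ in range(n_updates):
            syn_hists = [subband_histogram(syn, k, bin_edges) for k in kernels]
            lambdas = update_lambdas(lambdas, obs_hists, syn_hists)
            syn = gibbs_sweeps(syn, kernels, bin_edges, lambdas, w=sweeps, n_grey=n_grey)
            if max(np.abs(o - s).mean() for o, s in zip(obs_hists, syn_hists)) < tol:
                break
        return lambdas, syn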
Algorithm 2. The Gibbs sampler for w sweeps
  Given image I(v), flip_counter ← 0.
  Repeat
    Randomly pick a location v under the uniform distribution.
    For val = 0, ..., G − 1, with G being the number of grey levels of I,
      Calculate p(I(v) = val | I(−v)) by equation (18).
    Randomly flip I(v) ← val under p(val | I(−v)).
    flip_counter ← flip_counter + 1.
  Until flip_counter = w × M × N.
In Algorithm 2, to compute p(I(v) = val | I(−v)), we set I(v) to val; due to the Markov
property, we only need to compute the changes of I^(α) in the neighborhood of v. The size
of the neighborhood is determined by the size of filter F^(α). With the updated I^(α), we
calculate H^(α), and the probability is normalized such that Σ_{val} p(I(v) = val | I(−v)) = 1.
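A deliberately naive Python version of Algorithm 2 is sketched below; it recomputes full subband histograms for each candidate grey level instead of the local update described above, so it is correct but slow, and it reuses the earlier hypothetical helpers:

    import numpy as np

    def gibbs_sweeps(image, kernels, bin_edges, lambdas, w=1, n_grey=8, rng=None):
        """Naive single-site Gibbs sampler for p(I; Lambda, S), run for w sweeps."""
        rng = np.random.default_rng() if rng is None else rng
        img = image.copy()
        rows, cols = img.shape
        for _ in range(w * rows * cols):
            y, x = int(rng.integers(rows)), int(rng.integers(cols))
            energies = []
            for val in range(n_grey):
                img[y, x] = val
                hists = [subband_histogram(img, k, bin_edges) for k in kernels]
                energies.append(frame_energy(hists, lambdas))
            energies = np.array(energies)
            p = np.exp(-(energies - energies.min()))   # conditional p(val | rest), up to Z
            p /= p.sum()
            img[y, x] = rng.choice(n_grey, p=p)
        return img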
In the Gibbs sampler, flipping a pixel is a step of the Markov chain, and we define
flipping |D| pixels as a sweep, where |D| is the size of the synthesized image. The
overall iterative process then becomes an inhomogeneous Markov chain. At the beginning of
the process, p(I; Λ_K, S_K) is a "hot" uniform distribution. By updating the parameters,
the process gets closer and closer to the target distribution, which is much colder. So the
algorithm is very much like a simulated annealing algorithm (Geyer and Thompson 1995),
which is helpful for getting around local modes of the target distribution. We refer to
(Winkler 1995) for discussion of alternative sampling methods.
7 Note that a white noise image with uniform distribution is a sample from p(I; Λ_K, S_K) with λ^(α) = 0, α = 1, ..., K.
The computational complexity of the above algorithm is notoriously large: O(U × w ×
|D| × G × K × FH × FW), with U the number of updating steps for Λ_K, w the number
of sweeps each time we update Λ_K, |D| the size of image I^syn, G the number of grey levels
of I, K the number of filters, and FH, FW the average window sizes of the filters. To
synthesize a 128 × 128 texture, a typical run (on the order of 50 updating steps, 4 sweeps per
step, 8 grey levels, and a few filters of moderate window size) takes about one day on a Sun-20.
Therefore it is very important to choose a small set of filters which can best capture the
features of the texture. We discuss how to choose filters in section (4).
3.3 A general framework
The probability distribution we derived in the last section is of the form

    p(I; Λ, S) = (1/Z(Λ)) exp{ − Σ_{α=1}^{K} Σ_{v∈D} λ^(α)(I^(α)(v)) }.   (21)
This model is significant in the following aspects.
First, the model is directly built on the features I (ff) (~v) extracted by a set of filters
F (ff) . By choosing the filters, it can easily capture the properties of the texture at multiple
scales and orientations, either linear or nonlinear. Hence it is much more expressive than
the cliques used in the traditional MRF models.
Second, instead of characterizing only the first and second order moments of the
marginal distributions as the auto-regression MRF models do, the FRAME model includes
the whole marginal distribution. Indeed, in a simplified case, if the constraints
on the probability distribution are given in the form of moments up to order m instead of
marginal distributions, then the functions λ^(α)(·) in equation (21) become polynomials
of order m. In this case, the complexity of the FRAME model is measured by the following
two aspects: 1) the number of filters and the size of the filters, 2) the order m of
the moments. As we will see in later experiments, equation (21) enables us to model
strongly non-Gaussian textures.
It is also clear to us that all existing MRF texture models can be shown to be special
cases of the FRAME model.
Finally, if we relax the homogeneity assumption, i.e., let the marginal distribution
of I^(α)(v) depend on v, then by specifying these marginal distributions, denoted by f_v^(α)(z),
p(I) will have the form

    p(I) = (1/Z) exp{ − Σ_{α=1}^{K} Σ_{v∈D} λ_v^(α)(I^(α)(v)) }.

This distribution is relevant in texture segmentation, where the λ_v^(α) are assumed piecewise
consistent with respect to v, and in shape inference, where λ_v^(α) changes systematically with
respect to v, resulting in a slowly varying texture. We shall not study non-stationary
textures in this paper.
In summary, the FRAME model incorporates and generalizes the attractive properties
of filtering theory and random field models, and it interprets many previous
methods for texture modeling from a unified point of view.
4 Filter selection
In the last section, we build a probability model for a given texture based on a set of
filters SK . For computational reasons SK should be chosen as small as possible, and a
key factor for successful texture modeling is to choose the right set of filters that best
characterizes the features of the texture being modeled. In this section, we propose a
novel method for filter selection.
4.1 Design of the filter bank
To describe a wide variety of textures, we first need to design a filter bank B. B can
include all previously designed multi-scale filters (Daugman 1985, Simoncelli et al. 1992) or
wavelets (Mallat 1989, Donoho and Johnstone 1994, Coifman and Wickerhauser 1992).
In this paper, we shall not discuss the optimal criterion for constructing a filter bank.
Throughout the experiments in this paper, we use five kinds of filters.
1. The intensity filter δ(), which captures the DC component.
2. The isotropic center-surround filters, i.e., the Laplacian of Gaussian filters. Here
we rewrite equation (1) as

    LG(x, y | T) = const · (x² + y² − T²) e^{−(x²+y²)/T²},

where T = √2 σ stands for the scale of the filter. We choose eight scales and denote
these filters by LG(T) (an illustrative code sketch of these and the Gabor filters follows this list).
3. The Gabor filters with both sine and cosine components. We choose a simple case
of equation (2): a Gaussian-windowed sinusoid of wavelength T and orientation θ.
We choose six scales. We can see that these filters are not even approximately orthogonal to each other. We
denote by Gcos(T, θ) and Gsin(T, θ) the cosine and sine components of the Gabor filters.
4. The spectrum analyzers, denoted by SP(T, θ), whose responses are the power of
the Gabor pairs: |(Gabor ∗ I)(x, y)|².
5. Some specially designed filters for one-dimensional textures and the textons, see
sections (5) and (7).
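The Python sketch below builds small kernels of the kinds listed above; the window sizes, scale parameterizations, and constants are assumptions for illustration, not the paper's exact filter bank:

    import numpy as np
    from scipy.signal import convolve2d

    def log_kernel(T, half=8):
        """Isotropic Laplacian-of-Gaussian filter LG(T) on a (2*half+1)^2 window (assumed form)."""
        x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
        r2 = x**2 + y**2
        k = (r2 - T**2) * np.exp(-r2 / T**2)
        return k - k.mean()                     # zero mean: the DC component is left to filter 1

    def gabor_pair(T, theta, half=8):
        """Cosine and sine Gabor kernels Gcos(T, theta), Gsin(T, theta) (assumed form)."""
        x, y = np.meshgrid(np.arange(-half, half + 1), np.arange(-half, half + 1))
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        env = np.exp(-(xr**2 + yr**2) / (2 * (T / 2.0)**2))
        return env * np.cos(2 * np.pi * xr / T), env * np.sin(2 * np.pi * xr / T)

    def spectrum_response(image, T, theta):
        """Spectrum analyzer SP(T, theta): power of the Gabor pair, |(Gabor * I)(x, y)|^2."""
        gc, gs = gabor_pair(T, theta)
        rc = convolve2d(image, gc, mode='same', boundary='wrap')
        rs = convolve2d(image, gs, mode='same', boundary='wrap')
        return rc**2 + rs**2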
4.2 A stepwise algorithm for filter selection
For a fixed model complexity K, one way to choose SK from B is to search for all possible
combinations of K filters in B and compute the corresponding p(I; K ; SK ). Then by
comparing the synthesized texture images following each p(I; K ; SK ), we can see which
set of filters is the best. Such a brute force search is computationally infeasible, and for a
specific texture we often do not know what K is. Instead, we propose a stepwise greedy
strategy. We start from S_0 = ∅ and p(I; Λ_0, S_0) a uniform distribution, and then
sequentially introduce one filter at a time.
Suppose that at the k-th step we have chosen S_k = {F^(1), ..., F^(k)} and obtained
a maximum entropy distribution p(I; Λ_k, S_k), so that E_{p(I; Λ_k, S_k)}[H^(α)] = H^obs(α),
α = 1, ..., k. Then at the (k+1)-th step, for each filter F^(β) ∈ B/S_k, we denote by
d(β) = D(E_{p(I; Λ_k, S_k)}[H^(β)], f^(β)) the distance between E_{p(I; Λ_k, S_k)}[H^(β)] and f^(β),
which are respectively the marginal distributions of p(I; Λ_k, S_k)
and f(I) with respect to filter F^(β). Intuitively, the bigger d(β) is, the more information
F^(β) carries, since it reports a big difference between p(I; Λ_k, S_k) and f(I). Therefore we
should choose the filter which has the maximal distance, i.e.,

    F^(k+1) = arg max_{F^(β) ∈ B/S_k} d(β).
Empirically we choose to measure the distance d(β) in terms of an L_p-norm, i.e.,

    d(β) = ‖ E_{p(I; Λ_k, S_k)}[H^(β)] − f^(β) ‖_p.   (26)

In the experiments of this paper we choose p = 1.
To estimate f^(β) and E_{p(I; Λ_k, S_k)}[H^(β)], we apply F^(β) to the observed image I^obs and to
the synthesized image I^syn sampled from p(I; Λ_k, S_k), and substitute the histograms
of the filtered images for f^(β) and E_{p(I; Λ_k, S_k)}[H^(β)], i.e., d(β) ≈ ‖ H^syn(β) − H^obs(β) ‖_p.
The filter selection procedure stops when d(β) for all filters F^(β) in the filter bank
is smaller than a constant ε. Due to fluctuation, various patches of the same observed
texture image often have a certain amount of histogram variance, and we use such a
variance for ε.
In summary, we propose the following algorithm for filter selection.
Algorithm 3. Filter selection
Let B be a bank of filters, S the set of selected filters, I^obs the observed texture image, and
I^syn the synthesized texture image.
  Initialize k ← 0, S ← ∅, p(I) ← uniform distribution, I^syn ← uniform noise.
  For each F^(α) ∈ B do
    Compute I^obs(α) by applying F^(α) to I^obs.
    Compute the histogram H^obs(α) of I^obs(α).
  Repeat
    For each F^(β) ∈ B/S do
      Compute I^syn(β) by applying F^(β) to I^syn.
      Compute the histogram H^syn(β) of I^syn(β).
      Compute d(β) = ‖ H^syn(β) − H^obs(β) ‖.
    Choose F^(k+1) so that d(k+1) = max{ d(β) : F^(β) ∈ B/S }.
    S ← S ∪ {F^(k+1)}, k ← k + 1.
    Starting from p(I) and I^syn, run Algorithm 1 to compute the new p*(I) and I^syn*.
    p(I) ← p*(I) and I^syn ← I^syn*.
  Until d(β) < ε for all F^(β) ∈ B/S.
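Stripped of bookkeeping, the selection loop is an argmax over subband-histogram distances followed by a re-fit; the Python sketch below is schematic, reuses the hypothetical frame and subband_histogram helpers from earlier sketches, and assumes an L1 distance and an arbitrary tolerance:

    import numpy as np

    def select_filters(obs_image, bank, bin_edges, eps=0.1, n_grey=8):
        """Greedy filter pursuit (schematic): repeatedly add the filter in `bank`
        (a dict name -> kernel) whose observed and synthesized subband histograms
        differ the most, then re-fit with the FRAME loop of Algorithm 1."""
        chosen = []
        syn = np.random.randint(0, n_grey, size=obs_image.shape)
        while True:
            remaining = {n: k for n, k in bank.items() if n not in dict(chosen)}
            if not remaining:
                break
            dists = {n: np.abs(subband_histogram(obs_image, k, bin_edges) -
                               subband_histogram(syn, k, bin_edges)).sum()
                     for n, k in remaining.items()}
            name, d = max(dists.items(), key=lambda kv: kv[1])
            if d < eps:
                break
            chosen.append((name, bank[name]))
            _, syn = frame(obs_image, [k for _, k in chosen], bin_edges)
        return [n for n, _ in chosen]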
Before we conclude this section, we would like to mention that the above criterion
for filter selection is related to the minimax entropy principle studied in (Zhu, Wu, and
Mumford 1996). The minimax entropy principle suggests that the optimal set of filters
SK should be chosen to minimize the Kullback-Leibler distance between p(I; K ; SK ) and
f(I), and the latter is measured by the entropy of the model p(I; K ; SK ) up to a constant.
Thus at each step k filter is selected so that it minimizes the entropy of p(I; k
by gradient descent, i.e.,
and + is the new Lagrange parameter. A brief derivation is
given in the appendix.
5 Experiments in one dimension
In this section we illustrate some important concepts of the FRAME model by studying a
few typical examples for 1D texture modeling. In these experiments, the filters are chosen
by hand.
For a one-dimensional texture the domain D is a discrete array, and a pixel
is indexed by x instead of v. For any x ∈ [0, 255], I(x) is discretized into G grey levels,
with one value of G used in experiments I and III and another in experiment II.
Experiment I. This experiment is designed to show the analogy between filters in
texture modeling and vocabulary in language description, and to demonstrate how a
texture can be specified by the marginal distributions of a few well selected filters.
(Footnote: both histograms are normalized to have Σ_i H_i = 1. We note this measure is
robust with respect to the choice of the bin number L, as well as to the normalization of the filters.)

Figure 1. The observed and synthesized pulse textures: a) the observed; b) synthesized using only the
intensity filter; c) intensity filter plus the rectangular filter; d) Gabor filter; e) Gabor filter plus intensity filter.
The observed texture, as shown in figure (1.a), is a periodic pulse signal with period 8:
the signal takes a high value once every 8 pixels and a low value at all the other pixels. First
we choose an intensity filter, and the filter response is the signal itself. The texture
synthesized by FRAME is displayed in figure (1.b). Obviously it has almost the same number
of pulses as the observed one, and so has approximately the same marginal distribution for
intensity. Unlike the observed texture, however, these pulses are not arranged periodically.
To capture the period of the signal, we add one more special filter, an 8 × 1 rectangular
filter [1, 1, ..., 1], and the synthesized signal is shown in figure (1.c), which has
almost the same appearance as figure (1.a). We can say that the probability p(I)
specified by these two filters models the properties of the input signal very well.
Figure (1.d) is the texture synthesized using a nonlinear filter, a 1D spectrum
analyzer SP(T) with T = 8. Since the original periodic signal has a flat power spectrum,
and the Gabor filters only extract information in one frequency band, the texture
synthesized under p(I) has power spectrum near frequency 2π/8 but is totally free at
the other bands. Due to the maximum entropy principle, the FRAME model allows the
unconstrained frequency bands to be as noisy as possible. This explains why figure (1.d)
is noise-like while having roughly a period of 8. If we add the intensity filter, then the
probability p(I) captures the observed signal again, and a synthesized texture is shown in
figure (1.e).
This experiment shows that the more filters we use, the closer we can match the
synthesized images to the observed. But there are some disadvantages for using too many
filters. Firstly it is computationally expensive, and secondly, since we have few observed
examples, it may overly constrain the probability p(I), i.e. it may make p(I) 'colder' than
it should be.
Experiment II. In this second experiment we compare FRAME against Gaussian
MRF models by showing the inadequacy of the GMRF model for expressing high order
statistics.
To begin with, we choose a gradient filter ∇ with impulse response [−1, 1] for comparison,
and the filtered image is denoted by ∇I.
The GMRF models are concerned with only the mean and variance of the filter
responses. As an example, we put the following two constraints on the distribution p(I):
E_p[∇I(x)] = μ and E_p[(∇I(x))²] = σ².
Since we use a circulant boundary, the first constraint always holds, and the resulting
maximum entropy probability is

    p(I) = (1/Z) exp{ − λ Σ_x (∇I(x))² }.

The numeric solution for λ is given by the FRAME algorithm, and two synthesized
texture images are shown in figure (3.b) and (3.c). Figure (3.a) is a white noise texture
for comparison.
Figure 2. a) The designed marginal distribution of ∇I; b) the designed marginal distribution of ΔI.
As a comparison, we now ask ∇I(x) to follow the distribution shown in figure (2.a).
Clearly in this case the distribution of ∇I(x) is non-Gaussian, with the same first and second
moments as before. The synthesized textures are displayed in figure (3.d and e). The
textures in figure (3.d and e) possess the same first and second order moments as those in
figure (3.b and c), but figure (3.d and e) have specific higher order statistics and look more
specific than those in figure (3.b and c). This demonstrates that the FRAME model has more
expressive power than the GMRF model.
Now we add a Laplacian filter Δ with impulse response [0.5, −1.0, 0.5], and we ask
ΔI(x) to follow the distribution shown in figure (2.b). Clearly the numbers of peaks and
valleys in I(x) are specified by the two short peaks in figure (2.b); the synthesized texture
is displayed in figure (3.f). This experiment also shows the analogy between filters and
vocabulary.

Figure 3. a) The uniform white noise texture; b), c) the textures of the GMRF model; d), e) the textures
with higher order statistics; f) the texture specified with one more filter.
Experiment III. This experiment is designed to demonstrate how a single non-linear
Gabor filter is capable of forming globally periodic textures. The observed texture
is a perfect sine wave with period T = 16, hence it has a single Fourier component.
We choose the spectrum analyzer SP(T) with period T = 16. The synthesized texture
is shown in figure (4.a). The same is done for another sine wave with period T = 32,
and correspondingly the result is shown in figure (4.b). Figure (4) shows clear globally
periodic signals formed by single local filters. The noise is due to the frequency resolution
of the filters. Since the input textures are exactly periodic, the optimal resolution would
require the Gabor filters to be as long as the input signal, which is computationally more
expensive.

Figure 4. The observed textures are pure sine waves with periods T = 16 and 32, respectively; periodic
textures synthesized by a pair of Gabor filters: a) T = 16, b) T = 32.
6 Experiments in two dimensions
In this section, we discuss texture modeling experiments in two dimensions. We first take
one texture as an example to show in detail the procedure of algorithm 3, then we will
apply algorithm 3 to other textures.
Figure (5.a) is the observed image of animal fur. We start from the uniform noise
image in figure (5.b). The first filter picked by the algorithm is a Laplacian of Gaussian
filter LG(1.0), whose window size is 5 × 5; it has the largest error d(β) among
all the filters in the filter bank. We then synthesize the texture shown in figure (5.c),
which has almost the same histogram in the subband of this filter (the error d(β) drops
to 0.035).
Comparing figure (5.c) with figure (5.b), we notice that this filter captures the local
smoothness feature of the observed texture. The algorithm then sequentially picks 5 more
filters, the second of which is Gcos(6.0, 120°), each capturing features at various scales and orientations.
The sequential conditional errors for these filters are respectively 0.424, 0.207, 0.132, 0.157, 0.059,
and the textures synthesized with more filters are shown in figure (5.d, e, f).
Obviously, with more filters added, the synthesized texture gets closer to the observed
one.
To show more details, we display the subband images of the 6 filters in figure (6); the
histograms of these subbands, H^(α), and the corresponding estimated parameters, λ^(α), are
plotted in figure (7) and figure (8), respectively.
Figure 5. Synthesis of the fur texture: a) is the observed texture; b), c), d), e), f) are the textures synthesized
using an increasing number of filters. See text for interpretation.

Figure 6. The subband images obtained by applying the 6 filters (the Laplacian of Gaussian and the five
subsequently chosen filters) to the fur image.

Figure 7. a)-f) are respectively the histograms H^(α) of the six subbands.

Figure 8. a)-f) are respectively the potential functions λ^(α) of the six subbands.
In figure (7), the histograms are approximately Gaussian functions, and correspondingly
the estimated λ^(α) in figure (8) are close to quadratic functions. Hence in this
example the high order moments seemingly do not play a major role, and the probability
model could be made simpler. But this will not always be true for other textures. In figure
(8), we also notice that the computed λ^(α) becomes smaller and smaller as α gets
bigger, which suggests that the filters chosen in later steps make less and less contribution
to p(I), and thus confirms our earlier assumption that the marginal distributions of
a small number of filtered images are good enough to capture the underlying probability
distribution f(I).
Figure (9.a) is a scene of mud ground with the footprints of animals; the footprints
are filled with water and appear brighter. This is a case of sparse features. Figure (9.b) is
the texture synthesized using 5 filters chosen by Algorithm 3.
Figure (10.a) is an image taken from the skin of a cheetah, and the texture synthesized using
6 filters is displayed in figure (10.b). We notice that in figure (10.a) the texture is not
homogeneous: the shapes of the blobs vary with spatial location, and the upper left corner
is darker than the lower right one. The synthesized texture, shown in figure (10.b), also
has elongated blobs introduced by the different filters, but we notice that the bright pixels
spread uniformly across the image.

Figure 9. a) the observed mud texture; b) the texture synthesized using 5 filters.
Figure 10. a) the observed cheetah-blob texture; b) the texture synthesized using 6 filters.
Figure 11. a) the input image of fabric; b) the image synthesized with two spectrum analyzers plus
the Laplacian of Gaussian filter; c), d) the filter responses of the two spectrum analyzers for the fabric texture.
Finally, we show a texture of fabric in figure (11.a), which has clear periods along both
the horizontal and vertical directions. We want to use this texture to test the use of non-linear
filters, so we choose two spectrum analyzers to capture the first two salient periods,
one in the horizontal direction and the other in the vertical direction. The filter responses
I^(α) are the sums of squares of the sine and cosine component responses; they are shown
in figure (11.c, d) and are almost constant. We also use the intensity filter and the Laplacian
of Gaussian filter LG (with window size 3 × 3) to take care of the intensity histogram and
the smoothness. The synthesized texture is displayed in figure (11.b). If we look carefully
at figure (11.b), we can see that this synthesized texture has mis-arranged lines at two places,
which may indicate that the sampling process was trapped in a local maximum of p(I).
7 The Sampling strategy for textons
In this section, we study a special class of textures formed from identical textons, which
psychophysicists studied extensively. Such texton images are considered as rising from
a different mechanism from other textures in both psychology perception and previous
texture modeling, and the purpose of this section is to demonstrate that they can still
be modeled by the FRAME model, and to show an annealing strategy for computing
Figure
(12.a,b) are two binary (\Gamma1; +1 for black and white pixels) texton images with
circle and cross as the primitives. These two image are simply generated by sequentially
superimposing 128 15\Theta15 masks on a 256\Theta256 lattice using uniform distribution, provided
that the dropping of one mask does not destroy the existing primitives. At the center of
the mask is a circle (or a cross).
For these textures, choosing filters seems easy: we simply select the above 15 × 15
mask as the linear filter. Take the circle texton as an example. By applying the filter to
the circle image and to a uniform noise image, we obtain the histograms H^obs (solid curve)
and H (dotted curve) plotted in figure (13.a). We observe that there are many isolated
peaks in H^obs, which set up "potential wells", so that it becomes extremely unlikely to
change a filter response at a certain location from one peak to another by flipping one
pixel at a time.

Figure 12. Two typical texton images: a) circles, b) crosses.
Figure 13. a) The solid curve is the histogram of the circle image, and the dotted curve is the histogram
of the noise image; b) the estimated λ() function in the probability model for the image of circles.
To facilitate the matching process, we propose the following heuristic. We smooth
H^obs with a Gaussian window G_σ, or equivalently run the "heat" diffusion equation on
H^obs(x, t) within the interval [x_min, x_max], where x_min and x_max are respectively the minimal and
maximal filter responses:

    ∂H^obs(x, t)/∂t = ∂²H^obs(x, t)/∂x²,

with zero-flux conditions ∂H^obs(x, t)/∂x = 0 at x = x_min and x = x_max.
The boundary conditions help to preserve the total "heat". Obviously, the larger t is, the
smoother H^obs(x, t) will be. Therefore we start by matching H(x) to H^obs(x, t) with
a large t (see figure (14.a)), then gradually decrease t and match H(x) to the histograms
shown in figure (14.b, c, d, e, f) sequentially. This process is similar to the simulated annealing
method. The intuitive idea is to set up "bridges" between the peaks in the original
histogram, which encourages the filter responses to move toward the two ends, where the textons
form; then we gradually remove these "bridges".
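In practice this amounts to matching against progressively less-smoothed copies of the observed histogram; a Python sketch follows (the smoothing schedule and reflection-based boundary handling are assumptions):

    import numpy as np

    def diffused_histograms(h_obs, sigmas=(8.0, 4.0, 2.0, 1.0, 0.5, 0.0)):
        """Versions of H^obs smoothed with Gaussian windows of decreasing width,
        i.e. the heat-diffused histograms H^obs(x, t) for decreasing t."""
        targets = []
        for sigma in sigmas:
            if sigma == 0.0:
                targets.append(np.asarray(h_obs, dtype=float).copy())
                continue
            radius = max(1, int(3 * sigma))
            xs = np.arange(-radius, radius + 1)
            g = np.exp(-xs**2 / (2 * sigma**2))
            g /= g.sum()
            padded = np.pad(np.asarray(h_obs, dtype=float), radius, mode='reflect')
            targets.append(np.convolve(padded, g, mode='valid'))  # reflection keeps the total "heat"
        return targets

The FRAME matching loop is then run against each target histogram in turn, from the smoothest to the original one.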
At the end of the process, the estimated - function for the circle texton is shown in
figure (13.b), and the synthesized images are shown in figure (15). We notice that the
cross texton is more difficult to deal with because it has slightly more complex structures
than the circle, and may need more carefully designed filters.
Figure 14. The diffused histograms H^obs(x, t), with t getting smaller and smaller from a) to f).

Figure 15. Two synthesized texton images.
Although there is a close relationship between FRAME and the previous MRF models,
the underlying philosophies are quite different. Traditional MRF approaches favor the
specification of conditional distributions (Besag 1973). For auto-models, p(I(v) | I(−v))
are linear regressions or logistic regressions, so the modeling, inference, and interpretation
can be done in the traditional way. While this is computationally efficient for estimating the
β coefficients, it actually limits our imagination in building a general model, since
the only way to generalize auto-models in the conditional distribution framework
is either to increase the neighborhood size, and thus introduce more explanatory variables in
these auto-regressions, or to introduce interaction terms (i.e., high order product terms of
the explanatory variables). However, even with a modest neighborhood (e.g., 13 × 13),
the parameter size will be too large for any sensible inference.
Our FRAME model, on the contrary, favors the specification of the joint distribution
and characterizes local interactions by introducing non-linear functions of filter responses.
This is not restricted by the neighborhood size since every filter introduces the same
number of parameters regardless of its size, which enables us to explore structures at large
scales (e.g., 33 × 33 for the fabric texture). Moreover, FRAME can easily incorporate
local interactions at different scales and orientations.
It is also helpful to appreciate the difference between FRAME and the Gibbs distribution
although both focus on the joint distributions. The Gibbs distribution is specified
via potentials of various cliques, and the fact that most physical systems only have pair
potentials (i.e., no potentials from the cliques with more than two pixels) is another reason
why most MRF models for textures are restricted to auto-models. FRAME , on
the other hand, builds potentials from finite-support filters and emphasizes the marginal
distributions of filter responses.
Although it may take a large number of filters to model a wide variety of textures, when
it comes to modeling a certain texture, only a parsimonious set of the most meaningful
filters needs to be selected. This selectivity greatly reduces the parameter size, thus
allows accurate inference and modest computing. So FRAME is like a language: it has an
efficient vocabulary (of filters) capable of describing most entities (textures), and when
it comes to a specific entity, a few of the most meaningful words (filters) can be selected
from the vocabulary for description. This is similar to the visual coding theory (Barlow
et al 1989, Field 1989) which suggests that the sparse coding scheme has advantages over
the compact coding scheme. The former assumes non-Gaussian distributions for f(I),
whereas the latter assumes Gaussian distributions.
Compared to the filtering method, FRAME has the following advantages: 1) solid
statistical modeling; 2) it does not rely on the reversibility or reconstruction of I from
{I^(α)}, and thus the filters can be designed freely. For example, we can use both linear
and nonlinear filters, and the filters can be highly correlated with each other, whereas in the
filtering method a major concern is whether the filters form a tight frame (Daubechies
1992).
There are various classifications for textures with respect to various attributes, such as
Fourier and non-Fourier corresponding to whether the textures show periodic appearance;
deterministic and stochastic corresponding to whether the textures can be characterized
by some primitives and placement rules; and macro- and micro-textures in relation to
the scales of local structures. FRAME erases these artificial boundaries and characterizes
them in a unified model with different filters and parameter values. It has been well
recognized that the traditional MRF models, as special cases of FRAME, can be used
to model stochastic, non-Fourier micro-textures. From the textures we synthesized, it
is evident that FRAME is also capable of modeling periodic and deterministic textures
(fabric and pulses), textures with large scale elements (fur and cheetah blob), and textures
with distinguishable textons (circles and cross bars), thus it realizes the full potential of
MRF models.
But the FRAME model is computationally very expensive. The computational complexity
of the FRAME model comes from two major aspects. I) When bigger filters are
adopted to characterize low resolution features, the computational cost increases proportionally
with the size of the filter window. II) The marginal distributions
are estimated from sampled images, which requires long iterations for high accuracy of
estimation. One promising way to reduce the computational cost is to combine the pyramid
representation with pseudo-likelihood estimation (Besag 1977). The former cuts
the size of low resolution filters by putting them at the high levels of the pyramid, as is done
in (Popat and Picard 1993), and the latter approximates E_p[H^(α)] by pseudo-likelihood
and thus avoids the sampling process. But this method is not studied in this paper.
No doubt many textures will not be easy to model, for example some human-made
textures, such as the textures on oriental rugs and clothes. It seems that the synthesis
of such textures requires far more sophisticated or higher-level features than those we used
in this paper, and these high-level features may correspond to a high-level visual process.
At the same time, many theoretical issues remain to be fully understood: for example,
the convergence properties of the sampling process and the definition of the best sampling
procedures; the relationship between the sampling process and the physical processes which
form the textures of nature; and how to apply this texture model to the image
segmentation problem (Zhu and Yuille 1996). It is our hope that this work will stimulate
future research efforts in this direction.
Appendix. Filter pursuit and minimax entropy.
This appendix briefly demonstrates the relationship between the filter pursuit method and the
minimax entropy principle (Zhu, Wu and Mumford 1996).
Let p(I; Λ_K, S_K) be the maximum entropy distribution obtained at step k (see equation (18)).
Since our goal is to estimate the underlying distribution f(I), the goodness of p(I; Λ_K, S_K) can be
measured by the Kullback-Leibler distance between p(I; Λ_K, S_K) and f(I) (Kullback and Leibler 1951),

    KL(f, p) = ∫ f(I) log [ f(I) / p(I; Λ_K, S_K) ] dI,

and it can be shown that

    KL(f, p) = entropy(p(I; Λ_K, S_K)) − entropy(f(I)).

As entropy(f(I)) is fixed, to minimize KL(f, p(I; Λ_K, S_K)) we need to choose S_K such that
p(I; Λ_K, S_K) has the minimum entropy, while given the selected filter set S_K, p(I; Λ_K, S_K) is
computed by maximizing entropy(p(I)). In other words, for a fixed filter number K, the best set
of filters is chosen by

    S*_K = arg min_{S_K} max_{p ∈ Ω_K} entropy(p(I)),   (29)

where Ω_K is defined as in equation (14). We call equation (29) the minimax entropy principle (Zhu,
Wu, Mumford 1996).
A stepwise greedy algorithm to minimize the entropy proceeds as follows. At step k+1,
suppose we choose F^(β) and obtain the ME distribution p(I; Λ_{k+1}, S_k ∪ {F^(β)}), which matches
f^(α) for α = 1, ..., k as well as f^(β). Then the goodness of F^(β) is measured by the decrease of the
Kullback-Leibler distance, KL(f(I), p(I; Λ_k, S_k)) − KL(f(I), p(I; Λ_{k+1}, S_{k+1})). It can be shown
that this decrease is approximately

    (1/2) (E_{p(I; Λ_k, S_k)}[H^(β)] − f^(β))' M^{-1} (E_{p(I; Λ_k, S_k)}[H^(β)] − f^(β)),   (31)

where M is a covariance matrix of H^(β); for details see (Zhu, Wu, Mumford 1996). Equation (31)
measures a distance between f^(β) and E_{p(I; Λ_k, S_k)}[H^(β)] in terms of variance, and therefore suggests
a new form for the distance D(E_{p(I; Λ_k, S_k)}[H^(β)], f^(β)) in equation (26). This new form
emphasizes the tails of the marginal distribution, where important texture features lie, but its
computational complexity is higher than that of the L_1-norm distance. So far we have shown that the filter
selection in Algorithm 3 is closely related to a minimax entropy principle.
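As a small illustration, the variance-weighted distance of equation (31) can replace the L1 distance used earlier; the Python sketch assumes the covariance matrix M of the subband histogram has already been estimated (e.g., from patches of the observed image) and regularized to be invertible:

    import numpy as np

    def kl_decrease(h_model, h_obs, M):
        """Approximate decrease in Kullback-Leibler distance from adding filter beta,
        0.5 * (E_p[H] - f)' M^{-1} (E_p[H] - f), cf. equation (31)."""
        d = np.asarray(h_model, dtype=float) - np.asarray(h_obs, dtype=float)
        return 0.5 * float(d @ np.linalg.solve(M, d))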
Acknowledgments
This work was supported by the NSF grant DMS-91-21266 to David Mumford. The
second author was supported by a grant to D.B. Rubin.
--R
"Finding minimum entropy codes."
"Theories of visual texture perception."
"Spatial interaction and the statistical analysis of lattice systems (with discussion)."
"Efficiency of pseudolikelihood estimation for simple Gaussian fields."
"Orthogonal distribution analysis: a new approach to the study of texture perception."
"Entropy based algorithms for best basis selection."
"Markov random field texture models."
Ten lectures on wavelets.
"On the Statistics of Vision: the Julesz Conjecture"
"Ideal de-noising in an orthonormal basis chosen from a libary of bases. "
"Relations between the statistics of natural images and the response properties of cortical cells"
"Theory of communication."
"Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images."
"Markov random field image models and their applications to computer vision."
"Annealing Markov chain Monto Carlo with applications to ancestral inference."
"Statistics and structural approach to texture."
"Pyramid-based texture analysis/synthesis."
"Unsupervised texture segmentation using Gabor filters."
"Information theory and statistical mechanics"
"Visual pattern discrimination."
"On information and sufficiency"
Image representation using 2D Gabor wavelets.
"Mumtiresolution approximations and wavelet orthonormal bases of L 2 (R)."
Texture classification and segmentation using multiresolution simultaneous autoregressive models.
"Time series models for texture synthesis."
"Novel cluster-based probability model for texture synthesis, classification, and compression."
"Multidimensional Markov chain models for image tex- tures."
"Spatial-frequency organization in primate striate cortex."
"Object and texture classification using higher order statistics."
Image Analysis
"Reaction-diffusion textures."
"Spectral estimation for random fields with applications to Markov modeling and texture Classification."
"Region Competition: unifying snakes, region growing, and Bayes/MDL for multi-band image segmentation"
"Minimax Entropy Principle and its applications"
Daniel Cremers , Mikael Rousson , Rachid Deriche, A Review of Statistical Approaches to Level Set Segmentation: Integrating Color, Texture, Motion and Shape, International Journal of Computer Vision, v.72 n.2, p.195-215, April 2007 | texture analysis and synthesis;minimax entropy;texture modeling;markov random field;feature pursuit;visual learning;maximum entropy |
290114 | A Router Architecture for Real-Time Communication in Multicomputer Networks. | Abstract: Parallel machines have the potential to satisfy the large computational demands of real-time applications. These applications require a predictable communication network, where time-constrained traffic requires bounds on throughput and latency, while good average performance suffices for best-effort packets. This paper presents a new router architecture that tailors low-level routing, switching, arbitration, flow-control, and deadlock-avoidance policies to the conflicting demands of each traffic class. The router implements bandwidth regulation and deadline-based scheduling, with packet switching and table-driven multicast routing, to bound end-to-end delay and buffer requirements for time-constrained traffic while allowing best-effort traffic to capitalize on the low-latency routing and switching schemes common in modern parallel machines. To limit the cost of servicing time-constrained traffic, the router includes a novel packet scheduler that shares link-scheduling logic across the multiple output ports, while masking the effects of clock rollover on the representation of packet eligibility times and deadlines. Using the Verilog hardware description language and the Epoch silicon compiler, we demonstrate that the router design meets the performance goals of both traffic classes in a single-chip solution. Verilog simulation experiments on a detailed timing model of the chip show how the implementation and performance properties of the packet scheduler scale over a range of architectural parameters. | Introduction
Real-time applications, such as avionics, industrial process
control, and automated manufacturing, impose strict
timing requirements on the underlying computing system.
As these applications grow in size and complexity, parallel
processing plays an important role in satisfying the
large computational demands. Real-time parallel computing
hinges on effective policies for placing and scheduling
communicating tasks in the system to ensure that critical
operations complete by their deadlines. Ultimately, a
parallel or distributed real-time system relies on an inter-connection
network that can provide throughput and delay
guarantees for critical communication between cooperating
tasks; this communication may have diverse performance
requirements, depending on the application [1]. However,
instead of guaranteeing bounds on worst-case communication
latency, most existing multicomputer network designs
focus on providing good average network throughput and
The work reported in this paper was supported in part by the National
Science Foundation under grant MIP-9203895 and the Office
of Naval Research under grants N00014-94-1-0229. Any opinions,
findings, and conclusions or recommendations expressed in this paper
are those of the authors and do not necessarily reflect the views
of NSF or ONR.
J. Rexford is with AT&T Labs - Research in Florham Park, New
Jersey, and J. Hall and K. G. Shin are with the University of Michigan
in Ann Arbor, Michigan.
packet delay. Consequently, recent years have seen increasing
interest in developing interconnection networks that
provide performance guarantees in parallel machines [2-8].
Real-time systems employ a variety of network architec-
tures, depending on the application domain and the performance
requirements. Although prioritized bus and ring
networks are commonly used in small-scale real-time systems
[9], larger applications can benefit from the higher
bandwidth available in multi-hop topologies. In addition,
multi-hop networks often have several disjoint routes between
each pair of processing nodes, improving the ap-
plication's resilience to link and node failures. However,
these networks complicate the effort to guarantee end-to-
end performance, since the system must bound delay at
each link in a packet's route. To deliver predictable communication
performance in multi-hop networks, we present
a novel router architecture that supports end-to-end delay
and throughput guarantees by scheduling packets at each
network link. Our prototype implementation is geared toward
two-dimensional meshes, as shown in Figure 1; such
topologies have been widely used as the interconnection
network for a variety of commercial parallel machines. The
design directly extends to a broad set of topologies, including
the class of k-ary n-cube networks; with some changes
in the routing of best-effort traffic, the proposed architecture
applies to arbitrary point-to-point topologies.
Communication predictability can be improved by assigning
priority to time-constrained traffic or to packets
that have experienced large delays earlier in their
routes [10]. Ultimately, though, bounding worst-case communication
latency requires prior reservation of link and
buffer resources, based on the application's anticipated
traffic load. Under this traffic contract, the network can
provide end-to-end performance guarantees through effective
link-scheduling and buffer-allocation policies. To handle
a wide range of bandwidth and delay requirements,
the real-time router implements the real-time channel [11-
13] abstraction for packet scheduling, as described in Section
II. Conceptually, a real-time channel is a unidirectional
virtual connection between two processing nodes,
with a source traffic specification and an end-to-end delay
bound. Separate parameters for bandwidth and delay
permit the model to accommodate a wider range and larger
number of connections than other service disciplines [14-
16], at the expense of increased implementation complexity.
The real-time channel model guarantees end-to-end performance
through a combination of bandwidth regulation
and deadline-based scheduling at each link. Implementing
packet scheduling in software would impose a significant
burden on the processing resources at each node and
Router
to/from
processor
Fig. 1. Router in a Mesh Network: This figure shows a router
in a 4 \Theta 4 square mesh of processing nodes. To communicate
with another node, a processor injects a packet into its router;
then, the packet traverses one or more links before reaching the
reception port of the router at the destination node.
would prove too slow to serve multiple high-speed links.
This software would have to rank packets by deadline for
each outgoing link, in addition to scheduling and executing
application tasks. With high-speed links and tight timing
constraints, real-time parallel machines require hardware
support for communication scheduling. An efficient, low-cost
solution requires a design that integrates this run-time
scheduling with packet transmission. Hence, we present a
chip-level router design that handles bandwidth regulation
and deadline-based scheduling, while relegating non-realtime
operations (such as admission control and route se-
lection) to the network protocol software.
Although deadline-based scheduling bounds the worst-case
latency for time-constrained traffic, real-time applications
also include best-effort packets that do not have
stringent performance requirements [10, 11, 15, 17]; for ex-
ample, good average delay may suffice for some status and
monitoring information, as well as the protocol for establishing
real-time channels. Best-effort traffic should be able
to capitalize on the low-latency communication techniques
available in modern parallel machines without jeopardizing
the performance guarantees of time-constrained pack-
ets. Section III describes how our design tailors network
routing, switching, arbitration, flow-control, and deadlock-
avoidance policies to the conflicting performance requirements
of these two traffic classes. Time-constrained traffic
employs packet switching and small, fixed-sized packets
to bound worst-case performance, while best-effort packets
employ wormhole switching [18] to reduce average latency
and minimize buffer space requirements, even for large
packets. The router implements deadlock-free, dimension-
ordered routing for best-effort packets, while permitting
the protocol software to select arbitrary multicast routes
for the time-constrained traffic; together, flexible routing
and multicast packet forwarding provide efficient group
communication between cooperating real-time tasks.
Section IV describes how the network can reserve buffer
and link resources in establishing time-constrained connec-
tions. In addition to managing the packet memory and
connection data structures, the real-time router effectively
handles the effects of clock rollover in computing scheduling
for each packet. The router overlaps communication
scheduling with packet transmission to maximize utilization
of the network links. To reduce hardware complex-
ity, the architecture shares packet buffers and sorting logic
amongst the router's multiple output links, as discussed in
Section V; a hybrid of serial and parallel comparison operations
enables the scheduler to trade space for time to
further reduce implementation complexity. Section VI describes
the router implementation, using the Verilog hardware
description language and the Epoch silicon compiler.
The Epoch implementation demonstrates that the router
can satisfy the performance goals of both traffic classes in
an affordable, single-chip solution. Verilog simulation experiments
on a detailed timing model of the chip show the
correctness of the design and investigate the scaling properties
of the packet scheduler across a range of architectural
parameters. Section VII discusses related work on real-time
multicomputer networks, while Section VIII concludes the
paper with a summary of the research contributions and
future directions.
II. Real-Time Channels
Real-time communication requires advance reservation
of bandwidth and buffer resources, coupled with run-time
scheduling at the network links. The real-time channel
model [11] provides a useful abstraction for bounding end-
to-end network delay, under certain application traffic characteristics.
Traffic parameters: A real-time channel is a unidirectional virtual connection that traverses one or more network links. In most real-time systems, application tasks exchange messages on a periodic, or nearly periodic, basis. As a result, the real-time channel model characterizes each connection by its minimum spacing between messages, I_min, and its maximum message size, S_max, resulting in a maximum transfer rate of S_max/I_min bytes per unit time. To permit some variation from purely periodic traffic, a connection can generate a burst of up to B_max messages in excess of the periodic restriction I_min. Together, these three parameters form a linear bounded arrival process [19] that governs a connection's traffic generation at the source node.
End-to-end delay bound: In addition to these traffic parameters, a connection has a bound D on end-to-end message delay, based on the minimum message spacing I_min. At the source node, a message m_i generated at time t_i has a logical arrival time

  ℓ_0(m_0) = t_0,
  ℓ_0(m_i) = max(t_i, ℓ_0(m_{i-1}) + I_min)  for i > 0.

By basing performance guarantees on these logical arrival times, the real-time channel model limits the influence an ill-behaving or malicious connection can have on other traffic in the network. The run-time link scheduler guarantees that message m_i reaches its destination node by its deadline ℓ_0(m_i) + D.
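To make the source-node bookkeeping concrete, the following C sketch implements the logical-arrival-time rule reconstructed above; the structure and function names are illustrative and are not part of the router's interface.

    #include <stdint.h>

    /* Illustrative per-connection state kept by the source node (not the
     * router's actual data structure): the minimum message spacing I_min
     * and the logical arrival time assigned to the previous message. */
    typedef struct {
        uint32_t i_min;         /* minimum spacing between messages        */
        uint32_t prev_logical;  /* logical arrival time of message i-1     */
        int      first;         /* nonzero until the first message is seen */
    } source_conn_t;

    /* Logical arrival time of a message generated at time t: the first
     * message is logically on time; every later message is logically
     * spaced at least I_min after its predecessor, even if it was
     * generated earlier than that. */
    static uint32_t logical_arrival(source_conn_t *c, uint32_t t)
    {
        uint32_t l;
        if (c->first) {
            l = t;
            c->first = 0;
        } else {
            uint32_t spaced = c->prev_logical + c->i_min;
            l = (t > spaced) ? t : spaced;
        }
        c->prev_logical = l;
        return l;
    }

Under this reading of the model, the run-time scheduler then has until logical_arrival(...) + D to deliver the message.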
Per-hop delay bounds: The network does not admit a new connection unless it can reserve sufficient buffer and bandwidth resources without violating the requirements of existing connections [11, 20]. A connection establishment procedure decomposes the connection's end-to-end delay bound D into local delay bounds d_j for each hop in its route, such that d_j ≥ I_min and the local bounds sum to at most D. Based on the local delay bounds, a message m_i has a logical arrival time ℓ_j(m_i) = ℓ_{j-1}(m_i) + d_{j-1} at node j in its route, where j = 0 corresponds to the source node. Link scheduling ensures that message m_i arrives at node j no later than time ℓ_{j-1}(m_i) + d_{j-1} = ℓ_j(m_i), its local deadline at node j-1. In fact, message m_i may reach node j earlier, due to variations in delay at previous hops in the route. The scheduler at node j ensures that such "early" arrivals do not interfere with the transmission of "on-time" messages from other connections.

TABLE I
Real-Time Channel Scheduling Model: Under the real-time channel model, each link transmits traffic from three scheduling queues. To provide delay guarantees to time-constrained connections, the link gives priority to the on-time time-constrained messages in Queue 1 over the best-effort traffic in Queue 2. Queue 3 serves as a staging area for holding any early time-constrained messages.

  Queue     Traffic                            Data Structure
  Queue 1   On-time time-constrained traffic   Priority queue (by deadline ℓ(m)+d)
  Queue 2   Best-effort traffic                First-in-first-out queue
  Queue 3   Early time-constrained traffic     Priority queue (by logical arrival time ℓ(m))
Run-time link scheduling: Each link schedules time-constrained
traffic, based on logical arrival times and dead-
lines, in order to bound message delay without exceeding
the reserved buffer space at intermediate nodes. The sched-
uler, which employs a multi-class variation of the earliest
due-date algorithm [21], gives highest priority to time-constrained messages that have reached their logical arrival time (i.e., ℓ_j(m_i) ≤ t, for current time t), transmitting the message with the smallest deadline ℓ_j(m_i) + d_j, as shown in Table I. If Queue 1 is empty, the link services best-effort traffic from Queue 2, ahead of any early time-constrained messages (those with ℓ_j(m_i) > t). This improves the average performance
of best-effort traffic without violating the delay
requirements of time-constrained communication. Queue
3 holds early time-constrained traffic, effectively absorbing
variations in delay at the previous node. Upon reaching
its logical arrival time, a message moves from Queue 3 to
Queue 1.
Link horizon parameter: By delaying the transmission
of early time-constrained messages, the link scheduler
can avoid overloading the buffer space at the downstream
node [11, 15, 16]. Still, the scheduler could potentially improve
link utilization and average latency by transmitting
early messages from Queue 3 when the other two scheduling
queues are empty. To balance this trade-off between
buffer requirements and average performance, the link can
transmit an early time-constrained message from Queue
3, as long as the message is within a small horizon h - 0
of its logical arrival time (i.e., ' j
values of h permit the link to transmit more early time-constrained
traffic, at the expense of increased memory requirements
at the downstream node. Although each connection
could conceivably have its own h value, employing
a single horizon parameter allows the link to transmit early
traffic directly from the head of Queue 3, without any per-connection
data structures.
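The selection rule in Table I, combined with the horizon check, amounts to a small per-link priority decision. The following C sketch models that decision under the assumption that each queue exposes its head element and that packets are moved from Queue 3 to Queue 1 elsewhere, when they reach their logical arrival time; all type and function names are illustrative.

    #include <stdbool.h>
    #include <stdint.h>

    /* Head-of-queue views for one outgoing link; a NULL pointer means the
     * queue is empty.  Queue 1: on-time time-constrained traffic, ordered
     * by deadline; Queue 2: best-effort traffic, FIFO; Queue 3: early
     * time-constrained traffic, ordered by logical arrival time. */
    typedef struct {
        uint32_t logical;   /* logical arrival time of the head packet */
        uint32_t deadline;  /* its local deadline                      */
    } tc_packet_t;

    typedef struct { uint32_t length; } be_packet_t;

    typedef enum { SEND_NONE, SEND_Q1, SEND_Q2, SEND_Q3 } choice_t;

    /* Pick the next transmission at time t on a link with horizon h. */
    static choice_t pick_next(const tc_packet_t *q1_head,
                              const be_packet_t *q2_head,
                              const tc_packet_t *q3_head,
                              uint32_t t, uint32_t h)
    {
        if (q1_head)
            return SEND_Q1;                  /* on-time traffic first            */
        if (q2_head)
            return SEND_Q2;                  /* then best-effort traffic         */
        if (q3_head && q3_head->logical <= t + h)
            return SEND_Q3;                  /* early traffic within the horizon */
        return SEND_NONE;
    }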
Buffer requirements: To avoid buffer overflow or message
loss, a connection must reserve sufficient memory for
storing traffic at each node in its route. The required buffer
space at node j depends on the connection's local delay
bound d_j, as well as the horizon parameter h_{j-1} for the incoming link. In particular, node j can receive a message m_i from node j-1 as early as ℓ_j(m_i) - h_{j-1}, if node j-1 transmits the message at the earliest possible time. In the worst case, node j can hold a message until its deadline ℓ_j(m_i) + d_j. Hence, for this connection, messages m_i with ℓ_j(m_i) - h_{j-1} ≤ t ≤ ℓ_j(m_i) + d_j may be stored at node j at time t. If a connection has messages arrive as early as possible, and depart as late as possible, then node j could have to store as many as ⌈(h_{j-1} + d_j)/I_min⌉ + 1 messages from this connection at the same time. By reserving
buffer and bandwidth resources in advance, the real-time
channel model guarantees that every message arrives
at its destination node by its deadline, independent of other
best-effort and time-constrained traffic in the network.
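Using the bound reconstructed above, the per-connection buffer reservation at node j is a one-line calculation; the following C sketch is an illustration, not the admission-control code itself.

    #include <stdint.h>

    /* Worst-case number of packet buffers one connection can occupy at
     * node j: a message may dwell from l - h_prev (earliest possible
     * arrival over the upstream link with horizon h_prev) until l + d_j
     * (its local deadline), and logical arrival times are spaced at
     * least i_min apart. */
    static uint32_t buffers_needed(uint32_t d_j, uint32_t h_prev, uint32_t i_min)
    {
        uint32_t dwell = d_j + h_prev;              /* time each message may stay */
        return (dwell + i_min - 1) / i_min + 1;     /* ceil(dwell / i_min) + 1    */
    }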
III. Mixing Best-Effort and Time-Constrained
Traffic
Although the real-time channel model bounds the
worst-case performance of time-constrained messages, the
scheduling model in Table I can impose undue restrictions
on the packet size and flow-control schemes for best-effort
traffic. To overcome these limitations, we propose a router
architecture that tailors its low-level communication policies
to the unique demands of the two traffic classes. Fine-
grain, priority-based arbitration at the network links permits
the best-effort traffic to capitalize on the low-latency
techniques in modern multicomputer networks without sacrificing
the performance guarantees of the time-constrained
connections. Figure 2 shows the high-level architecture of
the real-time router, with separate control and data path
for the two traffic classes.
Fig. 2. Real-Time Router: This figure shows the real-time router architecture, with separate control and data path for best-effort and time-constrained packets. The router includes a packet memory, connection routing table, and scheduling logic to support delay and bandwidth guarantees for time-constrained traffic. To connect to the local processor, the router exports a control interface, a reception port, and separate injection ports for each traffic class.
A. Complementary Switching Schemes
To ensure that time-constrained connections meet their
delay requirements, the router must have control over
bandwidth and memory allocation. For example, suppose
that a time-constrained message arrives with a tight deadline
(i.e., its deadline ℓ(m_i) + d is close to the current time), while the outgoing link is
busy transmitting other traffic. To satisfy this tight timing
requirement, the outgoing link must stop servicing any
lower-priority messages within a small, bounded amount
of time. This introduces a direct relationship between connection
admissibility and the maximum packet size of the
time-constrained and best-effort traffic sharing the link. In
most real-time systems, time-constrained communication
consists of 10-20 byte exchanges of command or status information
[9]. Consequently, the real-time router restricts
time-constrained traffic to small, fixed-size packets that
can support a distributed memory read or write operation.
This bounds link access latency and buffering delay while
simplifying memory allocation in the router.
To ensure predictable consumption of link and buffer re-
sources, time-constrained traffic employs store-and-forward
packet switching. By buffering packets at each node, packet
switching allows each router to independently schedule
packet transmissions to satisfy per-hop delay requirements.
To improve average performance, the time-constrained
traffic could conceivably employ virtual cut-through switching
[22] to allow an incoming packet to proceed directly
to an idle outgoing link. However, in contrast to traditional
virtual cut-through switching of best effort traf-
fic, the real-time router cannot forward a time-constrained
packet without first assessing its logical arrival time (to
ensure that the downstream router has sufficient buffer
space for the packet) and computing the packet deadline
(which serves as the logical arrival time at the down-stream
router). To avoid this extra complexity and over-
head, the initial design of the real-time router implements
store-and-forward packet switching, which has the same
worst-case performance guarantees as virtual cut-through
switching. A future implementation could employ virtual
cut-through switching to reduce the average latency of the
time-constrained traffic.
Although packet switching delivers good, predictable
performance to small, time-constrained packets, this approach
would significantly degrade the average latency of
long, best-effort packets. Even in a lightly-loaded network,
end-to-end latency under packet switching is proportional
to the product of packet size and the length of the route.
Instead, the best-effort traffic can employ wormhole switching
[18] for lower latency and reduced buffer space require-
ments. Similar to virtual cut-through switching, wormhole
switching permits an arriving packet to proceed directly
to the next node in its route. However, when the outgoing
link is not available, the packet stalls in the network
instead of buffering entirely within the router.
In effect, wormhole switching converts the best-effort
scheduling "queue" in Table I into a logical queue that
spans multiple nodes. The router simply includes small
five-byte flit (flow control unit) buffers [23] to hold a few
bytes of a packet from each input link. When an incoming
packet fills these buffers, inter-node flow control halts further
transmission from the previous node until more space
is available; once the five-byte chunk proceeds to a buffer
at the outgoing link, the router transmits an acknowledgment bit to signal the upstream router to start sending the next flit. This fine-grain, per-hop flow control permits best-effort traffic to use large, variable-sized packets, reducing or even avoiding packetization overheads, without increasing buffer complexity in the router. The combination of wormhole and packet switching, with best-effort traffic consuming small flit buffers and time-constrained connections reserving packet buffers, results in an effective partitioning of router resources.

Fig. 3. Link Encoding: In the real-time router, each link can transmit a byte of data, along with a strobe signal and a virtual channel identifier. In the reverse direction, an acknowledgment bit indicates that the router can store another flit on the best-effort virtual channel.

Fig. 4. Packet Formats: This figure illustrates the packet formats for best-effort and time-constrained packets in the real-time router: (a) best-effort packet; (b) time-constrained packet. Best-effort packets consist of a two-byte routing header (x and y offsets) and a one-byte length field, along with the variable-length data. Time-constrained packets are 20 bytes long and include the connection identifier, the deadline from the previous hop in the route (which serves as the logical arrival time at the current router), and 18 data bytes.
B. Separate Logical Resources
Even though wormhole and packet switching exercise
complementary buffer resources, best-effort and time-constrained
traffic still share access to the same net-work
links. To provide tight delay guarantees for time-constrained
connections, the router must bound the time
that the variable-sized, wormhole packets can stall the forward
progress of on-time, time-constrained traffic. How-
ever, a blocked wormhole packet can hold link resources at
a chain of consecutive routers in the network, indirectly delaying
the advancement of other traffic that does not even
use the same links. This complicates the effort to provision
the network to bound worst-case end-to-end latency,
as discussed in the treatment of related work in Section VII.
In order to control the interaction between the two traffic
classes, the real-time router divides each link into two virtual
channels [23]. A single bit on each link differentiates
between time-constrained and best-effort packets, as shown
in Figure 3; each link also includes an acknowledgment bit
for flow control on the best-effort virtual channel.
Each wormhole virtual channel performs round-robin arbitration
on the input links to select an incoming best-effort
packet for service, while the packet-switched virtual
channel transmits time-constrained packets based on their
deadlines and logical arrival times. Priority arbitration between
the two virtual channels tightly regulates the intrusion
of best-effort traffic on time-constrained packets on
each outgoing link. This effectively provides flit-level pre-emption
of best-effort traffic whenever an on-time time-constrained
packet awaits service, while permitting wormhole
flits to consume any excess link bandwidth. In a separate
simulation study, we have demonstrated the effectiveness
of using flit-level priority arbitration policies to mix
best-effort wormhole traffic and time-constrained packet-switched
traffic [24-26].
While the real-time router gives preferential treatment to
time-constrained traffic, the outgoing links transmit best-effort
flits ahead of any early time-constrained packets, consistent
with the policies in Table I. Although this arbitration
mechanism ensures effective scheduling of the traffic
on the outgoing links and the reception port, the best-effort
and time-constrained packets could still contend for
resources at the injection port at the source node. The
local processor could solve this problem by negotiating between
best-effort and time-constrained traffic at the injection
port, but this would require the processor to perform
flit-level arbitration. Instead, the real-time router includes
a dedicated injection port for each traffic class. The two
injection ports, coupled with the low-level arbitration on
the outgoing links, ensure that time-constrained traffic has
fine-grain preemption over the best-effort packets across
the entire path through the network, while allowing best-effort
packets to capitalize on any remaining link band-width
C. Buffering and Packet Forwarding
To support the multiple incoming and outgoing ports,
the real-time router design requires high throughput for re-
ceiving, storing, and transmitting packets. Internally, the
router isolates the best-effort and time-constrained traffic
on separate buses to increase the throughput and reduce
the complexity of the arbitration logic. Each incoming
and outgoing port includes nominal buffer space to avoid
stalling the flow of data while waiting for access to the bus.
The best-effort bus is one flit wide and performs round-robin
arbitration among the flit buffers at the incoming
ports. Running at the same speed as the byte-wide input
ports, this five-byte bus has sufficient throughput to
accommodate a peak load of best-effort traffic. Transferring
best-effort packets in five-byte chunks incurs a small
initial transmission delay at each router, which could be
reduced by using a crossbar switch; however, we employ a
shared bus for the sake of simplicity. Other recent multi-computer
router architectures have used a wide bus for flit
transfer [27, 28].
The structure and placement of packet buffers plays a
large role in the router's ability to accommodate the performance
requirements of time-constrained connections. The
simplest solution places a separate queue at each input link.
However, input queuing has throughput limitations [29],
since a packet may have to wait behind other traffic destined
for a different outgoing link. In addition, queuing
packets at the incoming links complicates the effort to
schedule outgoing traffic based on delay and throughput
requirements. Instead, the real-time router queues time-constrained
packets at the output ports; the router shares
a single packet memory among the multiple output ports
to maximize the network's ability to accommodate time-constrained
connections with diverse buffer requirements.
To accommodate the aggregate memory bandwidth of the
five input and five output ports, the router stores packets
in 10-byte chunks, with demand-driven round-robin arbitration
amongst the ports.
Since time-constrained traffic is not served in a first-in
first-out order, the real-time router must have a data
structure that records the idle memory locations in the
packet buffer. Similar to many shared-memory switches
in high-speed networks, the real-time router maintains an
idle-address pool [29], implemented as a stack. This stack
consists of a small memory, which stores the address of
each free location in the packet buffer, and a pointer to
the first entry. Initially, the stack includes the address of
each location in the packet memory. An incoming packet
retrieves an address from the top of the stack and increments
the stack pointer to point to the next available entry.
Upon packet departure, the router decrements this pointer
and returns the free location to the top of stack. The idle-
address stack always has at least one free address when
a new packet arrives, since the real-time channel model
never permits the time-constrained traffic to overallocate
the buffer resources.
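The idle-address pool behaves like an ordinary stack of free buffer addresses. The following C sketch models that behavior for 256 packet buffers; the exact pointer discipline in the chip may differ, but the net effect is the same.

    #include <stdint.h>

    #define N_BUFFERS 256   /* packet buffers in the shared memory */

    typedef struct {
        uint16_t addr[N_BUFFERS];  /* free buffer addresses            */
        uint16_t free_count;       /* number of addresses on the stack */
    } idle_pool_t;

    static void pool_init(idle_pool_t *p)
    {
        for (uint16_t i = 0; i < N_BUFFERS; i++)
            p->addr[i] = i;        /* initially every location is free */
        p->free_count = N_BUFFERS;
    }

    /* Packet arrival: take a free address from the top of the stack.
     * Admission control guarantees the pool is never empty when a
     * reserved time-constrained packet arrives. */
    static uint16_t pool_alloc(idle_pool_t *p)
    {
        return p->addr[--p->free_count];
    }

    /* Packet departure: return the location to the top of the stack. */
    static void pool_free(idle_pool_t *p, uint16_t a)
    {
        p->addr[p->free_count++] = a;
    }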
D. Routing and Deadlock-Avoidance
Although wormhole switching reduces the buffer requirements
and average latency for best-effort traffic, the low-level
inter-node flow control could potentially introduce
cyclic dependencies between stalled best-effort packets.
To avoid these cycles, the real-time router implements
dimension-ordered routing, a shortest-path scheme that
completely routes a packet in the x-direction before proceeding
in the y-direction to the destination, as shown by
the shaded nodes in Figure 1. Dimension-ordered routing
avoids packet deadlock in a square mesh [30] and also facilitates
an efficient implementation based on x and y offsets
in the packet header, as shown in Figure 4(a); the offsets
reach zero when the packet has arrived at its destination
node. To improve the performance of best-effort traffic,
an enhanced version of the router could support adaptive
wormhole routing and additional virtual channels, at the
expense of increased implementation complexity [31, 32].
In particular, non-minimal adaptive routing would enable
best-effort packets to circumvent links with a heavy load
of time-constrained traffic.
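Dimension-ordered routing reduces to a comparison of the two header offsets. The C sketch below models the decision only (decrementing the offsets as the packet advances is omitted); the enum names are illustrative.

    /* Dimension-ordered (x-then-y) routing decision for a best-effort
     * packet, based on the signed x and y offsets carried in its header
     * (Figure 4(a)). */
    typedef enum { OUT_X_POS, OUT_X_NEG, OUT_Y_POS, OUT_Y_NEG, OUT_RECEPTION } out_port_t;

    static out_port_t route_best_effort(int x_offset, int y_offset)
    {
        if (x_offset > 0) return OUT_X_POS;   /* finish the x dimension first */
        if (x_offset < 0) return OUT_X_NEG;
        if (y_offset > 0) return OUT_Y_POS;   /* then route in y              */
        if (y_offset < 0) return OUT_Y_NEG;
        return OUT_RECEPTION;                 /* both offsets zero: deliver   */
    }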
Although routing is closely tied with deadlock-avoidance
for best-effort packets, the real-time router need not dictate
a particular routing scheme for the time-constrained traf-
fic. Instead, each time-constrained connection has a fixed
path through the network, based on a table in each router;
this table is indexed by the connection identifier field in the
header of each time-constrained packet, as shown in Figure
4(b). As part of establishing a real-time channel, the
network protocol software can select a fixed path from the
source to the destination(s), based on the available band-width
and buffer resources at the routers. The protocol
software can employ a variety of algorithms for selecting
unicast and multicast routes based on the resources available
in the network [33]. Once the connection establishment
protocol reserves buffer and bandwidth resources for
a real-time channel, the combination of bandwidth regulation
and packet scheduling prevents packet deadlock for
time-constrained traffic. Table II summarizes how the real-time
router employs these and other policies to accommodate
the conflicting performance requirements of the two
traffic classes.
IV. Managing Time-Constrained Connections
A real-time multicomputer network must have effective
mechanisms for establishing connections and scheduling
packets, based on the delay and throughput requirements
of the time-constrained traffic. To permit a single-chip im-
plementation, the real-time router offloads non-real-time
operations, such as route selection and admission control,
to the network protocol software. At run-time, the router
coordinates access to buffer and link resources by managing
the packet memory and the connection data struc-
tures. In addition, the router architecture introduces efficient
techniques for bounding the range of logical arrival
times and deadlines, to limit scheduler delay and implementation
complexity.
A. Route Selection and Admission Control
Establishing a real-time channel requires the application
to specify the traffic parameters and performance requirements
for the new connection. Admitting a new con-
nection, and selecting a multi-hop route with suitable local
delay parameters, is a computationally-intensive procedure
[10, 11, 20]. Fortunately, channel establishment typically
does not impose tight timing constraints, in contrast
to the actual data transfer, which requires explicit guarantees on minimum throughput and worst-case delay. In fact,
in most cases, the network can establish the required time-constrained
connections before the application commences.
To permit a single-chip solution, the real-time router relegates
these non-real-time operations to the protocol soft-
ware. The network could select routes and admit new connections
through a centralized server or a distributed pro-
tocol. In either case, this protocol software can use the
best-effort virtual network, or even a set of dedicated time-constrained
connections, to exchange information to select
a route and provision resources for each new connection.
The route selected for a connection depends on the traffic
characteristics and performance requirements, as well
as the available buffer and bandwidth resources in the net-
work. As part of establishing a new real-time channel, the
protocol software assigns a unique connection identifier at
each hop in the route.

TABLE II
Architectural Parameters: This table summarizes how the real-time router supports the conflicting performance requirements of time-constrained and best-effort traffic.

                     Time-Constrained          Best-Effort
  Switching          Packet switching          Wormhole switching
  Packet size        Small, fixed size         Variable length
  Link arbitration   Deadline-driven           Round-robin on input links
  Routing            Table-driven multicast    Dimension-ordered unicast
  Buffers            Shared output queues      Flit buffers at input links
  Flow control       Rate-based                Flit acknowledgments

TABLE III
Control Interface Commands: This table summarizes the control commands used to configure the real-time router.

  Write Command           Fields
  Connection parameters   outgoing connection id; local delay bound d;
                          bit-mask of output ports; incoming connection id
  Horizon parameter       bit-mask of output ports; horizon value h

Then, each node in the route writes control information into the router's connection table, as shown in Table III. At run-time, this table is indexed
by the connection identifier field of each incoming time-constrained
packet, as shown in Figure 4(b). To minimize
the number of pins on the router chip, the controlling processor
updates this table as a sequence of four, one-byte operations
that specify the incoming connection identifier and
the three fields in the table. After closing a connection, the
network protocol software can reuse the connection identifier
by overwriting the entry in the routing table. The
processor uses the same control interface to set the horizon
parameters h for each of the five outgoing ports.
As shown in Table III, the routing table stores the con-
nection's identifier at the next node, the local delay bound
d, and a bit mask for directing traffic to the appropriate
outgoing port(s). When a packet arrives, the router indexes
the table with the incoming connection identifier and replaces
the header field with the new identifier for the down-stream
router. At the same time, the router computes the
packet's deadline from the logical arrival time in the packet
header and the local delay bound in the connection ta-
ble. Finally, the bit mask permits the router to forward an
incoming packet to multiple outgoing ports, allowing the
network protocol software to establish multicast real-time
channels. This facilitates efficient, timely communication
between a set of cooperating nodes. To simplify the design,
the real-time router requires a multicast connection to use
the same value of d for each of its outgoing ports at a single
node. Then, based on the bit mask in the routing table,
the router queues the updated packet for transmission on
the appropriate outgoing port(s).
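The per-hop handling of a time-constrained packet is essentially a table lookup followed by a modulo addition. The following C sketch models it for an 8-bit clock and 256 connections; the field names and widths are assumptions made for illustration.

    #include <stdint.h>

    /* One (illustrative) entry of the connection table described in
     * Table III, indexed by the incoming connection identifier. */
    typedef struct {
        uint8_t next_conn_id;  /* identifier used at the downstream node    */
        uint8_t d;             /* local delay bound, in packet time slots   */
        uint8_t port_mask;     /* outgoing ports for (multicast) forwarding */
    } conn_entry_t;

    /* Header fields of an arriving time-constrained packet (Figure 4(b)):
     * its connection id and the previous hop's deadline, which is the
     * logical arrival time at this node. */
    typedef struct {
        uint8_t conn_id;
        uint8_t logical;       /* modulo-2^8 time value */
    } tc_header_t;

    /* Rewrite the header for the downstream router, compute the local
     * deadline, and report the set of output ports to enqueue on. */
    static uint8_t forward_tc(const conn_entry_t table[256],
                              tc_header_t *h, uint8_t *deadline_out)
    {
        const conn_entry_t *e = &table[h->conn_id];
        /* The local deadline doubles as the logical arrival time that the
         * next hop will see; the addition wraps modulo 256 on purpose. */
        *deadline_out = (uint8_t)(h->logical + e->d);
        h->conn_id    = e->next_conn_id;
        return e->port_mask;
    }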
By implementing a shared packet memory, the real-time
router can store a single copy of each multicast packet, removing
the packet only after it has been transmitted by
each output port selected in the bit mask. The shared
packet memory also permits the network protocol software
to employ a wide variety of buffer allocation policies. On
the one extreme, the route selection and admission control
protocols could allocate packet buffers to any new connec-
tion, independent of its outgoing link. However, this could
allow a single link to consume the bulk of the memory loca-
tions, reducing the chance of establishing time-constrained
connections on the other outgoing links. Instead, the admission
control protocol should bound the amount of buffer
space available to each of the five outgoing ports. Simi-
larly, the network could limit the size of the link horizon
parameters h to reduce the amount of memory required by
each connection. In particular, at run-time, a higher-level
protocol could reduce the h values of a router's incoming
links when the node does not have sufficient buffer space
to admit new connections.
B. Handling a Clock with Finite Range
The packet deadline at one node serves as the logical
arrival time at the downstream node in the route. Carrying
these logical arrival times in the packet header, as
shown in Figure 4(b), implicitly assumes that the net-work
routers have a common notion of time, within some
bounded clock skew. Although this is not appropriate in
a wide-area network context, the tight coupling in parallel
machines minimizes the effects of clock skew. Alternatively,
the router could store additional information in the connection
table to compute ℓ_j(m_i) from a packet's actual arrival
time and the logical arrival time of the connection's previous
packet [34]; however, this approach would require the
router to periodically refresh this connection state to correctly
handle the effects of clock rollover. Instead, the real-time
router avoids this overhead by capitalizing on the tight
coupling between nodes to assume synchronized clocks.
Even with synchronized clocks, the real-time router cannot
completely ignore the effects of clock rollover. To
schedule time-constrained traffic, the router architecture
includes a real-time clock, implemented as a counter that
increments once per packet transmission time. For a practical
implementation, the router must limit the number
of bits b used to represent the logical arrival times and
deadlines of time-constrained packets. Since logical arrival
times continually increase, the design must use modulo
arithmetic to compute packet deadlines and schedule
traffic for transmission. As a result, the network must restrict
the logical arrival times that can exist in a router at
the same time; otherwise, the router cannot correctly distinguish
between different packets awaiting access to the
outgoing link.
Selecting a value for b introduces a fundamental trade-off
between connection admissibility and scheduler complex-
ity. To select a packet for transmission, the scheduler must
compare the deadlines and logical arrival times of the time-constrained
packets; for example, the data structures in
Table I require comparison operations to enqueue/dequeue
packets. Larger values of b would increase the hardware
cost and latency for performing these packet comparison
operations. However, smaller values of b would restrict the
network's ability to select large delay bounds d and horizon
parameters h for time-constrained connections. The
network protocol software can limit the delay and horizon
parameters, based on the value of b imposed by the
router implementation. Alternatively, in implementing the
router, a designer could select a value for b based on typical
requirements for the expected real-time applications.
To formalize the trade-off between complexity and admissibility, consider a connection traversing consecutive links j-1 and j, with local delay parameters d_{j-1} and d_j, respectively, where link j-1 has horizon parameter h_{j-1}. As discussed in Section II, a packet can arrive as much as h_{j-1} time units ahead of its logical arrival time ℓ_j(m_i) and depart as late as its deadline ℓ_j(m_i) + d_j. Consequently, t - d_j ≤ ℓ_j(m_i) ≤ t + h_{j-1} for any messages m_i from this connection at time t. The network must ensure that the router can differentiate between the full range of logical arrival times in this set. The router can correctly interpret logical arrival times and deadlines, even in the presence of clock rollover, as long as every connection has d_j + h_{j-1} values that are less than half the range of the on-chip clock. That is, the router requires d_j + h_{j-1} < 2^(b-1) for all connections sharing the link.
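With this restriction, early and on-time packets can be told apart with ordinary unsigned wraparound arithmetic. The C sketch below shows the comparison for b = 8; it is a software illustration of the idea, not the chip's comparator logic.

    #include <stdbool.h>
    #include <stdint.h>

    /* With b = 8, time values live in [0, 255] and wrap around.  As long
     * as every connection keeps d_j + h_{j-1} below half the clock range
     * (128), the wrapped difference of two 8-bit values still tells early
     * packets from on-time packets. */
    static bool is_early(uint8_t logical, uint8_t now)
    {
        uint8_t ahead = (uint8_t)(logical - now);   /* wraps modulo 256         */
        return ahead != 0 && ahead < 128;           /* logical time still ahead */
    }

    /* Remaining laxity (deadline minus now) of an on-time packet, again
     * well defined across rollover under the same restriction. */
    static uint8_t laxity(uint8_t deadline, uint8_t now)
    {
        return (uint8_t)(deadline - now);
    }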
Under this restriction, the router can compare packets based on their logical arrival times and deadlines by using modulo arithmetic. For example, suppose b = 8 (i.e., the clock has a range of 256 time units) and the connections all satisfy the above restriction; this configuration corresponds to Figure 5. Any early packets have logical arrival times between 240 and 346, modulo 256. For example, a packet with ℓ(m) = 80 would be considered early traffic (since 80 + 256 = 336 lies in this range). Similarly, any on-time packets have logical arrival times between 200 and 240; a packet with a logical arrival time in this range would be considered on-time traffic (since ℓ(m) ≤ t). The corresponding deadlines likewise fall within the necessary range in Figure 5, allowing the router to compute ℓ_j(m_i) + d_j with modulo arithmetic and compare on-time packets based on their deadlines.

Fig. 5. Handling Clock Rollover: This figure illustrates the effects of clock rollover with an 8-bit clock, where the current time is t = 240 (mod 256). In the example, all connections satisfy the restriction on d_j and d_j + h_{j-1}, ensuring that the router can correctly compare ℓ_j(m_i) to t to distinguish between on-time and early packets.
V. Scheduling Time-Constrained Packets
To satisfy connection delay, throughput, and buffer
requirements, each outgoing port must schedule time-constrained
packets based on their logical arrival times and
deadlines, as well as the horizon parameter. The real-time
router reduces implementation complexity by sharing a single
scheduler amongst the early and on-time traffic on each
of the five output ports. Extensions to the scheduler architecture
further reduce the implementation cost by trading
space for time.
A. Integrating Early and On-Time Packets
To maximize link utilization and channel admissibility,
each outgoing port should overlap packet scheduling operations
with packet transmission. As a result, packet
size determines the acceptable worst-case scheduling de-
lay. Scheduling time-constrained traffic, based on delay or
throughput parameters, typically requires a priority queue
to rank the outgoing packets. Priority queue architectures
introduce considerable hardware complexity [35-39], particularly
when the link must handle a wide range of packet
priorities or deadlines. For example, most high-speed solutions
require O(n) hardware complexity to rank n packets,
using a systolic array or shift register consisting of n comparators
[35, 40, 41]. Additional technical challenges arise
in trying to integrate packet scheduling with bandwidth
regulation [42], since the link cannot transmit a packet unless
it has reached its logical arrival time.
To perform bandwidth regulation and deadline-based
scheduling, the real-time router could include two priority
queues for each of its five outgoing ports, as suggested
by
Table
I. However, this approach would be extremely expensive
and would require additional logic to transfer packets
from the "early" queue to the "on-time" queue; this is
particularly complicated when multiple packets reach their
eligibility times simultaneously. In the worst case, an outgoing port could have to dequeue a packet from Queue 1 or Queue 3, enqueue several arriving packets to Queue 1 and/or Queue 3, and move a large number of packets from Queue 3 to Queue 1, all during a single packet transmission time. To avoid this complexity, the real-time router does not attempt to store the time-constrained packets in sorted order. Instead, the router selects the packet with the smallest key via a comparator tree, as shown in Figure 6. Like the systolic and shift register approaches, the tree architecture introduces O(n) hardware complexity. For the moderate size of n in a single-chip router, the comparator tree can overlap the O(lg n) stages of delay with packet transmission.

Fig. 6. Comparator Tree Scheduler: This figure shows the scheduling architecture in the real-time router. The leaf nodes at the base of the comparator tree store a small amount of per-packet state information.

Fig. 7. Scheduler Keys: This figure illustrates how the real-time router assigns a key to each time-constrained packet awaiting transmission on an outgoing port. A single bit differentiates on-time and early packets; ineligible traffic refers to packets that are not destined to this port.
To avoid this excessive complexity, the real-time router
integrates early and on-time packets into a single data
structure. Each link schedules time-constrained packets
based on sorting keys, as shown in Figure 7, where smaller keys have higher priority. A single bit differentiates between
early and on-time packets. For on-time traffic, the
lower bits of the key represent packet laxity, the time remaining
till the local deadline expires, whereas the key for
early traffic represents the time left before reaching the
packet's logical arrival time. The packet keys are normal-
ized, relative to current time t, to allow the scheduler to
perform simple, unsigned comparison operations, even in
the presence of clock rollover. Each scheduling operation
operates independently to locate the packet with the minimum
sorting key, permitting dynamic changes in the values
of keys. The base of the tree computes a key for each
packet, based on the packet state and the current time
t, as shown in the right side of Figure 6; the base of the
tree stores per-packet state information, whereas the packet
memory stores the actual packet contents.
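A possible software rendering of the key construction in Figure 7 follows, assuming an 8-bit clock and therefore 9-bit keys (the exact key width in the chip is not stated here and is an assumption).

    #include <stdint.h>

    /* Illustrative 9-bit sorting key: a leading "early" bit followed by a
     * time normalized to the current clock value t, so that an unsigned
     * comparison picks the highest-priority packet (smaller key wins). */
    static uint16_t sort_key(uint8_t logical, uint8_t deadline, uint8_t t)
    {
        uint8_t ahead = (uint8_t)(logical - t);     /* wraps modulo 256           */
        if (ahead != 0 && ahead < 128)
            return (uint16_t)(0x100u | ahead);      /* early: time to eligibility */
        return (uint8_t)(deadline - t);             /* on-time: packet laxity     */
    }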
B. Sharing the Scheduler Across Output Ports
By using a comparator tree, instead of trying to store the
packets in sorted order, the router can allow all five out-going
ports to share access to this scheduling logic, since
the tree itself does not store the packet keys. As shown
in Figure 6, each leaf in the tree stores a logical arrival time ℓ(m), a deadline ℓ(m)+d, and a bit mask of outgoing
ports, assigned at packet arrival based on the connection
state. The bit mask determines if the leaf is eligible to compete
for access to a particular outgoing port. When a port
transmits a selected packet, it clears the corresponding field
in the leaf's bit mask; a bit mask of zero indicates an empty
packet leaf slot and a corresponding idle slot in the packet
memory. The base of the tree also determines if packets
are early (ℓ(m) > t) or on-time (ℓ(m) ≤ t) and computes
the sorting keys based on the current value of t. At the top
of the sorting tree, an additional comparator checks to see
if the winner is an early packet that falls within the port's
horizon parameter; if so, the port transmits this packet,
unless best-effort flits await service.
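The comparator tree itself is combinational hardware, but a behavioral model of one scheduling pass can be written as a simple scan over the leaf state. The following C sketch captures the bit-mask eligibility check, the key comparison, and the final horizon test; it models behavior only and says nothing about the pipelined implementation.

    #include <stdint.h>

    #define N_LEAVES 256

    /* Per-leaf packet state, as in Figure 6: logical arrival time,
     * deadline, and a bit mask of outgoing ports still to be served.
     * A mask of zero marks an empty leaf slot. */
    typedef struct {
        uint8_t logical;
        uint8_t deadline;
        uint8_t port_mask;
    } leaf_t;

    /* Behavioral model of one scheduling pass for output port `port` at
     * time t with horizon h: return the index of the eligible packet with
     * the smallest key, or -1 if nothing may be transmitted now. */
    static int schedule_port(const leaf_t leaf[N_LEAVES],
                             unsigned port, uint8_t t, uint8_t h)
    {
        int best = -1;
        uint16_t best_key = 0xFFFF;
        for (int i = 0; i < N_LEAVES; i++) {
            if (!(leaf[i].port_mask & (1u << port)))
                continue;                            /* not destined to this port */
            uint8_t ahead = (uint8_t)(leaf[i].logical - t);
            uint16_t key = (ahead != 0 && ahead < 128)
                         ? (uint16_t)(0x100u | ahead)                 /* early   */
                         : (uint16_t)(uint8_t)(leaf[i].deadline - t); /* on-time */
            if (key < best_key) {
                best_key = key;
                best = i;
            }
        }
        /* Top-of-tree check: an early winner is sent only within the horizon. */
        if (best >= 0 && (best_key & 0x100u) && (uint8_t)best_key > h)
            return -1;
        return best;
    }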
Still, to share the comparator logic, the scheduler must
operate quickly enough to overlap run-time scheduling with
packet transmission on each of the outgoing ports. Conse-
quently, the real-time router pipelines access to the comparator
tree. With p stages of pipelining, the scheduler has
a row of latches at in the tree, to store the sorting
key and buffer location for the winning packet in the
subtrees. Every few cycles, another link begins its scheduling
operation at the base of the tree. Similarly, every few
cycles, another link completes a scheduling operation and can initiate a packet transmission. As a result, the router staggers packet departures on the five outgoing ports. The necessary amount of pipelining depends on the latency of the comparator tree, relative to the packet transmission delay.

Fig. 8. Logic Sharing: This figure illustrates how the scheduler can trade space for time by sharing comparator logic amongst groups of k packets.
C. Balancing Hardware Complexity and Scheduler Latency
The pipelined comparator tree has relatively low hardware
cost, compared to alternate approaches that implement
separate priority queues for the early and on-time
packets on each outgoing port. However, as shown in Section
VI, the scheduler logic is still the main source of complexity
in the real-time router architecture. To handle n
packets, the scheduler in Figure 6 has a total of 2+ lg n
stages of logic, including the operations at the base of the
tree as well as the comparator for the horizon parameter.
In terms of implementation cost, the tree requires n comparators
and n leaf nodes, for a total of 2n elements of
similar complexity. As n grows, the number of leaf nodes
can have a significant influence on the bus loading at the
base of the tree. Fortunately, for certain values of n, the
comparator tree has low enough latency to avoid the need
to fully pipeline the scheduling logic. This suggests that
the scheduler could reduce the number of comparators by
trading space for time.
Under this approach, the scheduler combines several leaf
units into a single module with a small memory (e.g., a
register file) to store the deadlines and logical arrival times
for k packets, as shown in Figure 8. At the base of the
tree, each of the n/k modules can sequentially compare its k sorting keys, using a single comparator, to select the packet with the minimum key; this incurs k stages of delay. Then, a smaller comparator tree finds the smallest key amongst the n/k packets. As a result, the scheduler incurs 1 + k + lg(n/k) stages of delay. Note that, for k = 1, the architecture reduces to the comparator tree in Figure 6, with its 2 + lg n stages of logic. For larger values of k, the scheduler has
larger arbitration delay but reduced implementation com-
plexity. The architecture in Figure 8 has 2n/k comparators, as well as a lighter bus loading of n/k elements at the base of the tree. In addition, larger values of k allow the base of the tree to consist of n/k k-element register
files, instead of n individual registers, with a reduction
in chip complexity. With a careful selection of n and
k, the real-time router can have an efficient, single-chip
implementation that performs bandwidth regulation and
deadline-based scheduling on multiple outgoing ports.
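The trade-off can be tabulated directly from the reconstructed figures above (1 + k + lg(n/k) stages of delay and roughly 2n/k comparator and leaf elements). The short C program below prints these first-order numbers for a few group sizes; it is a back-of-the-envelope model, not a synthesis result.

    #include <stdio.h>

    /* lg for powers of two */
    static unsigned log2u(unsigned x)
    {
        unsigned r = 0;
        while (x > 1) { x >>= 1; r++; }
        return r;
    }

    int main(void)
    {
        const unsigned n = 256;                        /* packets, as in Table IV   */
        for (unsigned k = 1; k <= 16; k *= 2) {
            unsigned stages   = 1 + k + log2u(n / k);  /* scheduling delay, stages  */
            unsigned elements = 2 * n / k;             /* comparators + leaf modules */
            printf("k=%2u: %2u stages, %3u elements\n", k, stages, elements);
        }
        return 0;
    }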
VI. Performance Evaluation
To demonstrate the feasibility of the real-time router,
and study its scaling properties, a prototype chip has been
designed using the Verilog hardware description language
and the Epoch silicon compiler from Cascade Design Au-
tomation. This framework facilitates a detailed evaluation
of the implementation and performance properties of the
architecture. The Epoch tools compile the structural and
behavioral Verilog models to generate a chip layout and
an annotated Verilog model for timing simulations. These
tools permit extensive testing and performance evaluation
without the expense of chip fabrication.
A. Router Complexity
Using a three-metal, 0.5-μm CMOS process, the 123-pin chip has dimensions 8.1 mm × 8.7 mm for an implementation
with 256 time-constrained packets and up to 256
connections, as shown in Table IV. The scheduling logic
accounts for the majority of the chip area, with the packet
memory consuming much of the remaining space, as shown
in Table V. Operating at 50 MHz, the chip can transmit or
receive a byte of data on each of its ten ports every 20 nsec.
This closely matches the access time of the 10-byte-wide,
single-ported SRAM for storing time-constrained traffic;
the memory access latency is the bottleneck in this realization
of the router. Since time-constrained packets are
20-bytes long, the scheduling logic must select a packet for
transmission every 400 nsec for each of the five output ports.
To match the memory and link throughputs, the comparator
tree consists of a two-stage pipeline, where each stage
requires approximately 50 nsec.
Although the tree could incorporate up to five pipeline
stages, the two-stage design provides sufficient throughput
to satisfy the output ports. This suggests that the link
scheduler could effectively support a larger number of packets
or additional output ports, for a higher-dimensional
mesh topology. Alternately, the router design could reduce
the hardware cost of the comparator tree by sharing
comparator logic between multiple leaves of the tree, as
discussed in Section V-C.

TABLE IV
Router Specification: This table summarizes the architectural parameters and chip complexity of the prototype implementation of the real-time router.

  (a) Architectural parameters
  Parameter                  Value
  Connections                256
  Time-constrained packets   256
  Comparator tree pipeline   2 stages
  Flit input buffer          10 bytes

  (b) Chip complexity
  Parameter     Value
  Process       0.5-μm 3-metal CMOS
  Signal pins   123
  Transistors   905,104
  Area          8.1 mm × 8.7 mm
  Power         2.3 watts

TABLE V
Router Components: This table summarizes the area contribution and transistor count for the main components of the router.

  Unit                Area          Transistors
  Packet scheduler    34.02 mm²     555,025
  Memory and control   5.97 mm²     268,161
  Connection table     0.65 mm²      20,966
  Idle-address pool    0.35 mm²      15,600

Figure 9 highlights the cost-performance
trade-offs of logic sharing, based on Epoch
implementations and Verilog simulation experiments. As
k increases, the scheduler complexity decreases in terms
of area, transistor count, and power dissipation, with reasonable
increases in scheduler latency. The results start
with a grouping size of k= 4, since the Epoch library does
not support static RAM components with fewer than four
lines. (For k = 1, the graphs plot results from the router implementation in Table V, which uses flip-flops to store packet state at the base of the tree. The Epoch silicon compiler generates a better automated layout of these flip-flops than of the small SRAMs, resulting in better area statistics in Table V, despite the larger transistor count. A manual layout would significantly improve the area statistics for k = 1; still, the area graph shows the relative improvement for larger values of k.)
These plots can help guide the trade-off between hardware
complexity and scheduler latency in the router imple-
mentation. For example, a group size of k =4 reduces the
number of transistors by 45% (from 555,025 to 306,829).
The number of transistors does not decrease by a factor of
four, since the smaller scheduler still has to store the state
information for each packet; in addition, the scheduler requires
additional logic and registers to serialize access to
the shared comparators. Still, logic sharing significantly
reduces implementation complexity. Larger values of k further
reduce the number of comparators and improve the
density of the memory at the base of the tree. Scheduler
latency does not grow significantly for small values of k. For
k = 4, delay in the comparator tree increases by just 67% (from 0.115 μsec to 0.192 μsec). The lower bus loading at
the base of the tree helps counteract the increased latency
from serializing access to the first layer of comparators and
significantly reduces power dissipation.
B. Simulation Experiments
Since Verilog simulations of the full chip are extremely
memory and CPU intensive, we focus on a modest set of
timing experiments, aimed mainly at testing the correctness
of the design. A preliminary experiment tests the
baseline performance of best-effort wormhole packets. To
study a multi-hop configuration, the router connects its
links in the x and y directions. The packet proceeds from
the injection port to the positive x link, then travels from
the negative x input link to the positive y direction; after
reentering the router on the negative y link, the packet
proceeds to the reception port. In this test, a b-byte wormhole packet incurs an end-to-end latency of slightly more than b cycles, where the link transmits one byte in each cycle. This delay
is proportional to packet length, with a small overhead
for synchronizing the arriving bytes, processing the packet
header, and accumulating five-byte chunks for access to the
router's internal bus. In contrast, packet switching would
introduce additional delay to buffer the packet at each hop
in its route.
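For intuition, a first-order latency model contrasts wormhole switching with store-and-forward packet switching; the constants below are illustrative and do not reproduce the exact cycle counts of the simulated chip.

    #include <stdio.h>

    /* First-order latency models, in link cycles, for a packet of `bytes`
     * bytes crossing `hops` links when the link moves one byte per cycle.
     * The per-hop overhead term stands in for header processing and flit
     * accumulation; the constants in the simulated chip differ. */
    static unsigned wormhole_cycles(unsigned bytes, unsigned hops, unsigned per_hop)
    {
        return bytes + hops * per_hop;           /* pipelined: pay the body once    */
    }

    static unsigned store_forward_cycles(unsigned bytes, unsigned hops, unsigned per_hop)
    {
        return hops * (bytes + per_hop);         /* buffer the whole packet per hop */
    }

    int main(void)
    {
        unsigned b = 64, hops = 3, overhead = 8; /* illustrative values only */
        printf("wormhole: %u cycles, store-and-forward: %u cycles\n",
               wormhole_cycles(b, hops, overhead),
               store_forward_cycles(b, hops, overhead));
        return 0;
    }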
An additional experiment illustrates how the router
schedules time-constrained packets to satisfy delay and
throughput guarantees, while allowing best-effort traffic to
capitalize on any excess link bandwidth. Figure 10 plots
the link bandwidth consumed by best-effort traffic and each
of three time-constrained connections, whose per-connection parameters d and I_min are specified in units of 20-byte slots. All three connections compete for access to a single network
link with horizon parameter h= 0, where each connection
has a continual backlog of traffic. The time-constrained
connections receive service in proportion to their through-put
requirements, since a packet is not eligible for service till its logical arrival time. Similarly, the link transmits each packet by its deadline, with best-effort flits consuming any remaining link bandwidth.

Fig. 9. Evaluating Logic Sharing: These plots compare different implementations of the comparator tree, with different group sizes k, in terms of number of transistors, area of the tree, scheduling latency (microseconds), and power dissipation (milliwatts). As k grows, implementation complexity decreases but scheduler latency increases.

Fig. 10. Timing Experiment: This experiment evaluates a mixture of time-constrained and best-effort packets competing for access to a single outgoing link with horizon h = 0; the plot shows the link bandwidth (in bytes) consumed over time (in clock cycles) by the best-effort traffic and by connections 0, 1, and 2. The scheduler satisfies the deadlines of the time-constrained packets, while permitting best-effort flits to capitalize on any additional bandwidth.
VII. Related Work
This paper complements recent work on support for real-time
communication in parallel machines [2-7]. Several
projects have proposed mechanisms to improve predictability
in the wormhole-switched networks common in modern
multicomputers. In the absence of hardware support for
priority-based scheduling, application and operating system
software can control end-to-end performance by regulating
the rate of packet injection at each source node [7].
However, this approach must limit utilization of the communication
network to account for possible contention between
packets, even from lower-priority traffic. This is a
particularly important issue in wormhole networks, since
a stalled packet may indirectly block the advancement of
other traffic that does not even use the same links. The underlying
router architecture can improve predictability by
favoring older packets when assigning virtual channels or
arbitrating between channels on the same physical link [23].
Although these mechanisms reduce variability in end-to-end latency, more aggressive techniques are necessary to
guarantee performance under high network utilization. A
router can support multiple classes of traffic, such as user
and system packets, by partitioning traffic onto different
virtual channels, with priority-based arbitration for access
to the network links [23]. Flit-level preemption of low-priority
virtual channels can significantly reduce intrusion
on the high-priority packets. Still, these coarse-grain priorities
do not differentiate between packets with different
latency tolerances. With additional virtual channels, the
network has greater flexibility in assigning packet priority,
perhaps based on the end-to-end delay requirement, and
restricting access to virtual channels reserved for higher-priority
traffic [4, 5].
Coupled with restrictions on the source injection rate,
these policies can bound end-to-end packet latency by limiting
the service and blocking times for higher-priority traffic
[3]. Although assigning priorities to virtual channels
provides some control over packet scheduling, this ties priority
resolution to the number of virtual channels. The
router can support fine-grain packet priorities by increasing
the number of virtual channels, at the expense of additional
implementation complexity; these virtual channels
incur the cost of additional flit buffers and larger virtual
channel identifiers, as well as more complex switching and
arbitration logic [32]. Instead of dedicating virtual channels
and flit buffers to each priority level, a router can increase
priority resolution by adopting a packet-switched design.
The priority-forwarding router chip [6] follows this approach
by employing a 32-bit priority field in small, 8-
packet priority queues at each input port. The router incorporates
a priority-inheritance protocol to limit the effects
of priority inversion when a full input buffer limits
the transmission of high-priority packets from the previous
node; the input buffer's head packet inherits the priority
of the highest-priority packet still waiting at the up-stream
router. In contrast, the real-time router implements
a single, shared output buffer that holds up to 256 time-constrained
packets, with a link-scheduling and memory
reservation model that implicitly avoids buffer overflow. By
dynamically assigning an 8-bit packet priority at each node,
the real-time router can satisfy a diverse range of end-to-end delay bounds, while permitting best-effort wormhole
traffic to capitalize on any excess link bandwidth.
VIII. Conclusion
Parallel real-time applications impose diverse communication
requirements on the underlying interconnection net-
work. The real-time router design supports these emerging
applications by bounding packet delay for time-constrained
traffic, while ensuring good average performance for best-effort
traffic. Low-level control over routing, switching,
and flow control, coupled with fine-grain arbitration at
the network links, enables the router to effectively mix
these two diverse traffic classes. Careful handling of clock
rollover enables the router to support connections with diverse
delay and throughput parameters with small keys
for logical arrival times and deadlines. Sharing scheduling
logic and packet buffers amongst the five output ports
permits a single-chip solution that handles up to 256 time-constrained
packets simultaneously. Experiments with a
detailed timing model of the router chip show that the design
can operate at 50 MHz with appropriate pipelining of
the scheduling logic. Further experiments show that the
design can trade space for time to reduce the complexity
of the packet scheduler.
As ongoing research, we are considering alternate link-
scheduling algorithms that would improve the router's scal-
ability. In this context, we are investigating efficient hardware
architectures for integrating bandwidth regulation
and packet scheduling [42]; these algorithms include approximate
scheduling schemes that balance the trade-off
between accuracy and complexity, allowing the router to
efficiently handle a larger number of time-constrained pack-
ets. We are also exploring the use of the real-time router as
a building block for constructing large, high-speed switches
that support the quality-of-service requirements of real-time
and multimedia applications. The router's delay and
throughput guarantees for time-constrained traffic, combined
with good best-effort performance and a single-chip
implementation, can efficiently support a wide range of
modern real-time applications, particularly in the context
of tightly-coupled local area networks.
References
[1] "Client requirements for real-time communication services."
[2] "Architectural support for real-time systems: Issues and trade-offs."
[3] "Using rate monotonic scheduling technology for real-time communications in a wormhole network."
[4] "Priority based real-time communication for large scale wormhole networks."
[5] "Simulator for real-time parallel processing architectures."
[6] "Design and implementation of a priority forwarding router chip for real-time interconnection networks."
[7] "Real-time communications scheduling for massively parallel processors."
[8] "Providing message delivery guarantees in pipelined flit-buffered multiprocessor networks."
[9] "Smart networks for control."
[10] "Real-time communication in packet-switched networks."
[11] "Real-time communication in multi-hop networks."
[12] "Delay jitter control for real-time communication in a packet switching network."
[13] "A scheme for real-time channel establishment in wide-area networks."
[14] "Rate-controlled service disciplines."
[15] "Providing end-to-end performance guarantees using non-work-conserving disciplines."
[16] "Efficient network QoS provisioning based on per node traffic shaping."
[17] "The integrated MetaNet architecture: A switch-based multimedia LAN for parallel computing and real-time traffic."
[18] "The torus routing chip."
[19] "A calculus for network delay, part I: Network elements in isolation."
[20] "On the ability of establishing real-time channels in point-to-point packet-switched networks."
[21] "Scheduling algorithms for multiprogramming in a hard real-time environment."
[22] "Virtual cut-through: A new computer communication switching technique."
[23] "Virtual-channel flow control."
[24] "Hardware support for controlled interaction of guaranteed and best-effort communication."
[25] "Support for multiple classes of traffic in multicomputer routers."
[26] "PP-MESS-SIM: A flexible and extensible simulator for evaluating multicomputer networks."
[27] "Bandwidth requirements for wormhole switches: A simple and efficient design."
[28] "The SP2 high-performance switch."
[29] "Fast packet switch architectures for broadband integrated services digital networks."
[30] "Deadlock-free message routing in multiprocessor interconnection networks."
[31] "A survey of wormhole routing techniques in direct networks."
[32] "Cost of adaptivity and virtual lanes in a wormhole router."
[33] "Routing subject to quality of service constraints in integrated communication networks."
[34] "Real-time communication in ATM."
[35] "A novel architecture for queue management in the ATM network."
[36] "VLSI priority packet queue with inheritance and overwrite."
[37] "Exact admission control for networks with bounded delay services."
[38] "Hardware-efficient fair queueing architectures for high-speed networks."
[39] "Scalable hardware priority queue architectures for high-speed packet switches."
[40] "A VLSI sequencer chip for ATM traffic shaper and queue manager."
[41] "Systolic priority queues."
[42] "Scalable architectures for integrated traffic shaping and link scheduling in high-speed ATM switches."
--TR
--CTR
David Whelihan , Herman Schmit, Memory optimization in single chip network switch fabrics, Proceedings of the 39th conference on Design automation, June 10-14, 2002, New Orleans, Louisiana, USA
Sung-Whan Moon , Kang G. Shin , Jennifer Rexford, Scalable Hardware Priority Queue Architectures for High-Speed Packet Switches, IEEE Transactions on Computers, v.49 n.11, p.1215-1227, November 2000
G. Campobello , G. Patanè , M. Russo, Hardware for multiconnected networks: a case study, Information Sciences - Informatics and Computer Science: An International Journal, v.158 n.1, p.173-188, January 2004
Evgeny Bolotin , Israel Cidon , Ran Ginosar , Avinoam Kolodny, QNoC: QoS architecture and design process for network on chip, Journal of Systems Architecture: the EUROMICRO Journal, v.50 n.2-3, p.105-128, February 2004
Evgeny Bolotin , Israel Cidon , Ran Ginosar , Avinoam Kolodny, Cost considerations in network on chip, Integration, the VLSI Journal, v.38 n.1, p.19-42, October 2004
Kees Goossens , John Dielissen , Jef van Meerbergen , Peter Poplavko , Andrei Rădulescu , Edwin Rijpkema , Erwin Waterlander , Paul Wielage, Guaranteeing the quality of services in networks on chip, Networks on chip, Kluwer Academic Publishers, Hingham, MA, | link scheduling;packet switching;multicomputer router;wormhole switching;real-time communication |
290115 | Minimum Achievable Utilization for Fault-Tolerant Processing of Periodic Tasks. | Abstract: The Rate Monotonic Scheduling (RMS) policy is a widely accepted scheduling strategy for real-time systems due to strong theoretical foundations and features attractive to practical uses. For a periodic task set of n tasks with deadlines at the end of task periods, it guarantees a feasible schedule on a single processor as long as the utilization factor of the task set is below n(2^{1/n} - 1), which converges to 0.69 for large n. We analyze the schedulability of a set of periodic tasks that is scheduled by the RMS policy and is susceptible to a single fault. The recovery action is the reexecution of all uncompleted tasks. The priority of the RMS policy is maintained even during recovery. Under these conditions, we guarantee that no task will miss a single deadline, even in the presence of a fault, if the utilization factor on the processor does not exceed 0.5. Thus, 0.5 is the minimum achievable utilization that permits recovery from faults before the expiration of the deadlines of the tasks. This bound is better than the trivial bound of 0.69/2 = 0.345 that would be obtained if computation times were doubled to provide for reexecutions in the RMS analysis. Our result provides scheduling guarantees for tolerating a variety of intermittent and transient hardware and software faults that can be handled simply by reexecution. In addition, we demonstrate how permanent faults can be tolerated efficiently by maintaining common spares among a set of processors that are independently executing periodic tasks. | Introduction
In the realm of real-time computation, we frequently encounter systems where the tasks are required
to execute periodically. Applications where this requirement is common are often found in, for
example, process control, space applications, avionics and others. Even when the external events
that trigger tasks are not periodic, many real-time systems sample the occurrence of these events
periodically and execute the associated tasks during the time slots reserved for them. The sampling
rate depends on the expected frequency of the external event. The reason aperiodic or sporadic tasks are executed in a periodic manner is that periodic execution is well understood and predictable.
A variety of scheduling policies for periodic real-time systems have been studied. A scheduling
policy is defined as optimal if it can schedule any feasible set of tasks if any other policy can also
do the same. A system is called a fixed-priority system if all the tasks have fixed priorities and
these priorities do not change during run time. Rate Monotonic Scheduling (RMS) has been proven
to be an optimal scheduling policy for scheduling a set of fixed priority tasks on a uniprocessor.
Earliest-Deadline-First (EDF) is the optimal scheduling policy for a variable priority system. Note
that a priority of a task is different from its criticality. The former is some measure that is assigned
to the tasks by the scheduling policy to facilitate scheduling whereas the latter is the measure of
importance of the task as defined by the application.
RMS is widely used in practice because it can be easily implemented. It is a preemptive policy
where the priority of the tasks are assigned in increasing order of their periods and the task of a
particular priority preempts any lower priority task. Liu and Layland proved that as long as the
utilization factor of a task set consisting of n tasks is less than n(2^{1/n} - 1), the task set is guaranteed a feasible schedule on a uniprocessor [1]. This bound approaches 0.69 as n goes to infinity. However,
there may exist task sets which have utilization factors above this bound and still may be feasibly
scheduled. The stochastic analysis of the breakdown utilization factor for randomly generated task
sets is presented in [2].
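To make the bound concrete, the following illustrative Python sketch (not part of the cited analysis; the function names and task values are assumptions) evaluates n(2^{1/n} - 1) and applies the sufficient utilization test to a task set given as (C_i, T_i) pairs:

```python
def liu_layland_bound(n):
    """Least upper bound n(2**(1/n) - 1) on the utilization factor below
    which RMS is guaranteed to schedule any set of n periodic tasks."""
    return n * (2.0 ** (1.0 / n) - 1.0)

def utilization(tasks):
    """Utilization factor of a task set given as (C, T) pairs."""
    return sum(c / t for c, t in tasks)

tasks = [(1.0, 6.0), (4.0, 11.0)]                    # illustrative values only
u, bound = utilization(tasks), liu_layland_bound(len(tasks))
print(f"U = {u:.3f}, bound = {bound:.3f}, guaranteed feasible: {u <= bound}")
```

As noted above, a task set whose utilization exceeds the bound may still be schedulable; the test is sufficient but not necessary.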
The problem for scheduling periodic tasks on multiprocessors is considered in [3] [4] [5]. It
is easy to demonstrate that neither the RMS nor the EDF algorithms are optimal for scheduling
a set of periodic tasks on a multiprocessor system among fixed and variable priority algorithms
respectively [3]. In fact, no scheduling policy is proven to be optimal for a multiprocessor system.
Another issue in real-time computing that is currently gaining increased attention of researchers
is fault tolerance. Computers are being introduced to a great extent in critical applications and more
reliance is being placed on them while reducing human intervention to a minimum. In situations
where the demand for hard real-time processing merges with catastrophic consequences of failures,
it is not difficult to imagine why fault tolerance must be provided. Responsive systems [6] which
must perform computations to successfully meet their deadlines even in the presence of faults are
indispensable in many applications. This paper contributes to an evolving framework for the design
and implementation of responsive systems. Our goal in this paper is to investigate the issues of fault
tolerance in a system of real-time periodic tasks employing Rate Monotonic Scheduling. Previous
work has usually addressed software faults where each task has a primary and an alternate code. In
[7], an off-line scheduling strategy is considered for periodic tasks where the period of a particular
task is an integral multiple of the next lower task period. The alternates are scheduled by RMS
policy first and then an effort is made to include the maximum number of primary executions in the
schedule. A similar problem of scheduling alternate versions of programs called ghosts is considered
in [8]. Dynamic programming is used to perform scheduling and an attempt is made to minimize
a cost function. A load balancing scheme is presented for periodic task sets scheduled by RMS in
[9] where the neighbors of a faulty processor on a ring take over its tasks which are then eventually
distributed to the other processors. However, there is no consideration of missing deadlines due to
an overload caused by task migration in response to a fault.
In this paper, we address the schedulability criterion of a set of periodic tasks for fault-tolerant
processing. Specifically, we prove that the minimum achievable utilization is 0.5 for a set of periodic
tasks executing in an environment that is susceptible to the occurrence of a single fault where the
recovery action is to recompute all the partially executed tasks. This result guarantees that all the
tasks will meet their deadlines even in the presence of a fault if the utilization factor of the task set
on a processor is less than 0.5. The classes of faults that can be tolerated include intermittent and
transient hardware and software faults. In addition, permanent crash and incorrect computation
faults can also be handled by providing spares to perform recovery and subsequent execution of the
task set.
The paper is organized as follows: in Section 2, we provide the background, explain the problem
and declare the assumptions. In the following section we present the proof of our assertion that
the minimum achievable utilization is 0.5. In Section 4, we address practical and implementation
issues. Our conclusions are given in the final section.
2 Background, problem statement and assumptions
As has been mentioned in the Introduction, RMS has a strong theoretical foundation and is widely
used in practice due to its simplicity. Rate Monotonic Scheduling policy assigns priorities to tasks
in the increasing order of their periods. Consider a set S of n tasks. Each Task i is described by a
tuple (C_i, T_i, R_i), where C_i is the execution time of the task, T_i is the period and R_i is its release time, i.e., the time when the first invocation of the task occurs. Thus

S = {(C_i, T_i, R_i) | i = 1, 2, ..., n}.

We will assume that tasks are labeled in such a manner that T_1 ≤ T_2 ≤ ... ≤ T_n. A task is expected to complete its computation prior to the end of its period. Thus the j-th instance (j = 1, 2, ...) of Task i is ready for execution at time R_i + (j - 1)T_i and has a deadline for completion at R_i + jT_i. We assume that we are dealing with a hard real-time system and the aim is to meet the deadlines under all conditions, as opposed to soft real-time systems where the deadlines may be missed and the aim is to reduce the delay. In this paper, when not explicitly mentioned, all R_i's are assumed to be zero.
The execution of the tasks is preemptive, i.e., during the execution of a Task i, if any higher
priority Task k is ready for execution, the computation of Task i is interrupted and it remains
suspended until Task k completes its execution. Then Task i continues from the state at which
it was suspended, provided no other task of higher priority is waiting for execution. It is usually
assumed that the time to swap tasks is negligible, or that it is accounted for in the computation
time. Note that the definition of preemption is recursive, i.e., if Task k has interrupted Task i, it
can itself be interrupted by another task of still higher priority. The RMS is a fixed priority policy
since the priorities of tasks remain static and do not change during the course of execution of the
tasks. The priorities are assigned in the increasing order of the task periods. The task with the
smallest period is assigned the highest priority and the task with the largest period the lowest.
We will call the arrival time of the task that instant at which it is ready for execution, i.e., R_i + (j - 1)T_i, and its deadline the next arrival of the same task. The departure time of a task is defined as the time instant when the task completes its execution. Thus the arrival time of the j-th instance of Task i is R_i + (j - 1)T_i; its departure time cannot be defined easily because it depends on the parameters of higher priority tasks.
The utilization factor U of a task set is defined as

U = Σ_{i=1}^{n} C_i / T_i.

For a single processor system, a task set is said to fully utilize the processor under a scheduling algorithm if the task set can be feasibly scheduled using the algorithm but increasing any of the C_i causes the schedule to be infeasible. The least upper bound of the utilization factor is the minimum of the utilization factors for all possible task sets that fully utilize the processor [1] and is also called the minimum achievable utilization [3]. If the task set has a utilization factor which is less than the minimum achievable utilization, then it is guaranteed a feasible schedule. From [1], for a task set with n tasks, the minimum achievable utilization is n(2^{1/n} - 1). As n → ∞, the minimum achievable utilization converges to ln 2, which is approximately 0.69.
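For comparison with the utilization test, the exact response-time analysis for fixed-priority preemptive scheduling can be sketched as follows (an illustrative fragment, assuming synchronous release, i.e., all R_i = 0, and deadlines equal to periods; the function name is an assumption):

```python
import math

def rms_feasible_exact(tasks):
    """Exact RMS feasibility test for synchronous release and deadlines
    equal to periods.  tasks is a list of (C, T) pairs.  For each task,
    the response-time recurrence
        R = C_i + sum over higher-priority j of ceil(R / T_j) * C_j
    is iterated to a fixed point and compared against the period."""
    ordered = sorted(tasks, key=lambda ct: ct[1])        # shortest period first
    for i, (c_i, t_i) in enumerate(ordered):
        r = c_i
        while True:
            r_next = c_i + sum(math.ceil(r / t_j) * c_j
                               for c_j, t_j in ordered[:i])
            if r_next > t_i:
                return False                             # deadline would be missed
            if r_next == r:
                break                                    # fixed point reached
            r = r_next
    return True

print(rms_feasible_exact([(1, 6), (4.5, 11)]))           # True for this task set
```

The task set used in the call above reappears in the example of Section 3.2.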
2.1 Fault classification
In any discussion on fault tolerance, it is necessary to consider the issue of fault assumptions because
it has a significant impact on the design of the system. Under a crash fault model, the processor
is either operating correctly or, if a fault occurs, does not respond at all to any event, internal
or external. An incorrect computation fault assumption considers that the processor may fail to
produce a correct result in response to correct inputs. For issues related to fault diagnosis and
consensus in fault-tolerant processing, the reader can refer to [10].
In addition, faults are also classified as permanent, intermittent and transient [11]. A permanent
or hard fault is an erroneous state that is continuous and stable. An intermittent fault
occurs occasionally due to unstable nature of hardware. A transient fault results from temporary
environmental conditions. A permanent fault can be tolerated only by providing spares which take
over the tasks of a primary processor when the fault occurs. Intermittent and transient faults can
be tolerated by repeating the computations.
2.2 Analysis of the problem
In general, the scheduling problem is concerned with allocating shared resources to multiple processes that need the resources simultaneously. This allocation is performed while attempting to achieve certain prespecified goals. In traditional computers, the goal is usually to minimize the total time or improve the response time for all the requests. However, in real-time systems, the goal is simply to
allocate the resources in such a manner that the deadlines associated with the tasks are met. In this
paper, as we are dealing with scheduling tasks for execution, the resources are the processors. For
hard real-time systems, the scheduler has to be such that all tasks are guaranteed to be completed
before their deadlines.
When real-time systems are to be used for critical applications, it is necessary that the system
survives in spite of faults that may arise in the system. Unlike non-real-time systems where the
occurrence of faults and subsequent recovery may be permitted to cause delays, it is imperative
that the results of computations in real-time systems meet the deadline even in presence of faults.
Thus the notion of guaranteeing a feasible schedule has to be extended to cover the random events
of fault occurrences. This is a challenging endeavor which has to be addressed nevertheless. In this
paper, we will consider fault tolerance strategies for a set of periodic tasks executed under RMS
policy which will guarantee that no task will miss even a single deadline due to the occurrence
of a fault at any random moment subject to the fault assumptions explicitly stated therein and
maintaining the priority of the RMS policy.
When one considers introducing fault tolerance into the computation, a host of issues need to
be considered in addition to those already existing. The only means of providing fault tolerance
is by introducing redundancy in the system. The selection of the appropriate level of time and/or
space redundancy is driven by the requirements of the application. Redundancy is provided by
creating replicas at some level of computation, usually at the task level in real-time systems. Time
redundancy is provided by re-executing the task multiple number of times. The original execution
and re-executions can all be performed on a single processor or on different processors. The choice
is dependent on the fault model assumption. For real-time systems, time redundancy is the most
desirable choice, provided that there is sufficient laxity in the deadlines and there is enough spare
capacity that other tasks do not miss their deadlines. This will allow maximum utilization of
the available resources. However, if the deadlines are stringent and very little laxity is available,
space redundancy is the only choice. Thus an ideal design is one which effectively resolves a tradeoff
between these two choices such that minimum cost overhead is incurred and all tasks are guaranteed
to meet their deadlines under the fault assumptions. This space-time tradeoff is fundamental to
the design of responsive computer systems. The result presented here optimizes the tradeoff to
provide scheduling guarantees for a single fault in an environment for periodic tasks.
2.3 Single fault with re-execution of task for recovery
We analyze the following scenario:
• A set of tasks is executing on a single processor and the tasks are scheduled by the RMS policy.

• All the tasks are independent.

• A fault may occur at any instant.

• The interval between successive faults is greater than the largest period in the task set.

• The fault is detected before the next occurrence of a departure of a task from the processor. For example, if a lower priority task is executing during the occurrence of a fault and some time later another higher priority task is supposed to preempt the first task, the fault should be detected before the higher priority task is expected to depart under normal execution.

• The recovery action is to re-execute all the partially executed tasks at the instant of the fault detection. This includes the currently executing task and all the preempted tasks.

• The tasks are required to meet their deadlines even if they have to be re-executed due to the occurrence of a fault.

• The priorities of the RMS policy are maintained even during recovery. Maintaining the priorities of tasks is very important since RMS is a fixed priority scheduling policy and the priorities are assigned at system design time. This approach simplifies the design process because the designer does not have to worry about assigning separate priorities for recovery and analyze the effect of the change in priorities on the schedulability of the task set. (A simple simulation sketch of this fault and recovery model is given after this list.)
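The following discrete-time sketch (an illustration, not taken from the original paper) makes the scenario above concrete: it replays a task set under preemptive RMS in integer time units, assumes that the fault is detected immediately, and restarts every partially executed job at the instant of the fault while preserving the RMS priorities.

```python
def simulate_rms_single_fault(tasks, horizon, fault_time=None):
    """Simulate preemptive RMS in unit time steps.

    tasks      -- list of (C, T) pairs in integer ticks; release times are 0
                  and deadlines equal the periods, as assumed above.
    fault_time -- tick at which the single fault strikes; every partially
                  executed job is then restarted from scratch, and the RMS
                  priorities are kept during recovery.
    Returns a list of (task_index, release_time, deadline) for every
    deadline that is missed within the horizon.
    """
    remaining, deadline, misses = {}, {}, []
    for t in range(horizon):
        for i, (c, period) in enumerate(tasks):           # job releases
            if t % period == 0:
                remaining[(i, t)] = c
                deadline[(i, t)] = t + period
        if t == fault_time:                               # the single fault
            for job, left in remaining.items():
                full = tasks[job[0]][0]
                if 0 < left < full:                        # partially executed
                    remaining[job] = full                  # restart from scratch
        for job in list(remaining):                        # deadline check
            if deadline[job] <= t:
                misses.append((job[0], job[1], deadline[job]))
                del remaining[job]
        if remaining:                                       # run highest priority
            job = min(remaining, key=lambda j: tasks[j[0]][1])
            remaining[job] -= 1
            if remaining[job] == 0:
                del remaining[job]
    return misses
```

An application of this sketch to the example of Figure 3 is given in Section 3.2.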
One should note at this stage that we do not place any restrictions on the kind of faults that
can be tolerated or the architecture of the system. As long as these conditions are satisfied by
the design, the results of this paper are valid. For example, if one were to consider a hardware
permanent crash fault, the recovery and subsequent computation would have to be performed on a spare processor. On the other hand, if a software fault occurs, the recovery is possible on the primary processor itself. An incorrect computation fault can be handled if the fault is detected, perhaps by consistency checks, before the task is expected to depart. In addition, the recovery program for a task need not be the same as the one that is normally executed as long as its computation time is less than or equal to the computation time of the primary code.

Figure 1: Feasible schedule in presence of a fault. (a) Regular execution with no faults. (b) Primary processor, fault occurs just prior to time 17. (c) Spare processor.
Two examples are shown in Figures 1 and 2. Both of them consider a task set consisting of two
tasks with periods 5 and 7. In these examples, we assume crash faults of processors. In Figure 1, the processor state as a function of time under regular execution is shown in Figure 1(a). We observe that the schedule is feasible when no fault occurs. Figures 1(b) and 1(c)
show the state of the processor and the spare respectively when a fault occurs just prior to the time
instant 17. The fault occurs before Task 2 could complete and so it is re-started on the spare and
it meets the deadline of time 21.
Figure
2(a) shows the execution profile of the two tasks whose periods are again 5 and 7
respectively. However, the computation times are different in this example. Though the schedule is feasible when
no fault occurs, the same is not true when a fault causes the recovery action to be taken. The
arrival of Task 1 at time 15 preempts Task 2 and a fault occurs just prior to its completion at time
17. So the spare restarts the execution of both tasks, starting with Task 1 as it is a higher priority
task. Task 1 completes at time 19 and manages to meet its deadline of time 20. The re-execution
of the Task 2 starts at time 19 but is preempted at time 20 by the arrival of the next instance of
Task 1 and so Task 2 misses its deadline of time 21.
Figure 2: Infeasible schedule in presence of a fault. (a) Regular execution with no faults. (b) Primary processor, fault occurs just prior to time 17. (c) Spare processor.

It seems obvious from these examples that a certain amount of time redundancy should be provided for recovery and that the RMS scheduling criterion (U < 0.69) is not sufficient. A trivial solution is to "reserve" enough space for all tasks so that in the event of a fault, there is enough
spare capacity in terms of time such that the task can be re-executed and still meet its deadline.
Since the worst possible time for a fault to occur is just prior to the completion of the task, the
amount of extra time to be devoted to task i for recovery is an additional C i . Thus in the Rate
Monotonic Analysis of the schedulability of the entire task set, the computation time for all tasks
have to be assumed to be 2C i . This means that, in a general case, the effective minimum achievable
utilization on each processor is just 0.345, i.e., half of 0.69. However, the situation is not as pessimistic
as it appears. We will prove in the following section that a minimum achievable utilization
of 0.5 guarantees enough time redundancy to complete recovery before the deadlines. Thus as long
as the utilization factor of a task set on a processor is less than or equal to 0.5, the task set is
guaranteed a feasible schedule in presence of a single fault.
2.4 Motivation
One of the popular traditional approaches to the design of fault-tolerant system is the use of N-
modular-redundancy (NMR) [11]. In this technique, every processor is provided with extra spares.
The spares may be hot, warm or cold. For real-time systems, hot spares is the preferred choice
as no time is wasted to perform recovery. A spare is said to be hot if it synchronously performs
all the computations with the primary processor and takes over if the primary processor fails. For
fault models such as incorrect computation and Byzantine faults, there may not be any distinction
between the primary and the spares as they all perform the same computation and vote on the result
to mask faulty results. If we assume crash or fail-stop model, NMR requires that each processor
be duplicated to tolerate a single fault and so the number of processors in a fault-tolerant system
is 2m where m is the number of processors in the original system. Such a system, called a duplex
system, can tolerate up to one fault between the primary and the spare and up to m faults as long
as no more than one fault affects a particular primary and its spare. This is achieved by having
the space overhead equal to the size of the original system, i.e., by doubling the space resources.
The space overhead of duplex system is very high for many applications and it is usually desirable
to have a single spare for the group of m processors so that if any processor fails, the spare can be
substituted in its place. Whereas providing a single spare is a simple feat in non-real-time systems,
ensuring that the recovery will be performed within the deadlines is not easy. The contributions
of this paper make it easy to guarantee recovery by limiting the utilization factor on a processor to 0.5. If U_S is the total utilization factor of a large set of tasks, the number of processors needed in a system with a single spare is ⌈U_S/0.5⌉ + 1. This assumes crash faults and even distribution of the utilization factor. This is likely to be significantly less than the 2⌈U_S/0.69⌉ processors of the duplex system. Interestingly, the trivial solution that ensures recovery by doubling the computation time requirements will require ⌈U_S/0.345⌉ processors, which is nearly the same as that required by the duplex system.
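The three counts compared in this paragraph are easy to tabulate for any total utilization factor (an illustrative sketch, assuming crash faults and an evenly distributed utilization, as in the text):

```python
from math import ceil

def processors_needed(u):
    """Number of processors under the three schemes discussed above for a
    total utilization factor u of the task set."""
    return {
        "doubled computation times": ceil(u / 0.345),
        "duplex system":             2 * ceil(u / 0.69),
        "single common spare":       ceil(u / 0.5) + 1,
    }

print(processors_needed(7.0))
# {'doubled computation times': 21, 'duplex system': 22, 'single common spare': 15}
```

Because of the ceiling operations, the counts are sensitive to the exact value of U.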
In addition to tolerating hardware crash faults, a major application of the result is towards
tolerating software faults. We will deal with this in greater detail in Section 4.
3 Determination of minimum achievable utilization
Before we prove that the minimum achievable utilization is 0.5, we present the definitions of some
terms used in the proof. The recovery is defined as re-execution of all the partially executed tasks
where the priority of the RMS is maintained. Thus during the recovery of a lower priority task, if a
higher priority task arrives, the higher priority task will preempt the recovery of the lower priority
task. In addition, if the fault affects multiple tasks, higher priority tasks will perform recovery
action first. A schedule is said to be feasible for a set of tasks if the task set can be guaranteed a
schedule under Rate Monotonic Algorithm (i.e., all tasks will meet their deadlines) even if recovery
has to be performed due to a single fault that can occur at any arbitrary instant of time. A set
of tasks is said to fully utilize a processor if the task set has a feasible schedule and increasing the
computation time of any task in the set causes the schedule to become infeasible. The minimum
achievable utilization is the minimum of the utilization factor of every possible set of tasks that
fully utilize the processor. We define a critical instant for a task to be that instant at which an
arrival of the task will have the largest response time in the presence of some fault. The schedule
of a set of tasks that fully utilizes the processor will have at least one critical instant for some task
i where the response time is the period of that task. We shall call that time interval between the
arrival and the deadline of the task i as the critical period.
A fault that occurs just prior to the completion of a task creates the maximum delay for that
task and any lower priority tasks that have been interrupted by it. Hence we only need to examine
the effects of a fault at the instants when the tasks are about to be completed.
We will consider a number of cases that will lead to the proof of theorem that the minimum
achievable utilization is 0.5.
3.1 Case 1: Task set with one task
Consider a task set comprising a single task (C_1, T_1, R_1). In this case, the release time does not matter.
Observation 1 The minimum achievable utilization for a task set with one task is 0.5.
Proof: This is obvious since C_1 cannot exceed T_1/2. If C_1 equals some value x such that x > T_1/2, and a fault occurs at some instant t just prior to the completion of an instance of the task, i.e., just before kT_1 + x, then there is not sufficient time to re-execute the task and still meet the deadline at (k + 1)T_1, since only T_1 - x < x units of time remain. The processor is fully utilized when C_1 = T_1/2. 2

Figure 3: Schedule of two tasks with periods 6 and 11.
It is important to note that even if the task set has more than one task, the computation time of each of the tasks cannot exceed half the value of its period, i.e., C_i ≤ T_i/2 for i = 1, 2, ..., n, where n is the number of tasks in the set.
3.2 Case 2: Task set with two tasks
Before we begin the analysis of the minimum achievable utilization for this case, let us consider the
issue of release times. In the traditional RMS analysis the worst delay for the Task 2 is observed
when it arrives simultaneously with the Task 1. If the first arrival of the Task 2 can then be feasibly
scheduled, any subsequent arrivals will also meet their deadlines and so one has only to consider
the feasibility conditions of the simultaneous arrivals of the tasks. This is not necessarily true when
one considers the possibility of faults. For example, consider the task set {(1, 6), (4.5, 11)} where
release times are zero. By considering just the first arrival, it would appear that the task set has
a feasible schedule and the processor is fully utilized. This is shown in Figure 3(a). Tasks 1 and 2
are released simultaneously and since Task 1 has higher priority, it starts execution and departs at time 1, when the execution of Task 2 begins. A fault occurs just prior to the completion of Task 2 at time instant
5.5 and it is restarted to perform recovery. Task 1 again arrives at time 6 and it preempts recovery.
The recovery just completes at time 11 when the next arrival of Task 2 occurs. However, if a fault
occurs just before time instant 49, the schedule is infeasible. This is shown in Figure 3(b). Task 2
arrives at time 44 and is preempted by Task 1 which arrives at time 48. A fault occurs just prior
to the completion of Task 1 at time 49 and so both tasks have to be re-executed. Task 1 recovers
in time at time instant 50 when the recovery of Task 2 begins. However, the next arrival of Task 1
occurs at time 54 and it preempts the recovery of Task 2 and causes it to miss the deadline at time
55. Only 4 units of time are available to Task 2 for recovery in the time interval 50-54 whereas its
computation time is 4.5. Thus the correct value of C_2 that fully utilizes the processor is C_2 = 4.
Hence, in our analysis, we have to consider all possible values of release times.
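Replaying this example with the simulation sketch given in Section 2.3 reproduces the missed deadline just described (an illustration; times are scaled by 10 so that all quantities are integers, and the fault is injected in the tick just before time 49):

```python
tasks = [(10, 60), (45, 110)]      # the task set {(1, 6), (4.5, 11)}, scaled by 10
print(simulate_rms_single_fault(tasks, horizon=660, fault_time=489))
# [(1, 440, 550)] -- the instance of Task 2 released at time 44 misses its
# deadline at time 55, as in the discussion above.
```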
Consider a set of two tasks, {(C_1, T_1, R_1), (C_2, T_2, R_2)}, with arbitrary release times. We will first consider the case when T_2 ≥ 1.5T_1. Next we will consider various subcases when T_1 ≤ T_2 < 1.5T_1.

3.2.1 Case 2a: T_2 ≥ 1.5T_1

Theorem 1 The minimum achievable utilization is 0.5 for a set of two tasks satisfying the condition T_2 ≥ 1.5T_1.
Proof: We first prove that as long as the utilization factor is less than or equal to 0.5, a feasible
schedule is guaranteed for the task set; then we give a particular instance where the processor is
fully utilized and the utilization factor is 0.5.
From Observation 1, it is clear that C_1 ≤ T_1/2. Within any interval [R_2 + (j - 1)T_2, R_2 + jT_2) there are at most ⌈T_2/T_1⌉ arrivals of Task 1. The worst possible scenario is when Task 2 is about to be completed and is preempted by the arrival of Task 1 and the fault occurs just prior to completion of Task 1. In this case both Tasks 1 and 2 need to be executed again. Task 1 will meet its deadline since C_1 ≤ T_1/2. Task 2 will meet its deadline if the following condition is satisfied:

⌈T_2/T_1⌉ C_1 + C_2 + C_1 + C_2 ≤ T_2,     (1)

i.e., if

(⌈T_2/T_1⌉ + 1) C_1 + 2C_2 ≤ T_2.     (2)

Under traditional RMS analysis, the feasibility condition is ⌈T_2/T_1⌉ C_1 + C_2 ≤ T_2. But in a fault-tolerant system, each task will have to be executed once more under the worst case scenario of the occurrence of a single fault.

Assume that the utilization factor of the task set is less than or equal to 0.5, i.e.,

C_1/T_1 + C_2/T_2 ≤ 1/2.     (3)

Therefore,

2 (T_2/T_1) C_1 + 2 C_2 ≤ T_2.     (4)

Thus the feasibility condition given by Equation 2 is guaranteed if

⌈T_2/T_1⌉ + 1 ≤ 2 T_2/T_1.     (5)

Equation 5 is satisfied when T_2 ≥ 1.5T_1. Thus any task set satisfying the condition of Theorem 1 is guaranteed a feasible schedule if the utilization factor is less than or equal to 0.5.
Now consider the cases (C_1 = T_1/2, C_2 = 0) and (C_1 = 0, C_2 = T_2/2). In each of these two cases, the processor is fully utilized since increasing C_2 in the first case and C_1 in the second case causes the schedule to become infeasible. In both cases, the utilization factor is 0.5. We have also proved that as long as the utilization factor is less than or equal to 0.5, the tasks can be feasibly scheduled. Hence, when T_2 ≥ 1.5T_1, the minimum achievable utilization is 0.5. 2

Figure 4: Modeling subsequent arrivals of the tasks.
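The algebraic step above can also be checked numerically (an illustrative sketch; the demand test implements Equation 2 as reconstructed above, and the random search merely illustrates the implication rather than replacing the proof):

```python
import math
import random

def ft_demand_condition(c1, t1, c2, t2):
    """Sufficient worst-case demand condition of Equation 2: one extra
    execution of each task still fits into the period of Task 2."""
    return (math.ceil(t2 / t1) + 1) * c1 + 2 * c2 <= t2

random.seed(0)
for _ in range(10000):
    t1 = random.uniform(1.0, 100.0)
    t2 = random.uniform(1.5 * t1, 10.0 * t1)            # Case 2a: T_2 >= 1.5 T_1
    c1 = random.uniform(0.0, 0.5 * t1)
    # stay strictly inside U <= 0.5 to avoid floating-point edge effects
    c2 = random.uniform(0.0, 0.999 * (0.5 - c1 / t1) * t2)
    assert ft_demand_condition(c1, t1, c2, t2)
print("no counterexample found")
```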
3.2.2 Case 2b: T_1 ≤ T_2 < 1.5T_1

We will take the following approach in our proof for this case: We will first show that each instance of a task can be modeled as the arrival of the first instance with some values of release times R'_1 and R'_2. Then we will prove that the first instances can be feasibly scheduled for all possible values of release times as long as the utilization factor is less than or equal to 0.5, i.e., we will prove that the minimum achievable utilization among all task sets that fully utilize the processor during the first instance is 0.5. Also, without loss of generality, we can assume that one of R_1 or R_2 is zero.
Consider Figure 4 where we are interested in the feasibility of meeting the deadline at time
instant of the th instance of Task 2 that arrives at time instant R 2 +kT 2 . We
consider various cases below where R 1
• If R 2 shown in Figure 4(a), the th instance of the Task 2
can be modeled as the first instance of the Task 2 in the task set
0)g. This is possible because any fault during the execution
of the (j th instance of Task 1 does not affect the schedulability of the th instance
of Task 2.
• If R 2 shown in Figures 4(b) and (c), the th instance of
the Task 2 can be modeled as the first instance of the Task 2 in a task set
In the Appendix, we consider all possible cases of the release times and the periods of the tasks.
For each of those cases, we present the value of the task computation times that fully utilize the
processor during the first instance of the Task 2. For each of those cases, we prove that when the
processor is fully utilized during the first instance of Task 2, the utilization factor is greater than
0.5.
Theorem 2 The minimum achievable utilization for a set of two tasks satisfying the condition T_1 ≤ T_2 < 1.5T_1 is 0.5.
Proof: We have shown that any subsequent instance of two tasks after the first instance can be
modeled as the first instance with some release times. Then we have proved in the Lemmas 3-14
in the Appendix that for all possible values of release times, if the processor is fully utilized for the
first instance, the utilization factor is greater than or equal to 0.5. Hence, the minimum achievable
utilization for a set of two tasks satisfying the condition T_1 ≤ T_2 < 1.5T_1 is 0.5. 2
3.3 Case 3: Task set with n > 2 tasks
Consider a set of n tasks

S_n = {(C_i, T_i, R_i) | i = 1, 2, ..., n},

whose utilization is

U_n = Σ_{i=1}^{n} C_i / T_i.

We will prove by induction that the minimum achievable utilization for a set of n tasks is 0.5. Let us assume that the minimum achievable utilization for a set of n - 1 tasks is 0.5. We will prove that this is also true for a set of n tasks.
Consider the set of the first n - 1 tasks

S_{n-1} = {(C_i, T_i, R_i) | i = 1, 2, ..., n - 1},

whose utilization is

U_{n-1} = Σ_{i=1}^{n-1} C_i / T_i.

If both sets S_n and S_{n-1} have a feasible schedule and U_{n-1} ≥ 0.5, then U_n > 0.5 (because U_n > U_{n-1}). Thus we need to consider only those cases where U_{n-1} < 0.5. But since U_{n-1} < 0.5, the set S_{n-1} is guaranteed to have a feasible schedule because of our assumption. Thus we only need to consider the feasibility of scheduling the Task n.
3.3.1 Case 3a: T_n ≥ 1.5T_i for i = 1, 2, ..., n - 1
Theorem 3 The minimum achievable utilization is 0.5 for a set of n tasks satisfying the condition T_n ≥ 1.5T_i for i = 1, 2, ..., n - 1, assuming that the minimum achievable utilization of any set of n - 1 tasks is 0.5.
As in the case of a set of two tasks, if the following condition representing the worst possible scenario is satisfied, the corresponding task set has a feasible schedule. Note that the reverse is not true, i.e., the task set may not satisfy the following condition and still have a feasible schedule:

Σ_{i=1}^{n-1} ⌈T_n/T_i⌉ C_i + C_n + Σ_{i=1}^{n} C_i ≤ T_n,     (6)

i.e.,

Σ_{i=1}^{n-1} (⌈T_n/T_i⌉ + 1) C_i + 2C_n ≤ T_n.     (7)

Assume that U_n ≤ 0.5. Therefore

2 Σ_{i=1}^{n-1} (T_n/T_i) C_i + 2C_n ≤ T_n.     (8)

Thus the condition in Equation 7 is guaranteed if

Σ_{i=1}^{n-1} (2 T_n/T_i - ⌈T_n/T_i⌉ - 1) C_i ≥ 0.     (9)

If T_n ≥ 1.5T_i for i = 1, 2, ..., n - 1, then each term of the sum is non-negative: whenever T_n ≥ 1.5T_i, 2 T_n/T_i - ⌈T_n/T_i⌉ - 1 ≥ 0. Thus the sum is also non-negative and Equation 7 is satisfied and the task set is guaranteed a feasible schedule. Thus for all sets of n tasks, the minimum achievable utilization is 0.5 if T_n ≥ 1.5T_i for i = 1, 2, ..., n - 1. 2
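For completeness, the two-case check behind the last step of the proof can be written out explicitly (an added verification in the notation above, assuming the amsmath package):

```latex
% Per-term check: 2 T_n/T_i - ceil(T_n/T_i) - 1 >= 0 whenever T_n >= 1.5 T_i.
\[
  2\frac{T_n}{T_i} - \left\lceil \frac{T_n}{T_i} \right\rceil - 1 \;\ge\;
  \begin{cases}
    2\,\dfrac{T_n}{T_i} - 3 \ \ge\ 0, & \text{if } 1.5\,T_i \le T_n < 2\,T_i
      \quad (\text{here } \lceil T_n/T_i \rceil = 2),\\[6pt]
    \dfrac{T_n}{T_i} - 2 \ \ge\ 0, & \text{if } T_n \ge 2\,T_i
      \quad (\text{here } \lceil T_n/T_i \rceil \le T_n/T_i + 1).
  \end{cases}
\]
```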
3.3.2 Case 3b: T_n < 1.5T_i for some i, 1 ≤ i ≤ n - 1

When T_n < 1.5T_i for some i, we will consider two subcases in the following lemmas. Assume that the set of tasks S_n fully utilizes the processor. We note that the set S_{n-1} does not fully utilize the processor since U_{n-1} < 0.5. Add task n to the set S_{n-1} and increment its computation time till the processor is fully utilized; this value of the computation time is C_n. Hence only the task n has at least one critical period where the occurrence of a fault and subsequent recovery will cause the task to just meet its deadline. There are two possible cases: the worst case instant of the occurrence of a fault is just prior to completion of the task n itself, in which case the recovery is solely the re-execution of only the task n, or the worst case instant of occurrence of a fault is just prior to the completion of some other task j, j < n. In the former case,

Σ_{i=1}^{n-1} x_i C_i + 2C_n = T_n,

where x_i C_i is the fraction of the time that the processor spends executing the task i in the critical period of Task n and x_i ≤ ⌈T_n/T_i⌉. In the latter case,

Σ_{i=1}^{n-1} y_i C_i + C_n = T_n,

where y_i C_i is the fraction of the time that the processor spends in the normal execution and recovery of the Task i in the critical period of Task n. Here, y_i ≤ ⌈T_n/T_i⌉ + 1.
Lemma 1 The minimum achievable utilization is 0.5 for a set of n tasks satisfying the conditions of Case 3b and the former case above, assuming that the minimum achievable utilization for a set of n - 1 tasks is 0.5.
Construct from S_n a set S'_{n-1} of n - 1 tasks whose utilization factor U'_{n-1} is the same as that of S_n, i.e., U'_{n-1} = U_n. Now consider a fault just prior to the completion of the last task of S'_{n-1} during an interval which is a critical interval for the set S_n. The time to completion of that task then exceeds its deadline. Thus the last task misses the deadline and so the set S'_{n-1} has an infeasible schedule. But since we have assumed that the minimum achievable utilization of a set of n - 1 tasks is 0.5, the utilization factor of S'_{n-1} must exceed 0.5. However, U'_{n-1} = U_n and so U_n > 0.5. Thus the minimum achievable utilization of every set of n tasks that satisfy the conditions of this lemma is 0.5. 2

Here we have proved that every set of n tasks that fully utilizes the processor and satisfies the conditions of the Lemma 1 can be converted into another set of n - 1 tasks that has an infeasible
schedule. As an example, consider a set of three tasks S 0)g. This
task set fully utilizes the processor. From this task set, we construct the set S 0
5)g.
The set S 0
2 has an infeasible schedule because if a fault occurs at a time just prior to the completion
of Task 2 at time instant 2.625, there is not enough spare time to recover.
Lemma 2 The minimum achievable utilization is 0.5 for a set of n tasks satisfying the conditions of Case 3b and the latter case above, assuming that the minimum achievable utilization of a set of n - 1 tasks is 0.5.
Assume that the set of tasks S_n fully utilizes the processor. Since the set S_{n-1} does not fully utilize the processor, increment the computation time of the Task n - 1 in S_{n-1} so that the utilization factor is 0.5. Let this increase be Δ and let the new set be S'_{n-1}, with the utilization factor U'_{n-1} = U_{n-1} + Δ/T_{n-1} = 0.5. It is easy to observe that C_n ≥ 2Δ: since the Task n - 1 is the lowest priority task in the set S'_{n-1}, any reduction of the computation time of Δ from C'_{n-1} frees up at least 2Δ amount of time for Task n that will not be interrupted by the other tasks. The amount is 2Δ because reduction of Δ also frees up an extra Δ from recovery. Thus,

U_n = U_{n-1} + C_n/T_n ≥ 0.5 - Δ/T_{n-1} + 2Δ/T_n > 0.5,

since T_n < 2T_{n-1} in this case. We now prove the following theorem for the general case.
Theorem 4 For a set of n tasks, the minimum achievable utilization is 0.5.
Proof: In Theorem 3 and Lemmas 1 and 2 we have proved that the minimum achievable utilization
for a set of n tasks is 0.5 provided that the minimum achievable utilization for a set of
n - 1 tasks is 0.5. In addition, this theorem is true for one task as shown in Observation 1 and has also
been proven to be true for a set of two tasks in Theorems 1 and 2. Hence, by induction it is true
for all n. 2
4 Implementation Issues
4.1 Tolerating hardware crash faults
Consider a distributed system with a common spare. The spare is not idle but it monitors the state
of the processors. After completion of each instance of each task, a processor sends a message to
the spare indicating that the task is successfully completed. The spare maintains a list of all tasks
in the system and the processor on which they are executing. From this information, it can either
be provided a look-up table of all completion times of the tasks or these completion times can be
easily computed "on-the-fly". Let Ccomm be the maximum communication latency of the network.
If some task was supposed to be completed at time t c , the spare expects a confirmation by the time
In case this message is not received, the processor is declared faulty and the spare
takes over the faulty processor's task set and initiates recovery. In the rate monotonic analysis
of the task set on each processor, the communication time and the overhead in reconfiguration is
assumed to be included in the computation time of the task. So, if some task i has a computation requirement of C_i, then a value C'_i that includes these overheads is used for analysis. This technique assumes
that the communication delays are finite and bounded, which is not an unreasonable assumption
for practical applications. It also requires that the executable code of all tasks be accessible to the
spare.
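The monitoring scheme just described can be sketched as a small watchdog object (an illustration; the class and its interface are hypothetical and only mirror the message protocol above):

```python
class SpareWatchdog:
    """Common spare that tracks completion messages from the primaries."""

    def __init__(self, expected_completions, c_comm):
        # expected_completions maps (processor, task, instance) to the
        # nominal completion time t_c; c_comm is the maximum communication
        # latency of the network.
        self.expected = dict(expected_completions)
        self.c_comm = c_comm

    def report_completion(self, key):
        """Called when a completion message arrives for the given job."""
        self.expected.pop(key, None)

    def overdue_processors(self, now):
        """Processors to be declared faulty at time `now`: a confirmation
        is overdue once now exceeds t_c + C_comm."""
        return {proc for (proc, _task, _inst), t_c in self.expected.items()
                if now > t_c + self.c_comm}
```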
As we have discussed in Section 2, the space overhead for guaranteeing deadlines in the presence
of a single fault for duplex systems is 2⌈U/0.69⌉ processors. However, the number of processors needed for a system with a single spare with recovery is ⌈U/0.5⌉ + 1. U is the total utilization
factor of the task set and we assume that the task set is partitioned so that the utilization factor is
evenly distributed. Table 1 shows the number of processors required by each scheme for different
values of the utilization factors. We observe that providing a common spare significantly reduces
the size of the system and the effect is more pronounced for large values of utilization factors.
Table 1: Number of processors m in systems where computation times are doubled for RMS analysis, duplex systems and in a system with a common spare for recovery, for different values of utilization factor U.

U | Doubling computation time in RMS analysis, ⌈U/0.345⌉ | Duplex system, 2⌈U/0.69⌉ | Common spare with recovery, ⌈U/0.5⌉ + 1
5
6 19
7 22 22 15
9 28 28 19
100 291 290 200
4.2 Tolerating incorrect computation faults caused by hardware fault
Triple Modular Redundancy (TMR) systems are required to tolerate incorrect computation faults.
A duplex system can only detect the presence of an incorrect computation fault because the results
of the two processors do not agree. A third processor is required so the majority result is assumed
to be correct. A similar technique as described above can be used to tolerate a single incorrect
computation fault. Rather than having a TMR system, a duplex system with a spare can be used.
In case the duplex pair detects an error, the spare is used to perform recovery. The number of
processors required for a TMR system is 3⌈U/0.69⌉ whereas the number of processors required for a duplex with a spare for recovery is 2⌈U/0.5⌉ + 1. Again, U is the total utilization factor
for the entire task set. The number of processors required for both schemes is shown in Table 2.
We notice that the duplex with a spare again requires less space overhead as compared to a TMR
system. However, the benefit is not as large as that observed for crash faults.
4.3 Tolerating software faults and intermittent and transient hardware faults
We believe that the greatest application of the results of this paper would be towards tolerating
software faults and intermittent and transient hardware faults. In space and hostile industrial
applications, outside environment conditions such as alpha particles, electrostatic interference, etc.,
cause transient errors. In addition software faults such as stack overflows in the operating systems,
etc., are best handled by re-execution. By limiting the utilization factor to 0.5 on a processor, we
can guarantee that recovery can be performed within the deadlines. Even though we consider the
re-execution of all partially executed tasks, it is not necessary if a fault affects a single task. That
task can be re-executed to meet its deadline and we can be confident that the re-execution will not
cause other tasks to miss their deadlines. In addition, the recovery code need not be the same as the
Table 2: Number of processors m in a TMR system and in a duplex system with a common spare for recovery, for different values of utilization factor U.

U | TMR system, 3⌈U/0.69⌉ | Duplex system with common spare, 2⌈U/0.5⌉ + 1
5
9 42 37
100 435 401
primary code. This is especially true for software faults where an alternate program is desirable.
As long as the time to execute the recovery program is less than or equal to the execution time of
the primary program, we can be certain that the deadlines will be met.
4.4 Tolerating Multiple Faults
Multiple faults can be tolerated under our analysis as long as the interval between successive faults
is larger than the largest period in the task set. Under this assumption, unlimited transient faults
can be tolerated and k permanent crash faults can be tolerated by providing k spares and limiting
the utilization factor on each processor to 0.5. For certain task sets and k, NMR system will yield
lesser space overhead and greater fault coverage and would be easier to implement. This is the case if

⌈U/0.5⌉ + k > (k + 1) ⌈U/0.69⌉,
where U is the utilization factor for the entire task set. Again we assume that the task set is
partitioned so that the utilization factor is evenly distributed. For example, if the total utilization factor lies between 0.5 and 0.69 and k = 1, a duplex NMR system uses only two processors whereas our approach would require three processors. But for most general cases, providing k common spares would result in lesser
overheads.
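The trade-off discussed in this subsection can be evaluated directly (an illustrative sketch, using the inequality as reconstructed above and the processor counts stated earlier; the utilization values are assumptions):

```python
from math import ceil

def nmr_processors(u, k):
    """k-fault-tolerant NMR: k extra copies of every processor."""
    return (k + 1) * ceil(u / 0.69)

def common_spare_processors(u, k):
    """Utilization capped at 0.5 on each processor plus k shared spares."""
    return ceil(u / 0.5) + k

for u in (0.6, 7.0):
    print(u, nmr_processors(u, 1), common_spare_processors(u, 1))
# 0.6 -> 2 vs 3: the duplex NMR system wins, as in the example above
# 7.0 -> 22 vs 15: the common-spare approach wins for larger task sets
```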
5 Conclusions
We have provided a theoretical foundation for fault-tolerant processing of periodic real-time tasks
scheduled by the Rate Monotonic Scheduling policy. Under the scenario that recovery from a fault
involves restarting all the partially executed tasks while maintaining the priority levels of RMS policy, we show that the minimum achievable utilization on a processor is 0.5. This result guarantees
that all tasks will meet their deadlines even in the presence of a fault if the utilization factor on
a processor is restricted to 0.5. This bound is much better than the maximum utilization factor
of 0.345 (0.69/2) that would be obtained if the computation times of all tasks were naively doubled
in RMS analysis to provide for recovery time. The result provides a framework for tolerating
transient and intermittent hardware and software faults where re-execution is the preferred recovery
technique. In addition, this result is applicable to tolerating permanent crash and incorrect
computation faults where spares must be employed to replace faulty processors. In such a system
we show that the space redundancy achieved by maintaining a common pool of spares is, in most
cases, less than that of an NMR system.
The contributions of this paper form an important component in the evolution of Responsive
Systems. The concept of providing guarantees of meeting the deadlines in the system in spite
of the occurrence of faults is integral to the design of fault-tolerant real-time systems for critical
applications. Providing a simple criterion to ensure the feasibility of meeting all deadlines in the
presence of a single fault considerably reduces the complexity encountered by designers. This will
lead to safer and dependable use of real-time systems for critical applications.
--R
"Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment"
"The Rate Monotonic Scheduling Algorithm: Exact Characterization and Average Case Behavior"
"On a Real-Time Scheduling Problem"
"Scheduling Periodically Occurring Tasks on Multiple Proces- sors"
"A Note on Preemptive Scheduling of Periodic, Real-Time Tasks"
"Responsive Systems: A Challenge for the Nineties"
"A Fault-Tolerant Scheduling Problem"
"On Scheduling Tasks with a Quick Recovery from Failure"
"The Diffusion Model Based Task Remapping for Distributed Real-Time Systems"
"The Consensus Problem in Fault-Tolerant Com- puting"
The Theory and Practice of Reliable System Design
--TR
--CTR
Rodrigo M. Santos , Jorge Santos , Javier D. Orozco, A least upper bound on the fault tolerance of real-time systems, Journal of Systems and Software, v.78 n.1, p.47-55, October 2005
Sylvain Lauzac , Rami Melhem , Daniel Mossé, An Improved Rate-Monotonic Admission Control and Its Applications, IEEE Transactions on Computers, v.52 n.3, p.337-350, March
Sasikumar Punnekkat , Alan Burns , Robert Davis, Analysis of Checkpointing for Real-Time Systems, Real-Time Systems, v.20 n.1, p.83-102, Jan. 2001
Frank Liberato , Rami Melhem , Daniel Mossé, Tolerance to Multiple Transient Faults for Aperiodic Tasks in Hard Real-Time Systems, IEEE Transactions on Computers, v.49 n.9, p.906-914, September 2000
Tarek F. Abdelzaher , Vivek Sharma , Chenyang Lu, A Utilization Bound for Aperiodic Tasks and Priority Driven Scheduling, IEEE Transactions on Computers, v.53 n.3, p.334-350, March 2004 | minimum achievable utilization;real-time systems;rate monotonic scheduling;periodic tasks;fault tolerance |
290123 | Efficient Region Tracking With Parametric Models of Geometry and Illumination. | Abstract: As an object moves through the field of view of a camera, the images of the object may change dramatically. This is not simply due to the translation of the object across the image plane. Rather, complications arise due to the fact that the object undergoes changes in pose relative to the viewing camera, changes in illumination relative to light sources, and may even become partially or fully occluded. In this paper, we develop an efficient, general framework for object tracking, one which addresses each of these complications. We first develop a computationally efficient method for handling the geometric distortions produced by changes in pose. We then combine geometry and illumination into an algorithm that tracks large image regions using no more computation than would be required to track with no accommodation for illumination changes. Finally, we augment these methods with techniques from robust statistics and treat occluded regions on the object as statistical outliers. Throughout, we present experimental results performed on live video sequences demonstrating the effectiveness and efficiency of our methods. | Introduction
Visual tracking has emerged as an important component of systems in several application
areas including vision-based control [1, 32, 38, 15], human-computer interfaces [10, 14, 20],
surveillance [30, 29, 19], agricultural automation [27, 41], medical imaging [12, 4, 45] and
visual reconstruction [11, 42, 48]. The central challenge in visual tracking is to determine the
image position of a target region (or features) of an object as it moves through a camera's field
of view. This is done by solving what is known as the temporal correspondence problem: the
problem of matching the target region in successive frames of a sequence of images taken at
closely-spaced time intervals. The correspondence problem for visual tracking has, of course,
much in common with the correspondence problems which arise in stereopsis and optical flow.
It differs, however, in that the goal is not to determine the exact correspondence for every
image location in a pair of images, but rather to determine, in a gross sense, the movement of
an entire target region over a long sequence of images. What makes tracking difficult is the
extreme variability often present in the images of an object over time. This variability arises
from three principal sources: variation in object pose, variation in illumination, and partial
or full occlusion of the target. When ignored, any one of these three sources of variability is
enough to cause a tracking algorithm to lose its target.
In this paper, we develop a general framework for region tracking which includes models
for image changes due to motion, illumination, and partial occlusion. In the case of motion,
all points in the target region are presumed to be part of the same object allowing us the
luxury - at least for most applications - of assuming that these points move coherently
in space. This permits us to develop low-order parametric models for the image motion of
points within a target region-models that can be used to predict the movement of the points
and track the target through an image sequence. In the case of illumination, we exploit the
observations of [25, 17, 5] to model image variation due to changing illumination by low-dimensional
linear subspaces. The motion and illumination models are then woven together
in an efficient algorithm which establishes temporal correspondence of the target region
by simultaneously determining motion and illumination parameters. These parameters not
only shift and deform image coordinates, but also adjust brightness values within the target
region to provide the best match to a fixed reference image. Finally, in the case of partial
occlusion, we apply results from robust statistics [16] to show that this matching algorithm is
easily extended to include automatic rejection of outlier pixels in a computationally efficient
manner.
The approach to matching described in this paper is based on comparing the so-called
sum-of-squared differences (SSD) between two regions, an idea that has been explored in
a variety of contexts including stereo matching [35], optical flow computation [2], hand-eye
coordination [38], and visual motion analysis [44]. Much of the previous work using SSD
matching for tracking has modeled the motion of the target region as pure translation in
the image plane [48, 38], which implicitly assumes that the underlying object is translating
parallel to the image plane and is being viewed orthographically. For inter-frame calculations
such as those required for optical flow or motion analysis, pure translation is typically
adequate. However, for tracking applications in which the correspondence for a finite size
image patch must be computed over a long time span, the pure translation assumption is
soon violated [44]. In such cases, both geometric image distortions such as rotation, scaling,
shear, and illumination changes introduce significant changes in the appearance of the target
region and, hence, must be accounted for in order to achieve reliable matching.
Attempts have been made to include more elaborate models for image change in region
tracking algorithms, but with sizable increases in the computational effort required to establish
correspondence. For example, Rehg and Witkin [40] describe energy-based algorithms
for tracking deforming image regions, and Rehg and Kanade [39] consider articulated objects
undergoing self-occlusion. More recently, Black and Yacoob [8] describe an algorithm
for recognizing facial expressions using motion models which include both affine and simple
polynomial deformations of the face and its features. Black and Jepson [7] develop a robust
algorithm for tracking a target undergoing changes in pose or appearance by combining a
simple parametric motion model with an image subspace method [37]. These algorithms
require from several seconds to several minutes per frame to compute, and most do not
address the problems of changes in appearance due to illumination.
In contrast, we develop a mathematical framework for the region tracking problem that
naturally incorporates models for geometric distortions and varying illumination. Using this
framework, we show that the computations needed to perform temporal matching can be
factored to greatly improve algorithm efficiency. The result is a family of region-tracking
algorithms which can easily track large image regions (for example the face of a user at a
workstation) at frame rate, using no special hardware other than a standard digitizer.
To date, most tracking algorithms achieving frame-rate performance track only a sparse
collection of features (or contours). For example, Blake et al. [9] and Isard and Blake [33]
describe a variety of novel methods for incorporating both spatial and temporal constraints
on feature evolution for snake-like contour tracking. Lowe [34] and Gennery [21] describe
edge-based tracking methods using rigid three-dimensional geometric models. Earlier work
by Ayache [3] and Crowley [13] use incrementally constructed rigid models to constrain image
matching.
In practice, feature-based and region-based methods can be viewed as complementary
techniques. In edge-rich environments such as a manufacturing floor, working with sparse
features such as edges has the advantage of computational simplicity - only a small area
of the image contributes to the tracking process, and the operations performed in that
region are usually very simple. Furthermore, edge-based methods use local derivatives and,
hence, tend to be insensitive to global changes in the intensity and/or composition of the
incident illumination. However, in less structured situations strong edges are often sparsely
distributed in an image, and are difficult to detect and match robustly without a strong
predictive model [33]. In such cases, the fact that region-based methods make direct and
complete use of all available image intensity information eliminates the need to identify and
model a special set of features to track. By incorporating illumination models and robust
estimation methods and by making the correspondence algorithm efficient, the robustness
and performance of our region tracking algorithms closely rival those achieved by edge-based
methods.
The remainder of this article is organized as follows. Section 2 establishes a framework for
posing the problem of region tracking for parametric motion models and describes conditions
under which an efficient tracking algorithm can be developed. Section 3 then shows how
models of illumination can be incorporated with no loss of computational efficiency. Section
4 details modifications for handling partial target occlusion via robust estimation techniques.
Section 5 presents experimental results from an implementation of the algorithms. Finally,
Section 6 presents a short discussion of performance improving extensions to our tracking
algorithm.
2 Tracking Moving Objects
In this section, we describe a framework for the efficient tracking of a target region through
an image sequence. We first write down a general parametric model for the set of allowable
image motions and deformations of the target region. We then pose the tracking problem as
the problem of finding the best (in a least squares sense) set of parameter values describing
the motions and deformations of the target through the sequence. Finally, we describe how
the best set of parameters can be efficiently computed.
2.1 On Recovering Structured Motion
We first consider the problem of describing the motion of a target region of an object through
a sequence of images. Points on the surface of the object, including those in the target region,
are projected down into the image plane. As the object moves through space, the projected
points move in the image plane. If the 3-D structure of the object is known in advance,
then we could exactly determine the set of possible motions of the points in the images. In
general, this information is not known in advance. Therefore, we approximate the set of
possible motions by a parametric model for image motions.
Let I(x, t) denote the brightness value at the location x = (x, y)^T in an image acquired at
time t, and let ∇_x I(x, t) denote the spatial gradient at that location and time. The symbol
t_0 denotes an identified "initial" time, and we refer to the image at time t_0 as the reference
image. Let the set R = {x_1, x_2, ..., x_N} be a set of N image locations which define a target
region. We refer to the brightness values of the target region in the reference image as the
reference template.
Over time, the relative motion between the target object and the camera causes the image
of the target to shift and to deform. Let us model the image motion of the target region
of the object by a parametric motion model f(x; μ) parameterized by μ = (μ_1, μ_2, ..., μ_n)^T,
with n typically much smaller than N. We assume that f is differentiable in both μ and x. We call
μ the motion parameter vector. We consider recovering the motion parameter vector for
each image in the tracking sequence as equivalent to "tracking the object." We write
μ*(t) to denote the ground truth values of these parameters at time t, and μ(t) to denote
the corresponding estimate. The argument t will be suppressed when it is obvious from its
context.

Suppose that a reference template is acquired at time t_0 and that initially f(x; μ*(t_0)) = x.
Let us assume for now that the only changes in subsequent images of the target
are completely described by f; i.e. there are no changes in the illumination of the target. It
follows that for any time t > t_0 there is a parameter vector μ*(t) such that

I(f(x; μ*(t)), t) = I(x, t_0)   for all x in R.     (1)

This is a generalization of the so-called image constancy assumption [28]. The motion parameter
vector of the target region can be estimated at time t by minimizing the following least
squares objective function:

O(μ) = Σ_{x in R} ( I(f(x; μ), t) - I(x, t_0) )².     (2)
For later developments, it is convenient to rewrite this optimization problem in vector
notation. To this end, let us consider images of the target region as vectors in an N-dimensional
space. The image of the target region at time t, under the change of coordinates f
with parameters μ, is written as

I(μ, t) = ( I(f(x_1; μ), t), I(f(x_2; μ), t), ..., I(f(x_N; μ), t) )^T.     (3)

This vector is subsequently referred to as the rectified image at time t with parameters μ.
We also make use of the partial derivatives of I with respect to the components of μ and
the time parameter t. These are written as

I_{μ_i}(μ, t) = ∂I(μ, t)/∂μ_i = ( I_{μ_i}(f(x_1; μ), t), ..., I_{μ_i}(f(x_N; μ), t) )^T     (4)

and

I_t(μ, t) = ∂I(μ, t)/∂t = ( I_t(f(x_1; μ), t), ..., I_t(f(x_N; μ), t) )^T.     (5)

Using this vector notation, the image constancy assumption (1) can be rewritten as

I(μ*(t), t) = I(μ*(t_0), t_0) ≡ I(t_0),

where I(t_0) denotes the reference template in vector form, and (2) becomes

O(μ) = || I(μ, t) - I(t_0) ||².     (6)
In general, (6) is a non-convex objective function. Thus, in the absence of a good starting
point, this problem will usually require some type of costly global optimization procedure to
solve [6].
In the case of visual tracking, the continuity of motion provides such a starting point.
Suppose that, at some arbitrary time t, the geometry of the target region is described
by μ(t). We recast the tracking problem as one of determining a vector of offsets, δμ, such
that μ(t + τ) = μ(t) + δμ can be computed from an image acquired at t + τ. Incorporating this modification
into (6), we redefine the objective function as a function on δμ:

O(δμ) = || I(μ + δμ, t + τ) - I(t_0) ||².     (7)

If the magnitudes of the components of δμ are small, then it is possible to apply continuous
optimization procedures to a linearized version of the problem [7, 28, 35, 47, 44]. The
linearization is carried out by expanding I(μ + δμ, t + τ) in a Taylor series about μ and t:

I(μ + δμ, t + τ) = I(μ, t) + M(μ, t) δμ + τ I_t(μ, t) + h.o.t.,     (8)

where h.o.t. denotes higher order terms of the expansion, and M is the Jacobian matrix of I
with respect to μ, i.e. the N × n matrix of partial derivatives, which can be written in column
form as

M(μ, t) = ( I_{μ_1}(μ, t) | I_{μ_2}(μ, t) | ... | I_{μ_n}(μ, t) ).     (9)

While the notation above explicitly indicates that the values of the partial derivatives are
a function of the evaluation point (μ, t), these arguments will be suppressed when obvious
from their context.

By substituting (8) into (7) and ignoring the higher order terms, we have

O(δμ) ≈ || M δμ + τ I_t(μ, t) + I(μ, t) - I(t_0) ||².     (10)

With the additional approximation τ I_t(μ, t) ≈ I(μ, t + τ) - I(μ, t), this becomes

O(δμ) ≈ || M δμ + I(μ, t + τ) - I(t_0) ||².     (11)

Solving the resulting set of equations yields the solution

δμ = -(M^T M)^{-1} M^T ( I(μ, t + τ) - I(t_0) ),     (12)

provided the matrix M is full rank. When this is not the case, we are faced with a generalization
of the aperture problem, i.e. the target region does not have sufficient structure to
determine all of the elements of μ uniquely.

In subsequent developments, it will be convenient to define the error vector
e(μ, t + τ) = I(μ, t + τ) - I(t_0). Incorporating this definition into (12), we see that the solution of
(6) at time t + τ given a solution at time t is

μ(t + τ) = μ(t) - (M^T M)^{-1} M^T e(μ(t), t + τ),     (13)

where M is evaluated at (μ(t), t).
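As an illustration, the update in (12)-(13) amounts to a single linear least-squares solve per frame. The following minimal sketch (in Python/NumPy) assumes that the Jacobian M, the rectified image at t + τ, and the reference template are already available as arrays; the function and variable names are illustrative and are not taken from the original implementation.

```python
import numpy as np

def motion_update(mu, M, I_rect_next, I_template):
    """One Gauss-Newton update of the motion parameters, following (12)-(13).

    mu          : current parameter estimate, shape (n,)
    M           : Jacobian of the rectified image w.r.t. mu, shape (N, n)
    I_rect_next : image at t + tau rectified with the current mu, shape (N,)
    I_template  : reference template vector I(t_0), shape (N,)
    """
    e = I_rect_next - I_template                      # error vector e(mu, t + tau)
    # delta_mu = -(M^T M)^{-1} M^T e, computed via a least-squares solve
    delta_mu, *_ = np.linalg.lstsq(M, -e, rcond=None)
    return mu + delta_mu
```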
2.2 An Efficient Tracking Algorithm
From (13), we see that to track the target region through the image sequence, we must
compute the Jacobian matrix M(μ, t). Each element of this matrix is given by

M_{ij}(μ, t) = ∇_f I(f(x_i; μ), t) · f_{μ_j}(x_i; μ),     (14)

where ∇_f I is the gradient of I with respect to the components of the vector f. Recall that
the Jacobian matrix of the transformation f regarded as a function of μ is the 2 × n matrix

f_μ(x; μ) = ( f_{μ_1}(x; μ) | f_{μ_2}(x; μ) | ... | f_{μ_n}(x; μ) ).     (15)

By making use of (15), M can be written compactly in row form: the ith row of M(μ, t) is

∇_f I(f(x_i; μ), t)^T f_μ(x_i; μ).     (16)

Because M depends on time-varying quantities, it may appear that it must be completely
recomputed at each time step - a computationally expensive procedure involving the calculation
of the image gradient vector, the calculation of a 2 × n Jacobian matrix, and n 2 × 1
vector inner products for each of the N pixels of the target region. However, we now show
that it is possible to reduce this computation by both eliminating the need to recompute
image gradients and by factoring M.
First, we eliminate the need to compute image gradients. To do so, let us assume that
our estimate is exact, i.e. μ(t) = μ*(t). Differentiating both sides of (1), we obtain

∇_x I(x, t_0) = f_x(x; μ)^T ∇_f I(f(x; μ), t),     (17)

where f_x(x; μ) is the 2 × 2 Jacobian matrix of f treated as a function of x:

f_x(x; μ) = ( ∂f/∂x | ∂f/∂y ).     (18)

Combining (17) with (16), we see that M can be written as the matrix M(μ) whose ith row is

∇_x I(x_i, t_0)^T f_x^{-1}(x_i; μ) f_μ(x_i; μ).     (19)

It follows that for any choice of image deformations, the image spatial gradients need only
be calculated once on the reference template. This is not surprising given that the target at
time t is only a distortion of the target at time t_0, and so its image gradients are also
a distortion of those at t_0. This transformation also allows us to drop the time argument of
M and regard it solely as a function of μ.
The remaining non-constant factor in M is a consequence of the fact that, in general, f_x
and f_μ involve components of μ and, hence, implicitly vary with time. However, suppose
that we choose f so that f_x^{-1} f_μ can be factored into the product of a 2 × k matrix Γ which
depends only on image coordinates, and a k × n matrix Σ which depends only on μ, as

f_x^{-1}(x; μ) f_μ(x; μ) = Γ(x) Σ(μ).     (20)

For example, as discussed in more detail below, one family of such factorizations results
when f is a linear function of the image coordinate vector x.

Combining (19) with (20), we have

M(μ) = M_0 Σ(μ),     (21)

where M_0 is the constant matrix whose ith row is ∇_x I(x_i, t_0)^T Γ(x_i). As a result, we have
shown that M can be written as a product of a constant N × k matrix M_0 and a time-varying
k × n matrix Σ.
We can now exploit this factoring to define an efficient tracking algorithm which operates
as follows:

offline:
• Define the target region.
• Acquire and store the reference template.
• Compute and store M_0 and M_0^T M_0.

online:
• Use the most recent motion parameter estimate μ(t) to rectify the target region
in the current image.
• Compute the error vector e by taking the difference between the rectified image and the
reference template.
• Solve the system (Σ^T M_0^T M_0 Σ) δμ = -Σ^T M_0^T e, where Σ
is evaluated at μ(t), and set μ(t + τ) = μ(t) + δμ.

The online computation performed by this algorithm is quite small, and consists of two n × k
matrix multiplies, k N-vector inner products, n k-vector inner products, and an n × n linear
system solution, where k and n are typically far smaller than N.

We note that the computation can be further reduced if Σ is invertible. In this case, the
solution to the linear system can be expressed as

δμ = -Σ^{-1} (M_0^T M_0)^{-1} M_0^T e,     (22)

where Σ is evaluated at μ(t). The factor (M_0^T M_0)^{-1} M_0^T can be computed
offline, so the online computation is reduced to n N-vector inner products and n n-vector
inner products.
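The offline/online split can be summarized in a few lines of code. The sketch below assumes hypothetical helpers: `Gamma` and `Sigma` implement the factors in (20) for the chosen motion model, `rectify` resamples the current image at the warped target locations, and `points`/`grad_template` hold the target pixel coordinates and the reference-template gradients. None of these names come from the paper's implementation.

```python
import numpy as np

def precompute_M0(points, grad_template, Gamma):
    """Offline step: the ith row of M0 is grad I(x_i, t_0)^T Gamma(x_i), cf. (21)."""
    return np.asarray([g @ Gamma(x) for x, g in zip(points, grad_template)])

def online_step(mu, M0, M0tM0, Sigma, rectify, template):
    """One online update: solve (Sigma^T M0^T M0 Sigma) dmu = -Sigma^T M0^T e."""
    e = rectify(mu) - template            # error between rectified image and template
    S = Sigma(mu)                         # k x n time-varying factor
    dmu = np.linalg.solve(S.T @ M0tM0 @ S, -S.T @ (M0.T @ e))
    return mu + dmu
```

Here M0tM0 = M0.T @ M0 would be computed once offline together with M0, so the per-frame work is dominated by the small matrix products shown above.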
2.3 Some Examples
2.3.1 Linear Models
Let us assume that f(x; μ) is linear in x. Then we have

f(x; μ) = A(μ) x + u(μ),

and, hence, f_x(x; μ) = A(μ). It follows that f_x^{-1} f_μ is linear in the components of x and the factoring
defined in (20) applies. We now present three examples illustrating these concepts.

Pure Translation: In the case of pure translation, the allowed image motions are parameterized
by the vector μ = u = (u_x, u_y)^T, with f(x; u) = x + u.
It follows immediately that f_x and f_μ are both the 2 × 2 identity matrix, and therefore
the rows of M_0 are simply the template gradients ∇_x I(x_i, t_0)^T and Σ is the 2 × 2 identity matrix.

The resulting linear system is nonsingular if the image gradients in the template region
are not all collinear, in which case the solution at each time step is just

u(t + τ) = u(t) - (M_0^T M_0)^{-1} M_0^T e.

Note that in this case (M_0^T M_0)^{-1} M_0^T is a constant matrix which can be computed
offline.
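For the pure-translation case the tracker reduces to a few matrix-vector products, since the solver matrix is constant. The sketch below assumes the caller supplies the resampled (rectified) image patch at each step; the names are illustrative.

```python
import numpy as np

def translation_tracker(template, grads):
    """Pure-translation case: Gamma = Sigma = I, so M0 holds the template gradients
    and (M0^T M0)^{-1} M0^T is a constant 2 x N matrix computed offline."""
    M0 = grads.reshape(-1, 2)                       # (N, 2) rows grad I(x_i, t_0)^T
    solver = np.linalg.inv(M0.T @ M0) @ M0.T        # constant 2 x N factor
    t0 = template.ravel().astype(float)

    def update(u, rectified):
        """rectified: current image resampled at x_i + u for all template pixels."""
        e = rectified.ravel() - t0
        return u - solver @ e                       # u(t+tau) = u(t) - (M0^T M0)^{-1} M0^T e
    return update
```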
Translation, Rotation and Scale: Objects which are viewed under scaled orthography
and which do not undergo out-of-plane rotation can be modeled using a four-parameter model
consisting of an image-plane rotation through an angle θ, a scaling by s, and a translation by
u. The change of coordinates is given by

f(x; μ) = s R(θ) x + u,   μ = (u_x, u_y, θ, s)^T,

where R(θ) is a 2 × 2 rotation matrix. After some minor algebraic manipulations, we obtain

Γ(x) = ( I_{2×2} | x^⊥ | x ),   where x^⊥ = (-y, x)^T,

and

Σ(μ) = diag( (1/s) R(-θ), 1, 1/s ),

a block-diagonal 4 × 4 matrix. From this, M_0 can be computed using (21) and, since Σ is invertible, the solution to the
linear system becomes

μ(t + τ) = μ(t) - Σ^{-1} (M_0^T M_0)^{-1} M_0^T e.

This result can be explained as follows. The matrix M_0 is the linearization of the system
about the configuration θ = 0, s = 1 of the reference template. Image rectification
effectively rotates the target by -θ and scales it by 1/s, so the displacements of the target are
computed in the original target coordinate system. Σ^{-1} then applies a change of coordinates
to rotate and scale the computed displacements from the original target coordinate system
back to the actual target coordinates.
Affine Motion: The image distortions of planar objects viewed under orthographic projection
are described by a six-parameter linear change of coordinates. Suppose that we
define

f(x; μ) = A x + u,   A = [ a  c ; b  d ],   μ = (u_x, u_y, a, b, c, d)^T.

After some minor algebraic manipulations, we obtain

Γ(x) = ( I_{2×2} | x I_{2×2} | y I_{2×2} )

and

Σ(μ) = diag( A^{-1}, A^{-1}, A^{-1} ),

a block-diagonal 6 × 6 matrix. Note that Σ is once again invertible, which allows for additional computational savings as
before.
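The block structure of Γ and Σ stated above for the affine model (with the parameter ordering μ = (u_x, u_y, a, b, c, d) assumed here) can be checked numerically; this is only a sanity-check sketch, not part of the tracking code.

```python
import numpy as np

def affine_fx_inv_fmu(x, y, A):
    """Direct computation of f_x^{-1} f_mu for f(x; mu) = A x + u,
    with mu = (u_x, u_y, a, b, c, d) and A = [[a, c], [b, d]]."""
    f_mu = np.array([[1, 0, x, 0, y, 0],
                     [0, 1, 0, x, 0, y]], dtype=float)
    return np.linalg.inv(A) @ f_mu

def affine_Gamma(x, y):
    I2 = np.eye(2)
    return np.hstack([I2, x * I2, y * I2])          # 2 x 6, image coordinates only

def affine_Sigma(A):
    return np.kron(np.eye(3), np.linalg.inv(A))     # block-diag(A^-1, A^-1, A^-1), 6 x 6

# sanity check at a random point and parameter setting
rng = np.random.default_rng(0)
A = rng.normal(size=(2, 2)) + 2 * np.eye(2)         # keep A well-conditioned
x, y = rng.normal(size=2)
assert np.allclose(affine_fx_inv_fmu(x, y, A), affine_Gamma(x, y) @ affine_Sigma(A))
```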
2.3.2 Nonlinear Motion Models
The separability property needed for factoring does not hold for arbitrary nonlinear motion models.
However, consider a motion model in which the change of coordinates is a quadratic polynomial in
the image coordinates, parameterized by the polynomial coefficients. Intuitively, this model performs
a quadratic distortion of the image. For example, a polynomial model of this form was
used in [8] to model the motions of lips and eyebrows on a face. Again, after several algebraic
steps, one arrives at a factorization of the form (20), with Γ(x) composed of monomials in the
image coordinates and Σ(μ) depending only on the motion parameters.

Note that this result holds for distortions which can be expressed exclusively in such a polynomial
form; adding more freedom to the motion model, for example
combining affine and polynomial distortion, often makes factoring impossible. One possibility
in such cases is to use a cascaded model in which the image is first rectified using an affine
distortion model, and then the resulting rectified image is further rectified for polynomial
distortion.
2.4 On the Structure of Image Change
The Jacobian matrix M plays a central role in the algorithms described above, so it is informative
to digress briefly on its structure. If we consider the rectified image as a continuous
time-varying quantity, then its total derivative with respect to time is
dI/dt = M (dμ/dt) + I_t.

Note that this is simply a differential form of (8). Due to the image constancy assumption
(1), it follows that dI/dt = 0, and hence M (dμ/dt) = -I_t. This is, of course, a parameterized version of Horn's
optical flow constraint equation [28].
In this form, it is clear that the role of M is to relate variations in motion parameters
to variations in brightness values in the target region. The solution given in (13) effectively
reverses this relationship and provides a method for interpreting observed changes in brightness
as motion. In this sense, we can think of the algorithm as performing correlation on
temporal changes (as opposed to spatial structure) to compute motion.
To better understand the structure of M, recall that in column form it can be written
in terms of the partial derivatives of the rectified image:

M = ( I_{μ_1} | I_{μ_2} | ... | I_{μ_n} ).

Thus, the model states that the temporal variation in image brightness in the target region
is a weighted combination of the vectors I_{μ_i}. We can think of each of these columns (which
have an entry for every pixel in the target region) as a "motion template" which directly
represents the changes in brightness induced by the motion represented by the corresponding
motion parameter. For example, in the top row of Figure 1, we have shown these templates
for several canonical motions of an image of a black square on a white background. Below,
we show the corresponding templates for a human face.
The development in this section has assumed that we start with a given parametric
motion model from which these templates are derived. Based on that model, the structure
of each entry of M is given by (14), which states that

M_{ij} = ∇_f I(f(x_i; μ), t) · f_{μ_j}(x_i; μ).

The image gradient ∇_f I defines, at each point in the image, the direction of strongest
intensity change. The vector f_{μ_j} evaluated at x_i is the instantaneous direction and magnitude
of motion of that image location captured by the parameter μ_j. The collection of the latter
for all pixels in the region represents the motion field defined by the motion parameter μ_j.
Figure 1: Above, the reference template for a bright square on a dark background and the motion
templates for four canonical motions (x translation, y translation, rotation, scale). Below, the
same motion templates for a human face.
Thus, the change in the brightness of the image location x_i due to the motion parameter μ_j
is the projection of the image gradient onto the motion vector.
This suggests how our techniques can be used to perform structured motion estimation
without an explicit parametric motion model. First, if the changes in images due to motion
can be observed directly (for example, by computing the differences of images taken before
and after small reference motions are performed), then these can be used as the motion
templates which comprise M. Second, if one or more motion fields can be observed (for
example, by tracking a set of fiducial points in a series of training images), then projecting
each element of the motion field onto the corresponding image gradient yields motion templates
for those motion fields. The linear estimation process described above can then be used to describe
time-varying images in terms of those basis motions.
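A sketch of the second idea: if a small reference motion can be applied (or simulated by warping the reference image), the corresponding column of M can be estimated directly by finite differencing. Here `warp` is a hypothetical resampling helper returning the target region as a flat vector for a given parameter vector.

```python
import numpy as np

def empirical_motion_templates(warp, mu0, step=1e-2):
    """Approximate the columns of M by finite differences of the warped template.

    warp : callable warp(mu) -> (N,) image of the target region warped by mu
           (hypothetical helper; any resampling routine will do)
    mu0  : parameter vector of the reference configuration, shape (n,)
    """
    base = warp(mu0)
    cols = []
    for j in range(len(mu0)):
        mu = mu0.copy()
        mu[j] += step
        cols.append((warp(mu) - base) / step)   # brightness change per unit motion
    return np.stack(cols, axis=1)               # (N, n) "motion template" matrix
```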
3 Illumination-Insensitive Tracking
The systems described above are inherently sensitive to changes in illumination of the target
region. This is not surprising, as the incremental estimation step is effectively computing a
structured optical flow, and optical flow methods are well-known to be sensitive to illumination
changes [28]. Thus, shadowing or shading changes of the target object over time lead
to bias, or, in the worst case, complete loss of the target.
Recently, it has been shown that a relatively small number of "basis" images can often
be used to account for large changes in illumination [5, 17, 22, 24, 43]. Briefly, the reason for
this is as follows. Consider a point p on a Lambertian surface and a collimated light source
characterized by a vector s ∈ IR^3, such that the direction of s gives the direction of the light
rays and ||s|| gives the intensity of the light source. The irradiance at the point p is given by

E = a (n · s),

where n is the unit inwards normal vector to the surface at p and a is the non-negative absorption
coefficient (albedo) of the surface at the point p [28]. This shows that the irradiance
at the point p, and hence the gray level seen by a camera, is linear in s ∈ IR^3.
Therefore, in the absence of self-shadowing, given three images of a Lambertian surface
from the same viewpoint taken under three known, linearly independent light source di-
rections, the albedo and surface normal can be recovered; this is the well-known method
of photometric stereo [50, 46]. Alternatively, one can reconstruct the image of the surface
under a novel lighting direction by a linear combination of the three original images [43]. In
other words, if the surface is purely Lambertian and there is no shadowing, then all images
under varying illumination lie within a 3-D linear subspace of IR N , the space of all possible
images (where N is the number of pixels in the images).
A complication comes when handling shadowing: all images are no longer guaranteed to
lie in a linear subspace [5]. Nevertheless, as done in [24], we can still use a linear model as
an approximation: a small set of basis images can account for much of the shading changes
that occur on patches of non-specular surfaces. Naturally, we need more than three images
(we use between 8 and 15) and a linear subspace of dimension higher than three (we use four
or more) if we hope to provide a good approximation to these effects.
Returning to the problem of region tracking, suppose now that we have a basis of image
vectors B_1, B_2, ..., B_m, where the ith element of each of the basis vectors corresponds to
the image location x_i. Let us choose the first basis vector to be the template image,
B_1 = I(t_0). To model brightness changes, let us choose the second basis vector to be
a column of ones, i.e. B_2 = (1, 1, ..., 1)^T. (In practice, choosing a value close to the mean of the
brightness of the image produces a more numerically stable linear system.) Let us choose the
remaining basis vectors by performing SVD (singular value decomposition) on a set of training
images of the target, taken under varying illumination. We denote the collection of basis vectors
by the matrix B = ( B_1 | B_2 | ... | B_m ).
Suppose now that μ = μ*(t), so that the template image and the current target region
are registered geometrically at time t. The remaining difference between them is due to
illumination. From the above discussion, it follows that interframe changes in the current
target region can be approximated by the template image plus a linear combination of the
basis vectors in B, i.e.

I(μ*(t), t) ≈ I(t_0) + B λ,     (42)

where λ = (λ_1, λ_2, ..., λ_m)^T is a vector of illumination coefficients. Note that because the template image and an image
of ones are included in the basis B, we implicitly handle both variation due to contrast
changes and variation due to brightness changes. The remaining basis vectors are used to
handle more subtle variation - variation that depends both on the geometry of the target
object and on the nature of the light sources.
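A possible construction of the basis B, following the description above (template image, a near-constant column, and a few SVD components of training images); the mean-removal step is an assumption about preprocessing, not something specified in the text.

```python
import numpy as np

def build_illumination_basis(template, training_images, n_extra=4):
    """Assemble the basis B: template image, a near-constant column, and the leading
    left singular vectors of template-removed training images (a sketch of the
    construction described in the text; exact preprocessing may differ)."""
    t0 = template.ravel().astype(float)
    ones = np.full_like(t0, t0.mean())            # near-mean constant column (see footnote)
    X = np.stack([im.ravel().astype(float) for im in training_images], axis=1)
    X = X - t0[:, None]                           # model residual illumination effects
    U, s, _ = np.linalg.svd(X, full_matrices=False)
    return np.column_stack([t0, ones, U[:, :n_extra]])   # (N, m) basis matrix B
```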
Using the vector-space formulation for motion recovery established in the previous section,
it is clear that illumination and geometry can be recovered in one global optimization
step solved via linear methods. Incorporating illumination into (7), we have the following
modified optimization:

O(δμ, λ) = || I(μ + δμ, t + τ) - I(t_0) - B λ ||².     (43)

Substituting (42) into (43) and performing the same simplifications and approximations
as before, we arrive at

O(δμ, λ) ≈ || M δμ - B λ + e ||².     (44)

Solving for δμ and λ simultaneously yields

(δμ, λ) = -( [M | -B]^T [M | -B] )^{-1} [M | -B]^T e.     (45)

In most tracking applications, we are only interested in the motion parameters. We can
eliminate explicit computation of the illumination parameters by first optimizing over λ in (44). Upon
substituting the resulting solution back into (44) and then solving for δμ, we arrive at

δμ = -( M^T (1 - B (B^T B)^{-1} B^T) M )^{-1} M^T (1 - B (B^T B)^{-1} B^T) e.     (46)

Note that if the columns of B are orthonormal vectors, B^T B is the identity matrix.
It is easy to show that in both equations, factoring M into time-invariant and time-varying
components as described above leads to significant computational savings. Since
the illumination basis is time-invariant, the dimensionality of the time-varying portion of
the computation depends only on the number of motion fields to be computed, not on the
illumination model. Hence, we have shown how to compute image motion while accounting
for variations in illumination using no more online computation than would be required to
compute pure motion.
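A minimal sketch of the motion-only solution (46): the illumination coefficients are eliminated by projecting both M and the error vector onto the orthogonal complement of the span of B. Here M, B, and e are assumed given as arrays.

```python
import numpy as np

def motion_with_illumination(M, B, e):
    """Solve the joint linearized problem min ||M dmu - B lam + e||^2, cf. (44)-(46),
    returning only the motion increment dmu."""
    BtB_inv_Bt = np.linalg.solve(B.T @ B, B.T)
    project = lambda V: V - B @ (BtB_inv_Bt @ V)   # P = I - B (B^T B)^{-1} B^T
    dmu, *_ = np.linalg.lstsq(project(M), -project(e), rcond=None)
    return dmu
```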
4 Making Tracking Resistant to Occlusion
As a system tracks objects over a large space, it is not uncommon that other objects "intrude"
into the picture. For example, the system may be in the process of tracking a target region
which is the side of a building when, due to observer motion, a parked car begins to occlude
a portion of that region. Similarly the target object may rotate, causing the tracked region
to "slide off" and pick up a portion of the background. Such intrusions will bias the motion
parameter estimates and, in the long term can potentially cause mistracking. In this section,
we describe how to avoid such problems. For the sake of simplicity, we develop a solution for
the case where we are only recovering motion parameters; the modifications for combined
motion and illumination models are straightforward.
A common approach to this problem is to assume that occlusions create large image
differences which can be viewed as "outliers" by the estimation process [7]. The error metric
is then modified to reduce sensitivity to outliers by solving a robust optimization problem
of the form

O(δμ) = Σ_{j=1}^{N} ρ( (M δμ + e)_j ),     (47)

where ρ is one of a variety of "robust" regression metrics [31].
It is well-known that optimization of (47) is closely related to another approach to robust
estimation - iteratively reweighted least squares (IRLS). We have chosen to implement the
optimization using a somewhat unusual form of IRLS due to Dutter and Huber [16]. In
order to formulate the algorithm, we introduce the notion of an "inner iteration" which
is performed one or more times at each time step. We will use a superscript to denote this
iteration.
Let δμ^i denote the value of δμ computed by the ith inner iteration, with δμ^0 = 0. Define
the vector of residuals in the ith iteration as

r^i = M_0 Σ δμ^i + e.     (48)

We introduce a diagonal weighting matrix W^i which has entries

w^i_{jj} = w(r^i_j),     (49)

where w is a scalar weighting function derived from ρ (given below). The inner iteration cycle at time t + τ consists of performing an estimation step by
solving the linear system

(Σ^T M_0^T M_0 Σ) (δμ^{i+1} - δμ^i) = -Σ^T M_0^T W^i r^i,     (50)

where Σ is evaluated at μ(t), and r^i and W^i are given by (48) and (49), respectively. This
process is repeated for k iterations.
This form of IRLS is particularly efficient for our problem. It does not require recomputation
of M_0 or Σ and, since the weighting matrix is diagonal, does not add significantly to
the overall computation time needed to solve the linear system. In addition, the error vector
e is fixed over all inner iterations, so these iterations do not require the additional overhead
of acquiring and warping images.
As discussed in [16], on linear problems this procedure is guaranteed to converge to a
unique global minimum for a large variety of choices of ρ. In this article, ρ is taken to be a
so-called "windsorizing" function [31], which is of the form

ρ(r) = r²/2 if |r| ≤ c,   and   ρ(r) = c |r| - c²/2 otherwise,     (51)

where r is normalized to have unit variance. The parameter c is a user-defined threshold
which places a limit on the variations of the residuals before they are considered outliers.
This function has the advantage of guaranteeing global convergence of the IRLS method
while being cheap to compute. The updating function for the weighting matrix entries is

w(r) = 1 if |r| ≤ c,   and   w(r) = c / |r| otherwise.     (52)
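The inner-iteration loop might look as follows; note that the step below matches the modified-residual form reconstructed in (48)-(50) above and uses a crude residual scale, so the exact form used in the original implementation may differ.

```python
import numpy as np

def winsor_weights(r, c):
    """IRLS weights for the windsorizing rho in (51)-(52): 1 inside the threshold,
    c/|r| outside."""
    a = np.abs(r)
    return np.where(a <= c, 1.0, c / np.maximum(a, 1e-12))

def robust_increment(M, e, c=2.0, n_inner=3):
    """Inner-iteration loop of the robust estimator (sketch); M stands for M0 Sigma."""
    MtM_inv_Mt = np.linalg.solve(M.T @ M, M.T)   # fixed across inner iterations
    dmu = np.zeros(M.shape[1])
    sigma = np.std(e) + 1e-12                    # crude scale for normalizing residuals
    for _ in range(n_inner):
        r = M @ dmu + e                          # residuals (48); e is fixed
        W = winsor_weights(r / sigma, c)         # diagonal weights (49)
        dmu = dmu - MtM_inv_Mt @ (W * r)         # modified-residual step (50)
    return dmu
```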
As stated, the weighting matrix is computed anew at each iteration, a process which can
require several inner iterations. However, given that tracking is a continuous process, it is
natural to start with an initial weighting matrix that is closely related to that computed at
the end of the previous estimation step. In doing so, two issues arise. First, the fact that
the linear system we are solving is a local linearization of a nonlinear system means that, in
cases when inter-frame motion is large, the effect of higher-order terms of the Taylor series
expansion will cause areas of the image to masquerade as outliers. Second, if we assume that
areas of the image with low weights correspond to intruders, it makes sense to add a "buffer
zone" around those areas for the next iteration to pro-actively cancel the effects of intruder
motion.
Both of these problems can be dealt with by noting that the diagonal elements of W
themselves form an image where "dark areas" (those locations with low value) are areas of
occlusion or intrusion, while "bright areas" (those with value 1) are the expected target. Let
Q(x) be the set of pixel values in the eight-neighborhood of the image coordinate x plus the
value at x itself. We use two common morphological operators [26],

close(W)(x) = max_{v ∈ Q(x)} v   and   open(W)(x) = min_{v ∈ Q(x)} v.
When applied to a weighting matrix image, close has the effect of removing small areas of
outlier pixels, while open increases their size. Between frames of the sequence we propagate
the weighting matrix forward after applying one step of close to remove small areas of outliers
followed by two or three steps of open to buffer detected intruders.
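Under the neighborhood-max/min reading of close and open given above, the propagation of the weight image between frames can be sketched as follows (edge handling and iteration counts are illustrative).

```python
import numpy as np

def neighborhood_op(W, op):
    """Apply max/min over each pixel's 8-neighborhood (plus itself), with edge replication."""
    P = np.pad(W, 1, mode='edge')
    stack = [P[i:i + W.shape[0], j:j + W.shape[1]]
             for i in range(3) for j in range(3)]
    return op(np.stack(stack), axis=0)

def propagate_weights(W, n_open=2):
    """Carry the weight image to the next frame: one 'close' (neighborhood max) to remove
    small outlier areas, then a few 'open' steps (neighborhood min) to grow a buffer zone
    around detected intruders."""
    W = neighborhood_op(W, np.max)
    for _ in range(n_open):
        W = neighborhood_op(W, np.min)
    return W
```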
5 Implementation and Experiments
This section illustrates the performance of the tracking algorithm under a variety of circum-
stances, noting particularly the effects of image warping, illumination compensation, and
outlier detection. All experiments were performed on live video sequences by an SGI Indy
equipped with a 175Mhz R4400 SC processor and VINO image acquisition system.
Figure 2: The columns of the motion Jacobian matrix for the planar target and their geometric
interpretations (rotation, scale, aspect ratio, shear).
5.1 Implementation
We have implemented the methods described above within the X Vision environment [23].
The implemented system incorporates all of the linear motion models described in Section
2, non-orthonormal illumination bases as described in Section 3, and outlier rejection using
the algorithm described in Section 4.
The image warping required to support the algorithm is implemented by factoring linear
transformations into a rotation matrix and a positive-definite upper-diagonal matrix. This
factoring allows image warping to be implemented in two stages. In the first stage, an image
region surrounding the target is acquired and rotated using a variant on standard Bresenham
line-drawing algorithms [18]. The acquired image is then scaled and sheared using a bilinear
interpolation. The resolution of the region is then reduced by averaging neighboring pixels.
Spatial and temporal derivatives are computed by applying Prewitt operators on the reduced
scale images. More details on this level of the implementation can be found in [23].
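One way to obtain the rotation/upper-triangular factoring of the linear part of the warp is a sign-corrected QR decomposition; this is a sketch under the assumption det A > 0, and is not necessarily the routine used in X Vision.

```python
import numpy as np

def rotation_uppertriangular(A):
    """Factor a nonsingular 2x2 matrix A (det > 0) as A = R @ U, with R a rotation and
    U upper triangular with positive diagonal, so warping can be done as a rotation
    followed by a scale/shear."""
    Q, U = np.linalg.qr(A)
    signs = np.sign(np.diag(U))
    signs[signs == 0] = 1.0
    # flip signs so U has a positive diagonal while keeping Q U = A
    return Q * signs, signs[:, None] * U

A = np.array([[1.2, 0.3], [-0.1, 0.9]])
R, U = rotation_uppertriangular(A)
assert np.allclose(R @ U, A) and np.allclose(R.T @ R, np.eye(2))
```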
Timings of the algorithm indicate that it can perform frame-rate (30 Hz) tracking of
image regions of up to 100 × 100 pixels at one-half resolution undergoing affine distortions
and illumination changes. Similar performance has been achieved on a 120Mhz Pentium
processor and 70 Mhz Sun SparcStation. Higher performance is achieved for smaller regions,
lower resolutions, or fewer parameters. For example, tracking the same size region while
computing just translation at one-fourth resolution takes just 4 milliseconds per cycle.
5.2 Planar Tracking
As a baseline, we first consider tracking a non-specular planar object-the cover of a book.
Affine warping augmented with brightness and contrast compensation is the best possible
linear approximation to this case (it is exact for an orthographic camera model and purely
Lambertian surface). As a point of comparison, recent work by Black and Jepson [7] used
the rigid motion plus scaling model for SSD-based region tracking. Their reduced model is
more efficient and may be more stable since fewer parameters must be computed, but it does
ignore the effects of changing aspect ratio and shear.
We tested both the rigid motion plus scale (RM+S) and full affine motion models on
the same live video sequence of the book cover in motion. Figure 2 shows the set of motion
templates (the columns of the motion matrix) for an 81 × 72 region of a book cover tracked
at one third resolution. Figure 3 shows the results of tracking. The upper series of images
shows several images of the object with the region tracked indicated with a black frame (the
RM+S algorithm) and a white frame (the FA algorithm). The middle row of images shows
the output of the warping operator from the RM+S algorithm. If the computed parameters
were error-free, these images would be identical. However, because of the inability to correct
for aspect ratio and skew, the best fit leads to a skewed image. The bottom row shows
the output of the warping operator for the FA algorithm. Here we see that the full affine
warping is much better at accommodating the full range of image distortions. The graph at
the bottom of the figure shows the least squares residual (in squared gray-values per pixel).
Here, the difference between the two geometric models is clearly evident.
5.3 Human Face Tracking
There has been a great deal of recent interest in face tracking in the computer vision literature
[8, 14, 36]. Although faces can produce images with significant appearance variation,
empirical results suggest that a small number of basis images of a face gathered
under different illuminations is sufficient to accurately account for most gross shading
and illumination effects [24]. At the same time, the depth variations exhibited by facial features
are small enough to be well-approximated by an affine warping model. (Because of additional
data collection overhead, the tracking performance in the experiments presented here is slower
than the figures stated above.)

Figure 3: Top, several images of a planar region and the corresponding warped image computed
by a tracker computing position, orientation and scale (RM+S), and one computing
a full affine deformation (FA). The image at the left is the initial reference image (frames 0,
50, 70, 120, 150, and 230 are shown). Bottom, the graph of the SSD residuals for both algorithms.

The following
experiments demonstrate the ability of our algorithm to track a face as it undergoes changes
in pose and illumination, and under partial occlusion. Throughout, we assume the subject is
roughly looking toward the camera, so we use the rigid motion plus scaling (RM+S) motion
model.
Figure 1 shows the columns of the motion matrix for this model.
5.3.1 Geometry
We first performed a test to determine the accuracy of the computed motion parameters
for the face and to investigate the effect of the illumination basis on the sensitivity of those
estimates. During this test, we simultaneously executed two tracking algorithms: one using
the rigid motion plus scale model (RM+S) and one which additionally included an illumination
model for the face (RM+S+I). The algorithms were executed on a sequence which
did not contain large changes in the illumination of the target. The top row of Figure 4
shows images excerpted from the video sequence. In each image, the black frames denote
the region selected as the best match by RM+S and the white frames correspond to the
best match computed by RM+S+I. For this test, we would expect both algorithms to be
quite accurate and to exhibit similar performance unless the illumination basis significantly
affected the sensitivity of the computation. As is apparent from the figures, the computed
motion parameters of both algorithms are extremely similar for the entire run - so close
that in many cases one frame is obscured by the other.
In order to demonstrate the absolute accuracy of the tracking solution, below each live
image in Figure 4 we have included the corresponding rectified image computed by RM+S+I.
The rectified image at time 0 is the reference template. If the motion of the target fit the
RM+S motion model, and the computed parameters were exact, then we would expect each
subsequent rectified image to be identical to the reference template. Despite the fact that
the face is non-planar and we are using a reduced motion model, we see that the algorithm
is quite effective at computing an accurate geometric match.
Finally, the graph in Figure 4 shows the residuals of the linearized SSD computation at
each time step. As is apparent from the figures, the residuals of both algorithms are also
extremely similar for the entire run. From this experiment we conclude that, in the absence
of illumination changes, the performance of both algorithms is quite similar - including
illumination models does not appear to reduce accuracy.
Figure 4: Top row, excerpts from a sequence of tracked images of a face. The black frames
represent the region tracked by an SSD algorithm using no illumination model (RM+S) and
the white frames represent the regions tracked by an algorithm which includes an illumination
model (RM+S+I). In some cases the estimates are so close that only one box is visible.
Middle row, the region within the frame warped by the current motion estimate. Bottom
row, the residuals of the algorithms expressed in gray-scale units per pixel as a function of
time.
Figure 5: The illumination basis for the face (B). The left two images are included to
compensate for brightness and contrast, respectively, while the remaining four images compensate
for changes in lighting direction.
5.3.2 Illumination
In a second set of experiments, we kept the face nearly motionless and varied the illumination.
We used an illumination basis of four orthogonal image vectors. This basis was computed
offline by acquiring ten images of the face under various lighting conditions. A singular value
decomposition (SVD) was applied to the resulting image vectors and the vectors with the
maximum singular values were chosen to be included in the basis. The illumination basis is
shown in Figure 5.
Figure 6 shows the effects of illumination compensation for the illumination situations
depicted in the first row. As with warping, if the compensation were perfect, the images of
the bottom row would appear to be identical up to brightness and contrast. In particular,
note how the strong shading effects of frames 70 through 150 have been "corrected" by the
illumination basis.
5.3.3 Combining Illumination and Geometry
Next, we present a set of experiments illustrating the interaction of geometry and illumi-
nation. In these experiments we again executed two algorithms, labeled RM+S and
RM+S+I. As the algorithms were operating, a light was periodically switched on and off
and the face moved slightly. The results appear in Figure 7. In the residual graph, we see
that the illumination basis clearly "accounts" for the shading on the face quite well, leading
to a much lower fluctuation of the residuals. The sequence of images shows an excerpt near
the middle of the sequence where the RM+S algorithm (which could not compensate for il-
Figure 6: The first row of images shows excerpts of a tracking sequence. The second row is
a magnified view of the region in the white frame. The third row contains the images in the
second row after adjustment for illumination using the illumination basis shown in Figure 5
(for sake of comparison we have not adjusted for brightness and contrast).
Figure 7: Top, an excerpt from a tracking sequence containing changes in both geometry and
illumination (frames 90 through 150 are shown). The black frame corresponds to the algorithm
without illumination (RM+S) and the white frame corresponds to the algorithm with an
illumination basis (RM+S+I).
Note that the algorithm which does not use illumination completely loses the target until
the original lighting is restored. Bottom, the residuals, in gray scale units per pixel, of the
two algorithms as a light is turned on and off.
lumination changes) completely lost the target for several frames, only regaining it after the
original lighting was restored. Since the target was effectively motionless during this period,
this can be completely attributed to biases due to illumination effects. Similar sequences
with larger target motions often cause the purely geometric algorithm to lose the target
completely.
5.3.4 Tracking With Outliers
Finally, we illustrate the performance of the method when the image of the target becomes
partially occluded. We again track a face. The motion and illumination basis are the same
as before. In the weighting matrix calculations the pixel variance was set to 5 and the outlier
threshold was set to 5 variance units.
The sequence is an "office" sequence which includes several "intrusions" including the
background, a piece of paper, a telephone, a soda can, and a hand. As before we executed two
versions of the tracker, the non-robust algorithm from the previous experiment (RM+S+I)
and a robust version (RM+S+I+O). Figure 8 shows the results. The upper series of images
shows the region acquired by both algorithms (the black frame corresponds to RM+S+I, the
white to RM+S+I+O). As is clear from the sequence, the non-robust algorithm is disturbed
significantly by the occlusion, whereas the robust algorithm is much more stable. In fact, a
slight motion of the head while the soda can is in the image caused the non-robust algorithm
to mistrack completely. The middle series of images shows the output of the warping operation
for the robust algorithm. The lower row of images depicts the weighting values attached
to each pixel in the warped image. Dark areas correspond to "outliers." Note that, although
the occluded region is clearly identified by the algorithm, there are some small regions away
from the occlusion which received a slightly reduced weight. This is due to the fact that
the robust metric used introduces some small bias into the computed parameters. In areas
where the spatial gradient is large (e.g. near the eyes and mouth), this introduces some false
rejection of pixels.
It is also important to note that the dynamical performance of the tracker is significantly
reduced by including outlier rejection. Large, fast motions tend to cause the algorithm to "turn
off" areas of the image where there are large gradients, slowing convergence. At the same
time, performing outlier rejection is more computationally intensive, as it requires explicit
computation of both the motion and illumination parameters to calculate the residual values.
6 Discussion and Conclusions
We have shown a straightforward and efficient solution to the problem of tracking regions
undergoing geometric distortion, changing illumination, and partial occlusion. The method
is simple, yet robust, and it builds on an already popular method for solving spatial and
temporal correspondence problems.
Although the focus in this article has been on parameter estimation techniques for tracking
using image rectification, the same estimation methods can be used for directly controlling
devices. For example, instead of computing a parameter estimate μ, the incremental solutions
δμ can be used to control the position and orientation of a camera so as to stabilize the
target image by active motion. Hybrid combinations of camera control and image warping
are also possible.
Figure 8: The first row of images shows excerpts of a tracking sequence with occurrences
of partial occlusion. The black frame corresponds to the algorithm without outlier rejection
(RM+S+I) and the white frame corresponds to the algorithm with outlier rejection
(RM+S+I+O). The second row is a magnified view of the region in the white frame. The
third row contains the corresponding outlier images where darker areas mark outliers. The
graph at the bottom compares the residual values for both algorithms.
One possible objection to the methods is the requirement that the change from frame
to frame is small, limiting the speed at which objects can move. Luckily, there are several
means for improving the dynamical performance of the algorithms. One possibility is to
include a model for the motion of the underlying object and to incorporate prediction into
the tracking algorithm. Likewise, if a model of the noise characteristics of images is available,
the updating method can be modified to incorporate this model. In fact, the linear form of the
solution makes it straightforward to incorporate the estimation algorithm into a Kalman
filter or similar iterative estimation procedure.
Performance can also be improved by operating the tracking algorithm at multiple levels
of resolution. One possibility, as is used by many authors [7, 44], is to perform a complete
coarse to fine progression of estimation steps on each image in the sequence. Another possi-
bility, which we have used successfully in prior work [23], is to dynamically adapt resolution
based on the motion of the target. That is, when the target moves quickly estimation is performed
at a coarse resolution, and when it moves slowly the algorithm changes to a higher
resolution. The advantage of this approach is that it not only increases the range over which
the linearized problem is valid, but it also reduces the computation time required on each
image when motion is fast.
We are actively continuing to evaluate the performance of these methods, and to extend
their theoretical underpinnings. One area that still needs attention is the problem of determining
an illumination basis online, i.e. while tracking the object. Initial experiments in this
direction have shown that online determination of the illumination basis can be achieved,
although we have not included such results in this paper. As in [7], we are also exploring
the use of basis images to handle changes of view or aspect not well addressed by warping.
We are also looking at the problem of extending the method to utilize shape information
on the target when such information is available. In particular, it is well known [49] that
under orthographic projection, the image deformations of a surface due to motion can be
described with a linear motion model. This suggests that our methods can be extended
to handle such models. Furthermore, as with the illumination basis, it may be possible
to estimate the deformation models online, thereby making it possible to efficiently track
arbitrary objects under changes in illumination, pose, and partial occlusion.
Acknowledgments
This research was supported by ARPA grant N00014-93-1-1235, Army DURIP grant DAAH04-95-
1-0058, National Science Foundation grant IRI-9420982, Army Research Office grant DAAH04-95-
1-0494, and by funds provided by Yale University. The second author would like to thank David
Mumford, Alan Yuille, David Kriegman, and Peter Hallinan for discussions contributing the ideas
in this paper.
--R
A computational framework and an algorithm for the measurement of structure from motion.
Maintaining representations of the environment of a mobile robot.
Tracking medical 3D data with a deformable parametric model.
What is the set of images of an object under all possible lighting conditions.
Fast object recognition in noisy images using simulated annealing.
Robust matching and tracking of articulated objects using a view-based representation
Tracking and recognizing rigid and non-rigid facial motions using local parametric models of image motion
A framework for spatio-temporal control in the tracking of visual contour
A state-based technique for the summarization of recognition of gesture
Object models from contour sequences.
Measurement and integration of 3-D structures by tracking edge lines
Active face tracking and pose estimation in an interactive room.
Dynamic monocular machine vision.
Numerical methods for the nonlinear robust regression problem.
Computer Graphics.
Tracking of occluded vehicles in traffic scenes.
Tracking humans in action: A 3D model-based approach
Visual tracking of known three-dimensional objects
A portable substrate for real-time vision applications
A Deformable Model for Face Recognition Under Arbitrary Lighting Condi- tions
Computer and Robot Vision.
A fruit-tracking system for robotic harvesting
Computer Vision.
Visual surveillance monitoring and watching.
Using stereo vision to pursue moving agents with a mobile robot.
Robust Statistics.
A tutorial introduction to visual servo control.
Contour tracking by stochastic propagation of conditional density.
Robust model-based motion tracking through the integration of search and estimation
An iterative image registration technique with an application to stereo vision.
Face tracking and pose representation.
Visual learning and recognition of 3-D objects from appearence
Visual tracking of a moving target by a camera mounted on a robot: A combination of control and vision.
Visual tracking of high DOF articulated structures: An application to human hand tracking.
Visual tracking with deformation models.
Learning dynamics of complex motions from image sequences.
Affine Analysis of Image Sequences.
Geometry and Photometry in 3D Visual Recognition.
Good features to track.
A model-based integrated approach to track myocardial deformation using displacement and velocity constraints
Determining Shape and Reflectance Using Multiple Images.
Image mosaicing for tele-reality applications
Shape and motion from image streams under orthography: A factorization method.
Recognition by a linear combination of models.
Analysing images of curved surfaces.
--TR
--CTR
Guopu Zhu , Qingshuang Zeng , Changhong Wang, Rapid and brief communication: Efficient edge-based object tracking, Pattern Recognition, v.39 n.11, p.2223-2226, November, 2006
Qinghong Guo , Mrinal K. Mandal , Micheal Y. Li, Efficient Hodge-Helmholtz decomposition of motion fields, Pattern Recognition Letters, v.26 n.4, p.493-501, March 2005
Simon Lucey , Iain Matthews, Face refinement through a gradient descent alignment approach, Proceedings of the HCSNet workshop on Use of vision in human-computer interaction, p.43-49, November 01-01, 2006, Canberra, Australia
Kuang-Chih Lee , Jeffrey Ho , Ming-Hsuan Yang , David Kriegman, Visual tracking and recognition using probabilistic appearance manifolds, Computer Vision and Image Understanding, v.99 n.3, p.303-331, September 2005
Alastair Reid , John Peterson , Greg Hager , Paul Hudak, Prototyping real-time vision systems: an experiment in DSL design, Proceedings of the 21st international conference on Software engineering, p.484-493, May 16-22, 1999, Los Angeles, California, United States
Iain Matthews , Takahiro Ishikawa , Simon Baker, The Template Update Problem, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.6, p.810-815, June 2004
Frdric Jurie , Michel Dhome, Hyperplane Approximation for Template Matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.7, p.996-1000, July 2002
David Hasler , Luciano Sbaiz , Sabine Ssstrunk , Martin Vetterli, Outlier Modeling in Image Matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.3, p.301-315, March
Timothy F. Cootes , Gareth J. Edwards , Christopher J. Taylor, Active Appearance Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.6, p.681-685, June 2001
A. Del Bue , F. Smeraldi , L. Agapito, Non-rigid structure from motion using ranklet-based tracking and non-linear optimization, Image and Vision Computing, v.25 n.3, p.297-310, March, 2007
Jaime Ortegn-Aguilar , Eduardo Bayro-Corrochano, Lie Algebra and System Identification Techniques for 3D Rigid Motion Estimation and Monocular Tracking, Journal of Mathematical Imaging and Vision, v.25 n.2, p.173-185, September 2006
Louis-Philippe Morency , Trevor Darrell, From conversational tooltips to grounded discourse: head poseTracking in interactive dialog systems, Proceedings of the 6th international conference on Multimodal interfaces, October 13-15, 2004, State College, PA, USA
Masao Shimizu , Masatoshi Okutomi, Multi-Parameter Simultaneous Estimation on Area-Based Matching, International Journal of Computer Vision, v.67 n.3, p.327-342, May 2006
Oliver Williams , Andrew Blake , Roberto Cipolla, Sparse Bayesian Learning for Efficient Visual Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.8, p.1292-1304, August 2005
David Schreiber, Robust template tracking with drift correction, Pattern Recognition Letters, v.28 n.12, p.1483-1491, September, 2007
Eduardo Bayro-Corrochano , Jaime Ortegn-Aguilar, Lie algebra approach for tracking and 3D motion estimation using monocular vision, Image and Vision Computing, v.25 n.6, p.907-921, June, 2007
Simon Baker , Iain Matthews, Lucas-Kanade 20 Years On: A Unifying Framework, International Journal of Computer Vision, v.56 n.3, p.221-255, February-March 2004
Iain Matthews , Jing Xiao , Simon Baker, 2D vs. 3D Deformable Face Models: Representational Power, Construction, and Real-Time Fitting, International Journal of Computer Vision, v.75 n.1, p.93-113, October 2007
Stephen Benoit , Frank P. Ferrie, Towards direct recovery of shape and motion parameters from image sequences, Computer Vision and Image Understanding, v.105 n.2, p.145-165, February, 2007
Jie Wei , Izidor Gertner, MRF-MAP-MFT visual object segmentation based on motion boundary field, Pattern Recognition Letters, v.24 n.16, p.3125-3139, December
Charles S. Wiles , Atsuto Maki , Natsuko Matsuda, Hyperpatches for 3D Model Acquisition and Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.12, p.1391-1403, December 2001
S. Benhimane , E. Malis, Homography-based 2D Visual Tracking and Servoing, International Journal of Robotics Research, v.26 n.7, p.661-676, July 2007
Horst W. Haussecker , David J. Fleet, Computing Optical Flow with Physical Models of Brightness Variation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.6, p.661-673, June 2001
Hayman , Torfi Thrhallsson , David Murray, Tracking While Zooming Using Affine Transfer and Multifocal Tensors, International Journal of Computer Vision, v.51 n.1, p.37-62, January
Luca Vacchetti , Vincent Lepetit , Pascal Fua, Stable Real-Time 3D Tracking Using Online and Offline Information, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.10, p.1385-1391, October 2004
Marco La Cascia , Stan Sclaroff , Vassilis Athitsos, Fast, Reliable Head Tracking under Varying Illumination: An Approach Based on Registration of Texture-Mapped 3D Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.4, p.322-336, April 2000
Christopher Rasmussen , Gregory D. Hager, Probabilistic Data Association Methods for Tracking Complex Visual Objects, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.6, p.560-576, June 2001
King Yuen Wong , Minas E. Spetsakis, Tracking based motion segmentation under relaxed statistical assumptions, Computer Vision and Image Understanding, v.101 n.1, p.45-64, January 2006
Iain Matthews , Simon Baker, Active Appearance Models Revisited, International Journal of Computer Vision, v.60 n.2, p.135-164, November 2004
Rmi Coudarcher , Florent Duculty , Jocelyn Serot , Frdric Jurie , Jean-Pierre Derutin , Michel Dhome, Managing algorithmic skeleton nesting requirements in realistic image processing applications: the case of the SKiPPER-II parallel programming environment's operating model, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.1005-1023, 1 January 2005
Gang Hua , Ying Wu, Sequential mean field variational analysis of structured deformable shapes, Computer Vision and Image Understanding, v.101 n.2, p.87-99, February 2006
Peter N. Belhumeur , James S. Duncan , Gregory D. Hager , Drew V. Mcdermott , A. Stephen Morse , Steven W. Zucker, Computational Vision at Yale, International Journal of Computer Vision, v.35 n.1, p.5-12, Nov. 1999
Kentaro Toyama , Gregory D. Hager, Incremental Focus of Attention for Robust Vision-Based Tracking, International Journal of Computer Vision, v.35 n.1, p.45-63, Nov. 1999
Markus Quaritsch , Markus Kreuzthaler , Bernhard Rinner , Horst Bischof , Bernhard Strobl, Autonomous multicamera tracking on embedded smart cameras, EURASIP Journal on Embedded Systems, v.2007 n.1, p.35-35, January 2007
Muriel Pressigout , Eric Marchand, Real-time Hybrid Tracking using Edge and Texture Information, International Journal of Robotics Research, v.26 n.7, p.689-713, July 2007
Stan Sclaroff , John Isidoro, Active blobs: region-based, deformable appearance models, Computer Vision and Image Understanding, v.89 n.2-3, p.197-225, February
Pedro M. Q. Aguiar , Jos M. F. Moura, Rank 1 Weighted Factorization for 3D Structure Recovery: Algorithms and Performance Analysis, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.9, p.1134-1049, September
David J. Fleet , Michael J. Black , Oscar Nestares, Bayesian inference of visual motion boundaries, Exploring artificial intelligence in the new millennium, Morgan Kaufmann Publishers Inc., San Francisco, CA,
Fernando De la Torre , Michael J. Black, Robust parameterized component analysis: theory and applications to 2D facial appearance models, Computer Vision and Image Understanding, v.91 n.1-2, p.53-71, July
V. Javier Traver , Filiberto Pla, Similarity motion estimation and active tracking through spatial-domain projections on log-polar images, Computer Vision and Image Understanding, v.97 n.2, p.209-241, February 2005
Jos M. Buenaposada , Luis Baumela, A computer vision based human-robot interface, Autonomous robotic systems: soft computing and hard computing methodologies and applications, Physica-Verlag GmbH, Heidelberg, Germany,
W. Zhao , R. Chellappa , P. J. Phillips , A. Rosenfeld, Face recognition: A literature survey, ACM Computing Surveys (CSUR), v.35 n.4, p.399-458, December
Richard Szeliski, Image alignment and stitching: a tutorial, Foundations and Trends in Computer Graphics and Vision, v.2 n.1, p.1-104, January 2006
Vincent Lepetit , Pascal Fua, Monocular model-based 3D tracking of rigid objects, Foundations and Trends in Computer Graphics and Vision, v.1 n.1, p.1-89, September 2006 | motion estimation;real-time vision;visual tracking;illumination;robust statistics |
290124 | Scale-Space Derived From B-Splines. | Abstract: It is well-known that the linear scale-space theory in computer vision is mainly based on the Gaussian kernel. The purpose of the paper is to propose a scale-space theory based on B-spline kernels. Our aim is twofold. On one hand, we present a general framework and show how B-splines provide a flexible tool to design various scale-space representations: continuous scale-space, dyadic scale-space frame, and compact scale-space representation. In particular, we focus on the design of continuous scale-space and dyadic scale-space frame representation. A general algorithm is presented for fast implementation of continuous scale-space at rational scales. In the dyadic case, efficient frame algorithms are derived using B-spline techniques to analyze the geometry of an image. Moreover, the image can be synthesized from its multiscale local partial derivatives. Also, the relationship between several scale-space approaches is explored. In particular, the evolution of wavelet theory from traditional scale-space filtering can be well understood in terms of B-splines. On the other hand, the behavior of edge models, the properties of completeness, causality, and other properties in such a scale-space representation are examined in the framework of B-splines. It is shown that, besides the good properties inherited from the Gaussian kernel, the B-spline derived scale-space exhibits many advantages for modeling visual mechanism with regard to the efficiency, compactness, orientation feature, and parallel structure. | Introduction
Scale is a fundamental aspect of early image representation. Koenderink [1] emphasized that the
problem of scale must be faced in any imaging situation. A multiscale representation is of crucial
importance if one aims at describing the structure of the world. Both psychophysical and
physiological experiments have confirmed that multiscale transformed information appears in the
visual cortex of mammals. This motivates the interpretation of image structures in
terms of spatial scale in computer vision. Researchers such as Burt and Adelson [6], Koenderink
[1], Marr [10], Witkin [11] and Rosenfeld [2] have demonstrated the necessity and advantages of using operators
of different sizes for extracting multiscale information from an image. For a more detailed review, see
[3].
The Gaussian scale-space approach, as introduced by Witkin [11], is an embedding
of the original signal into a one-parameter family of derived signals constructed by convolution with
Gaussian kernels of increasing width. One reason the traditional scale-space is mainly based on
the Gaussian kernel is that the Gaussian function is the unique kernel which satisfies the causality
property guaranteed by the scaling theorem [14], [15], [16], [19]; it states that no new feature points
are created with increasing scale. Another reason is that the response of the human retina resembles a
Gaussian function. Neurophysiological research by Young [22] has shown that there are receptive
field profiles in the mammalian retina and visual cortex whose measured response profiles can be
well modeled by superpositions of Gaussian derivatives. Therefore, the Gaussian function is suitable
for modeling the human visual system.
In practice, since the computational load becomes extremely heavy as the scale gets larger,
many techniques have been proposed for the efficient implementation of scale-space filtering. Among them,
B-splines or binomials have been widely used to approximate the Gaussian kernel; examples
include Wells [5], Ferrari et al. [8], [9], Poggio et al. [20], and Unser, Aldroubi and Eden [25], [26].
For a more compact representation, the pyramid technique is another widely used representation,
which combines a sub-sampling operation with a smoothing step. Historically, pyramids have provided important
steps towards scale-space theory. The low-pass pyramid representation proposed by Burt [6], [7]
is a famous example, which is also closely related to B-spline techniques.
The general idea of representing a signal at multiple scales is not new to us. It is through wavelet
theory that these early ideas have been well formulated and refined. In fact, this is largely due
to the contribution of B-spline techniques. As will be shown later, the orthogonal multiresolution
pyramid originally proposed by S. Mallat [40] and the biorthogonal pyramid [28], [46], [47] in wavelet
theory can all be derived from B-splines [30], [33], [36]. Other types of wavelets, such as the wavelets
on an interval [48], the periodic wavelets [37] and the cardinal spline wavelets [29], are all related to
B-splines.
Motivated by these observations, the purpose of this paper is to build a more general framework
of scale-space representation in the context of B-splines as an improvement of the traditional scale-space
theory. By and large, the paper is divided into two parts. Firstly, we present a systematic
development of the scale-space representation in the framework of B-splines. In particular, we
focus on two classes of scale-space design. It is shown that if an image is represented as a B-spline
expansion, efficient subdivision algorithms can be designed to give a geometric description at
varying degrees of detail. The various scale-space representations are derived from B-splines in
different forms. For continuous scale-space representation, a general algorithm is derived for fast
and parallel implementation at rational scales. Several classical fast algorithms are shown to be
special cases under some conditions. Differential operators have been used for multiscale geometric
description of an image. However, it is not clear whether an image can be synthesized from these
differential descriptors. Using B-spline techniques frame algorithms are designed to express the
image as combinations of multiscale local partial derivatives. These operators include the gradient
operator, second directional operator, Laplacian operator and oriented operators. At the same time
the intrinsic relationship between wavelet theory and the traditional linear scale-space approach is
exhibited. Although the B-spline has been used in practice in place of the Gaussian, little effort
has been devoted to considering its scale-space behavior directly. Therefore, the second part of the paper is devoted
to examining the properties of the B-spline derived scale-space. In particular, the advantages of such
a scale-space representation are highlighted. It is shown that the B-spline derived scale-space
inherits most of the nice properties of the Gaussian derived scale-space. Moreover, the B-spline
kernel outperforms the Gaussian kernel in that it can provide a more meaningful, efficient and flexible
description of image information for multiscale feature extraction.
The organization of the paper is as follows. In Section 2, some fundamental properties of B-splines
are reviewed, which also explain why a B-spline kernel is a good kernel for scale-space design.
Following this section, we categorize three types of scale-space representation. In particular, we focus
on the design of continuous scale-space and dyadic scale-space frame representation. The relation
with compact scale-space is discussed. From the viewpoint of B-splines the evolution of wavelets
from classical continuous scale-space is well understood. Moreover, the equivalence of several famous
scale-space methods is explored. In Section 4 a general procedure is presented to study the edge
models in the B-spline scale-space. In Section 5 some basic properties of the B-spline derived scale-space
are investigated in parallel with those of the Gaussian derived scale-space. These include the
completeness property, the causality property, the orientation feature, and so on. Conclusions are given
in Section 6. Finally, Appendices A and B are given in Sections 7 and 8, respectively, in order to make the
paper mathematically complete.
2.1 Notations and definitions
We adopt the convention of [25]. Let L^2(R) be the Hilbert space of measurable, square-integrable
functions on R and l^2(Z) be the vector space of square-summable sequences. We denote the central
continuous B-spline of order n by \beta^n(x), which can be generated by repeated convolution of a
B-spline of order 0,
\beta^n(x) = \beta^0 * \beta^0 * \cdots * \beta^0(x)  ((n+1) factors),   (1)
where the 0th-order B-spline \beta^0(x) is the pulse function with support [-1/2, 1/2]. The Fourier transform
of \beta^n(x) is
\hat{\beta}^n(\omega) = ( \sin(\omega/2) / (\omega/2) )^{n+1}.   (2)
For an integer m \ge 1, \beta^n_m(x) is defined as the nth-order continuous B-spline dilated by a scale
factor m, i.e.,
\beta^n_m(x) = \beta^n(x/m).   (3)
The discrete B-spline of order n at scale m is defined as
B^n_m(k) = b^0_m * b^0_m * \cdots * b^0_m(k)  ((n+1) factors),   (4)
where b^0_m is a normalized sampled pulse of width m.
The discrete sampled B-spline b^n_m(k) of order n and integer coarseness m \ge 1 is obtained by
directly sampling the nth-order continuous B-spline at the scale m:
b^n_m(k) = \beta^n_m(k) = \beta^n(k/m).   (5)
We write b^n = b^n_1. Consequently, the frequency response of the directly sampled B-spline is an
aliased version of the frequency response of the continuous B-spline, since
\hat{b}^n_m(\omega) = \sum_{j \in Z} \hat{\beta}^n_m(\omega + 2\pi j),   (6)
which is due to Poisson's summation formula.
The discrete convolution between two sequences a and b in l^2(Z) is the sequence b * a:
(b * a)(k) = \sum_{l \in Z} b(k - l) a(l).   (7)
Under this definition, the convolution is commutative. The convolution inverse (b)^{-1} of a sequence
b is defined by
((b)^{-1} * b)(k) = \delta(k),   (8)
where \delta(k) is the unit impulse whose value is 1 at 0 and 0 elsewhere.
The decimation operator [b]_{\downarrow m} down-samples the sequence b by the integer factor m, i.e.,
[b]_{\downarrow m}(k) = b(mk).   (9)
Conversely, the operator [b]_{\uparrow m} up-samples by the integer factor m, i.e., it takes a discrete signal b
and expands it by padding zeros between consecutive samples:
[b]_{\uparrow m}(k) = b(k/m) if k/m \in Z; 0 otherwise.   (10)
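For illustration, the discrete operators just defined can be realized in a few lines of Python. This sketch is ours, not part of the paper, and the unit-sum normalization of the discrete B-spline is one reasonable choice among several:

import numpy as np

def box(m):
    # normalized sampled pulse of width m (b^0_m)
    return np.ones(m) / m

def discrete_bspline(n, m):
    # (n+1)-fold convolution of the width-m box: the discrete B-spline B^n_m
    b = box(m)
    for _ in range(n):
        b = np.convolve(b, box(m))
    return b

def downsample(b, m):
    # [b]_{down m}(k) = b(mk)
    return b[::m]

def upsample(b, m):
    # [b]_{up m}: insert m-1 zeros between consecutive samples
    out = np.zeros(len(b) * m)
    out[::m] = b
    return out

if __name__ == "__main__":
    B32 = discrete_bspline(3, 2)      # cubic discrete B-spline at scale 2
    print(B32, B32.sum())             # coefficients sum to 1 with this normalization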
2.2 The similarity between Gaussian and B-spline
B-splines are good approximations of the Gaussian kernel which is commonly used in computer
vision. This is a consequence of the central limit theorem. For a review of the B-spline window
and other famous filters, see Torachi et al. [39]. In [28] Unser, Aldroubi and Eden have presented a
more general proof that B-splines converge to the Gaussian function in L^p(R), \forall p \in [2, +\infty), as the
order of the spline n tends to infinity. Since the variance of the nth-order B-spline is (n+1)/12, the
approximation relation is as follows:
\beta^n(x) \approx \sqrt{6 / (\pi (n+1))} \exp( -6 x^2 / (n+1) ).   (11)
Furthermore, by numerical computation [28] it was shown that the cubic B-spline is already near
optimal in terms of time/frequency localization in the sense that its variance product is within 2% of
the limit specified by the uncertainty principle. A graphical comparison between the Gaussian kernel
and the cubic B-spline is given in Fig. 1. Moreover, both the physiological and biological experiments
[22] have shown that the human visual system can be modeled with the Gaussian kernel. Therefore,
B-splines are also suitable for modeling biological vision due to their close approximation to the
Gaussian kernel.
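The closeness of the cubic B-spline to a Gaussian of matched variance can be checked numerically. The following Python snippet is only an illustration of the statement above; the matched variance (n+1)/12 is the standard value for the centered B-spline:

import numpy as np

def cubic_bspline(x):
    # closed-form centered cubic B-spline
    ax = np.abs(x)
    return np.where(ax <= 1, 2/3 - ax**2 + ax**3 / 2,
           np.where(ax <= 2, (2 - ax)**3 / 6, 0.0))

def gaussian(x, var):
    return np.exp(-x**2 / (2 * var)) / np.sqrt(2 * np.pi * var)

x = np.linspace(-3, 3, 601)
b = cubic_bspline(x)
g = gaussian(x, (3 + 1) / 12.0)       # matched variance (n+1)/12 with n = 3
print("max abs difference:", np.max(np.abs(b - g)))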
2.3 Stable hierarchical representation of a signal by B-splines
Another significant property of the B-spline of a given order n is that it is the unique compactly
supported refinable spline function of order n which can provide a stable hierarchical representation
of a signal at different scales. It has been proven [38] that a compactly supported spline is m-refinable
and stable if and only if it is a shifted B-spline. Let h ? 0 and define the polynomial spline space S n
consisting of the dilated and shifted B-splines of order n (n is odd, which we will assume throughout
the paper) by
Then
and
The embedding property (13) follows from the fact that the B-spline fi n (x) is m-refinable, i.e., it
satisfies the following m-scale relation,m fi n ( x
The m-refinability of the B-splines can be easily verified [29], [38]. It also establishes the intrinsic
relationship between the continuous B-spline and the discrete B-spline. If we take m = 2, it is just the
commonly used two-scale relation, and B^n_2(k) is the discrete binomial kernel. H. Olkkonen [53] has used the
binomial kernels for designing multiresolution wavelet bases.
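As a quick numerical check of the two-scale (m = 2) case of this relation, the following Python snippet verifies it for the cubic B-spline; the centering of the binomial mask (1, 4, 6, 4, 1)/8 used here is a common convention and is our choice for the illustration:

import numpy as np

def cubic_bspline(x):
    ax = np.abs(x)
    return np.where(ax <= 1, 2/3 - ax**2 + ax**3 / 2,
           np.where(ax <= 2, (2 - ax)**3 / 6, 0.0))

h = np.array([1, 4, 6, 4, 1]) / 8.0            # binomial two-scale coefficients for n = 3
x = np.linspace(-4, 4, 801)
lhs = cubic_bspline(x / 2)                      # dilated B-spline
rhs = sum(h[k] * cubic_bspline(x - (k - 2)) for k in range(5))   # combination of shifts
print("max refinement error:", np.max(np.abs(lhs - rhs)))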
From the m-scale relation (15), we can also establish the relationship between the discrete sampled
B-spline and the discrete B-spline:
Since B-splines provide a stable multiresolution representation of a signal at multiple scales, it
is preferable to select B-splines as smoothing kernels to extract multiscale information inherent in
an image. Therefore, it is not surprising that many vision models [40], [29], [30] are derived from
B-splines.
One can refer to [25], [26], [33], [47] for a more complete and extensive exposition of the B-spline
methods. For example, its minimal support and m-refinability properties have led to fast
implementation of the scale-space algorithms [27], [5], [8], [9] in computer vision. In the following
section, we will classify scale-space representations into three types and show how B-splines are used
as a flexible tool for designing an efficient visual model according to the above requirements.
3 Scale-space representations designed from B-splines
In this section we focus on the fast implementation of continuous scale-space filtering and the design of
dyadic scale-space frame representation. Their relations with the compact scale-space representation
or compact wavelet models are indicated.
3.1 Implementation of continuous scale-space filtering using B-splines
3.1.1 Discrete signal approximation using B-spline bases
In practice, a discrete set of points is given. Because the spline spaces S^n_h provide close and stable
approximations of L^2, it is reasonable to parameterize the discrete signal or image using B-spline
bases. We use the translated B-splines of order n_1 as bases to approximate the signal,
\tilde{f}(x) = \sum_{k \in Z} c(k) \beta^{n_1}(x - k),   (17)
i.e., the signal f(x) \in L^2(R) is projected into the spline space S^{n_1}_1 at resolution 1. In (17) we have
assumed that the sampling rate is 1 for convenience. We can call this procedure the generalized
sampling of the original data, where the sampling basis function is taken as the B-spline. There are
different types of approximation (see [25]). A common approach ([25], [26]) is the direct B-spline
transform, where the exact or reversible representation of a discrete signal f(k) in the space of B-splines
is obtained by imposing the interpolation condition \forall k \in Z: \tilde{f}(k) = f(k). Thus, the
coefficients c(k) can be computed as
c(k) = ((b^{n_1})^{-1} * f)(k),   (18)
where (b^{n_1})^{-1} denotes the inverse of the discrete sampled B-spline, which can be computed recursively.
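For concreteness, a possible recursive implementation of the direct cubic B-spline transform (n_1 = 3) is sketched below in Python. The pole value and the two-pass structure follow the standard recursive-filter formulation of this transform, while the boundary initialization used here is a simplification of ours:

import numpy as np

def cubic_spline_coeffs(f):
    # Direct cubic B-spline transform: returns c such that
    # sum_k c[k] * beta^3(x - k) interpolates the samples f[k].
    f = np.asarray(f, dtype=float)
    z1 = np.sqrt(3.0) - 2.0                      # pole of the cubic interpolation filter
    N = len(f)
    cp = np.empty(N)                             # causal pass
    # simplified initialization: truncated power series (boundary handling is approximate)
    cp[0] = sum(f[min(k, N - 1)] * z1 ** k for k in range(30))
    for k in range(1, N):
        cp[k] = f[k] + z1 * cp[k - 1]
    cm = np.empty(N)                             # anti-causal pass
    cm[N - 1] = (z1 / (z1 * z1 - 1.0)) * (cp[N - 1] + z1 * cp[N - 2])
    for k in range(N - 2, -1, -1):
        cm[k] = z1 * (cm[k + 1] - cp[k])
    return 6.0 * cm                              # gain of the direct cubic transform

if __name__ == "__main__":
    f = np.sin(np.linspace(0.0, np.pi, 32))
    c = cubic_spline_coeffs(f)
    b3 = np.array([1.0, 4.0, 1.0]) / 6.0         # sampled cubic B-spline b^3(k)
    rec = np.convolve(c, b3, mode="same")        # should reproduce f away from the borders
    print("max interior error:", np.max(np.abs(rec[3:-3] - f[3:-3])))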
3.1.2 Fast algorithm for continuous scale-space filtering at rational scales
In this section we use the B-splines to derive a filter bank algorithm for fast implementation of
continuous scale-space filtering.
The linear scale-space representation is to make a map of a signal at multiple scales by changing
the scale parameter continuously. In the language of the wavelet transform, the traditional scale-space
approach can be regarded as a continuous wavelet transform of the signal f \in L^2,
W f(s, x) = \int_R f(t) \psi_s(x - t) dt,   (19)
where \psi_s(x) = (1/s)\psi(x/s) is the scaled wavelet. Because the geometric features of an image are
characterized using differential descriptors, \psi(x) is often taken as a certain derivative of a smooth
kernel, or it has a certain order of vanishing moments. Under different physical interpretations, various linear
scale-space representations have been proposed [4], where different kernels \psi are assumed. Here we also use
the B-spline of order n_2 to approximate the wavelet \psi(x):
\psi(x) = \sum_k g(k) \beta^{n_2}(x - k).   (20)
In the classical scale-space theory, two frequently used multiscale edge detection filters are the
famous Canny operator [12] and Marr-Hildreth operator [10], which are obtained by taking the first
and second derivative of the Gaussian kernel, respectively. Since B-splines are good approximations
of the Gaussian kernel, we shall use the derivatives of B-splines instead. In such cases, the coefficients g
in (20) are the coefficients of the first- and second-order difference operators, respectively, i.e.,
g^{(1)} = \{1, -1\},   g^{(2)} = \{1, -2, 1\}.   (21)
Such spline wavelets are shown in Fig. 2. Using these spline wavelets we obtain approximations of the
Canny operator [12] and of the Marr-Hildreth operator [10], respectively. Higher-order derivatives of B-splines
can detect edges with higher-order singularities [42], and the coefficients g are then the binomial-Hermite
sequences. Explicitly, in the Fourier domain the rth-order difference of the B-spline of order n can
be written as
which is also the rth-order derivative of the B-spline of order n + r. We remark that B-splines can
also efficiently approximate other kinds of wavelets such as the generalized edge detectors in the
-space representation [4] and the coefficients g can be computed numerically.
Since a real number can be approximated arbitrarily closely by a rational number m_1/m_2, with m_1, m_2 \in
Z, we take the rational scale s = m_1/m_2 and derive a general filter bank implementation of the scale-space representation
(19) using the m-refinable relation (15). The cascaded implementation of (19), with \psi
given in (20) and f approximated by (17), is given by (23);
the derivation of this algorithm is given in Appendix A. This algorithm extends that in [27]. If
the scale is taken as an integer, i.e., when m_2 = 1, then the resulting formula is similar to that in
[27]:
W f(m_1, k) = (c * [g]_{\uparrow m_1} * B^{n_1+n_2+1}_{m_1})(k).   (24)
The implementation of the above algorithm is illustrated in the block diagram in Fig. 3.
In the filter bank implementation of (23), we can interchange the order of convolution. The result
is then equivalent to the discrete B-spline filtering of the difference of the discrete sampled signal
(with a down-sampling)
is the signal sampling followed by difference operation
We note that the computational complexity is mainly due to the discrete B-spline filtering in (25),
which can be implemented efficiently. By (4), it turns into a cascaded convolution with the 0th-order
discrete B-spline, and can be implemented via the running average sum technique. If we define
such a running average operation as
R_m\{c\}(k) = \sum_{j=0}^{m-1} c(k - j),   (26)
then it can be realized using the following iterative strategy:
R_m\{c\}(k) = R_m\{c\}(k - 1) + c(k) - c(k - m).   (27)
Therefore, starting from the initial value R_m\{c\}(0), each output sample is obtained using only one
addition and one subtraction. Then, after a down-sampling with a factor m_2, we obtain the scale-space filtering at the
rational scale m_1/m_2. If m_2 is fixed, as is usually the case in practice, the computational cost is
independent of the scale. Fig. 4 shows an example of the scale-space filtering of a simulated signal
using the above algorithm.
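The running average sum technique mentioned above can be sketched as follows in Python; the cascade below is an illustration written by us, with a simple causal alignment and zero initial conditions:

import numpy as np

def running_sum(c, m):
    # R_m{c}(k) = c(k) + c(k-1) + ... + c(k-m+1), with one add and one subtract per sample
    out = np.zeros(len(c))
    acc = 0.0
    for k in range(len(c)):
        acc += c[k]
        if k >= m:
            acc -= c[k - m]
        out[k] = acc
    return out

def bspline_smooth(c, n, m):
    # cascade of n+1 running sums == causal convolution with the discrete B-spline B^n_m
    y = np.asarray(c, dtype=float)
    for _ in range(n + 1):
        y = running_sum(y, m)
    return y / float(m) ** (n + 1)

if __name__ == "__main__":
    x = np.random.rand(64)
    h = np.ones(3) / 3
    direct = np.convolve(x, np.convolve(h, np.convolve(h, h)))   # explicit cascade
    fast = bspline_smooth(x, 2, 3)
    print(np.allclose(direct[:len(x)], fast))                    # True: same result, fewer operations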
We record two important properties of this procedure.
ffl Efficiency: In practice, m 2 is usually fixed. The computational complexity at each scale m 1
is O(N ). The computational complexity is largely due to the convolution with the smoothing
B-spline kernel which can be reduced by running average sum technique. In contrast
with the existing procedure based on direct numerical integration or FFT-based scheme, the
computational complexity does not increase with the increasing number of values of the scale
parameter.
ffl Parallelism: The structure of the above algorithm is parallel and independent across scales.
This makes it inexpensive to run on arrays of simple parallel processors. In other words this
can be ideally suited for VLSI implementation.
One can recall that the above subdivision algorithm is also similar to the a trous algorithm [49],
[32] for the fast computation of the continuous wavelet transform, except for some constraints on their
filters. However, the a trous algorithm can only compute the wavelet transform at dyadic scales,
whereas the above algorithm can compute the wavelet transform efficiently at arbitrary rational scales. As will be shown in
Section 3.2, if the scale is restricted to dyadic values, the above algorithm is similar to the a trous algorithm.
However, using B-splines a more efficient scheme can be obtained which only needs addition operations.
3.1.3 Extension to 2D images
Although the above algorithm is derived in the one-dimensional case, it can easily be extended to two
dimensions. The tensor-product B-spline \beta^n(x, y) = \beta^n(x)\beta^n(y) can be used as a basis to parameterize
the image and to approximate the two-dimensional wavelet kernels. For example, we can use the tensor-product
B-splines to approximate the Marr-Hildreth LoG operator [10]. Since this two-dimensional
kernel is represented by separable one-dimensional B-spline bases, the LoG
operator can be computed efficiently by performing the above one-dimensional fast algorithm along the horizontal
and the vertical orientation, respectively. Fig. 5 shows such results for the Lena image at 3 different scales.
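The separable structure of this 2D computation can be illustrated with the following Python sketch. It is not the exact operator of the paper (whose explicit expression is not reproduced here); it only shows the idea of B-spline smoothing along rows and columns followed by second differences along each axis:

import numpy as np

def bspline_kernel(n, m):
    b = np.ones(m) / m
    k = b
    for _ in range(n):
        k = np.convolve(k, b)
    return k

def filt_rows(img, kernel):
    return np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)

def filt_cols(img, kernel):
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, img)

def log_like(img, n=3, m=4):
    d2 = np.array([1.0, -2.0, 1.0])                       # second difference
    smooth = filt_cols(filt_rows(img, bspline_kernel(n, m)), bspline_kernel(n, m))
    return filt_rows(smooth, d2) + filt_cols(smooth, d2)  # LoG-like response

if __name__ == "__main__":
    img = np.zeros((64, 64)); img[20:44, 20:44] = 1.0     # a square test image
    print(np.abs(log_like(img)).max())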
3.2 Dyadic scale-space frame representation
3.2.1 One-dimensional signal representation by its local partial derivatives at dyadic
scales
The above continuous scale-space is too redundant for some applications. In addition, as stated by
Witkin [11], an initial representation ought to be as compact as possible, and its elements should
correspond as closely as possible to meaningful objects or events in the signal-forming process. A
description that characterizes a signal by its extrema and those of its first few derivatives is a
qualitative description to "sketch" a function. If we sample the scale of the above continuous scale-space
as dyadic while keeping the time variable continuous, we can obtain a more compact scale-space
representation. Moreover, such a representation is shift invariant and therefore is suitable for some
pattern recognition applications. In particular, using B-spline techniques efficient frame algorithms
can be designed to express the signal in terms of its local partial derivatives.
We now show the relationship between this type of transform and the above continuous scale-space
implementation.
If we use the approximation b n 2 +n 1 +1 c to replace the original signal f , and the width of the
B-spline m is restricted to be dyadic, say 2 m , we will get a recursive relation for the wavelet transform
between the dyadic scales. In this case, the formula (23) becomes
where
It is easy to derive the two-scale relations for
. From property (4) and the property of
z-transform,
is the z-transform of B n
We have the following relation
z \Gammaj 2
In the time domain, it becomes
By this relation, we can get a fast recursive implementation of dyadic-scale space filtering,
Or simply written as
is the binomial kernel.
The approximate Marr-Hildreth operator and Canny operator at dyadic scales can now be computed
as
"2
where is the first or second order of difference operator given in (21). The recursive
refinement relation (36) and (37) can be rewritten in the z-transform domain as
2: (39)
By requiring the reconstruction filter ~
G to satisfy the following perfect reconstruction condition
we can reconstruct the signal from its multiscale partial derivatives
where the sequences \tilde{g}_k are the time responses of the reconstruction filters \tilde{G}_k, which are given explicitly in Appendix B. Since all of
these filters are linear combinations of binomial kernels divided by powers of two,
using the Pascal-triangle algorithm
only addition and bit-shift operations are needed. This is very suitable for hardware
implementation.
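The recursive dyadic computation can be illustrated with an a-trous-style Python sketch. The particular binomial smoothing mask and the use of mode='same' convolutions are our simplifications; the exact filters H, G and the reconstruction filters of Appendix B are not reproduced here:

import numpy as np

def upsample(h, m):
    # insert m-1 zeros between the taps of h ("holes" of the a trous scheme)
    out = np.zeros((len(h) - 1) * m + 1)
    out[::m] = h
    return out

def atrous_decompose(signal, levels):
    h = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0   # binomial smoothing mask
    g1 = np.array([1.0, -1.0])                        # first difference (Canny-like channel)
    g2 = np.array([1.0, -2.0, 1.0])                   # second difference (Marr-Hildreth-like channel)
    approx, details = np.asarray(signal, float), []
    for j in range(levels):
        m = 2 ** j
        w1 = np.convolve(approx, upsample(g1, m), mode="same")
        w2 = np.convolve(approx, upsample(g2, m), mode="same")
        details.append((w1, w2))
        approx = np.convolve(approx, upsample(h, m), mode="same")
    return details, approx

if __name__ == "__main__":
    t = np.linspace(0, 1, 256)
    x = (t > 0.3).astype(float) + 0.5 * (t > 0.7)
    d, a = atrous_decompose(x, 4)
    print([np.abs(w1).max() for w1, _ in d])          # edge responses across dyadic scales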
3.2.2 Image representation by its local directional derivatives
The tensor product B-spline basis is fi n (x; (y). It is interesting that we can still derive
an efficient frame algorithm to characterize an image from its local differential components. A fast
algorithm for the gradient case has been proposed [42] and further refined in [34]. Now we consider
the case of the second directional derivative.
For edge detection an approach is to detect the zero-crossings of the second directional derivative
of the smoothed image f fi n
along the gradient orientation
@x
@y
We can still derive a subdivision algorithm to compute the three local partial derivative components
or wavelet transforms:
where the three directional wavelet components are defined in the Fourier domain as
where G (1) and G (2) are the transfer functions of the first and second order difference operator. From
these definitions we can obtain a recursive algorithm for the computation of the three local partial
derivative components:
where I (h; g) "2 j \Gamma1 represents the separable convolution of the rows and columns of the image with
the one dimensional filters [h] respectively. The symbol d denotes the Dirac filter
whose impulse is 1 at the origin and 0 elsewhere. We can also reconstruct the image from these
dyadic wavelet transforms using the following formula
2, is the FIR of the transfer function (!). The
reconstruction formula (48) follows from the following perfect reconstruct identity:
G (2) (! x )U(! y
G (2) (! y )U(! x )
G (1) (! y
If we define the three corresponding reconstruction wavelets
G (2) (! x )H 2 (! y
G (2) (! y )H 2 (! x
G (1) (! x ) ~
G (1) (! y
it can be shown [34] that an image f(x; y) can be represented as
One can also notice that in the above decomposition and reconstruction formula all the filters are
binomial which require only the addition operation. For illustration Figure 6 shows the above decomposition
and reconstruction results of a square image at the scales 1, 2, 4. Like the compact wavelet
decomposition [40], the above algorithms also decompose an image into horizontal, vertical and diagonal
components. However, this transform has explicit physical meaning and is shift-invariant. This
can be useful for certain pattern recognition tasks.
3.2.3 Image representations by its isotropic and multi-orientational derivative component
One can obtain a more compact isotropic wavelet representation of an image that is complete and
efficient using a radial B-spline as the smoothing kernel in two dimensions. This representation is
important because it indicates that an image can be recovered from its multiscale LoG-like compo-
nents. The radial B-spline OE(x; y) is a non-separable function of two variables defined by its Fourier
where the radius ae = min(
and the wavelet is defined by
One may notice that such a wavelet is isotropic and LoG-like, which resembles the human visual
system. With these definitions we still have a filter bank implementation of the decomposition and
reconstruction. We omit the details, and just give the decomposition formulas,
r are the 2D non-separable radial filters corresponding to h and g respectively. In this
decomposition, two components are obtained at each resolution. By designing the filter ~
g r from the
relation (40) the corresponding 2D non-separable radial filter ~
r can be computed numerically via
its Fourier transform ~
G r . Then the reconstruction formula similar to (41) can be obtained. Also, it
is easy to check using the same arguments as the in 1D case that an image can be represented as [34]
is the 2D reconstruction wavelet defined by -
G r (ae) -
OE(ae).
One can build a wavelet representation having as many orientation tunings as desired by using
non-separable wavelet bases. A generalized Pythagorean theorem has been proved to decompose an
image into a finite number of equally spaced angles [34]:
(2m\Gamma1)!!n
1). If we multiply the above isotropic wavelet
(51) by the angular part H k Z, we can extract the orientational information
in the dyadic scale-space tuned to n orientations [34]:
Such wavelets can be called orientation tuned LoG-like filters. An image can be represented by its
multiscale and multi-orientational components,
is the oriented wavelet for reconstruction. Similarly, the pyramid-like filter bank implementation
of such a representation can be obtained as follows,
Therefore, through such an approach we can analyze the directional information of an image feature
at a certain angle in dyadic scale space. In Fig. 7 we show a multiscale orientational decomposition
and reconstruction, where the orientation number is chosen as 3.
3.2.4 Some comments on the application of dyadic scale-space representation
The dyadic scale-space frame representation derived from B-splines gives rise to many applications, since
it provides an invertible, translation-invariant, and pyramid-like compact representation of a signal.
One example is fingerprint-based compression [41], obtained by combining it with other techniques. Many
image features, such as ridges, corners, blobs and junctions, are usually characterized by local differential
descriptors [3], and it is usually enough to consider their behavior only at dyadic scales. The proposed
algorithms provide efficient ways to do so and are well suited to hardware implementation. For example, in multiscale
shape representation the computation of the curvature function is usually treated at continuous
scales [51]; in fact, it is sometimes enough to consider its behavior only at the dyadic scales [50]. We
have used the above algorithms to efficiently compute the geometric descriptors for multiscale shape
analysis [35].
3.3 Compact scale-space representation
While the dyadic scale-space frame approach provides a more compact representation, it is still over-complete
for signal representation. For image compression applications, compact representation is
preferred. In order to give a complete picture we mention briefly the discrete wavelet transform.
While the scale-space technique has existed for a long time, it was the orthogonal multiresolution
representation proposed by S. Mallat [40] that makes the mathematical structure of the image more
explicit. This is an extension or refinement of traditional scale-space theory. This approach restricts
the scale to dyadic and samples the time variable. The starting point is to orthogonalize the B-spline
basis, and then decompose the signal approximated at a fine scale space S n
space S n
by imposing the orthogonal condition
The detail irregular information of the signal is contained in the subspace W n
This defines an
orthogonal multiscale representation. After the B-spline basis is converted into an orthogonal basis,
the two-scale relation still exists which results in an efficient pyramidal algorithm. The perfect reconstruction
condition (40) still exists. However, additional conditions on the filters H; G; ~
G are imposed
to ensure the orthogonality. There are several ways to achieve a compact multiresolution by imposing
the biorthogonal instead of the orthogonal condition (57). All these compact multiresolutions are
related to B-splines. A detailed study can be found in [30], [36].
From the above analysis, it is easy to see that the dyadic scale-space frame representation lies
between the continuous scale-space and the compact representation. Which kind of representation
to select depends on the problem at hand. For example, in multiscale feature extraction one may
compute the differential operation either at the continuous scales or only dyadic scales. Therefore,
the continuous or dyadic scale-space frame representation is more useful. However, for compression
applications, the compact multiresolution is the favorite.
3.4 Relations between the existing scale-space algorithms in computer vision
Before the appearance of wavelets, the B-spline technique has been widely used in computer vision.
Examples include Wells [5], Burt [6], [7], Ferrari et al. [8], [9]. We shall show that under certain
circumstances, they are either equivalent or the special cases of the general algorithm given in
Section 3.
3.4.1 Relation to Ferrari et al.'s method
Ferrari et al. [8], [9] have proposed B-spline functions to realize 2D image filtering recursively.
Their idea is to use B-splines as the filter kernel, expanded on interpolation coefficients at the
knots (kM, lN), which can be computed
using the usual method. Then, using the properties of the discrete B-spline, they derive a recursive 2D
image filtering scheme. In this way, the computational load can be greatly reduced. Therefore, this approach
is a special case of our proposed algorithm for continuous scale-space filtering.
3.4.2 Relation to Wells' method
In [5], Wells proposed an approach for efficient synthesis of Gaussian filters by cascaded uniform
filters. It is easy to show that his method is equivalent to using cascaded 0th discrete B-spline to
approximate the Gaussian kernel. By this approach, the cascaded convolution with a 0th-degree
B-spline (uniform filters) can be realized by running average sum technique as discussed above.
Obviously, his method is a special case of our recursive algorithm.
3.4.3 Relation to Burt's Laplacian pyramid algorithm
Burt [6], [7] has introduced the following low-pass filter for the generation of Gaussian or Laplacian
pyramids:
w(0) = a,  w(\pm 1) = 1/4,  w(\pm 2) = 1/4 - a/2.
If the parameter a is taken as a = 3/8, then w(j) can be re-written as the binomial kernel (1/16)(1, 4, 6, 4, 1). This is equivalent
to a special case of the binomial smoothing kernel in formula (35). In this case, the filter is also equivalent to the operator
used for the generation of the dual cubic spline pyramid representation as discussed in [29].
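This observation is easy to verify numerically. In the short Python check below, Burt's 5-tap generating kernel with a = 3/8 is compared with the 4th-order binomial mask:

import numpy as np

a = 3.0 / 8.0
w = np.array([1/4 - a/2, 1/4, a, 1/4, 1/4 - a/2])   # Burt's generating kernel w(-2..2)
binomial = np.array([1, 4, 6, 4, 1]) / 16.0          # 4th-order binomial mask
print(w, np.allclose(w, binomial))                   # True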
4 Edge patterns in the B-spline based scale-space
The study of edge patterns in scale-space is very important for many applications. Much work has
been done on the study of edge patterns in the Gaussian based scale-space. In this section, we want
to investigate the edge behaviors in the B-spline based scale-space.
It is well known that there exists an "uncertainty principle" between good edge localization
and noise removal: at finer scales, better localization can be achieved at the cost of noise pollution,
and vice versa. Many researchers have studied the localization of operators based on the Gaussian kernel
in scale-space. Berzins [24] studied the accuracy of the Laplacian operator, M. Shah, A. Sood and Jain
[23] considered the localization of pulse and staircase edge models, and I. J. Clark [21] investigated
phantom edges in scale-space. For corner detection, A. Huertas and H. Asada [51], and R. T. Chin
[52] have also considered the behavior of edge models. Their derivations are all based
on the Gaussian kernel. It is therefore necessary to study the behavior of various edge models in the B-spline based
scale-space, which gives some a priori knowledge of various patterns in an image.
Here we present a more general proof under the assumption that the primitive \varphi in the definition
\psi(x) = d\varphi(x)/dx is symmetric, has compact support [-w/2, w/2], and its derivatives have the
shape shown in Fig. 2. In practice, the support of the Gaussian kernel is usually truncated to a finite
interval; obviously, both the truncated Gaussian and the B-spline kernel are included in this assumption.
We shall show that these edge models have the same behavior as that derived from the traditional
Gaussian kernel. First we adopt the following accurate definition of an edge [21].
Definition 1. A point x_0 is called an authentic edge of a signal f(x) if |W_1 f(s, x_0)| is a local maximum;
if it is a local minimum, x_0 is a phantom edge.
We shall use W_1 f and W_2 f to denote two types of wavelet transform, where the wavelets are the
first and second derivatives of \varphi, respectively. I. J. Clark [21] analyzed these two types of detection.
Generally, zero-crossing detection is equivalent to extrema detection; however, extrema detection
includes both maxima and minima. Only an edge point detected as a local maximum is an
authentic edge, whereas an edge point detected as a local minimum corresponds to a phantom edge. It has been
shown that zero-crossing edge detection algorithms can produce edges which do not correspond to
significant image intensity changes; such edges are called phantom or spurious.
Now, as an example, we study the behavior of the staircase edge model in scale-space. The staircase
edge model can be represented as a superposition of two steps,
f(x) = A_1 u(x) + A_2 u(x - d),
where A_1 and A_2 are the amplitudes of the edge, d is the distance between the two abrupt changes
at x = 0 and x = d, and u(x) is the step function,
whose derivative in the distributional sense is \delta(x). Hence, from (19),
d
s (x)dx
x
s
s
i.e., the wavelet transform is just the sum of two dilated smoothing functions. At the location
@
@x
s
s
s
s
s
Therefore, only at a small scale s < 2d/w can the location of the edge at x = 0 be detected exactly.
Similarly, at the location
@
@x
s
d
s
s
s
d
s
i.e., the location of the edge at x = d can only be detected exactly at a small scale s < 2d/w. At large
scales it will spread like a cone in scale-space and will be influenced by the other cone at x = 0. As a
consequence, the edge location will be mis-detected due to the superposition of the two diffused cones.
One may deduce that there will be another point x_0 between the two edges such that
\partial W_1 f(s, x_0)/\partial x = 0, due to the different
signs of \partial W_1 f/\partial x
at the locations x = 0 and x = d. However, it is easy to
see that such a point corresponds to a local minimum, which means it is a phantom edge point; it
cannot be distinguished from the others by zero-crossing detection. Fig. 8 illustrates the behavior of this
type of edge in scale-space. That is why local maxima are preferred for edge detection in [41],
[42].
Similarly, for zero-crossing detection of W 2 f(s; x), we can draw the same conclusion. In the
above analysis, we only consider one type of edge model. Other types of edges such as the step,
pulse, ramp, roof can be treated in a similar way [34]. Also, our derivation is based on a more
general assumption on the kernels which include both the truncated Gaussian and the B-spline.
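The merging of the two edge responses with increasing scale can be illustrated numerically. In the following Python sketch, the unit step amplitudes, the separation d = 2 and the use of a finite difference of the B-spline-smoothed signal in place of W_1 f are choices made for the illustration only:

import numpy as np

def cubic_bspline(x):
    ax = np.abs(x)
    return np.where(ax <= 1, 2/3 - ax**2 + ax**3 / 2,
           np.where(ax <= 2, (2 - ax)**3 / 6, 0.0))

dx = 0.01
x = np.arange(-20.0, 20.0, dx)
d = 2.0
f = (x >= 0).astype(float) + (x >= d)            # staircase edge: two steps d apart

def num_gradient_maxima(s):
    kern = cubic_bspline(x / s) / s               # dilated smoothing kernel
    smooth = np.convolve(f, kern, mode="same") * dx
    w1 = np.abs(np.gradient(smooth, dx))          # |W_1 f(s, x)| approximation
    interior = np.abs(x[1:-1]) < 10               # ignore boundary artifacts of the convolution
    m = (w1[1:-1] > w1[:-2]) & (w1[1:-1] > w1[2:]) & (w1[1:-1] > 1e-3) & interior
    return int(m.sum())

for s in (0.5, 1.0, 2.0, 4.0):
    print(s, num_gradient_maxima(s))              # two maxima at fine scales, one after the cones merge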
5 Discussions on the properties of the B-spline based scale-space
We now discuss the advantages and properties of the B-spline based scale-space.
ffl Efficiency: It is a basic requirement that any algorithm should be able to capture and process
the meaningful information contained in the signal as fast as possible. Obviously, the major
weakness of the traditional Gaussian based scale-space is the lack of efficient algorithms. On
the contrary, B-spline techniques facilitate computational efficiency. The computational complexity
is scale independent. Moreover, in contrast with the scale-space based on the Gaussian
kernel, the B-spline representation of a signal is determined directly on an initial discrete set
of points, avoiding problems caused by discretization in continuous scale-space. B-splines also
have been used as smoothing windows for efficient computation of Gabor transform to extract
frequency information [31].
ffl Parallelism: Data parallelism is common in computer vision which arises from the nature of
an image. As Koenderink [1] pointed out, it seems clear enough that visual perception treats
images on several levels of resolution simultaneously and that this fact must be important for
the study of perception. In this paper, an efficient parallel structure of an image is exhibited using
B-spline techniques. It may provide a good interpretation of human visual system which can
process the hierarchical information simultaneously. B-splines provide a flexible way to process
the multiscale information using either coarse to fine strategy or in parallel. This is also easy
for hardware implementation.
ffl Completeness and invertibility: We usually use the zero-crossings or the local extrema as
meaningful description of a signal. It is clearly important, therefore, to characterize in what
sense the information in an image or a signal is captured by these primal sketches uniquely. For a
Gaussian based scale-space, the completeness property is guaranteed by the fingerprint theorem
[13]: the map of the zero-crossings across scales determines the signal uniquely for almost all
signals in the absence of noise. Such results have theoretical interest in that they answer the
question of what information is conveyed by the zero and level crossings of multiscale Gaussian-filtered
signals. Poggio and Yuille's proof is heavily dependent on the Gaussian kernel, and they
conjectured that under certain conditions the Gaussian kernel is necessary for the fingerprint theorem
to be true. However, Wu and Xie [17] later gave a negative answer and presented a more general
proof, which states that the fingerprint theorem holds for any symmetric kernel. Therefore, the
fingerprint theorem is also true in the case of the B-spline continuous scale-space
representation.
Differential operators have also been widely used for the multiscale geometric description of images,
but it has not been clear whether such representations are invertible. As shown in this paper, using B-splines,
efficient frame algorithms can be designed to express an image in terms of its local derivatives
at dyadic scales.
ffl Compactness: For compression application, we require a representation to be as compact as
possible so that an image can be represented by the corresponding primitives using less storage.
In [13] Poggio and Yuille conjectured that the fingerprints are redundant and the appropriate
constraints derived from the process underlying signal generation should be used to characterize
how to collapse the fingerprints into a more compact representation. In this paper, a more
compact dyadic scale-space representation is proposed. We can use such a representation for
compression applications by combining it with other techniques.
ffl Causality: Since edge points are important features, it is natural to require that no new
features are created as the scale increases. A multiscale feature detection method that does
not introduce new features as the scale increases is said to possess the property of causality.
Causality is in fact equivalent to the maximum principle in the theory of parabolic differential
equations [18]. The Gaussian scale-space is governed by the heat diffusion equation and therefore
possesses the causality property. Such continuous causality property of the Gaussian kernel
is not shared by the B-spline. However in the discrete sense, M. Aissen, I. J. Schoenberg and
A. Whitney [54] proved that for a discrete scale-space kernel h, the number of local extrema or
zero-crossings in f out = h f in does not exceed the number of local extrema or zero-crossings
in f in if and only if its generating function H(z) =P
h(n)z n is of the form
where
It is easy to verify that the discrete B-spline kernel in (25) satisfies such a condition. Therefore,
the causality property still holds for discrete B-spline filtering in the discrete sense.
The number of local extrema or zero-crossings of the derivative of the discrete signal does not
increase after a running average sum. This justifies the use of the discrete smoothing kernel in
practice (a small numerical illustration is given after this list).
ffl Orientation: Orientation analysis is an important task in early vision and image processing,
for example in texture analysis [45]. The Laplacian multiresolution [6] does not introduce
any spatial orientation selectivity into the decomposition process. Daugman [45] showed that
the relevant impulse responses can be approximated by Gaussian windows modulated with a wave.
It is meaningful to combine orientation analysis with the scale feature [43]. In this
paper, an efficient pyramid-like algorithm is designed using B-spline techniques to analyze and
synthesize an image from its multi-orientational information at any number of angles in the
dyadic scale-space. Note that the usual wavelet transforms can decompose an image in only
three orientations.
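The discrete causality property discussed in the Causality item above can be illustrated empirically. In the Python snippet below, the random test signal and the kernel width are arbitrary choices:

import numpy as np

def num_sign_changes(v):
    s = np.sign(v)
    s = s[s != 0]                          # ignore exact zeros
    return int(np.sum(s[1:] != s[:-1]))

rng = np.random.default_rng(0)
f = rng.standard_normal(200)
box = np.ones(5) / 5                       # 0th-order discrete B-spline of width 5
g = np.convolve(f, box, mode="same")
# sign changes of the derivative = number of local extrema; smoothing should not increase it
print(num_sign_changes(np.diff(f)), ">=", num_sign_changes(np.diff(g)))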
There are other advantages of the B-spline kernel. In the time-frequency analysis, the Gaussian
kernel is the optimal function that minimizes the uncertainty principle. The cubic B-spline is already
a good approximation to the Gaussian function [28], see also Fig. 1. As the order goes infinity, the
B-splines converge to the Gaussian in both the time and frequency domain. Moreover, a B-spline
resembles the response of receptive field [22] and is also suitable for modeling the human visual
system. Edge detection is an ill-posed problem, and from the viewpoint of regularization theory the cubic
spline is provably optimal. The connection between regularized edge detection and the smoothing
spline problem proposed by Schoenberg and Reinsch in statistics was noted by Poggio et al. [20], where it was
shown that the cubic B-spline, rather than the Gaussian kernel, is optimal for edge detection.
B-splines are the shortest basis functions that provide a stable multiresolution analysis of a signal
[33]. This explains why many wavelet models of vision [40], [28], [46], [47], [48], [37] are derived
from B-splines [33], [36]. For derivative operations, the B-spline approach is very natural, as it
elucidates the relationship between derivatives and differences, which is usually characterized by the
two-scale difference relations. B-splines thus play an important role in bridging the traditional scale-space
theory, the dyadic scale-space frame and the compact multiresolution representation.
6 Conclusions
This paper describes a B-spline based visual model. For a long time, the Gaussian kernel has been
commonly used in computer vision. In this paper a general framework for scale-space representation
using B-splines is presented. In particular, the design of two types of scale-space representations
is given in detail. A fast algorithm for continuous scale-space filtering is proposed. In the case
of dyadic scale some efficient frame algorithms are designed to express the image from its local
differential descriptors. The intrinsic relationship with the compact wavelet models is also indicated.
Several algorithms are proved to be special cases of our proposed algorithm.
To our knowledge, the scale-space properties of B-splines have not been fully studied before.
We examine the properties of the B-spline based scale-space in parallel with those of the Gaussian kernel. Our results
indicate that B-splines possess almost the same properties as the Gaussian kernel. Moreover, the
B-spline kernels outperform the Gaussian in many respects, notably computational efficiency.
Acknowledgment
The authors wish to thank the referees for their comments, which greatly
improved the presentation of the paper. The first author wishes to express his thanks to Prof. Wu
for providing the reference [17].
Appendix A: Derivation of fast implementation of continuous wavelet transform at rational scales.
We use the m-scale relation (15) to derive the filter bank implementation of scale-space filtering at
rational scales. From (19), (20), (17),
l
l
Using the m-refinable relation (15),
Hence,
(j)B n1
(j)B n1
where the following property of B-spline is used [25]:
Substituting (70) into (69) gives
l
l
l
If we take then the above formula can be written as,
If we take
we get an interpolation formula, and the size of the transformed data m 2
is as large as that of the original sampling signal data,
Appendix B: Derivation of the reconstruction filter responses
For the first order difference the perfect reconstruction condition (40) gives
~
and the corresponding FIR is
j+l
j+l
\Gamman - l -
j+l
For the second order difference
~
4z
and the corresponding FIR is
--R
The structures of images
Edge and curve detection for visual scene analysis
Efficient synthesis of Gaussian filters by cascaded uniform filters
The Laplacian pyramid as a compact image code
Fast hierarchical correlations with Gaussian-like kernels
Efficient two-dimensional filters using B-spline functions
Recursive algorithms for implementing digital image filters
Theory of edge detection
A computational approach to edge detection
Fingerprints theorems for zero-crossings
Scaling theorems for zero-crossings
Uniqueness of Gaussian kernel for scale-space filtering
Scaling theorem for zero-crossings
Reconstruction from zero-crossings in scale-space
Scaling theorems for zero-crossings of bandlimited signals
Computational vision and regularization theory
Authenticating edges produced by zero-crossing algorithms
The Gaussian derivative model for machine vision: Visual cortex simulation.
Pulse and staircase edge models
Accuracy of Laplacian edge detection
Part II-efficient design and applications
Fast implementation of continuous wavelet transforms with integer scale
On the asymptotic convergence of B-spline wavelets to Gabor functions
The polynomial spline pyramid
A family of polynomial spline wavelet transforms
Fast Gabor-like windowed Fourier and continuous wavelet transform
A practical guide to the implementation of the wavelet transform
Ten good reasons for using spline wavelets
Multiscale curvature based shape representation using B-spline wavelets
Periodic orthogonal splines and wavelets
Characterization of compactly supported refinable splines
Window functions represented by B-spline functions
A theory for multiresolution signal decomposition: wavelet representation
Characterization of signals from multiscale edges
IEEE Trans.
Wavelets for a vision
Subband image coding using watershed and watercourse lines of the wavelet transform
Complete discrete 2D Gabor transform by neural networks for image analysis and compression
Biorthogonal bases of compactly supported wavelets
On compactly supported spline wavelets and a duality principle
Wavelets on a bounded interval
The discrete wavelet transform: wedding the
Multiscale corner detection by using wavelet transform
The curvature primal sketch
On the generating functions of totally positive sequences I
--TR
--CTR
Anant Madabhushi , Jayaram K. Udupa , Andre Souza, Generalized scale: theory, algorithms, and application to image inhomogeneity correction, Computer Vision and Image Understanding, v.101 n.2, p.100-121, February 2006
Yu-Ping Wang , Ruibin Qu, Fast Implementation of Scale-Space by Interpolatory Subdivision Scheme, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.9, p.933-939, September 1999
Yu-Ping Wang , Jie Chen , Qiang Wu , Kenneth R. Castleman, Fast frequency estimation by zero crossings of differential spline wavelet transform, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.1251-1260, 1 January 2005
Huamin Feng , Wei Fang , Sen Liu , Yong Fang, A new general framework for shot boundary detection and key-frame extraction, Proceedings of the 7th ACM SIGMM international workshop on Multimedia information retrieval, November 10-11, 2005, Hilton, Singapore
Luca Lombardi , Alfredo Petrosino, Distributed recursive learning for shape recognition through multiscale trees, Image and Vision Computing, v.25 n.2, p.240-247, February, 2007 | b-spline;image modeling;scaling theorem;wavelet;scale-space;fingerprint theorem |
290131 | Efficient Error-Correcting Viterbi Parsing. | AbstractThe problem of Error-Correcting Parsing (ECP) using an insertion-deletion-substitution error model and a Finite State Machine is examined. The Viterbi algorithm can be straightforwardly extended to perform ECP, though the resulting computational complexity can become prohibitive for many applications. We propose three approaches in order to achieve an efficient implementation of Viterbi-like ECP which are compatible with Beam Search acceleration techniques. Language processing and shape recognition experiments which assess the performance of the proposed algorithms are presented. | Introduction
The problem of Error-Correcting Parsing (ECP) is fundamental in Syntactic Pattern
Recognition (SPR) [11, 13], where data is generally distorted or noisy. It also arises
in many other areas such as Language Modeling [23, 6], Speech Processing [7, 22],
OCR [18], Grammatical Inference [20], Coding Theory [8, 15] and Sequence Comparison
[21]. As in many other problems arising in several research areas, ECP is
related to finding the best path through a trellis, a problem that is solved by the
Viterbi algorithm (VA) [10] as is well-known. McEliece [19] makes a good description
of the operation and complexity of the VA with application to decoding linear block
codes. On the other hand, Wiberg et al [24] use an alternative decoding scheme
based on Tanner graphs rather than trellises. Nevertheless, only the work described
in [8, 15] seems to specifically deal with the ECP problem as stated in Sect. 2.
Henceforth we shall be concerned with the application of ECP in SPR; that is, it
will be assumed that both a stochastic Finite State Model (FSM) and a (stochastic)
error model accounting for insertions, substitutions and deletions of symbols are
given. (Note that this problem is also related to that of approximately matching regular
expressions, since regular expressions are equivalent to FSMs; see [2] for an introduction.)
These symbols belong to some alphabet \Sigma, which stands for the set of primitives
or features that characterise a given pattern we aim to recognise. Therefore,
objects are represented by strings of symbols belonging to \Sigma. On the other hand,
the FSM accounts for the (generally infinite) set of different strings corresponding to
the several ways a given object can be represented, while the error model accounts
for the typical variations that pattern strings tend to exhibit with regard to their
"standard" forms as represented by the FSM.
If no error model is given, the recognition problem amounts to a simple problem
of Finite-State parsing. Given an input string of symbols, we have to compute the
probability that this string belongs to the language generated by the FSM. From
this point of view, we are only interested in the maximum likelihood derivation
("Viterbi-like") of the string, instead of the sum of the likelihoods of every derivation
("Forward-like"). In a multiclass situation, where one FSM is provided for each
class, these probabilities can be used for recognition using the maximum likelihood
classification rule: the string is classified into the class represented by the FSM
whose probability of generating the string is maximum. However, in many cases,
test strings cannot be exactly parsed through any of these FSMs, leading to zero
probabilities for all the classes. This can often be solved through ECP.
If the FSM is deterministic, (non error-correcting) parsing is trivial. Otherwise,
the VA can be used. If an error model is provided, the same Viterbi framework
can be adopted for ECP, but at the expense of a higher computational cost. Unfortunately,
this higher cost can become prohibitive in many applications of interest.
The computational problem of ECP is outlined in the next section. Solutions to this
problem are proposed in Sects. 3, 4 and 5. Sect. 6 describes the adaptation of the
well-known Beam Search technique [17] to further accelerate the parsing process.
Sect. 7 presents the experiments that have been carried out to test the performance
of the distinct approaches.
2 The Computational Problem of ECP
In general, the problem of Finite-State parsing with no error correction can be
formulated as a search for the "minimum cost" path (negative log probabilities and sums,
rather than products, are used to avoid underflows) through a trellis diagram
associated to the FSM and the given input string, x. This trellis is a directed acyclic
multistage graph, where each node q^j_k corresponds to a state q^j in a "stage" k. The
stage k is associated with a symbol, x_k, of the string to be parsed, and every edge of
the trellis, t(q^i_k, q^j_{k+1}), stands for a transition between the state q^i in stage k and
the state q^j in stage k + 1. Thanks to the acyclic nature of this graph,
Dynamic Programming (DP) can be used to solve the search problem, leading to
the well-known Viterbi algorithm.
The trellis diagram can be straightforwardly extended to parse errors produced
by changing one symbol for another symbol and errors produced by inserting a
symbol before or after each symbol in the original string. In this way, taking only
substitution and insertion errors into account, efficient ECP can be implemented
because such an extended trellis diagram still has the shape of a directed acyclic
multistage graph (Fig. 1 (a) (b)).
Unfortunately, the extension of this trellis diagram to also parse errors produced
by deletion of one or more (consecutive) symbol(s) in the original string results in a
kind of graph with edges between nodes belonging to the same stage k (Fig. 1 (c)).
Nevertheless, if the FSM has no cycles, the resulting graph is still acyclic and DP
Figure 1: Trellis. a) Substitution and proper FSM transitions; b) Insertion transitions; c) Deletion transitions in an acyclic FSM; d) Deletion transitions in a cyclic FSM. Each edge is actually labelled with a symbol of \Sigma.
can still be applied, leading to an efficient algorithm that can be implemented as a
simple extension of VA [3]. However, if cycles exist in the FSM, then DP can no
longer be directly used and the problem becomes one of finding a minimum cost
path through a general directed cyclic graph (Fig. 1 (d)). As noted in [15], we can
still take advantage of the fact that most of the edges of this kind of graphs have a
left-to-right structure and consider each column as a separate stage like in the VA.
3 Solving the Problem by Score Ordering
Bouloutas et al. [8] formulate an interesting recurrence relation to solve the problem
stated in Sect. 2. In our notation it is as follows:
C(q^j_{k+1}) = \min_l \{ \min_{i \in \delta^{-1}(q^l)} [ C(q^i_k) + W(q^i_k, q^l_{k+1}) ] + D(q^l_{k+1}, q^j_{k+1}) \},
with l ranging over all states in stage k+1,   (1)
where C(q^i_k) is the cost of the minimum cost path from any of the initial states
to state q^i in stage k;
\delta^{-1} is the inverse of the transition function, \delta, of the FSM;
W(q^i_k, q^l_{k+1}) is the cost of the minimum cost transition from state q^i in stage k to
state q^l in stage k + 1;
and D(q^l_{k+1}, q^j_{k+1}) is the cost of the minimum cost path from
state q^l to state q^j, both of which are in stage k + 1. Its correctness lies in D, since
for each pair of states of the FSM its evaluation yields the cost of the minimum
cost deletion path between them. Fig. 2 shows the algorithm, called EV1, which we
developed from (1), with Q being the set of states of the FSM. Lines 1-3 will be
referred to as init-block, lines 12-16 as ins-subs-block and line 18 as ret-block in the
remainder of the paper.
1. for each j 2 Q do
2. if j is an initial state then C(q j
InitialCost else C(q j
3. endfor
4. for to jxj do
5.
do
7.
8. for each
9. C(q j
10. endfor
11. endwhile
12. for each i 2 Q do
13. for each l 2 ffi(q i ) do
14. C(q l
C(q l
15. endfor
16. endfor
17. endfor
18. return argmin
for each final state i
Figure 2: Algorithmic scheme for EV1 and EV1PQ
Given that there are no transitions with a negative cost, Dijkstra's strategy is
followed in order to compute D (lines 6-11 in Fig. 2). All transitions from a state to
itself can be discarded to perform this computation. The state q^l whose score in a
given parsing stage is minimum is chosen and the score of each j \in \delta(q^l) is updated.
Again, the minimum-score state is chosen and the scores of its direct successors are
updated, and so on, until there are no states whose score could be updated. An
input string x can be parsed in \Theta(|x| \cdot |Q|^2) time using EV1 if no care about the
implementation of Q' is taken.
This algorithm can be further improved by using priority queues [1] in the implementation of Q'. In this implementation, the scores (and therefore the positions) of the states in the heap need to be dynamically changed. This can be done by simply storing a pointer to each state in the heap in order to perform a heapify operation from the position of the state whose score has changed. The worst-case time complexity of the loop in lines 6-11 of Fig. 2 is, in this case, O(|Q| · B · log |Q|), given that at most |Q| · B operations will be performed in the heap [1], with B being the "Branching Factor", or maximum number of transitions associated to each state in Q. Since B < |Q| in many cases, the worst-case time complexity of this version of EV1 (called EV1PQ throughout the paper) to parse an input string x is O(|x| · |Q| · B · log |Q|). Note that if the FSM is a fully-connected graph, then B = |Q| and the performance of EV1PQ can be worse than that of EV1.
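To make the computation of W in lines 6-11 concrete, the following Python sketch shows the Dijkstra-style pass with a binary heap that EV1PQ relies on. It is illustrative only (the integer state encoding and the del_cost adjacency structure are assumptions, not the authors' code) and relies on the stated assumption that deletion costs are non-negative.

# Dijkstra-style propagation of deletion transitions within one parsing stage.
# C maps state -> current score; del_cost maps state -> list of (successor, cost).
import heapq

def relax_deletions_dijkstra(C, del_cost):
    INF = float("inf")
    heap = [(c, q) for q, c in C.items() if c < INF]
    heapq.heapify(heap)
    done = set()
    while heap:
        c, q = heapq.heappop(heap)
        if q in done or c > C[q]:
            continue                          # stale heap entry
        done.add(q)
        for succ, w in del_cost.get(q, ()):   # w >= 0 is required
            if c + w < C[succ]:
                C[succ] = c + w
                heapq.heappush(heap, (C[succ], succ))
    return C

# Example: a 3-state FSM with deletion edges 0->1 (cost 2) and 1->2 (cost 1).
scores = {0: 0.0, 1: float("inf"), 2: float("inf")}
print(relax_deletions_dijkstra(scores, {0: [(1, 2.0)], 1: [(2, 1.0)]}))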
4 Solving the Problem Iteratively
Another approach for coping with the deletion problem consists in performing consecutive
iterations to compute the minimum cost path to each state in any parsing
stage k (line 9 in Fig. 2), using only deletion transitions (Fig. 1 (c) (d)). If this iterative
procedure is performed until no score updating is produced then the properness
of the overall computation is guaranteed. This idea was independently proposed
in [20] and [15], though the paper by Hart and Bouloutas is more comprehensive.
Their work deals with many kinds of error rules and efficiently copes with their
associated computational problems. The resulting algorithm, called EV2, is shown
in Fig. 3.
Let T be the number of iterations to be done at lines 3-9 in Fig. 3. The time complexity of these lines is O(T · |Q| · B). T would be 1 if no deletion transition changed the score of any state of the FSM [5]. If, at least, one deletion transition per state changed the score of any other state through consecutive iterations, then T would be |Q| (lower values of T can be produced if the states are traversed in reverse topological or score order [5]). EV2 can thus parse an input string x in O(|x| · |Q| · B)
1.  init-block
2.  for k := 1 to |x| do
3.      repeat
4.          for each l ∈ Q do
5.              for each j ∈ δ(q_l) do
6.                  C(q_j^k) := min( C(q_j^k), C(q_l^k) + cost of the deletion transition (q_l, q_j) )
7.              endfor
8.          endfor
9.      until C(q_j^k) has not been changed for any j ∈ Q
10.     ins-subs-block
11. endfor
12. ret-block
Figure 3: Algorithmic scheme for EV2
and O(|x| · |Q|² · B) in the best and worst cases, respectively. Its performance in the average case strongly depends on following some order of the states of the FSM as closely as possible when parsing deletion transitions and on the number of "effective" deletion transitions [5].
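The following minimal Python sketch (names are illustrative, not taken from [5]) shows the EV2-style fixpoint iteration over deletion transitions; the returned iteration count plays the role of T above.

# Iterative relaxation of deletion transitions until no score changes.
def relax_deletions_iterative(C, del_cost):
    changed = True
    iterations = 0
    while changed:
        changed = False
        iterations += 1
        for q, edges in del_cost.items():
            for succ, w in edges:
                if C[q] + w < C[succ]:
                    C[succ] = C[q] + w
                    changed = True
    return C, iterations   # `iterations` corresponds to T in the text

scores = {0: 0.0, 1: float("inf"), 2: float("inf")}
print(relax_deletions_iterative(scores, {0: [(1, 2.0)], 1: [(2, 1.0)]}))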
5 Solving the Problem by Depth-First Ordering
Here, we propose an algorithm based on a recurrence relation which extends previous
ideas of [20] for ECP with acyclic FSMs to general FSMs:
C(q_j^{k+1}) = min_{q_l in stage k+1} { min_{q_i ∈ δ_T^{-1}(q_l)} [ C(q_i^k) + c(q_i^k, q_l^{k+1}) ] + W^T(q_l^{k+1}, q_j^{k+1}) },   ∀ q_j in stage k+1,   (2)

where C and c are the same as in equation (1); δ_T^{-1} is a generalisation of the function δ^{-1} which returns, for a given state q, the set of states that are "topological predecessors" of q in the FSM (with the special case in which q is an initial state handled separately); and W^T(q_l^{k+1}, q_j^{k+1}) is the minimum cost path from state q_l to state q_j, both of which are in stage k + 1, which only includes states that are topological predecessors of state q_j. The computation of W^T can be performed by following a "topological order" of the states of the FSM when parsing deletion transitions.
If the FSM has cycles, a depth-first "topological sort" of its states can be computed
by detecting the back-edges [3] (i.e., transitions which produce cycles in the
FSM). This leads to a fixed order for the traversal of the list of states of the FSM
during the parsing process. Backtracking becomes necessary to ensure the correctness
of the overall computation only if some back-edge coming from a state q i to
another state q j updates the cost of q j .
This is a solution for the problem stated in Sect. 2, but, unfortunately, it is not
directly compatible with Beam Search (BS) techniques. When using BS the list
of states to be traversed can be different in almost every parsing stage; therefore
large computational overheads could be introduced during the parsing process if
we had to compute a depth-first sort of the list of "visited states" in each parsing
stage. This can be avoided by depth-first sorting the states as they are visited, using
bucketsort (binsort) techniques [1]. We only need to use an adequate ordering key.
Our proposal is to compute and store the ordering key Ψ_i ∀i ∈ Q as shown in Fig. 4.
1.  for each i ∈ Q do
2.      Ψ_i := 0;  ρ_i := number of edges coming to i that are not back-edges
3.  endfor
4.  for each i such that ρ_i = 0 do
5.      for each j such that the edge (q_i, q_j) is not a back-edge do
6.          Ψ_j := max( Ψ_j, Ψ_i + 1 )
7.          ρ_j := ρ_j − 1
8.      endfor
9.  endfor
Figure 4: Computation of Ψ_i ∀i ∈ Q. Both Ψ and ρ have been implemented as arrays
It can be easily shown that the relation defined on the set {Ψ_i : i ∈ Q} is a partial order [1]. Therefore, no pair of states q_i and q_j with Ψ_i = Ψ_j exists having a transition (path) from q_i to q_j and vice versa. A permutation that maps Q into a nondecreasing sequence Π(Q) can be found using Ψ by means of bucketsort. The number of buckets to be used is max_{i∈Q} Ψ_i. The depth-first traversal of the FSM, along with the computation of Ψ_i ∀i ∈ Q, can be performed in a preprocessing stage taking only O(|Q| · B) computing steps [4, 5].
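As an illustration of this preprocessing step, the Python sketch below computes ordering keys in the spirit of Ψ with a traversal that ignores back-edges. It assumes the FSM is given as an adjacency list and that the back-edges have already been flagged by a depth-first search; these representational choices are illustrative, not taken from the paper.

from collections import deque

def ordering_keys(adj, back_edges):
    states = list(adj)
    indeg = {q: 0 for q in states}
    for q in states:
        for s in adj[q]:
            if (q, s) not in back_edges:
                indeg[s] += 1
    psi = {q: 0 for q in states}
    queue = deque(q for q in states if indeg[q] == 0)
    while queue:
        q = queue.popleft()
        for s in adj[q]:
            if (q, s) in back_edges:
                continue
            psi[s] = max(psi[s], psi[q] + 1)   # keys grow along non-back-edges
            indeg[s] -= 1
            if indeg[s] == 0:
                queue.append(s)
    return psi

# Cycle 0->1->2->0 where (2,0) was marked as a back-edge by the DFS.
print(ordering_keys({0: [1], 1: [2], 2: [0]}, {(2, 0)}))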
Once Π(Q) has been found, the only thing to worry about in a given parsing stage is to find out when a back-edge is parsed. If Ψ_j ≤ Ψ_i when parsing an edge from q_i to q_j, a back-edge is being parsed. Backtracking is performed only if the score of q_j is changed. Two algorithms based on equation (2), EV3 and EV3.V2, have been developed [5]. In both algorithms Π(Q) has been implemented as a hash table. Fig. 5 shows the EV3 algorithm. EV3.V2 is similar, but it uses only the list Π(Q), performing the parsing of insertion, substitution and deletion errors at once, while EV3 uses both the (unsorted) list Q (for BS purposes, see next section) in the parsing of insertions and substitutions and the list Π(Q) in the parsing of deletions.
1.  init-block
2.      Π(Q) := ∅; add each initial state q_j to bucket Ψ_j in Π(Q)
3.  for k := 1 to |x| do
4.      for each l ∈ Π(Q) do
5.          for each j ∈ δ(q_l) do
6.              C(q_j^k) := min( C(q_j^k), C(q_l^k) + cost of the deletion transition (q_l, q_j) )
7.              if j ∉ Π(Q) then Add j to bucket Ψ_j in Π(Q) endif
8.              if C(q_j^k) has been changed and Ψ_j ≤ Ψ_l
9.              then backtrack to bucket Ψ_j in Π(Q) endif
10.         endfor
11.     endfor
12.     for each i ∈ Q do
13.         for each l ∈ δ(q_i) do
14.             C(q_l^{k+1}) := min( C(q_l^{k+1}), C(q_i^k) + c(q_i^k, q_l^{k+1}) )
15.             if l ∉ Π(Q) then Add l to bucket Ψ_l in Π(Q) endif
16.         endfor
17.     endfor
18. endfor
19. ret-block
Figure 5: Algorithmic scheme for EV3
The time complexity of lines 4-11 in Fig. 5 is the time of finding Π(Q) times the maximum branching factor (B). Π(Q) is found using bucketsort. The complexity of bucketsort to sort n elements is O(n + m), with m being the number of buckets. In this case n = |Q| and m = max_{i∈Q} Ψ_i, which is clearly bounded by |Q| (see Fig. 4). In the best case (no back-edge requires a backtracking recomputation) the resulting time complexity is, therefore, O(|Q| · B). In the worst case there will be a linear-on-|Q| number of back-edges requiring a backtracking recomputation of all the paths already computed 3 [20], leading to O(|Q|² · B) time complexity.
EV3 and EV3.V2 can parse an input string x in O(|x| · |Q| · B) and O(|x| · |Q|² · B) in the best and worst cases, respectively. The performance in the average case not only depends on the structure of the FSMs but also on the number of back-edges that require a backtracking computation from the state that is reached by them. A theoretical formulation of this average cost is very difficult and it requires assumptions on probabilistic distributions over the space of possible FSMs, which is not always feasible.
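A compact Python sketch of the EV3 idea for the deletion pass is given below. It is not the authors' implementation (psi and del_cost are illustrative names): states are visited in Ψ order and a backtrack is triggered only when a back-edge improves the score of an already-visited state.

def relax_deletions_ev3(C, del_cost, psi):
    order = sorted(C, key=lambda q: psi[q])      # bucketsort in practice
    i = 0
    while i < len(order):
        q = order[i]
        restart_at = None
        for succ, w in del_cost.get(q, ()):
            if C[q] + w < C[succ]:
                C[succ] = C[q] + w
                if psi[succ] <= psi[q]:          # back-edge that changed a score
                    pos = order.index(succ)
                    restart_at = pos if restart_at is None else min(restart_at, pos)
        i = restart_at if restart_at is not None else i + 1
    return C

psi = {0: 0, 1: 1, 2: 2}
scores = {0: 0.0, 1: float("inf"), 2: float("inf")}
print(relax_deletions_ev3(scores, {0: [(1, 2.0)], 1: [(2, 1.0)], 2: [(0, 5.0)]}, psi))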
6 Beam Search
Beam Search [17] (BS) is a classical acceleration technique for the VA. This search technique often yields approximately optimal (or even optimal) solutions while drastically cutting down the search space [22, 5]. In this respect, BS is comparable to other "clever" strategies based on A* search, such as that proposed in [14]. The idea is to keep only a "beam" of "promising" paths at each stage of the trellis. The beam width is a constant value to be empirically tuned to achieve an adequate tradeoff between the efficiency and the accuracy of the search: the lower this parameter, the lower the accuracy and the lower the computing time, and vice versa.
The implementation of BS consists in keeping only the paths ("visited states")
with a score which is lower than a given bound. For the sake of efficiency, this bound
is implemented by adding the currently found lowest score to the beam width [22]. Q is implemented as a doubly linked list, leaving the first place for this lowest-score state, to avoid overhead. This strategy generally results in no significant differences with regard to a strict implementation of BS.
3 This upper bound is quite pessimistic since it assumes that, after the depth-first ordering, the resulting number of back-edges that change the score of some state is linear on |Q|.
The extension of the algorithms EV1,
EV1PQ and EV2 to perform BS is straightforward [4, 5]. In the case of EV3 and
EV3.V2 the extension is easy thanks to the ordering key Ψ, which allows for building the list Π(Q) as the states are visited (see Sect. 5). EV1, EV1PQ and EV2 use the linked list, but slight differences in the number of visited states can exist due to the fact that EV1 and EV1PQ parse deletion transitions in score order. EV3 also uses the linked list to be able to tightly follow this BS strategy. Again, slight differences can exist, but only due to the fact that EV3 parses only the deletions in "topological" order, while EV3.V2 parses all transitions following this order. This is a problem for the computation of the first bound value to be used in the next parsing stage k + 1, since it is likely that the first state in Π(Q) is not the minimum-score one. EV3.V2
overcomes this problem by computing an approximate bound value as the minimum
cost edge of the lowest-score state found in parsing stage k plus the beam width. In
practice, the differences in the number of visited states among all algorithms prove
to be negligible [5].
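For concreteness, the pruning step shared by all the BS variants can be sketched as follows (illustrative Python; beam_width plays the role of the parameter α in the text).

def prune_beam(C, beam_width):
    best = min(C.values())
    bound = best + beam_width
    return {q: c for q, c in C.items() if c <= bound}

print(prune_beam({0: 1.0, 1: 3.5, 2: 9.0}, 5.0))   # state 2 is discarded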
7 Experiments and Results
Two series of experiments were carried out in order to assess both the effectiveness
(parsing results) of ECP and (mainly) the efficiency (speed) of the algorithms previously
discussed. In the first case, ECP was used to "clean" artificially distorted
sentences from a Language Learning task called "Miniature Language Acquisition"
In the second case, ECP was applied to recognise planar shapes (hand-
written digits), coded by chain-coding the contours of the corresponding images [18].
In both cases, the required stochastic FSMs were automatically learned by means
of the k-TSI Grammatical Inference algorithm proposed in [12]. This algorithm
infers a (stochastic) FSM that accepts the smallest k-Testable Language in the
Strict Sense (k-TS language) that contains the training sentences. Stochastic k-TS
4 This consists in a pseudo-natural language for describing simple visual scenes, with a vocabulary
of 26 words.
languages are equivalent to the languages modeled by the well-known N-GRAMS, with N = k. Increasing values of k from 2 to 10 (2 to 7) were used in the first (second) series of experiments. This yielded FSMs of increasing size, as required for studying the computational behaviour of the different algorithms. In all cases, only roughly hand-tuned error-model parameters were used. All the experiments were carried out on an HP9000 Unix Workstation (Model 735) performing 121 MIPS.
7.1 Language Processing Experiments
A set of nine stochastic FSMs, ranging from 26 to 71,538 states, which were automatically learned from 50,000 (clean) sentences of the MLA task [9], was used in these experiments. The test set consisted of 1,000 sentences different from those used in training. It was distorted using a conventional distortion model [16] in order to simulate the kinds of errors typically faced in speech processing tasks. This generally resulted in sentences which no longer belonged to the languages of the learned FSMs. Three different percentages of global distortion, evenly distributed among the insertion, deletion and substitution parameters, were used: 1%, 5% and 10%.
The effectiveness of ECP is summarised in Table 1. The quality of parsing was measured in terms of both the word error rate (WER) 5 between the original (undistorted) test sentences and those obtained by ECP (from the corresponding distorted sentences), and the test-set Perplexity (PP) 6 . These results were obtained without BS and were identical for all the algorithms. For adequately learned k-TS models, the distortion (WER) of the test sentences was reduced by a factor ranging from 2 to 3; the best results were obtained by the 6-TS model with 3,231 states. Perplexity figures closely follow the tendency of WER. Values of k greater than 6 tended to degrade the results, due to the lack of generalisation usually exhibited
5 minimum number of insertions, substitutions and deletions.
6 2 to the power of the Cross-Entropy, which is the sum of the maximum log-likelihood score for
each input distorted test sentence divided by the overall number of parsed words [23].
by k-TS models as k is increased beyond a certain value (6 in this case). This is
explicitly assessed by the column labelled N in Table 1, which shows the number
of original (undistorted) test sentences that would not have been accepted by the
corresponding k-TS models without ECP 7 .
Table 1: Parsing results in terms of Word Error Rate (WER-%) and test-set Perplexity (PP) for each FSM without BS in the LP experiments
                          Distortion 1%   Distortion 5%   Distortion 10%
|Q| (value of k)    N     WER    PP       WER    PP       WER    PP
26 (2)              0     0.43   4.36     2.19   5.55     4.80   7.28
71,538 (10)         475   3.73   3.48     4.87   4.15     6.32   5.29
Table 2: WERs and PPs for each Beam width using the FSM with 3,231 states in the LP experiments
Distortion    WER    PP      WER    PP      WER    PP
5%            3.46   4.23    1.70   3.87    1.70   3.87
10%           6.08   5.71    3.59   5.17    3.32   5.15
Table 2 shows the effect of using BS in the ECP process for the 6-TS model (3,231 states) using EV1. Only negligible differences were observed for the other algorithms, due to the distinct ways BS is implemented [5]. Four increasing values of beam width (α) were tested: 5, 10, 20 and 40 (∞ means no BS). Setting α to 20 provided results identical to those achieved by a full search (see also Table 1), while a further decreasing of α gracefully degrades the results.
Fig. 6 shows the relative efficiency of the different algorithms for increasing FSM
7 It is notable that even with almost 50% of the undistorted test sentences rejected without ECP, a noticeable distortion reduction is still achieved for 10% and 5%-distorted sentences. Therefore, ECP proves useful not only in dealing with imperfect input, but also in improving the effectiveness of imperfect FSMs.
Figure 6: Average computing times (in centiseconds) of the different algorithms measured without BS for 10% of distortion in the LP experiments (axes: number of states in the FSM vs. computation time in centiseconds)
sizes without BS. Only results for 10% of distortion are reported; the results were similar for 5% and 1%. The variance observed in these results was negligible. More specifically, the standard deviation of the computing time per symbol parsed was never greater than 6.7% of the average computing time per symbol parsed. This means that, with a probability close to 1, the real computing times will match their corresponding expected values. A dramatically higher computational demand of EV1 is clear in this figure. The use of priority queues (EV1PQ) to compute W contributes to alleviating the computational cost, though the resulting |Q| · B · log |Q| time complexity is still exceedingly high. The two implementations of EV3 show a much better (linear-on-|Q|) performance 8 . EV2 also shows linear time complexity, but with a slope which is larger than that of EV3. This is due to the number of iterations required for the parsing of deletion transitions [5].
Figs. 7 and 8 show the impact of BS on the performance of the different algorithms. Fig. 7 shows the effect of increasing the distortion rate for a fixed α 9 (20; i.e., results
8 The observed computing times of the preprocessing stage for EV3 and EV3.V2 (see Sect. 5) were negligible, ranging from less than a centisecond (for the smallest FSM) to 36 centiseconds (for the largest FSM).
9 Differences between scores through consecutive parsing stages tend to level off as larger errors are produced. Therefore, an increase in the distortion produces an increase in the rate of visited states (for a given α).
Figure 7: Average computing times (in centiseconds) observed for each distortion rate (1%, 5% and 10%) in the LP experiments (axes: number of states in the FSM vs. computation time in centiseconds)
identical to a full search). With 1% of distortion, the number of visited states is very small and all the algorithms perform fairly well. With 5% and 10% of distortion, EV1 is again highly cost-demanding. EV1PQ also tends to have higher computational costs. EV2 and EV3 show the best performance and, by also using BS, EV3 gets the best results for the larger FSMs. Finally, Fig. 8 shows the effect of the different values of α for the highest distortion rate (10%). Significant differences exist for α ≥ 20. It should be taken into account that when α is smaller than 20, only suboptimal results are produced (see Tables 1 and 2). The best results are systematically achieved by EV3 and EV2, although EV3 clearly outperforms EV2 when the number of visited states is higher than a certain bound. See [5] for further details on this experimental study.
7.2 OCR Experiments
In these experiments, 2,400 images of handwritten digits were used. Strings representing these images were obtained by following their outer contour using a chain-code of eight directions [18]. The resulting string corpus was randomly split into
Figure 8: Average computing times (in centiseconds) observed for 10% of distortion and Beam width values 5, 10, 20 and 40 in the LP experiments (axes: number of states in the FSM vs. computation time in centiseconds)
disjoint training and test sets of the same size. The training set was used to automatically learn six different stochastic k-TS FSMs per digit, using values of k from 2 to 7. For each value of k, the 10 k-TS models learned for each of the 10 classes (digits) were merged into a whole global model, with class-dependent labelled final states. This resulted in six stochastic FSMs of increasing size, ranging from 80 to 18,416 states. Testing was carried out using the maximum likelihood classification rule, as mentioned in Sect. 1. Table 3 shows the overall recognition rates achieved with ECP without BS. The FSM with 18,416 states gets the best results; a further decreasing of the value of k tended to degrade the effectiveness. Informal tests show that recognition rates without ECP drop below 50% for all the models.
Table 3: Recognition rate (%) achieved in the OCR experiments for each FSM (k = 2, ..., 7)
k              2      3      4      5      6      7
Recog. rate    75.8   90.8   95.1   97.2   97.2   97.5
Figure 9: Average computing times (in centiseconds) of the different algorithms measured without BS in the OCR experiments (axes: number of states in the FSM vs. computation time in centiseconds)
Fig. 9 shows the observed average parsing times for each ECP algorithm without BS. The observed variance was also negligible (the observed standard deviation of the computing time per symbol parsed was never greater than 7.2% of the average computing time per symbol parsed). Again, the performance achieved by EV3 10 was much better than that of the other algorithms. The performance of EV2 was, in this case, worse than the one observed in the previous set of experiments (in fact, even EV1PQ outperformed EV2). This is because EV2 significantly depends on specific conditions of the data (see [5]).
The impact of BS was studied for the model supplying the best recognition results, namely the 7-TS model with 18,416 states (see Table 3). The results are shown in Table 4. In this case, none of the values of α tested yielded a recognition rate identical to that achieved by a full search (97.5%). However, a width of 30 appears to be a good tradeoff between efficiency and recognition rate. The fastest performance using BS was again achieved by EV2 and EV3. The differences among the different algorithms were more significant for the widest beam (30). In this case, EV3 was about 1.5 times faster than EV2 and 1.7 times faster than EV1PQ.
10 The observed computing times of the preprocessing stage for EV3 and EV3.V2 were again negligible, ranging from less than a centisecond to 7 centiseconds.
Table 4: Impact of using BS on recognition rate (%) and computing time (centiseconds) in the OCR experiments
α    Recog. rate    EV3    EV3.V2    EV2    EV1PQ    EV1
8 Concluding Remarks
Several techniques have been proposed for a cost-efficient implementation of Finite-State Error-Correcting Viterbi Parsing. This is a key process in many applications in areas such as Syntactic Pattern Recognition, Language Processing, Grammatical Inference, Coding Theory, etc. A significant improvement in parsing speed with regard to previous approaches can be achieved by the EV3 algorithm which is proposed here. Furthermore, a dramatic acceleration can be achieved by applying suboptimal Beam Search strategies to the proposed algorithms. All the algorithms developed allow for integration with this search strategy, although minor differences in the rate of visited states lead to small differences in performance. In this case, EV3 has also exhibited better behaviour than all the other algorithms.
9 Acknowledgements
The authors wish to thank the anonymous reviewers for their careful reading and
valuable comments. Work partially funded by the European Union and the Spanish
CICYT under contracts IT-LTR-OS-30268 and TIC97-0745-C01/02.
--R
The Design and Analysis of Computer Algorithms.
"Algorithms for Finding Patterns in Strings"
"Fast Viterbi decoding with Error Correction"
"Two Different Approaches for Cost-efficient Viterbi Parsing with Error Correction"
"Different Approaches for Efficient Error-Correcting Viterbi Parsing: An Experimental Comparison"
"Simplifying Language through Error-Correcting Decoding"
"Decoding for Channels with Insertions, Deletions and Substitutions with Applications to Speech Recognition"
"Two Extensions of the Viterbi Algorithm"
"Miniature Language Acquisition: A touchstone for cognitive science"
"The Viterbi algorithm"
"Inference of k-testable languages in the strict sense and application to
An Introduction.
"Efficient Priority-First Search Maximum-Likelihood Soft-Decision Decoding of Linear Block Codes"
"Correcting Dependent Errors in Sequences Generated by Finite-State Processes"
"Evaluating the performance of connected-word speech recognition systems"
"The Harpy Speech Recognition System"
"A Comparison of Syntactic and Statistical Techniques for Off-Line OCR"
"On the BCJR Trellis for Linear Block Codes"
Time Warps
"Fast and Accurate Speaker Independent Speech Recognition using structural models learnt by the ECGI Algorithm"
"Codes and Iterative Decoding on General Graphs"
--TR
--CTR
Christoph Ringlstetter , Klaus U. Schulz , Stoyan Mihov, Orthographic Errors in Web Pages: Toward Cleaner Web Corpora, Computational Linguistics, v.32 n.3, p.295-340, September 2006
Juan-Carlos Amengual , Alberto Sanchis , Enrique Vidal , Jos-Miguel Bened, Language Simplification through Error-Correcting and Grammatical Inference Techniques, Machine Learning, v.44 n.1-2, p.143-159, July-August 2001
Francisco Casacuberta , Enrique Vidal, Learning finite-state models for machine translation, Machine Learning, v.66 n.1, p.69-91, January 2007 | depth-first topological sort;shape recognition;viterbi algorithm;priority queues;error-correcting parsing;beam search;sequence alignment;language processing;bucketsort |
290216 | Simple Fast Parallel Hashing by Oblivious Execution. | A hash table is a representation of a set in a linear size data structure that supports constant-time membership queries. We show how to construct a hash table for any given set of n keys in O(lg lg n) parallel time with high probability, using n processors on a weak version of a concurrent-read concurrent-write parallel random access machine (crcw pram). Our algorithm uses a novel approach of hashing by "oblivious execution" based on probabilistic analysis. The algorithm is simple and has the following structure: Partition the input set into buckets by a random polynomial of constant degree. For t:= 1 to O(lg lg n) do Allocate Mt memory blocks, each of size Kt. Let each bucket select a block at random, and try to injectively map its keys into the block using a random linear function. Buckets that fail carry on to the next iteration. The crux of the algorithm is a careful a priori selection of the parameters Mt and Kt. The algorithm uses only O(lg lg n) random words and can be implemented in a work-efficient manner. | Introduction
Let S be a set of n keys drawn from a finite universe U . The hashing problem is to construct a
with the following attributes:
Injectiveness: no two keys in S are mapped by H to the same value,
Space efficiency: both m and the space required to represent H are O(n), and
Time efficiency: for every x 2 U , H(x) can be evaluated in O(1) time by a single processor.
Such a function induces a linear space data structure, a perfect hash table, for representing S. This
data structure supports membership queries in O(1) time.
This paper presents a simple, fast and efficient parallel algorithm for the hashing problem.
Using n processors, the running time of the algorithm is O(lg lg n) with overwhelming probability,
and it is superior to previously known algorithms in several respects.
Computational models As a model of computation we use the concurrent-read concurrent-
parallel random access machine (crcw pram) family (see, e.g., [35]). The members of
this family differ by the outcome of the event where more than one processor attempts to write
simultaneously into the same shared memory location. The main sub-models of crcw pram
in descending order of power are: the Priority ([29]) in which the lowest-numbered processor
succeeds; the Arbitrary ([42]) in which one of the processors succeeds, and it is not known in
advance which one; the Collision + ([9]) in which if different values are attempted to be written,
a special collision symbol is written in the cell; the Collision ([15]) in which a special collision
symbol is written in the cell; the Tolerant ([32]) in which the contents of that cell do not change;
and finally, the less standard Robust ([7, 34]) in which if two or more processors attempt to write
into the same cell in a given step, then, after this attempt, the cell can obtain any value.
1.1 Previous Work
Hash tables are fundamental data structures with numerous applications in computer science. They
were extensively studied in the literature; see, e.g., [37, 40] for a survey or [41] for a more recent one.
Of particular interest are perfect hash tables, in which every membership query is guaranteed to be
completed in constant time in the worst case. Perfect hash tables are perhaps even more significant
in the parallel context, since the time for executing a batch of queries in parallel is determined by
the slowest query.
Fredman, Koml'os, and Szemer'edi [16] were the first to solve the hashing problem in expected
linear time for any universe size and any input set. Their scheme builds a 2-level hash function: a
level-1 function splits S into subsets ("buckets") whose sizes are distributed in a favorable manner.
Then, an injective level-2 hash function is built for each subset by allocating a private memory
block of an appropriate size.
This 2-level scheme formed a basis for algorithms for a dynamic version of the hashing problem,
also called the dictionary problem, in which insertions and deletions may change S dynamically.
Such algorithms were given by Dietzfelbinger, Karlin, Mehlhorn, Meyer auf der Heide, Rohnert and
Tarjan [12], Dietzfelbinger and Meyer auf der Heide [14], and by Dietzfelbinger, Gil, Matias and
Pippenger [11].
In the parallel setting, Dietzfelbinger and Meyer auf der Heide [13] presented an algorithm for
the dictionary problem. for each fixed ffl - 0, n arbitrary dictionary instructions (insert, delete, or
lookup), can be executed in O(n ffl ) expected time on a a n 1\Gammaffl -processor Priority crcw. Matias
and Vishkin [39] presented an algorithm for the hashing problem that runs in O(lg n) expected
time using O(n= lg n) processors on an Arbitrary crcw. This was the fastest parallel hashing
algorithm previous to our work. It is based on the 2-level scheme and makes extensive use of
counting and sorting procedures.
The only known lower bounds for parallel hashing were given by Gil, Meyer auf der Heide and Wigderson [27]. In their (rather general) model of computation, the required number of parallel steps is Ω(lg* n). They also showed that in a more restricted model, where at most one processor may simultaneously work on a key, parallel hashing time is Ω(lg lg n). They also gave an algorithm which yields a matching upper bound if only function applications are charged and all other operations (e.g., counting and sorting) are free. Our algorithm falls within the realm of the above-mentioned restricted model and matches the Ω(lg lg n) lower bound while charging for all operations on the concrete pram model.
1.2 Results
Our main result is that a linear static hash table can be constructed in O(lg lg n) time with high
probability and O(n) space, using n processors on a crcw pram. Our algorithm has the following
properties:
Time optimality It is the best possible result that does not use processor reallocations, as shown
in [27]. Optimal speed-up can be achieved with a small penalty in execution time. It is a significant
improvement over the O(lg n) time algorithm of [39].
Reliability Time bound O(lg lg n) is obeyed with high probability; in contrast, the time bound
of the algorithm in [39] is guaranteed only with constant probability.
Simplicity It is arguably simpler than any other hashing algorithm previously published. (Never-
theless, the analysis is quite involved due to tight tradeoffs between the probabilities of conflicting
events.)
Reduced randomness It is adapted to consume only O(lg lg n) random words, compared to the Ω(n) random words that were previously used.
Work optimality A work optimal implementation is presented, in which the time-processor
product is O(n) and the running time is increased by a factor of O(lg n); it also requires only
O(lg lg n) random words.
Computational model If we allow lookup time to be O(lg lg n) as well, then our algorithm can
be implemented on the Robust crcw model.
Our results can be summarized in the following theorem.
Theorem 1 Given a set of n keys drawn from a universe U, the hashing problem can be solved using O(n) space: (i) in O(lg lg n) time with high probability, using n processors, or (ii) in O(lg lg n · lg n) time and O(n) operations with high probability. The algorithms run on a crcw pram where no reallocation of processors to keys is employed, and use O(lg lg n · lg n) random bits.
The previous algorithms implementing the 2-level scheme, either sequentially or in parallel, are based on grouping the keys according to the buckets to which they belong, and require learning the size of each bucket. Each bucket is then allocated a private memory block whose size is dependent on the bucket size. This approach relies on techniques related to sorting and counting, which require Ω(lg n / lg lg n) time to be solved by a polynomial number of processors, as implied by the lower bound of Beame and Håstad [4]. This lower bound holds even for randomized algorithms. (More recent results have found other, more involved, ways to circumvent these barriers; cf. [38, 3, 26, 30].)
We circumvent the obstacle of learning buckets sizes for the purpose of appropriate memory
allocation by a technique of oblivious execution, sketched by Figure 1.
1. Partition the input set into buckets by a random polynomial of constant degree.
2. For t := 1 to O(lg lg n) do
(a) Allocate M t memory blocks, each of size K t .
(b) Let each bucket select a block at random, and try to injectively map its
into the block using a random linear function; if the same block was
selected by another bucket, or if no injective mapping was found, then
the bucket carries on to the next iteration.
Figure
1: The template for the hashing algorithm.
The crux of the algorithm is a careful a priori selection of the parameters M t and K t . For
each iteration t, M t and K t depend on the expected number of active buckets and the expected
distribution of bucket sizes at iteration t in a way that makes the desired progress possible (or
rather, likely).
The execution is oblivious in the following sense: All buckets are treated equally, regardless of
their sizes. The algorithm does not make any explicit attempt to estimate the sizes of individual
buckets and to allocate memory to buckets based on their sizes, as is the case in the previous
implementations of the 2-level scheme. Nor does it attempt to estimate the number of active
buckets or the distribution of their sizes.
The selection of the parameters M t and K t in iteration t is made according to a priori estimates
of the above random variables. These estimates are based on properties of the level-1 hash function
as well as on inductive assumptions about the behavior of previous iterations.
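For illustration, a sequential Python simulation of the template of Figure 1 is sketched below. The parameter functions M and K stand in for the (yet unspecified) M_t and K_t, the prime p is an arbitrary assumption, and a single random linear trial function per round is used, as in the figure; this is a sketch, not the paper's implementation.

import random

def oblivious_rounds(buckets, rounds, M, K, p=2**31 - 1):
    placement = {}                       # bucket id -> (block index, a, b, K_t)
    active = list(buckets)
    for t in range(rounds):
        Mt, Kt = M(t), K(t)
        choice = {b: random.randrange(Mt) for b in active}
        taken = {}
        for b, blk in choice.items():
            taken[blk] = None if blk in taken else b      # detect block collisions
        survivors = []
        for b in active:
            a, c = random.randrange(1, p), random.randrange(p)
            slots = {((a * x + c) % p) % Kt for x in buckets[b]}
            if taken[choice[b]] == b and len(slots) == len(buckets[b]):
                placement[b] = (choice[b], a, c, Kt)       # level-2 function found
            else:
                survivors.append(b)                        # bucket stays active
        active = survivors
    return placement, active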
Remark The hashing result demonstrates the power of randomness in parallel computation on crcw machines with memory restricted to linear size. Boppana [6] considered the problem of Element Distinctness: given n integers, decide whether or not they are all distinct. He showed that solving Element Distinctness on an n-processor Priority machine with bounded memory requires Ω(lg n / lg lg n) time. "Bounded memory" means that the memory size is an arbitrary function of n but not of the range of the input values. It is easy to see that if the memory size is Ω(n²) then Element Distinctness can be solved in O(1) expected time by using hash functions (Fact 2.2). This, however, does not hold for linear size memory. Our parallel hashing algorithm implies that when incorporating randomness, Element Distinctness can be solved in expected O(lg lg n) time using n processors on Collision+ (which is weaker than the Priority model) with linear memory size.
1.3 Applications
The perfect hash table data structure is a useful tool for parallel algorithms. Matias and Vishkin [39]
proposed using a parallel hashing scheme for space reduction in algorithms in which a large amount
of space is required for communication between processors. Such algorithms become space efficient
and preserve the number of operations. The penalties are in introducing randomization and in
having some increase in time. Using our hashing scheme, the time increase may be substantially
smaller.
There are algorithms for which, by using the scheme of [39], the resulting time increase is O(lg n). By using the new scheme, the time increase is only O(lg lg n · lg n). This is the case in the construction of suffix trees for strings [2, 17] and in the naming assignment procedure for substrings over large alphabets [17]. For other algorithms, the time increase in [39] was O(lg lg n) or O((lg lg n)²), while our algorithm leaves the expected time unchanged. Such is the case in integer sorting over a polynomial range [33] and over a super-polynomial range [5, 39].
More applications are discussed in the conclusion section.
1.4 Outline
The rest of the paper is organized as follows. Preliminary technicalities used in our algorithm
and its analysis are given in Section 2. The algorithm template is presented in greater detail in
Section 3. Two different implementations, based on different selections of M t and K t , are given
in the subsequent sections. Section 4 presents an implementation that does not fully satisfy the
statements of Theorem 1 but has a relatively simple analysis. An improved implementation of the
main algorithm, with more involved analysis, is presented in Section 5. In Section 6 we show how
to reduce the number of random bits. Section 7 explains how the algorithm can be implemented
with an optimal number of operations. The model of computation is discussed in Section 8, where
we also give a modified algorithm for a weaker model. Section 9 briefly discusses the extension of
the hashing problem, in which the input may consist of a multi-set. Finally, conclusions are given
in Section 10.
Preliminaries
The following inequalities are standard (see, e.g. [1]):
Markov's inequality Let ω be a random variable assuming non-negative values only. Then, for T > 0,
    Prob( ω ≥ T ) ≤ E(ω) / T .                              (1)
Chebyshev's inequality Let ω be a random variable. Then, for T > 0,
    Prob( |ω − E(ω)| ≥ T ) ≤ Var(ω) / T² .                  (2)
Chernoff's inequality Let ω be a binomial random variable. Then, for T ≥ 1,
    Prob( ω ≥ T · E(ω) ) ≤ ( e^{T−1} / T^T )^{E(ω)} .       (3)
Terminology for probabilities We say that an event occurs with n-dominant probability if it occurs with probability 1 − n^{−Ω(1)}. Our usage of this notation is essentially as follows. If a poly-logarithmic
number of events are such that each one of them occurs with n-dominant probability,
then their conjunction occurs with n-dominant probability as well. We will therefore usually be
satisfied by demonstrating that each algorithmic step succeeds with n-dominant probability.
Fact 2.1 Let independent binary random variables, and let
2. -dominant probability, and
3.
Proof. Recall the well known fact that
1-i-n
are pairwise independent
1. By Inequality (2)
2. If - 2 - E (!) 2 then by Inequality (2)
then by Inequality (1)
3. Follows immediately from the above.
Hash functions For the remainder of this section, let S ⊆ U be fixed, n = |S|, and let h : U → {0, ..., m−1} be a function. The function h splits the set S into buckets; bucket i is the subset {x ∈ S : h(x) = i}, and its size is denoted s_i. An element of S collides if its bucket is not a singleton. The function is injective, or perfect, if no element collides. Let
    B_r = Σ_{0 ≤ i < m} ( s_i choose r ) .
A function is injective if and only if B_2 = 0; B_2 is the number of collisions of pairs of keys. More generally, B_r is the number of r-tuples of keys that collide under h.
Polynomial hash functions Let p ≥ |U| be a prime. The class of degree-d polynomial hash functions, d ≥ 1, mapping U into {0, ..., m−1}, is
    H^d_m = { h : h(x) = ( (c_d x^d + · · · + c_1 x + c_0) mod p ) mod m, for some c_0, ..., c_d ∈ {0, ..., p−1} } .
In the rest of this section we consider the probability space in which h is selected uniformly at random from H^d_m. The following fact and corollary were shown by Fredman, Komlós, and Szemerédi [16], and before by Carter and Wegman [8]. (The original proof was only for the case d = 1; however, the generalization for d > 1 is straightforward.)
Fact 2.2 E(B_2) ≤ n²/m.
Corollary 2.3 The hash function h is injective on S with probability at least 1 − n²/m.
Proof. The function h is injective if and only if B_2 = 0. By Fact 2.2 and Markov's inequality, the probability that h is not injective is Prob(B_2 ≥ 1) ≤ E(B_2) ≤ n²/m.
Fact 2.4 If d - 3 then
For r - 0, let A r be the rth moment of the distribution of s i ,
0-i!m
s r
It is easy to see that A Further, it can be shown that if were
completely random function, then A r is linear in n with high probability for all fixed r - 2. For
polynomial hash functions, Dietzfelbinger et al. [12] proved the following fact:
Fact 2.5 Let r - 0, and m - n. If d - r then there exists a constant oe r ? 0, depending only on r,
such that
Tighter estimates on the distribution of A r were given in [11]: (For completeness, the proofs are
attached in Appendix B.)
Fact 2.6 Let r - 2. If d - r then
1-j-r
r
where
\Phi r
\Psi is the Stirling number of the second kind. 1
Fact 2.7 Let ffl ? 0 be constant. If d -
probability.
the Stirling number of the second kind ,
is the number of ways of
partitioning a set of k distinct elements into j nonempty subsets (e.g., [31, Chapter 6]).
3 A Framework for Hashing by Oblivious Execution
3.1 An algorithm template
The input to the algorithm is a set S of n keys, given in an array. The hashing algorithm works in two
stages, which correspond to the two level hashing scheme of Fredman, Koml'os, and Szemer'edi [16].
In the first stage a level-1 hash function f is chosen. This function is selected at random from
the class H d
, where d is a sufficiently large constant to be selected in the analysis, and
The hash function f partitions the input set into m buckets ; bucket i, is the
(i). The first stage is easily implemented in constant time. The main effort is in the
implementation of the second stage, which is described next.
The second level of the hash table is built in the second stage of the algorithm. For each bucket
a private memory region, called a block , is assigned. The address of the memory block allocated
to bucket i is recorded in cell i of a designated array ptr of size m. Also, for each bucket, a level-2
function is constructed; this function injectively maps the bucket into its block. The descriptions
of the level-2 functions are written in ptr.
Let us call a bucket active if an appropriate level-2 function has not yet been found, and inactive
otherwise. At the beginning of the stage all buckets are active, and the algorithm terminates when
all buckets have become inactive. The second stage consists of O(lg lg n) iterations, each executing
in constant time. The iterative process rapidly reduces the number of active buckets and the
number of active keys.
At each iteration t, a new memory segment is used. This segment is partitioned into M t blocks
of size K t each, where M t and K t will be set in the analysis. Each bucket and each key is associated
with one processor. The operation of each active bucket in each iteration is given in Figure 2.
Allocation: The bucket selects at random one of the M t memory blocks. If the same
block was selected by another bucket, then the bucket remains active and does not
participate in the next step.
Hashing : The bucket selects at random two functions from H 1
, and then tries to
hash itself into the block separately by each of these functions. If either one of
the functions is injective, then its description and the memory address of the block
are written in the appropriate cell of array ptr and the bucket becomes inactive.
Otherwise, the bucket remains active and carries on to the next iteration.
Figure
2: The two steps of an iteration, based on oblivious execution.
In a few of the last iterations, it may become necessary for an iteration to repeat its body more
than once, but no more than a constant number of times. The precise conditions and the number
of repetitions are given in Section 5.
The hash table constructed by the algorithm supports lookup queries in constant time. Given
a key x, a search for it begins by reading the cell ptr[f(x)]. The contents of this cell defines the
level-2 function to be used for x as well as the address of the memory block in which x is stored.
The actual offset in the block in which x is stored is given by the injective level-2 hash function
found in the Hashing step above.
3.2 Implementations
The algorithm template described above constitutes a framework for building parallel hashing
algorithms. The execution of these algorithms is oblivious in the sense that the iterative process
of finding level-2 hash functions does not require information about the number or size of active
buckets. Successful termination and performance are dependent on the a priori setting of the
parameters d, M t and K t . The effectiveness of the allocation step relies on having sufficiently many
memory blocks; the effectiveness of the hashing step relies on having sufficiently large memory
blocks. The requirement of keeping the total memory linear imposes a tradeoff between the two
parameters. The challenge is in finding a balance between M t and K t , so as to achieve a desired
rate of decay in the number of active buckets. The number of active keys can be deduced from the
number of active buckets based on the characteristics of the level-1 hash function, as determined
by d.
We will show two different implementations of the algorithm template, each leading to an
analysis of a different nature. The first implementation is given in Section 4. There, the parameters
are selected in such a way that in each iteration, the number of active buckets is expected to
decrease by a constant factor. Although each iteration may fail with constant probability, there is
a geometrically decreasing series which bounds from above the number of active buckets in each
iteration. After O(lg lg n) iterations, the expected number of active keys and active buckets becomes
n=(lg n)
\Omega\Gamma1/ . The remaining keys are hashed in additional constant time using a different approach,
after employing an O(lg lg n) time procedure.
From a technical point of view, the analysis of this implementation imposes relatively modest
requirements on the level-1 hash function, since it only uses first-moment analysis (i.e., Markov's
inequality). Moreover, it only requires a simpler version of the hashing step, in which only one
hash function from H 1
is being used. The expected running time is O(lg lg n), but this running
time is guaranteed only with (arbitrary small) constant probability.
The second implementation is given in Section 5. This implementation is characterized by a
doubly-exponential rate of decrease 2 in the number of active buckets and keys. After O(lg lg n)
sequence v0 ; decreases in an exponential rate if for all t, v t - v0=(1 the sequence
decreases in a doubly-exponential rate if for all t, v t - v0=2 (1+ffl) t
for some ffl ? 0.
iterations all keys are hashed without any further processing. This implementation is superior in
several other respects: its time performance is with high probability, each key is only handled by its
original processor, and it forms a basis for further improvements in reducing the number of random
bits.
From a technical point of view, the analysis of this implementation is more subtle and imposes
more demanding requirements on the level-1 hash function, since it uses second-moment analysis
(i.e., Chebyshev's inequality). Achieving a doubly-exponential rate of decrease required a more
careful selection of parameters, and was done using a "symbolic spreadsheet" approach.
Together, these implementations demonstrate two different paradigms for fast parallel randomized
algorithms, each involving a different flavor of analysis. One only requires an exponential rate
of decrease in problem size, and then relies on reallocation of processors to items. (Subsequent
works that use this paradigm and its extensions are mentioned in Section 10.) This paradigm is
relatively easy to understand and not too difficult to analyze, using a framework of probabilistic
induction and analysis by expectations. The analysis shows that each iteration succeeds with constant
probability, and that this implies an overall constant success probability. In contrast, the
second implementation shows that each iteration succeeds with n-dominant probability, and that
this implies an overall n-dominant success probability. The analysis is significantly more subtle,
and relies on more powerful techniques of second moment analysis. The second paradigm consists of
a doubly-exponential rate of decrease in the problem size, and hence does not require any wrap-up
step.
4 Obtaining Exponential Decrease
This section presents our first implementation of the algorithm template. Using a rather elementary
analysis of expectations, we show that at each iteration the problem size decreases by a constant
factor with (only) constant probability. The general framework described in Section 4.1 shows that
this implies that the problem size decreases at an overall exponential rate.
After O(lg lg n) iterations, the number of keys is reduced to n=(lg
n)\Omega\Gamma1/ . A simple load balancing
algorithm now allocates (lg
n)\Omega\Gamma21 processors to each remaining key. Using the excessive number of
processors, each key is finally hashed in constant time.
4.1 Designing by Expectation
Consider an iterative randomized algorithm, in which after each iteration some measure of the
problem decreases by a random amount. In a companion paper [22] we showed that at each
iteration one can actually assume that in previous iterations the algorithm was not too far from its
expected behavior. The paradigm suggested is:
Design an iteration to be "successful" with a constant probability under the assumption
that at least a constant fraction of the previous iterations were "successful".
It is justified by the following lemma.
Lemma 4.1 (probabilistic induction [22]) Consider an iterative randomized process in which,
for all t - 0, the following holds: iteration t with probability at least 1=2, provided that
among the first t iterations at least t=4 were successful. Then, with
probability\Omega\Gammail , for every t ? 0
the number of successful iterations among the first t iterations is at least t=4.
4.2 Parameters setting and analysis
Let the level-1 function be taken from H 10
Further, set
Let
is as in Fact 2.5.
To simplify the analysis, we allow the parameters K t and M t to assume non-integral values. In
actual implementation, they must be rounded up to the nearest integer. This does not increase
memory requirements by more than a constant factor; all other performance measures can only be
improved.
Memory usage The memory space used is
Lemma 4.2 Let v t be the number of active buckets at the beginning of iteration t. Then,
Proof. We assume that the level-1 function f satisfies
By Fact 2.5, (11) holds with probability at least 1=2.
The proof is by continued by using Lemma 4.1. Iteration t is successful if v t+1 - v t =2. Thus,
the number of active buckets after j successful iterations is at most m2 \Gammaj .
The probabilistic inductive hypothesis is that among the first t iterations at least t=4 were
successful, that is
The probabilistic inductive step is to show that
In each iteration the parameters K t and M t were chosen so as to achieve constant deactivation
probability for buckets of size at most
We distinguish between the following three types of events, "failures", which may cause a bucket
to remain active at the end of an iteration.
(i) Allocation Failure. The bucket may select a memory block which is also selected by other
buckets.
be the probability that a fixed bucket does not successfully reserve a block in the
allocation step. Since there are at most v t buckets, each selecting at random one of M t
memory blocks, ae 1 (t) - v t =M t . By (12) and (10)
(ii) Size Failure. The bucket may be too large for the current memory block size. As a result, the
probability for it to find a level-2 hash function is not high enough.
t be the number of buckets at the beginning of iteration t that are larger than fi t . By
(11),
Therefore, by (13),
Without loss of generality, we assume that if v t+1 - v t =2 then v
buckets that are needed become inactive, then some of them are still considered as active).
Thus, for the purpose of analysis,
We have then
(iii) Hash Failure. A bucket may fail to find an injective level-2 hash function even though it is
sufficiently small and it has uniquely selected a block.
Let ae 3 (t) be the probability that a bucket of size at most fi t is not successfully mapped into
a block of size K t in the hashing step. By Corollary 2.3 and (13)
A bucket of size at most fi t that successfully reserves a block of size K t , and that is successfully
mapped into it, becomes inactive. The expected number of active buckets at the beginning of
iteration therefore be bounded by
By Markov's inequality
proving the inductive step. The lemma follows.
Lemma 4.3 Let n t be the number of active keys at the beginning of iteration t. Then
for some constants c; ff ? 0.
Proof. It follows from (11), by using a simple convexity argument, that n t is maximal when all
active buckets at the beginning of iteration t are of the same size q t . In this case, by (11),
and
Therefore, by Lemma 4.2, the lemma follows.
By Lemma 4.3 and Lemma 4.2 we have an exponential decrease in the number of active keys
and in the number of active buckets with
probability\Omega\Gamma329 The number of active keys becomes
n=(lg n) c , for any constant c ? 0, after O(lg lg n) iterations with
4.3 A final stage
After the execution of the second stage with the parameter setting as described above, the number
of available resources (memory cells and processors) is a factor of (lg
n)\Omega\Gamma1/ larger than the number
of active keys. This resource redundancy makes it possible to hash the remaining active keys in
constant time, as described in the remainder of this section.
All keys that were not hashed in the iterative process will be hashed into an auxiliary hash
table of size O(n). Consequently, the implementation of a lookup query will consist of searching
the key in both hash tables.
The auxiliary hash table is built using the the 2-level hashing scheme. A level-1 function maps
the set of active keys into an array of size n. This function is selected at random from a class
of hash functions presented by Dietzfelbinger and Meyer auf der Heide [14, Definition 4.1]. It
has the property that with n-dominant probability each bucket is of size smaller than lg n [14,
Theorem 4.6(b')]. For the remainder of this section we assume that this event indeed occurs.
(Alternatively, we can use the n ffl -universal class of hash functions presented by Siegel [43].)
Each active key is allocated 2 lg n processors, and each active bucket is allocated 4(lg n) 3 mem-
ory. The allocation is done by mapping the active keys injectively into an array of size O(n= lg n),
and by mapping the indices of buckets injectively into an array of size O(n=(lg n) 3 ). These mappings
can be done in O(lg lg n) time with n-dominant probability, by using the simple renaming
algorithm from [20].
The remaining steps take constant time. We independently select 2 lg n linear hash functions
and store them in a designated array. These hash functions will be used by all buckets.
The memory allocated to each bucket is partitioned into 2 lg n memory blocks, each of size 2 lg 2 n.
Each bucket is mapped in parallel into its 2 lg n blocks by the 2 lg n selected linear hash functions,
and each mapping is tested for injectiveness. This is carried out by the 2 lg n processors allocated
to each key. For each bucket, one of the injective mappings is selected as a level-2 function. The
selection is made by using the simple 'leftmost 1' algorithm of [15].
If for any of the buckets all the mappings are not injective then the construction of the auxiliary
hash table fails.
Lemma 4.4 Assume that the number of keys that remain active after the iterative process is at most
n=(lg n) 3 . Then, the construction of the auxiliary hash table succeeds with n-dominant probability.
Proof. Recall that each bucket is of size at most lg n; A mapping of a bucket into its memory
block of size 2(lg n) 2 is injective with probability at least 1=2 by Corollary 2.3. The probability
that a bucket has no injective mapping is therefore at most 1=n 2 . With probability at least 1 \Gamma 1=n,
every bucket has at least one injective mapping.
It is easy to identify failure. If the algorithm fails to terminate within a designated time, it
can be restarted. The hash table will be therefore always constructed. Since the overall failure
probability is constant, the expected running time is O(lg lg n).
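The per-bucket work of this final stage can be sketched as follows (a sequential Python simulation under the stated assumptions; in the parallel algorithm the 2 lg n trials are performed simultaneously by the processors allocated to the bucket and the leftmost success is selected).

import math, random

def final_stage_level2(bucket, n, p=2**31 - 1):
    trials = 2 * max(1, int(math.log2(n)))          # 2 lg n candidate functions
    K = 2 * max(1, int(math.log2(n))) ** 2          # block of size 2 (lg n)^2
    for _ in range(trials):
        a, b = random.randrange(1, p), random.randrange(p)
        if len({((a * x + b) % p) % K for x in bucket}) == len(bucket):
            return (a, b, K)                        # injective mapping found
    return None                                     # construction fails; restart

print(final_stage_level2(random.sample(range(10**6), 5), n=1 << 20))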
5 Obtaining Doubly-Exponential Decrease
The implementation of the algorithm template that was presented in the previous section maintains
an exponential decrease in the number of active buckets throughout the iterations. This
section presents the implementation in which the number of active buckets decreases at a doubly-
exponential rate.
Intuitively, the stochastic process behind the algorithm template has a potential for achieving
doubly-exponential rate: If a memory block is sufficiently large in comparison to the bucket size then
the probability of the bucket to remain active is inversely proportional to the size of the memory
block (Corollary 2.3). Consider an idealized situation in which this is the case. If at iteration t
there are m t active buckets, each allocated a memory block of size K t , then at iteration t
will be m t =K t active buckets, and each of those could be allocated a memory block of size K 2
t ; at
iteration there will be m t =K 3
active buckets, each to be allocated a memory block of size K 4
and so on.
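The idealized recursion just described can be made concrete with a small numeric sketch (Python; the starting values are arbitrary): a fraction 1/K_t of the buckets survives each round, and the freed memory lets the block size square, so the active-bucket count drops doubly-exponentially.

m, K = 1_000_000.0, 2.0
for t in range(6):
    print(f"round {t}: about {m:.2f} active buckets, block size {K:.0f}")
    m, K = m / K, K * K    # m_{t+1} = m_t / K_t,  K_{t+1} = K_t^2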
In a less idealized setting, some buckets do not deactivate because they are too large for the
current value of K t . The number of such buckets can be bounded above by using properties of the
level-1 hash function. It must be guaranteed that the fraction of "large buckets" also decreases at
a doubly-exponential rate.
The illustrative crude calculation given above assumes that memory can be evenly distributed
among the active buckets. To make the doubly-exponential rate possible, the failure probability
of the allocation step, and hence the ratio m_t/M_t, must also decrease at a doubly-exponential rate.
Establishing a bound on the number of "large buckets" and showing that a large fraction of the
buckets are allocated memory blocks were also of concern in the previous section. There, however,
it was enough to show constant bounds on the probabilities of allocation failure, size failure and
hash failure.
The parameter setting which establishes the balance required for the doubly-exponential rate is
now presented. Following that is the analysis of the algorithm performance. The section concludes
with a description of how the parameters were selected.
5.1 Parameter setting
Let the level-1 function be taken from H
Further, set
Let
where
5.2 Memory usage
Proposition 5.1 The total memory used by the algorithm is O(n).
Proof. By (17), the memory used in the first stage is O(n). Summing the memory used in each
iteration t of the second stage over all iterations shows that the second stage also uses at most
O(n) memory in total.
5.3 Framework for time performance analysis
be defined by
The run-time analysis of the second stage is carried out by showing:
Lemma 5.2 With n-dominant probability, the number of active buckets in the beginning of iteration
t is at most m t .
The lemma is proved by induction on t, for t ≤ lg lg n / lg λ. The induction base follows
from the definition of m_t in (23) and the fact that there are at most n active buckets.
In the subsequent subsections, we prove the inductive step by deriving estimates on the number
of failing buckets in iteration t under the assumption that at the beginning of the iteration there
are at most m_t active buckets. Specifically, we show by induction on t that, with n-dominant
probability, the number of active buckets at the end of iteration t is at most m_{t+1}.
A bucket may fail to find an injective level-2 hash function. In estimating the number of
buckets that fail to find an injective level-2 function during an iteration, we assume that the bucket
uniquely selected a memory block and that the bucket size is not too large relative to the current
block size. Accordingly, as in Section 4.2, we distinguish between the following three types of
events, "failures", which may cause a bucket to remain active at the end of an iteration.
(i) Allocation Failure. The bucket may select a memory block which is also selected by other
buckets.
(ii) Size Failure. The bucket may be too large for the current memory block size. As a result, the
probability for it to find a level-2 hash function is not high enough.
(iii) Hash Failure. A bucket may fail to find a level-2 hash function even though it is sufficiently
small and it has uniquely selected a block.
We will provide estimates for the number of buckets that remain active due to any of the above
reasons: in Lemma 5.5 for case (i), in Lemma 5.6 for case (ii), and in Lemmas 5.7 and 5.8
for case (iii). The estimates are all shown to hold with n-dominant probability. The induction step
follows by adding all these estimates.
To wrap up, let
We can therefore infer:
Proposition 5.3 With n-dominant probability, the number of iterations required to deactivate all
buckets is at most lg lg n / lg λ.
5.4 Failures in Uniquely Selecting a Block
Lemma 5.4 Let ε > 0 be fixed, and suppose that either m_t > M_t^{1/2+ε} or m_t < M_t^{1/2−ε}.
Let ω be the random variable representing the number of buckets that fail to uniquely select a block.
Then, with M_t-dominant probability, ω ≤ 2m_t^2/M_t.
Proof. A bucket has probability at most m_t/M_t of having other buckets select the memory
block it selected. Therefore, E(ω) ≤ m_t^2/M_t. Further, ω is stochastically smaller than a binomially
distributed random variable ψ obtained by performing m_t independent trials, each with probability
m_t/M_t of success. If m_t > M_t^{1/2+ε}, the claimed bound follows from the tail estimate (3).
Otherwise, m_t < M_t^{1/2−ε} and we are in the situation where E(ω) ≤ 1; since ω is integer valued
and 2m_t^2/M_t < 1, the bound follows by (1) and (25).
The setting not covered by the above lemma is M_t^{1/2−ε} ≤ m_t ≤ M_t^{1/2+ε}. This only occurs
in a constant number of iterations throughout the algorithm and requires the following special
treatment. The body of these iterations is repeated, thus providing a second allocation attempt for
buckets that failed to uniquely select a memory block in the first trial.
Let ω_1 and ω_2 be the random variables representing the number of buckets that fail to uniquely
select a block in the first and second attempts, respectively. By (1) and (25), with M_t-dominant
probability the second attempt falls within the conditions covered by Lemma 5.4.
Lemma 5.5 Let t ≤ lg lg n / lg λ. The number of buckets that fail to uniquely select a block is, with
n-dominant probability, at most m_{t+1}/4.
Proof. By Lemma 5.4, the number of buckets that fail to uniquely select a memory block is, with
M_t-dominant probability, at most 2m_t^2/M_t, which is at most m_{t+1}/4 by (19), (23), (20), and (24).
The above holds also with n-dominant probability since, by (19) and (20), M_t is polynomially related to n.
5.5 Failures in Hashing
In considering buckets which uniquely selected a block but fail to find an injective level-2 function,
we draw special attention to buckets of size at most β_t.
Lemma 5.6 The number of buckets larger than β_t is, with n-dominant probability, at most m_{t+1}/4.
Proof. Incorporating the appropriate values for the Stirling numbers of the second kind into
Fact 2.6 and using (17), we obtain a tail bound which, by Fact 2.7, holds with n-dominant probability.
From the above and (6) it follows that the number of buckets larger than β_t is, with n-dominant
probability, at most m_{t+1}/4; the calculation uses (31), (16), (18), (20), and (24).
The analysis of hashing failures of buckets that are small enough is further split into two cases.
Lemma 5.7 Suppose that m_t/(2K_t) ≥ √n. Then the number of buckets of size at most β_t that fail
in the hashing step of the iteration is, with n-dominant probability, at most m_{t+1}/4.
Proof. Without loss of generality, we may assume that there are exactly m_t active buckets of size
at most β_t that participate in Step 2. When such a bucket is mapped into a memory block of size K_t,
the probability of the mapping being non-injective is, by Corollary 2.3, at most β_t^2/K_t.
The probability that the bucket fails in both hashing attempts is therefore at most 1/(2K_t). Let
ω̃_t be the total number of such failing buckets. Then E(ω̃_t) ≤ m_t/(2K_t). By Fact 2.1, ω̃_t is
at most m_{t+1}/4; the calculation uses (18), (23), (20), and (24).
Note that since m_t/(2K_t) ≥ √n, the above holds with n-dominant probability and we are done.
Lemma 5.8 Suppose that m_t/(2K_t) < √n. Then, by repeating the hashing step of the iteration a
constant number of times, we get ω̃_t = 0 with n-dominant probability.
Proof. By (18) and (36), we have 2K_t ≥ n^δ for some constant δ > 0; this is inequality (37).
Recall from the proof of Lemma 5.7 that a bucket fails in the hashing
step with probability at most 1/(2K_t). By (37), if the iteration body is repeated ⌈2/δ⌉ times,
the failure probability of each bucket becomes at most (2K_t)^{−⌈2/δ⌉} ≤ n^{−2}.
The lemma follows by Markov's inequality.
6 Reducing the Number of Random Bits
In this section we show how to reduce the number of random bits used by the hashing algorithm.
The algorithm as described in the previous section consumes Θ(n lg u) random bits, where u = |U|:
the first iteration already uses Θ(n lg u) random bits, and for each subsequent iteration the
number of random words from U which are used is at most a constant factor larger than the
memory used in that iteration, resulting in a total of Θ(n lg u) random bits.
The sequential hashing algorithm of Fredman, Komlós, and Szemerédi [16] can be implemented
with only O(lg lg u + lg n) random bits [11]. We show how the parallel hashing algorithm can be
implemented with O(lg lg u + lg n lg lg n) random bits.
We first show how the algorithm can be modified so as to reduce the number of random bits
to O(lg u lg lg n). The first stage requires O(1) random elements from U for the construction of
the level-1 function, and remains unchanged. An iteration t of the second stage requires O(m_t)
random elements from U; it is modified as follows.
Allocation step If each bucket independently selects a random memory block, then O(m_t lg M_t)
random bits are consumed. This can be reduced to O(lg m) by making use of polynomial hash
functions.
Lemma 6.1 Using 6 lg m random bits, a set R ⊆ U of size at most m_t can be mapped in constant
time into an array of size 3M_t such that the number of colliding elements is at most 2m_t^2/M_t.
Proof. Let h_1 and h_2 be hash functions selected at random; the image of a bucket i under g_t is
defined by combining the two. Algorithmically, h_1 is first applied to all elements and then h_2 is
applied to the elements which collided under h_1. The colliding elements of g_t are those which
collided both under h_1 and under h_2. Let R_0 be the set of elements that collide under h_1.
The analysis distinguishes three cases according to the relation between m_t and M_t. In the first
case, Corollary 2.3 bounds Prob(R_0 ≠ ∅). In the second case, Fact 2.4 bounds |R_0| with M_t-dominant
probability. In the third case, M_t^{1/2−ε} ≤ m_t ≤ M_t^{1/2+ε}; by Fact 2.2 and Markov's inequality,
|R_0| is suitably small with M_t-dominant probability, in which case, by Corollary 2.3, the probability
that h_2 is not injective over R_0 is small as well.
Invoking the above procedure for block allocation does not increase the total memory consumption
of the algorithm by more than a constant factor.
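The following sketch illustrates the two-function allocation of Lemma 6.1 sequentially: h_1 is applied to all bucket indices, and h_2 only to those that collided under h_1. The hash family (random linear functions modulo a large prime) and the way the two image arrays are kept apart are illustrative assumptions, not the paper's exact construction of g_t in (38).

```python
import random
from collections import Counter

PRIME = (1 << 61) - 1   # illustrative prime larger than the universe

def linear_hash(size):
    a, b = random.randrange(1, PRIME), random.randrange(PRIME)
    return lambda x: ((a * x + b) % PRIME) % size

def allocate_blocks(bucket_ids, num_blocks):
    """Map bucket indices to memory blocks with two hash functions:
    h1 is applied to every id, h2 only to the ids that collided under h1.
    An id still collides in the final mapping only if it collides under both."""
    h1, h2 = linear_hash(num_blocks), linear_hash(num_blocks)
    counts1 = Counter(h1(i) for i in bucket_ids)
    collided_under_h1 = [i for i in bucket_ids if counts1[h1(i)] > 1]
    counts2 = Counter(h2(i) for i in collided_under_h1)
    mapping = {i: (0, h1(i)) for i in bucket_ids if counts1[h1(i)] == 1}
    mapping.update({i: (1, h2(i)) for i in collided_under_h1
                    if counts2[h2(i)] == 1})
    still_colliding = [i for i in collided_under_h1 if counts2[h2(i)] > 1]
    return mapping, still_colliding

ids = random.sample(range(1 << 40), 1000)
mapping, colliding = allocate_blocks(ids, 3 * 10_000)
print(len(mapping), "allocated,", len(colliding), "still colliding")
```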
Hashing step The implementation of the hashing part of the iteration body using independent
hash functions for each of the active buckets consumes O(m_t lg u) random bits. This can be reduced
to O(lg u) by using hash functions which are only pairwise independent. This technique and its
application in the context of hash functions are essentially due to [10, 11].
The modification to the step is as follows. In each hashing attempt executed during the step,
four global parameters a_1, a_2, a_3, a_4 are selected at random by the algorithm. The hash function h_i
attempted by a bucket i is defined by (39) in terms of the bucket index i and these four parameters.
All hashing attempts of the same bucket are fully independent. Thus, the proof of Lemma 5.8
is unaffected by this modification. Recall that Fact 2.1 assumes only pairwise independence. Since
the functions of distinct buckets are pairwise independent, the proof of Lemma 5.7 remains valid as well.
The above leads to a reduction in the number of random bits used by the algorithm to
O(lg u lg lg n).
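Since equation (39) is not reproduced in this copy, the sketch below shows one standard way to obtain such a family of bucket-indexed hash functions: four global field elements determine, for every bucket index i, a linear hash function, and the coefficient pairs of two distinct buckets are pairwise independent. The specific formula is an assumption of the sketch, not necessarily the paper's (39).

```python
import random

P = (1 << 61) - 1   # illustrative prime field; the paper's exact family (39) may differ

def make_bucket_hashes(block_size, rng=random):
    """Draw four global field elements once; bucket i then uses the linear
    function x -> (((a1 + a2*i) + (a3 + a4*i) * x) % P) % block_size.
    For a fixed i this is a random linear hash function, and the coefficient
    pairs of two distinct buckets are pairwise independent."""
    a1, a2, a3, a4 = (rng.randrange(P) for _ in range(4))
    def h(i, x):
        return (((a1 + a2 * i) + (a3 + a4 * i) * x) % P) % block_size
    return h

h = make_bucket_hashes(block_size=1024)
print(h(7, 123456), h(8, 123456))   # two buckets hash the same key differently
```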
The number of random bits can be further reduced as follows. Employ a pre-processing hashing
step in which the input set S is injectively mapped into a range whose size is polynomial in n. This is done by
applying a hash function π, selected from an appropriate class, to map the universe U into this
range. Then the algorithm described above is used to build a hash table for the set π(S). A lookup
of a key x is done by searching for π(x) in this hash table.
The simple class of hash functions H^3_m is appropriate for this universe-reduction application. It
was shown in [11] that the class H^3_m has the following properties:
1. A selection of a random function π from the class requires O(lg lg u + lg n) bits.
2. A selection can be made in constant time by a single processor.
3. The function π is injective over S with n-dominant probability.
4. Computing π(x) for any x ∈ U can be done in constant time.
This pre-processing is tantamount to a reduction in the size of the universe, after which application
of the algorithm requires only O(lg n lg lg n) bits. The total number of random bits used is therefore
O(lg lg u + lg n lg lg n).
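A sketch of the universe-reduction wrapper described above. The stand-in for a function from H^3_m is a random degree-3 polynomial over a prime field reduced to a polynomial-size range; the exact class, the range size, and the retry-on-collision driver are assumptions of the sketch.

```python
import random

P = (1 << 61) - 1          # illustrative prime; stands in for the field behind H^3_m

def random_degree3(range_size, rng=random):
    """A stand-in for a function from H^3_m: a random degree-3 polynomial over
    the field, reduced modulo the target range size."""
    coeffs = [rng.randrange(P) for _ in range(4)]
    def pi(x):
        acc = 0
        for c in coeffs:                 # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc % range_size
    return pi

def build_with_universe_reduction(S, build_table):
    pi = random_degree3(range_size=len(S) ** 3)   # polynomial-size range
    reduced = {pi(x) for x in S}
    if len(reduced) != len(S):                    # pi not injective: retry
        return build_with_universe_reduction(S, build_table)
    return pi, build_table(reduced)               # lookups later search pi(x)

S = random.sample(range(1 << 50), 1000)
pi, table = build_with_universe_reduction(S, build_table=set)
print(pi(S[0]) in table)                          # True
```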
7 Obtaining Optimal Speedup
The description of the algorithm in Section 3 assumed that the number of processors is n; thus the
time-processor product is O(n lg lg n). Our objective in this section is a work-optimal implementation
where this product is O(n), and p, the number of processors, is maximized.
The key array and the bucket array are divided into p sectors, one per processor.
A parallel step of the algorithm is executed by having each processor traverse its sector and execute
the tasks included in it.
A key is active if its bucket is active. Let n_t be the number of active keys at the beginning of
iteration t. Assume that the implemented algorithm has reached the point where n_t = O(n/ lg lg n).
Further assume that these active elements are gathered in an array of size O(n/ lg lg n). Then,
applying the non-optimal algorithm of Section 3 with p ≤ n/ lg lg n, and each processor being
responsible for n/(p lg lg n) problem instances, gives a running time of
O((n/(p lg lg n)) · lg lg n) = O(n/p),
which is work-optimal.
We first show that the problem size is reduced sufficiently for the application of the non-optimal
algorithm after O(lg lg lg lg n) iterations.
Lemma 7.1 There exists t_0 = O(lg lg lg lg n) such that n_{t_0} = O(n/ lg lg n) with n-dominant probability.
Proof. The number of active buckets decreases at a doubly-exponential rate, as can be seen from
Lemma 5.2. To see that the number of keys decreases at a doubly-exponential rate as well, we show
that inequality (40) holds with n-dominant probability.
Inequality (32), A_r ≤ 6n, clearly holds when the summation is over active buckets only. By
a convexity argument, the total number of keys in active buckets is maximized when all active
buckets are of equal size. The number of active buckets is bounded from above by m_t; this yields
inequality (41). Inequality (40) is obtained from (41) by replacing m_t by its definition in (23) and
then substituting numerical values for the parameters using (16) and (20).
The lemma follows by choosing an appropriate value for t_0 with respect to (23) and (40).
It remains to exhibit a work-efficient implementation of the first t 0 steps of the algorithm. This
implementation outputs the active elements gathered in an array of size O(n= lg lg n). The rest of
this section is dedicated to the description of this implementation.
As the algorithm progresses, the number of active keys and the number of active buckets de-
crease. However, the decrease in the number of active elements in different sectors is not necessarily
identical. The time of implementing one parallel step is proportional to the number of active elements
in the largest sector. It is therefore crucial to occasionally balance the number of active
elements among different sectors in order to obtain work efficiency.
Let the load of a sector be the number of active elements (tasks) in it. A load balancing
algorithm takes as input a set of tasks arbitrarily distributed among p sectors; using p processors
it redistributes this set so that the load of each sector is greater than the average load by at most
a constant factor. Suppose that we have a load balancing algorithm whose running time, using p
processors, is T_lb(p) with n-dominant probability. If load balancing is applied after step t, then the
size of each sector is O(n_t/p).
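For illustration only, here is a sequential stand-in for such a load balancing step: it gathers the active tasks of all sectors and redistributes them round-robin, so every sector ends up within a constant factor of the average load. The parallel algorithm of [20] achieves this in T_lb(p) time; the code below makes no attempt to model that.

```python
def balance(sectors):
    """Redistribute active tasks so that no sector holds more than roughly the
    average load; a sequential stand-in for the parallel algorithm of [20]."""
    tasks = [t for sector in sectors for t in sector]
    p = len(sectors)
    return [tasks[i::p] for i in range(p)]

sectors = [[1, 2, 3, 4, 5, 6], [], [7], []]
print([len(s) for s in balance(sectors)])   # [2, 2, 2, 1], within a constant of the average
```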
We describe a simple work-optimal implementation in which load balancing is applied after each
of the first t_0 parallel steps. A parallel step t executes in time on the order of n_t/p + T_lb(p).
Since n_t decreases at least at an exponential rate, the total time of this implementation is on the
order of n/p + t_0 · T_lb(p), which is O(n/p) for a suitable range of p.
Using the load balancing algorithm of [20], which runs in T_lb(p) time, we conclude that
with n-dominant probability the running time on a p-processor machine is O(n/p) for that range of p.
The load balancing algorithm applied consumes O(p lg lg p) random bits. All these bits are used
in a random mapping step which is very similar to the allocation step of the hashing algorithm.
Thus, by an approach similar to the mapping procedure in Lemma 6.1, it may be established that
the number of random bits in the load balancing algorithm can be reduced to O(lg p lg lg p).
We finally remark that using load balancing in a more efficient, yet equally simple, way, as described
in [23], yields a faster work-efficient implementation. The technique is based on carefully choosing
the appropriate times for invoking the load balancing procedure; it applies to any algorithm in which
the problem size has an exponential rate of decrease, and hence it applies to the implementation
of Section 4 as well. In such an implementation the load balancing algorithm is only used O(lg n)
times, resulting in a parallel hashing algorithm that takes O(n/p + lg lg n lg n) time with n-dominant
probability.
8 Model of Computation
In this section we give a closer attention to the details of the implementation on a pram, and
study the type of concurrent memory access required by our algorithm. We first present an implementation
on Collision, and its extension to the weaker Tolerant model. We proceed by
presenting an implementation on the even weaker Robust model. The hash-table constructed in
this implementation only supports searches in O(lg lg n) time. Finally, we examine the concurrent
read capability needed by the implementations.
8.1 Implementation on Collision and on Tolerant
We describe an implementation on Collision. This implementation is also valid for Tolerant,
since each step of Collision can be simulated in constant time on Tolerant provided that, as
it is the case here, only linear memory is used [32].
Initialization The selection of the level-1 hash function is done by a single processor. Since the
level-1 function is a polynomial of a constant degree, its selection can be done by a single processor
and be read by all processors in constant time, using a single memory cell of ⌈max{lg lg u, lg n}⌉
bits. No concurrent-write operation is required for the implementation of this stage.
Bucket representatives The algorithm template assumes that each bucket can act as a single
entity for some operations, e.g., selecting a random block and selecting a random hash function.
Since usually several keys belong to the same bucket, it is necessary to coordinate the actions of
the processors allocated to these keys. A simple way of doing so is based on the fact that there
are only linearly many buckets and that a bucket is uniquely indexed by the value of f , the level-1
hash function, on its members. A processor whose index is determined by the bucket index acts as
the bucket representative and performs the actions prescribed by the algorithm to the bucket.
Allocation and Hashing steps A processor representing an active bucket selects a memory block
and a level-2 hash function, and records these selections in a designated cell. All processors with keys
in that bucket then read that cell and use the selected block in the hashing step. Each participating
processor (whose key belongs to an active bucket) writes its key into the cell determined by its level-2
hash function, and then examines the cell contents to see if the write operation was successful. A
processor for which the write failed will then attempt to write its key to position i of array ptr,
where i is the number of the bucket this processor belongs to. Processors belonging to bucket i can
then learn whether the level-2 function selected for their bucket is injective by reading the content of ptr[i].
A change in value or a collision symbol indicates non-injectiveness. To complete the process, the
array ptr is restored for the next hashing attempt. This restoration can be done in constant time
since this array is of linear size.
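The write-and-read-back test can be mimicked sequentially as follows; the collision symbol and the dictionary-based memory are stand-ins for Collision-model cells, and the ptr-array bookkeeping is omitted.

```python
COLLISION = object()   # stand-in for the collision symbol of the Collision model

def concurrent_write(cell_writes):
    """Resolve one Collision-model step on a single cell: if exactly one value
    is written the cell holds it, otherwise it holds the collision symbol."""
    values = set(cell_writes)
    return values.pop() if len(values) == 1 else COLLISION

def test_injective(bucket, h, block_size):
    """Write every key of the bucket into its level-2 cell, then read back:
    the selected function is injective iff every writer sees its own key."""
    cells = {}
    for x in bucket:
        cells.setdefault(h(x) % block_size, []).append(x)
    memory = {slot: concurrent_write(writes) for slot, writes in cells.items()}
    return all(memory[h(x) % block_size] == x for x in bucket)

print(test_injective([1, 2, 3], h=lambda x: x * 7, block_size=32))   # True
print(test_injective([1, 33], h=lambda x: x * 7, block_size=32))     # False: both land in slot 7
```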
In summary we have
Proposition 8.1 The algorithms of Theorem 1 can be implemented on Tolerant.
8.2 Implementation on Robust
We now describe an implementation that, at the expense of slowing down the lookup operation,
makes no assumption about the result of a concurrent-write into a cell. Specifically, we present an
implementation on the Robust model, for which a lookup query may take O(lg lg n) time in the
worst case, but O(1) expected time for keys in the table.
The difficulty with the Robust model is in letting all processors in a bucket know whether the
level-2 hash function of their bucket is injective or not. The main idea in the modified implementation
is in allowing iterations to proceed without determining whether level-2 hash functions are
injective or not; whenever a key is written into a memory cell in the hashing step it is deactivated,
and its bucket size decreases. The modified algorithm performs at least as well as the implementation
in which a bucket is deactivated only if all of its keys are mapped injectively. The total
memory used by the modified algorithm and the size of the representation of the hash table do not
change.
Allocation step We first note that the algorithm can be carried out without using bucket representatives
at all. Allocation of memory blocks is done using hash functions, as in Lemma 6.1; each
processor can individually compute the index of its memory block by evaluating the function g t .
This function is selected by a designated processor and its representation (6 lg m bits) is read in
constant time by all processors.
We further modify the algorithm, so that the hashing step is carried out by all active buckets.
That is, even buckets that collided in the allocation step will participate in the hashing step. This
modification can only serve to improve the performance of the algorithm, since even while sharing a
block with another bucket the probability that a bucket finds an injective function into that block is
not zero. This modification eliminates the concurrent memory access needed for detecting failures
in the allocation step.
Hashing step The selection of a level-2 hash function is done as in the hashing step described
in Section 6. As can be seen from (39), only four global parameters should be selected and made
available to all processors; this can be done in constant time.
It remains to eliminate the concurrent memory access required for determining if the level-2
function of any single bucket was injective. Whenever a key is successfully hashed by this function,
it is deactivated even if other keys in the same bucket were not successfully hashed. Thus, keys of
the same bucket may be stored in the hash table using different level-2 hash functions.
The two steps of an iteration in the hashing algorithm are summarized in Figure 3.
Let x be an active key in a bucket i = f(x). The processor assigned to x executes
the following steps.
Allocation: Compute g_t(i), the index of the memory block selected for the bucket
of x, where g_t is defined by (38).
Hashing: Determine h_i, the level-2 hash function selected by the bucket of x, where h_i
is defined by (39). Write x into cell h_i(x) in memory block g_t(i) and read the contents
of that cell; if x was written then the key x becomes inactive.
Figure 3: Implementation of iteration t in the hashing algorithm on Robust
Lookup algorithm The search for a key x is done as follows. Let i = f(x); for t = 1, 2, ...,
read position h_i(x) in the memory block g_t(i) in the appropriate array. (All random bits that were
used in the hash table construction algorithm are assumed to be recorded and available.) The
search is terminated when either x is found, or else when t exceeds the number of iterations in the
construction algorithm.
The lookup algorithm requires O(lg lg n) iterations in the worst case. However, for any key x ∈ S
the expected lookup time (over all the random selections made by the hashing algorithm) is O(1).
An alternative simplified implementation
Curiously, the sequence of modifications to the algorithm described in this section has lead to a
1-level hashing scheme, i.e., to the elimination of indirect addressing. To see this, we observe that
at iteration t an active key x is written into a memory cell g t (x), where the function g t (x) is
dependent only on n and on the random selections made by the algorithm, but not on the input.
An even simpler implementation of a 1-level hashing algorithm is delineated next.
At each iteration t, a new array T t of size 3M t is used, where M t is as defined in (19). In addition,
a function g t as defined in (38) is selected at random. A processor representing an active key x
in the iteration tries to write x into T t [g t (x)], and then reads this cell. If x is successfully written
in T t [g t (x)] then x is deactivated. Otherwise, x remains active and the processor representing it
carries on to the next iteration.
To see that the algorithm terminates in O(lg lg n) iterations, we observe that the operation on
keys in each iteration is the same as the operation on buckets in the allocation step of Section 6.
Therefore, the analysis of Section 6 can be reused, substituting keys for buckets (and ignoring
failures in the hashing step of the 2-level algorithm). The hash table consists of the collection of
the arrays T_t and, as can be easily verified, is of linear size. A lookup query for a given key
x is executed in O(lg lg n) time by reading T_t[g_t(x)] for each iteration t.
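A sequential sketch of this simplified 1-level scheme, including the lookup that scans T_t[g_t(x)] over the iterations. The table sizes (three times the current number of active keys) and the linear hash family stand in for the paper's 3M_t and the functions g_t of (38).

```python
import random
from collections import Counter

P = (1 << 61) - 1   # illustrative prime modulus for the stand-in hash family

def linear_hash(size):
    a, b = random.randrange(1, P), random.randrange(P)
    return lambda x: ((a * x + b) % P) % size

def build_one_level(S):
    """Sequential simulation of the simplified 1-level scheme: at iteration t
    every still-active key writes itself into a fresh table T_t; keys that do
    not collide become inactive."""
    active, levels = list(S), []
    while active:
        g = linear_hash(3 * len(active))
        slots = Counter(g(x) for x in active)
        T = {g(x): x for x in active if slots[g(x)] == 1}
        levels.append((g, T))
        active = [x for x in active if slots[g(x)] > 1]
    return levels

def lookup(x, levels):
    """Scan T_t[g_t(x)] over the iterations, as in the lookup described above."""
    return any(T.get(g(x)) == x for g, T in levels)

S = random.sample(range(1 << 40), 1000)
levels = build_one_level(S)
print(all(lookup(x, levels) for x in S))   # True
```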
8.3 Minimizing concurrent read requirements
The algorithms for construction of the hash table on Tolerant and Robust can be modified to use
concurrent-read from a single cell only. By allowing a pre-processing stage of O(lg n) time, concurrent
read can be eliminated, implying that the ercw model is sufficient. With these modifications,
parallel lookups still require concurrent read, and their execution time increases to O(lg lg n) in the
worst case. Nevertheless, the expected time for lookup of any single key x 2 S is O(1). The details
are described next.
8.3.1 Concurrent read in the Tolerant implementation
There are two types of concurrent read operations required by the modified algorithm. First, the
sequence of O(lg lg n) functions used in the iterations (or, alternatively, the functions g_t in the
simplified implementation) must be agreed upon by all processors. Since each of these functions is
represented by O(lg u) bits, its selection can be broadcast at the beginning of the iteration through
the concurrent-read cell.
The single cell concurrent read requirement for broadcasting can be eliminated by adding an
O(lg n)-time pre-processing step for the broadcasting. (This is just a special case of simulating
crcw pram by erew pram.)
The other kind of concurrent-read operation occurs when processors read a memory cell to verify
that their hashing into that cell has succeeded. This operation can be replaced by the following
procedure. For each memory cell, there is a processor standing by. Whenever a pair hx; ji is written
into a cell, the processor assigned to that cell sends an acknowledgement to processor j by writing
into a memory cell j in a designated array.
The lookup algorithm requires concurrent-read capabilities. In this sense, the lookup operation
is more demanding than the construction of the hash table. A similar phenomenon was observed by
Karp, Luby and Meyer auf der Heide [36] in the context of simulating a random access machine on
a distributed memory machine. The main challenge in the design of their (parallel-hashing based)
simulation algorithm was the execution of the read step. Congestion during the execution of the
write step was resolved by attempting to write to several locations and using the first for which the
write succeeded. It is more difficult to resolve read congestion since the cells in which values were
stored are already determined. Indeed, the read operation constitutes the main run-time bottleneck
in their algorithm.
8.3.2 Concurrent Read in the Robust implementation
The simplified 1-level hashing algorithm for construction of the hash table on Robust is modified
as follows. We eliminate the step in which a processor with key x reads the contents of the cell
T_t[g_t(x)] after trying to write to that cell. Instead, we use the acknowledgement technique described
above: a processor j handling an active key x writes ⟨x, j⟩ into the cell T_t[g_t(x)], and the processor
standing by the cell into which ⟨x, j⟩ is written sends an acknowledgement to processor j.
Note that this implementation introduces a new type of failure: due to the unpredictability
of the concurrent write operation in Robust, an acknowledgement for a successful hash may not
be received. Consider, for example, the following situation. Let j be a processor whose key x
did not collide, and let i, i' be two processors with colliding keys y, y', i.e., g_t(y) = g_t(y'). The
two processors concurrently write the pairs ⟨y, i⟩ and ⟨y', i'⟩ into the cell T_t[g_t(y)]. The result
of this concurrent write is arbitrary. In particular, it can be a pair of the form ⟨x', j⟩, which would cause
the processor standing by the cell T_t[g_t(y)] to garble the acknowledgement sent to processor j.
(Recall that an acknowledgement to processor j is implemented by writing into a memory location
associated with j.)
The number of the new failures described above can be at most half the number of colliding
keys. It is easy to verify that the analysis remains valid, since the number of these new failures
is no more than the number of "hashing failures" accounted for in Section 5.5, which do not
occur in this implementation.
9 Hashing of Multi-Sets
We conclude the technical discussion by briefly considering a variation of the hashing problem in
which the input is a multi-set rather than a set. We first note that the analyses of the exponential and
doubly-exponential rates of decrease in the problem size are not affected by the possibility of multiple
occurrences of the same key. This is a result of relying on estimates of the number of active
buckets rather than the number of active keys. The number of distinct keys, not the number of
keys, determines the probability of a bucket finding an injective function.
A predictable decrease in the number of active keys is essential for obtaining an optimal speedup
algorithm. Unfortunately, the analysis in Section 7 with regard to the implementation of Section 5
does not hold. To understand the difficulty, consider the case where a substantial fraction of the
input consists of copies of the same key. Then, with non-negligible probability this key may belong
to a large bucket. The probability that this bucket deactivates in the first few iterations, in which
the memory blocks are not sufficiently large, is too small to allow global decrease in the number of
keys with high probability. Consequently, the rapid decrease in the number of buckets may not be
accompanied by a similar decrease in the number of keys.
In contrast, the nature of the analysis in Section 4 makes it susceptible to an easy extension
to multiple keys, which leads to an optimal speedup algorithm, albeit with expected performance
only. Using the probabilistic induction lemma all that is required is to show that each copy of
an active key stands a constant positive probability of deactivation at each iteration. Since the
analysis is based on expectations only, there are no concerns regarding correlations between copies
of the same key, or dependencies between different iterations. The details are left to the reader.
We also note that the model of computation required for a multi-set is Collision + , since it
must be possible to distinguish between the case of multiple copies of the same key being written
into a memory cell, and the case where distinct keys are written. Also, the extensions of the hashing
algorithms which only require concurrent read from a single memory cell can be used for hashing
with multi-set input, but then a Collision + model, as opposed to Robust, must be assumed.
We finally observe that the hashing problem with a multi-set as input can be reduced into
the ordinary hashing problem (in which the input consists of a set), by a procedure known as
leaders election. This procedure selects a single representative from among all processors which
share a value. By using an O(lg lg n)-time, linear-work leaders election algorithm which runs on
Tolerant [24] we have
Theorem 2 Given a multi-set of n keys drawn from a universe U , the hashing problem can be
solved using O(n) space: (i) in O(lg lg n) time with high probability, using n processors, or (ii) in
O(lg lg n lg n) time and O(n) operations with high probability. The algorithms run on Tolerant.
Conversely, note that any hashing algorithm, when run on Arbitrary, solves the leaders election
problem. In particular, the simple 1-level hashing algorithm for Robust, when implemented on
Arbitrary with a multi-set as input, gives a simple leaders election algorithm.
Consider now another variant of the multi-set hashing problem in which a data record is associated
with each key. The natural semantics of this problem is that multiple copies of the same key
can be inserted into the hash table only if their data records are identical. Processors representing
copies of a key with conflicting data records should terminate the computation with an error code.
The Collision + model makes it easy enough to extend the implementations discussed above to
accommodate this variant.
A more sophisticated semantics, in which the data records should be consolidated, requires a
different treatment, e.g., by applying an integer sorting algorithm on the hashed keys (see [39]).
Conclusions
We presented a novel technique of hashing by oblivious execution. By using this technique, algorithms
for constructing a perfect hash table which are fast, simple, and efficient, were made
possible. The running time obtained is best possible in a model in which keys are only handled in
their original processors.
The number of random bits consumed by the algorithm is Θ(lg lg u + lg n lg lg n). An open
question is to close the gap between this number and the Θ(lg lg u + lg n) bits that are
consumed in the sequential hashing algorithm of [11].
The program executed by each processor is extremely simple. Indeed, the only coordination
between processors is in computing the AND function, when testing for injectiveness. In the implementation
on the Robust model, even this coordination is eliminated.
The large constants hidden under the "Oh" notation in the analysis may render the described
implementations still far from being practical. We believe that the constants can be substantially
improved without compromising the simplicity of the algorithm, by a more careful tuning of the
parameters and by tightened analysis. This may be an interesting subject of a separate research.
The usefulness of the oblivious execution approach presented in this paper is not limited to the
hashing problem alone. We have adopted it in [24] for simulations among sub-models of the crcw
pram. As in the hashing algorithm, keys are partitioned into subsets. However, this partition is
arbitrary and given in the input, and for each subset the maximum key must be computed.
Subsequent work
The oblivious execution technique for hashing from Section 3 and its implementation from Section 4
were presented in preliminary form in [21]. Subsequently, our oblivious execution technique was
used several times to obtain improvements in running time of parallel hashing algorithms: Matias
and Vishkin [38] gave an O(lg n lg lg n) expected time algorithm; Gil, Matias, and Vishkin [26]
gave a tighter failure probability analysis for the algorithm in [38], yielding O(lg n) time with high
probability; a similar improvement (from O(lg n lg lg n) expected time to O(lg n) time with high
probability) was described independently by Bast and Hagerup [3].
An O(lg n) time hashing algorithm is used as a building block in a parallel dictionary algorithm
presented in [26]. (A parallel dictionary algorithm supports, in parallel, batches of the operations insert,
delete, and lookup.) The oblivious execution technique has an important role in the implementation
of insertions into the dictionary. The dictionary algorithm runs in O(lg n) time with high
probability, improving on the O(n^ε)-time dictionary algorithm of Dietzfelbinger and Meyer auf der
Heide [13]. The dictionary algorithm can be used to obtain a space-efficient implementation of any
parallel algorithm, at the cost of a slowdown of at most O(lg n) time with high probability.
The above hashing algorithms use the log-star paradigm of [38], relying extensively on processor
reallocation, and are not as simple as the algorithm presented in this paper. Moreover, they require
a substantially larger number of random bits.
Karp, Luby and Meyer auf der Heide [36] presented an efficient simulation of a pram on a
distributed memory machine in the doubly-logarithmic time level, improving over previous simulations
in the logarithmic time level. The use of a fast parallel hashing algorithm is essential in their
result; the algorithm presented here is sufficient to obtain it.
Goldberg, Jerrum, Leighton and Rao [28] used techniques from this paper to obtain an O(h +
lg lg n) randomized algorithm for the h-relation problem on the optical communication parallel
computer model.
Gibbons, Matias and Ramachandran [18] adapted the algorithm presented here to obtain a low-contention
parallel hashing algorithm for the qrqw pram model [19]; this implies an efficient hashing
algorithm on Valiant's bsp model, and hence on hypercube-type non-combining networks [44].
Acknowledgments
We thank Martin Dietzfelbinger and Faith E. Fich for providing helpful
comments. We also wish to thank Uzi Vishkin and Avi Wigderson for early discussions. Part of
this research was done during visits of the first author to AT&T Bell Laboratories, and of the
second author to the University of British Columbia. We would like to thank these institutions for
supporting these visits. Many valuable comments made by two anonymous referees are gratefully
acknowledged.
--R
The Probabilistic Method.
Parallel construction of a suffix tree.
Fast and reliable parallel hashing.
Optimal bounds for decision problems on the CRCW PRAM.
Improved deterministic parallel integer sorting.
Optimal separations between concurrent-write parallel machines
Observations concerning synchronous parallel models of computation.
Universal classes of hash functions.
New simulations between CRCW PRAMs.
On the power of two-point based sampling
Polynomial hash functions are reliable.
Relations between concurrent-write models of parallel computation
Storing a sparse table with O(1) worst case access time.
Data structures and algorithms for approximate string matching.
Efficient low-contention parallel algo- rithms
The QRQW PRAM: Accounting for contention in parallel algorithms.
Fast load balancing on a PRAM.
Fast hashing on a PRAM-designing by expectation
Designing algorithms by expectations.
An effective load balancing policy for geometric decaying algorithms.
Fast and efficient simulations among CRCW PRAMs.
Simple fast parallel hashing.
Towards a theory of nearly constant time parallel algorithms.
Doubly logarithmic communication algorithms for optical communication parallel computers.
A universal interconnection pattern for parallel computers.
Optimal parallel approximation algorithms for prefix sums and integer sorting.
Concrete Mathematics.
Incomparability in parallel computation.
Towards optimal parallel bucket sorting.
Every robust CRCW PRAM can efficiently simulate a Priority PRAM.
Introduction to Parallel Algorithms.
Sorting and Searching
Converting high probability into nearly-constant time-with applications to parallel hashing
On parallel hashing and integer sorting.
Data Structures and Algorithms I: Sorting and Searching.
Data structures.
An O(lg n) parallel connectivity algorithm.
On universal classes of fast high performance hash functions
General purpose parallel architectures.
--TR | data structures;randomization;parallel computation;hashing |
290221 | Downward Separation Fails Catastrophically for Limited Nondeterminism Classes. | The $\beta$ hierarchy consists of classes $\beta_k={\rm NP}[\log^k n]\subseteq {\rm NP}$. Unlike collapses in the polynomial hierarchy and the Boolean hierarchy, collapses in the $\beta$ hierarchy do not seem to translate up, nor does closure under complement seem to cause the hierarchy to collapse. For any consistent set of collapses and separations of levels of the hierarchy that respects $\beta_1 \subseteq \beta_2 \subseteq \cdots \subseteq {\rm NP}$, we can construct an oracle relative to which those collapses and separations hold; at the same time we can make distinct levels of the hierarchy closed under complementation or not, as we wish. To give two relatively tame examples: for any $k \geq 1$, we construct an oracle relative to which \[ \beta_1 = \beta_2 = \cdots = \beta_k \ne \beta_{k+1} \ne \beta_{k+2} \ne \cdots \] and another oracle relative to which \[ \beta_1 = \beta_2 = \cdots = \beta_k \ne \beta_{k+1} = \beta_{k+2} = \cdots. \] We also construct an oracle relative to which | Introduction
Although standard nondeterministic algorithms solve many
NP-complete problems with O(n) nondeterministic moves, there are other problems
that seem to require very different amounts of nondeterminism. For instance, clique
can be solved with only O(√n) nondeterministic moves, and Pratt's algorithm [16]
solves primality, which is not believed to be NP-complete, with O(n^2) nondeterministic
moves. Motivated by the different amounts of nondeterminism apparently needed
to solve problems in NP, Kintala and Fischer [9, 10, 11] defined limited nondeterminism
classes within NP, including the classes we now call the β hierarchy. The
structural properties of the β classes were studied further by Álvarez, Díaz and Torán
[1, 6]. These classes arose yet again in the work of Papadimitriou and Yannakakis [15]
on particular problems inside NP (e.g., quasigroup isomorphism can be solved with
O(log^2 n) nondeterministic moves).
Kintala and Fischer [11] defined P_{f(n)} to be the class of languages accepted by
a nondeterministic polynomial-time bounded Turing machine that makes at most
f(n) c-ary nondeterministic moves (equivalently, O(f(n)) binary nondeterministic
moves) on inputs of length n. Being mostly interested in polylogarithmic amounts of
nondeterminism, they defined PL_k = P_{log^k n}.
Díaz and Torán [6] wrote β_{f(n)} to denote Kintala and Fischer's P_{f(n)} and β_k
to denote PL_k. Papadimitriou and Yannakakis [15] wrote NP[f(n)] to denote P_{f(n)}.
(Their work is surveyed in [7].) We will adopt the NP[f(n)] notation of Papadimitriou
and Yannakakis, as well as the β_k notation of Díaz and Torán. To reiterate:
Definition 1.1.
• A language L belongs to NP[f(n)] if there exists a polynomial-time bounded
nondeterministic Turing machine that accepts L and makes O(f(n)) nondeterministic
choices on inputs of length n. (Note: NP[f(n)] ⊆ DTIME(2^{O(f(n))}).)
• The β hierarchy consists of the classes β_k = NP[log^k n].
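For orientation, the relationships implicit in this definition can be summarized in one line; the equality P = β_1 holds because O(log n) binary guesses can be enumerated exhaustively in polynomial time.
\[
\beta_k = \mathrm{NP}[\log^k n], \qquad
\mathrm{P} = \beta_1 \subseteq \beta_2 \subseteq \cdots \subseteq \mathrm{NP}, \qquad
\beta_k \subseteq \mathrm{DTIME}\bigl(2^{O(\log^k n)}\bigr).
\]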
y Yale University, Dept. of Computer Science, P.O. Box 208285, New Haven, CT 06520-8285.
email: [email protected]. Research supported in part by the United States National Science
Foundation under grant CCR-8958528 and by the Netherlands Organization for Scientific Research
(NWO) under Visitors Grant B 62-403.
z University of Kentucky, Dept. of Computer Science, Lexington, KY 40506. email:
[email protected]. Research supported in part by the National Science Foundation under
grant CCR-9315354
Kintala and Fischer [11] constructed oracles that make the fi hierarchy collapse
to any desired level. That is, there is an oracle relative to which
and, for every k - 1, there is an oracle relative to which
Oracles can also make the polynomial hierarchy and the Boolean hierarchy collapse
to any desired level [12, 4]. The polynomial and Boolean hierarchies have a
very nice property: collapses translate upward. I.e., if the kth and 1)st levels are
equal, then all levels are contained in the kth [5, 4]. This is also reflected in the non-determinism
hierarchy, now known as the b hierarchy, studied by Buss and Goldsmith
[3]. The classes in the b hierarchy are defined by two parameters: the exponent of
the polynomial time bound (ignoring log factors), and the constant factor for k log n
bits of nondeterminism. This hierarchy exhibits upward collapse for both time and
k. All attempts to prove an analogous translational property for the fi hierarchy have
failed. In fact the obvious technique extends a collapse by only a constant factor in the
number of nondeterministic bits, giving one of the aforementioned upward collapses
of the b hierarchy.
Hemachandra and Jha [8] attempted to explain this failure by examining the
tally sets in the fi hierarchy. For each k, they constructed an oracle that makes
We find this explanation unsatisfactory
because it considers only tally sets.
The known behavior of relativized β hierarchies is that P^A = β^A_1 and that
β^A_k ⊆ β^A_{k+1} ⊆ NP^A for every k. A collapse is a statement of the form β^A_j = β^A_k. A
separation is a statement of the form β^A_j ≠ β^A_k. A closure is a statement of the form
β^A_k = co-β^A_k. A nonclosure is a statement of the form β^A_k ≠ co-β^A_k. A requirement is a
collapse, separation, closure, or nonclosure. We call a set of requirements consistent if
it is consistent with the known behavior of relativized β hierarchies, as stated above,
and the standard axioms for equality and containment.
Given a set S of requirements, let X be the union of [0, 1] and all intervals [i, j]
such that i ≤ j and the collapse β^A_i = β^A_j belongs to S. It is easy to see that S
is consistent iff the following conditions hold for all a and b such that [a, b] ⊆ X
or [b, a] ⊆ X: (1) the separation β^A_a ≠ β^A_b does not belong to S; (2) the closure
of β^A_a and the nonclosure of β^A_b do not both belong to S.
For any consistent set of requirements, we construct an oracle A such that the β
hierarchy relative to A satisfies them. For example, for each k ≥ 0, there is an oracle
that makes β_1 = ··· = β_k ≠ β_{k+1} ≠ β_{k+2} ≠ ···. Another oracle makes
β_1 = ··· = β_k ≠ β_{k+1} = β_{k+2} = ···.
We can also make distinct levels in the hierarchy be closed under complementation or
not, as long as this is consistent with the collapses (if β^A_i = β^A_j then we cannot have
both β^A_i = co-β^A_i and β^A_j ≠ co-β^A_j).
We prove two initial results for every k:
• There is an oracle that makes the first k levels of the β hierarchy coincide,
but makes the remaining levels all distinct (Theorem 2.3).
• There is an oracle that makes the first k levels of the β hierarchy coincide,
the (k + 1)st level different from the kth, and the remaining levels all equal
(Theorem 2.5).
The techniques from these two constructions can be combined to get any consistent
finite or infinite set of collapses and separations. To collapse β_k into β_j (for
k > j), we code a complete set for β_k into β_j. The same coding techniques can also
code co-β_i into β_i, for any i's we wish, as long as this doesn't violate any collapses. (If
β_k is collapsed to β_j, then either both or neither will be closed under complement.)
Finally, in Section 3, we extend our results to β_r for real r ≥ 0.
One theme in complexity theory is to ask whether NP − P contains any easy sets
(assuming P ≠ NP). The answer to the question above depends on the definition of
"easy." Ladner [14] showed that if P ≠ NP then NP − P contains an incomplete set.
On the other hand, there are oracles relative to which P ≠ NP, but NP − P contains
(a) no tally sets [13] or (b) no sets in co-NP [2]. It is unknown whether the assumption
P ≠ NP implies that NP − P contains a set in DTIME(n^{polylog}); a positive answer
would improve many constructions in the literature. As a step toward understanding
that question, we construct an oracle relative to which P ≠ NP but NP − P contains
no set in the β hierarchy (Corollary 2.2).
2. Limited Nondeterminism Hierarchies. The construction below gives almost
all the techniques used in subsequent theorems.
Theorem 2.1. Let g_0 and g_1 be polynomial-time computable monotone increasing
functions such that log n ∈ o(g_1(n)) and g_0(n) ∈ n^{O(1)}. If g_0(n^{O(1)}) ∈ o(g_1(n)), then
there exists an oracle A such that P^A = (NP[g_0(n)])^A ≠ (NP[g_1(n)])^A (and in fact
there is a tally set in (NP[g_1(n)])^A − (NP[g_0(n)])^A).
Proof. Let C^A = {⟨i, x, 0^s⟩ : the ith nondeterministic oracle machine accepts x within s steps with oracle
A, making at most g_0(|x|) nondeterministic choices}. Then C^A is ≤^p_m-complete for
(NP[g_0(n)])^A for every A. Let p(n) be the polynomial time bound for some NP[g_0(n)]
oracle Turing machine recognizing C^{(·)}.
Let D^A = {0^n : (∃y)[ |y| = g_1(n) and 00^n y ∈ A ]}. Note that D^A ∈ (NP[g_1(n)])^A.
The construction consists of coding C^A into A in a polynomial-time recoverable
manner, making (NP[g_0(n)])^A ⊆ P^A, while diagonalizing, i.e., guaranteeing that no
deterministic polynomial-time oracle machine recognizes the set D^A, so P^A ≠ (NP[g_1(n)])^A.
At the end of the construction, membership in C^A will be recoverable in polynomial time from
the coding strings in A. We refer to all strings beginning with 1 as coding strings.
We refer to all strings beginning with 0 as diagonalizing strings.
Assume that P^{(·)} is enumerated by oracle Turing machines P^{(·)}_1, P^{(·)}_2, ..., where P^{(·)}_i
runs in time bounded by n^i for all i and all sufficiently large n.
The construction proceeds in stages. At the end of stage s, A is decided for all
strings of length up to n s , and D A is extended so that P A
s does not recognize D A . The
stage consists of one diagonalization, which determines n s , and continued encoding of
C A .
At stage s, choose n > n_{s−1} such that n is a power of 2, the running time of P^{(·)}_s
on inputs of length n is at most n^s, and n satisfies an inequality, to be specified below,
that is true for almost all n. Let ℓ = n + g_1(n) + 1. The value of D^A(0^n)
depends only on oracle strings of length ℓ. We first complete all coding strings of length
at most ℓ − 1 and freeze A up to length ℓ − 1.
In order to diagonalize, P A
s (x) must be calculated. But that computation may
query coding strings that code computations of C A that are not yet decided, because
those computations in turn rely on strings for which A is not yet decided. Those strings
in turn may depend on other coding strings. Any diagonalizing strings that do not
already belong to A and are queried by P A
s (x), or by the computation corresponding
to a coding string that P A
s (x) queries, or in the computation corresponding to a coding
string that one of those computations queries, or so on, are restrained from A. We
claim that there are more potential witnesses for x to be in D A than there are possible
queries in such a cascade of queries, so deciding P A
s (x) does not restrict our decision
about D A (x).
Because of the encoding of C^A, a coding string z codes a computation that depends
only on strings of length bounded by √|z|. C^A(w) directly depends on at most
p(|w|)2^{g_0(|w|)} of these shorter strings.
A computation of P^A_s(x) may query no more than n^s strings, each of length
bounded by n^s. Each of these strings may code a computation on a string of length
at most n^{s/2}. Each of these computations depends on at most p(n^{s/2})2^{g_0(n^{s/2})} strings,
each of which depends on at most p(n^{s/4})2^{g_0(n^{s/4})} strings, etc.
This recursion can be cut off at strings of length ℓ − 1, because A is already
determined up to that length. The total number of queries needed to decide P^A_s(x) is
bounded by n^s times the product of all the terms above of the form p(n^{s/2^i})2^{g_0(n^{s/2^i})}.
There are at most log log n^s − log log ℓ = O(log s) such terms,
and each of them is bounded by p(n^s)2^{g_0(n^s)}. Therefore the total number
of queries on which P^A_s(x) depends is n^s 2^{O(log(s) g_0(n^s))}, which is less than 2^{g_1(n)} for
sufficiently large n. (The inequality that n must satisfy is n^s (p(n^s)2^{g_0(n^s)})^{O(log s)} < 2^{g_1(n)}.)
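Written out, the chain of estimates in this paragraph is the following; the product runs over the O(log s) levels of the recursion.
\[
n^s \prod_{i \ge 1} p\bigl(n^{s/2^i}\bigr)\, 2^{\,g_0(n^{s/2^i})}
\;\le\; n^s \bigl( p(n^s)\, 2^{\,g_0(n^s)} \bigr)^{O(\log s)}
\;=\; n^s\, 2^{\,O(\log (s)\, g_0(n^s))}
\;<\; 2^{\,g_1(n)}
\]
for all sufficiently large n.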
Thus there remains an unrestrained diagonalizing string of length ℓ, which we put
into A if P^A_s(x) rejects x. That is, we set D^A(x) = 1 − P^A_s(x), adding a string 0xy
to A if necessary. Thus, for each s, we can guarantee that P^A_s does not accept D^A.
Since D^A ∈ (NP[g_1(n)])^A, this shows that (NP[g_1(n)])^A ≠ P^A. Since C^A
is complete for (NP[g_0(n)])^A, the encoding shows that (NP[g_0(n)])^A ⊆ P^A.
We complete stage s by fixing n_s and finishing the coding of any
The preceding theorem is tight because if
(even if we restrict to binary nondeterministic moves) via a relativizable proof. (Pre-
viously, Sanchis [17] had observed that if
Because the classes are separated by tally sets, we also separate the exponential-time
versions of these classes (see [8] for elaboration of this).
Corollary 2.2. There is an oracle A relative to which β^A_k = P^A for every k, yet P^A ≠ NP^A.
Proof. Choose g_0 and g_1 in the previous theorem so that g_0 dominates log^k n for every k
while g_1(n) ∈ n^{O(1)}. Then for
all k, β^A_k ⊆ (NP[g_0(n)])^A = P^A ≠ (NP[g_1(n)])^A ⊆ NP^A.
Theorem 2.3. Let g(\Delta; \Delta) be a polynomial-time computable, monotone increasing
(in both variables) function with log
there exists an oracle A such that
(and in
fact there is a tally set in (NP[g(n; 2i for each i).
(In this theorem, we ignore the relationship between NP[g(n;
NP[g(n; 2i)]. We will take that up in the next theorem.) The only difference between
this construction and the previous one is that there are infinitely many diag-
onalizations going on. At stage we guarantee that the eth machine for
(NP[g(n; 2i)]) A does not accept the diagonal set D A
. Thus,
. The counting argument for this construction
is identical to that in the proof of Theorem 2.1.
Corollary 2.4. For any k, there is an oracle relative to which β_1 = ··· = β_k ≠ β_{k+1} ≠ β_{k+2} ≠ ···.
Proof. Let g(n, i) be an appropriately chosen polylogarithmic function in the preceding theorem.
Theorem 2.5. Let g_0 and g_1 be polynomial-time computable monotone increasing
functions such that log n ∈ o(g_1(n)) and g_0(n) ∈ n^{O(1)}. If g_0(n^{O(1)}) ∈ o(g_1(n)),
there exists an oracle A such that P^A = (NP[g_0(n)])^A ≠ (NP[g_1(n)])^A = PSPACE^A
(and in fact there is a tally set in (NP[g_1(n)])^A − (NP[g_0(n)])^A).
Sketch. In this construction, we do two encodings and one diagonalization. In addition
to coding C A
into P, we also code E A , a generic - p
m -complete set for PSPACE,
into A. accepts x using at most s tape squares with
oracle Ag, where we also count the space used on the oracle tape.) At the end of the
construction, we have
(If one prefers binary oracles, one may code 0, 1, and 2 as 00, 01, and 10.) When we
are doing a diagonalization to make P A
s (x) 6= D A (x), if a coding string for C A
k (w) is
queried, we proceed as before; if a coding string for E A (w) is queried, where jwj - jxj,
then we simply restrain that coding string from the oracle. This will not restrain all
the coding strings for E A (w), since there are 2 g1 (jwj) coding strings for E A (w); if
is the
upper bound on the total number of queries generated by the computation of P A
as in the proof of Theorem 2.1. Therefore, restraining any such coding strings queried
in the computation of P A
s (x) or in its cascade of queries can not restrain all such coding
strings, and thus can not decide E A (w). At the end of each stage, we complete all
codings begun or queried in that stage, so that it will not be changed in any subsequent
stage.
Corollary 2.6. For every k, there is an oracle relative to which β_1 = ··· = β_k ≠ β_{k+1} = β_{k+2} = ···.
With only a slight modification of this technique, we get far more bizarre collapses.
Theorem 2.7. Let g(\Delta; \Delta) be a polynomial-time computable, monotone increasing
(in both variables) function with log n 2 o(g(n; i)) and g(n; i) 2 n O(1) for all i - 1.
there exists an oracle A such that
for all i - 0 (and in fact there is a tally set in (NP[g(n; 2i
for each i).
We include the full proof of this result, although it uses techniques mentioned
before, since this shows how all the pieces fit together.
Proof. Let C A
accepts x within s steps with oracle
A, making at most g(jxj; i) nondeterministic choicesg. Then C A
m -complete for
(NP[g(n; i)]) A for any A. Let p(n; i) be the nondeterministic time bound for some
Turing machine recognizing C ()
. Without loss of generality, assume
that for all i and almost all n,
Let D A
A]g.
For convenience, define g(n;
The construction consists of coding C A
2i into (NP[g(n; 2i \Gamma 1)]) A , for each i - 0,
so (NP[g(n; 0)]) A ' P A and (NP[g(n; 2i
diagonalizing, i.e, guaranteeing that no (NP[g(n; 2i)]) A machine recognizes the set
D A
2i+1 , for any i, so (NP[g(n; 2i + 1)]) A 6' (NP[g(n; 2i)]) A for any i.
At the end of the construction, we will have x 2 C A
Assume that (NP[g(n; i)]) A is enumerated by oracle NTMs M
runs in time bounded by n e for sufficiently large n.
The construction proceeds in stages. Stage consists of some encodings
and one diagonalization, which determines n s . At the end of stage s, A is decided for
all strings of length - n s (and some further coding strings), and A has been extended
so that M A
e;2i does not recognize D A
2i+1 .
At stage s, let he; ii = s, and then choose n ? n s\Gamma1 such that n is a power of 2,
runs in time bounded by n e on inputs of length n, and n satisfies an inequality
to be specified below that is true for almost all n. Let n . The value
of D A
depends only on strings of length 1). Do all coding
involving witnesses of length less than ', and then freeze A through length ' \Gamma 1.
As before, in order to diagonalize, we will need to calculate M A
e;2i (x), which may
generate a cascade of queries. Any diagonalizing strings that do not already belong
to A and are queried in this cascade are restrained from A. But coding strings may
be queried as well. (Because we are coding nondeterministically, coding strings can
be thought of as potential witnesses to membership.) If M A
queries a potential
witness that w 2 C A
2j (w) has not yet been decided, that
potential witness is restrained from A. If 2j - 2i and C A
2j (w) has not yet been
decided, then we compute C A
2j (w) recursively. We will show below that the number
of queries generated by such a cascade of queries is smaller than both of the following
bounds: (1) the number of potential witnesses for w 2 C A
2j , (2) the number of potential
. In fact, bound (1) implies bound (2) as follows. The number
of witnesses for D A
2i+1 (x) is 2 g(jxj;2i+1) , and the number of witnesses for C A
2j (w) is
. If a witness of C A
2j (w) is restrained, then jwj - jxj and 2j ? 2I . Thus
by monotonicity of g, g(jwj;
Thus, restraining potential witnesses as described does not impede any encodings
or restrict our decision about D 2i+1 (x), or those C A
2j (w) for which we restrict coding
strings. (We don't have to worry about what happens to potential witnesses for
2j at a later stage, because any affected codings, i.e., C A
2j (w), will be completed
at this stage; later diagonalizations will not affect them.)
Now we show that there are more potential witnesses for x 2 D A
2i+1 than there are
possible queries in such a cascade of queries. Because of how we encode C A
k , a coding
string z codes a computation that depends on strings of length bounded by
jzj. For
(w) depends on at most p(jwj; k)2
these shorter strings.
e;2i (x) has at most 2 g(n;2i) computations, and each of those computations may
query no more than n e strings, each of length bounded by n e . Each such string may
code a computation C^A_{2j}(w) for some j ≤ e, but we only need to expand that computation
if 2j ≤ 2i. Each of these computations depends on at most p(n^{e/2}, 2i) 2^{g(n^{e/2},2i)}
strings, each of which depends on at most p(n^{e/4}, 2i) 2^{g(n^{e/4},2i)} strings, etc. As before,
the total number of queries needed to decide M^A_{e,2i}(x) is bounded by the product
of log e ≤ log s terms, each of which is 2^{o(g(n,2i+1))}. Therefore the total number of
queries on which M^A_{e,2i}(x) depends is 2^{o(g(n,2i+1))}, which is less than 2^{g(n,2i+1)} for
sufficiently large n.
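Written out, the estimate of the preceding paragraph amounts to the following display; this is only a restatement of the bound just derived, with p and g as above:
\[
2^{g(n,2i)} \cdot n^{e} \cdot \prod_{j=1}^{O(\log e)} p\bigl(n^{e/2^{j}},\,2i\bigr)\, 2^{\,g(n^{e/2^{j}},\,2i)} \;=\; 2^{\,o(g(n,2i+1))},
\]
since each factor is 2^{o(g(n,2i+1))} and the number of factors is bounded by a constant (log e) at each stage.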
Thus there remains an unrestrained diagonalizing string of length ', which we
put into A if and only if M A
rejects x. That is, we set D A
adding a string 0xy to A if necessary. Thus, for each s, we can guarantee that M A
does not accept D^A_{2i+1}; this shows that (NP[g(n,2i+1)])^A ⊄ (NP[g(n,2i)])^A.
Since C^A_{2i} is complete for (NP[g(n,2i)])^A, our encoding guarantees that
(NP[g(n,2i)])^A ⊆ (NP[g(n,2i−1)])^A, which completes the proof.
Corollary 2.8. There is an oracle relative to which, for each k, β_{2k} ≠ β_{2k+1} = β_{2k+2}.
Corollary 2.9. For any consistent pattern of collapses and separations of the
β_k's, there is an oracle relative to which that pattern holds.
Notice that if the set of collapses is not recursive, then the oracle will also be
non-recursive.
In addition to collapsing or separating fi j and fi k , we can code co-fi k into fi k -
or separate the two. This involves some additional argument.
Theorem 2.10. Let g(\Delta; \Delta) be a polynomial-time computable, monotone increasing
(in both variables) function with log n 2 o(g(n; i)) and g(n; i) 2 n O(1) for all
there exists an oracle
A such that P
(and in fact there
are tally sets in (NP[g(n; 2i A and in (NP[g(n; 2i
for each i).
Sketch. For convenience, we will separate (NP[g(n; 2i+1)]) A from (co-NP[g(n; 2i
rather than (NP[g(n; 2i 2)]) A from (co-NP[g(n; 2i given the other re-
quirements, this is equivalent. In order to separate (NP[g(n; 2i 1)]) A from
we use the set D A
2i+1 in (NP[g(n; 2i
that D A
. Most of this construction is identical to that
of Theorem 2.7, except that we interleave an extra diagonalization into the construc-
tion; the codings and diagonalizations are analogous to earlier constructions, and the
counting argument is identical.
We code complete sets C A
2i for (NP[g(n; 2i)]) A into (NP[g(n; 2i \Gamma 1)]) A , and diagonalize
so that no (NP[g(n; 2i)]) A machine recognizes D A
(Thus D A
does double duty: during even stages, it diagonalizes against
during odd stages, against (co-NP[g(n; 2i
To guarantee that D A
2i+1 is not in (co-NP[g(n; 2i , we make sure that, for
each e, the e th machine for (NP[g(n; 2i does not recognize D A
. This holds
if and only if there is some x such that D A
e;2i+1 (x). This diagonalization
differs from earlier ones only when M A
queries a witness for x 2 D A
2i+1 . As
before, if M A
queries a coding string for some computation of C A
2j (w) where
then we can safely restrain the coding string. (If
may exclude w from D A
2i+1 , but that doesn't matter. As
long as D A
e;2i+1 (x), we don't care what happens to D A
2i+1 for other strings
of lengths between n s\Gamma1 and n s , where then we retrace the
computation, as before.
If M A
queries a witness for x 2 D A
2i+1 , we first restrain all such wit-
nesses, and continue. If this leads to a rejecting computation of M A
2i+1 (x), and the diagonalization is successful. If it leads to an
accepting computation, we preserve the lexicographically least accepting path for
that computation, and all of its cascade of queries. As before, the computation of
restrains at most 2 o(g(n;2i+1)) strings, so this will not restrain all the witnesses
for x 2 D A
2i+1 . Thus we can find an unrestrained witness and add it to A, so
D A
e;2i+1 (x), as desired.
Therefore, this additional set of diagonalization requirements can be interleaved
with the previously-described diagonalizations and collapses.
Theorem 2.11. Let g(\Delta; \Delta) be a polynomial-time computable, monotone increasing
(in both variables) function with log n 2 o(g(n; i)) and g(n; i) 2 n O(1) for all
there exists an oracle
A such that P
(and in fact there are tally
sets in (NP[g(n; 2i for each i).
Sketch. As before, we construct A so that no (NP[g(n; 2i)]) A machine recognizes
the set D A
2i+1 , and so that C A
In addition, in order to make
as follows:
For each i, let N A
i be an (NP[g(n; i)]) A machine recognizing C A
i in nondeterministic
time bounded by p(n; i) (regardless of the oracle). By the form of the encoding,
query any of its own coding strings. If a witness string for C A
is queried in the course of a diagonalization (NP[g(n; 2i)]) A 6= (NP[g(n; 2i
then we can retrace the computation. If 2j
we can restrain the queried witness string (for jxj sufficiently large) without deciding
C A
2j+1 (x), by the same counting argument as in previous proofs.
Thus, we can add this extra encoding, without interfering with the other collapses
and codings.
This gives us the following stronger version of Hemachandra and Jha's oracle [8].
Corollary 2.12. For each k, there is an oracle relative to which for all j,
(and the separations are witnessed by tally sets).
Combining the results (and techniques) of Theorems 2.7, 2.10, and 2.11, we get
the following very strong result.
Corollary 2.13. For any consistent set of requirements, there is an oracle
relative to which the fi hierarchy satisfies those requirements.
In constructing such an oracle, one must be careful in closing classes under com-
plement. In particular, if we close one class under complement, and separate another
from its complement, we cannot then make the two classes equal.
Corollary 2.9 implies that there are uncountably many different patterns of collapse
that can be realized in relativized worlds. If the set of requirements is recursive,
then the oracle can be made recursive, but certainly some of those patterns are realized
by only nonrecursive oracles.
3. Dense β Hierarchies. Previously we considered β_r only when r is a natural
number. But the class (NP[log^r n])^A is meaningful whenever r is a nonnegative real
number (regardless of whether r is computable). Even when we allow r to be real, we
can make the β hierarchy obey any consistent set of requirements. For example, we
can make the β hierarchy look like a Cantor set.
Theorem 3.1. Let X be any subset of [1, ∞). There exists an oracle A such
that, for all s, t ≥ 1, β^A_s = β^A_t if and only if [s, t] ⊆ X.
Note that there may be uncountably many distinct β^A_t's. Because there are two
rationals between any two reals, we need only separate the distinct β^A_q's where q is
rational.
Proof. Without loss of generality, assume that X is a union of intervals, each containing
more than one point. Every interval in X contains a rational point; therefore X
contains countably many intervals.
We will satisfy the following requirements for each maximal interval in X , depending
on its type:
log log n]
log log n]
In addition, for each rational number q in (1;
NP[log q n= log log n] 6= NP[log q n] 6= NP[log q n log log n]:
then we make P 6= NP[log n log log n] as well.
The construction is a slight modification of that in the proof of Theorem 2.7.
We perform the diagonalizations in some well-founded order, while maintaining the
codings as we go along. The only significant difference here is that the diagonalizations
are not performed in increasing order. Suppose that at some stage we are making
ae NP[b(n)] and a coding string for some NP[c(n)] computation is queried;
we restrain that string if and only if (9n)[c(n) ? b(n)] if and only if (8n)[c(n) - b(n)].
The counting argument is the same as before.
Note: we could also close each distinct fi r under complement or not, as we wish,
in the theorem above.
4. Open Problems. The class β_k is contained in NP ∩ DTIME(2^{log^k n}). Our
work shows that there is no relativizing proof that We would
like to know whether
are there any easy languages in NP \Gamma P? The best we can show is that if
well-behaved function f , then
Is there an oracle relative to which this is the best possible translation of the collapse?
Does
Acknowledgments
. We are grateful to Leen Torenvliet, Andrew Klapper, and
Martin Kummer for helpful discussions, and Andrew Klapper, Bill Gasarch, and Martin
Kummer for careful proofreading of earlier drafts.
--R
"Complexity Classes With Complete Problems Between P and NP-Complete,"
Relativizations of the P
"Nondeterminism within P,"
The Boolean hierarchy I: structural properties.
Classes of bounded nondeterminism.
Limited nondeterminism SIGACT News
Defying upward and downward separation.
Computations with a restricted number of nondeterministic steps.
Computations with a restricted number of nondeterministic steps.
Refining nondeterminism in relativized polynomial-time bounded computations
Relativized polynomial hierarchies having exactly k levels.
Sparse sets in NP
On the structure of polynomial time reducibility.
On limited nondeterminism and the complexity of the V-C dimension
Every prime has a succinct certificate.
Constructing language instances based on partial information.
--TR | oracles;structural complexity theory;hierarchies;limited nondeterminism |
290287 | A rejection technique for sampling from log-concave multivariate distributions. | Different universal methods (also called automatic or black-box methods) have been suggested for sampling from univariate log-concave distributions. The description of a suitable universal generator for multivariate distributions in arbitrary dimensions has not been published up to now. The new algorithm is based on the method of transformed density rejection. To construct a hat function for the rejection algorithm, the multivariate density is transformed by a proper transformation T into a concave function (in the case of a log-concave density T(x) = log(x)). Then it is possible to construct a dominating function by taking the minimum of several tangent hyperplanes that are transformed back by T^{-1} into the original scale. The domains of the different pieces of the hat function are polyhedra in the multivariate case. Although this method can be shown to work, it is too slow and complicated in higher dimensions. In this article we split R^n into simple cones. The hat function is constructed piecewise on each of the cones by tangent hyperplanes. The resulting function is no longer continuous and the rejection constant is bounded from below, but the setup and the generation remain quite fast in higher dimensions (e.g. n = 8). The article describes the details of how this main idea can be used to construct algorithm TDRMV that generates random tuples from a multivariate log-concave distribution with a computable density. Although the developed algorithm is not a real black-box method, it is adjustable for a large class of log-concave densities. | Introduction
For the univariate case there is a large literature on generation methods for
standard distributions (see e.g. [Dev86] and [Dag88]) and in the last years some
papers appeared on universal (or black-box) methods (see [Dev86, chapter VII],
[GW92], [Ahr95], [H-or95a], [HD94] and [ES97]); these are algorithms that can
generate random variates from a large family as long as some information (typ-
ically the mode and the density of the specific distribution) are available.
For the generation of variates from bivariate and multivariate distributions
papers are rare. Well known and discussed are only the generation of the multi-normal
and of the Wishart distribution (see e.g. [Dev86] and [Dag88]). Several
approaches to the problem of generating multivariate random tuples exist, but
these have some disadvantages:
• The multivariate extension of the ratio-of-uniforms method as in [SV87]
or [WGS91]. This method can be reformulated as rejection from a small
family of table-mountain shaped multivariate distributions. This point of
view is not included in these two papers but it is useful as it clarifies the
question why the acceptance probability becomes poor for high correla-
tion. This disadvantage of the method is already mentioned in [WGS91].
The practical problem how to obtain the necessary multivariate rectangle
enclosing the region of acceptance for the ratio of uniforms method is not
discussed in [SV87] nor in [WGS91] and seems to be difficult for most
distributions.
• The conditional distribution method. It requires the knowledge of and the
ability to sample from the marginal and the conditional distributions (see
[Dev86, chapter XI.1.2]).
• The decomposition and rejection method. A majorizing function (also
called hat function) suggested for the multivariate rejection method is the
product of the marginal densities (in [Dag88]). It is not clear at all how
to obtain the necessary rejection constant α.
• Development of new classes of multivariate distributions, which are easy
to generate. It is only necessary (and possible) to specify the marginal
distribution and the degree of dependence measured by some correlation
coefficient (see the monograph [Joh87]). This idea seems to be attractive
for most simulation practitioners interested in multivariate distributions
but it is no help if variates from a distribution with given density should
be generated.
Recently Devroye [Dev97] has developed algorithms for ortho-unimodal
densities. But this paper leaves the generation of log-concave distributions
as an open problem.
• Sweep-plane methods for log-concave (and T-concave) distributions are
described recently in [H-or95b] for the bivariate case and in [LH98] for the
multivariate case. These algorithms use the idea of transformed density
rejection which is presented in a first form in [Dev86, chapter VII.2.4] and
with a different set-up in [GW92].
To our knowledge these two algorithms are the only universal algorithms
in the literature for multivariate distributions with given densities. (In
[Dev86, chapter XI.1.3] it is even stressed that no general inequalities
for multivariate densities are available, a fact which makes the design of
black-box algorithms, that are similar to those developed in [Dev86] for
the univariate case, impossible.)
Although the algorithm in [LH98] works, it is very slow, since the domain of
the density f is decomposed in polyhedra. This is due to the construction of the
hat function, where we take the pointwise minimum of tangent hyperplanes. In
this paper we again use transformed density rejection and the sweep-plane technique
to derive a much more efficient algorithm. The main idea is to decompose
the domain of the density into cones first and then compute tangent hyperplanes
in these cones. The resulting hat function is not continuous any more and the
rejection constant is bounded from below, but the setup as well as the sampling
from the hat function is much faster than in the original algorithm.
Section 2 explains the method and gives all necessary mathematical formu-
lae. Section 3 provides all details of the algorithm. Section 4 discusses how to
improve and extend the main idea of the algorithm (e.g. to T-concave distribu-
tions, bounded domain) and section 5 reports the computational experience we
have had with the new algorithm.
2 The method
2.1 Transformed density rejection
Density. We are given a multivariate distribution with differentiable density
function f(x), x ∈ R^n.
To simplify the development of our method we assume that the domain of f is D = R^n
and that the mode of f is in the origin. In §4 we extend the algorithm so that these requirements can be
dropped.
Transformation. To design a universal algorithm utilizing the rejection
method it is necessary to find an automatic way to construct a hat function
for a given density. Transformed density rejection, introduced under a different
name in [GW92] and generalized in [H-or95a], is based on the idea that the density
f is transformed by a monotone T (e.g. T(x) = log(x)) in such a way that
(see [H-or95a]):
(T1) ~f(x) = T(f(x)) is concave (we then say "f is T-concave");
(T2) T is differentiable and T'(x) > 0, which implies T^{-1} exists; and
(T4) the volume under the hat is finite.
Hat. It is then easy to construct a hat ~h(x) for ~f(x) as the minimum of N
tangents. Since ~f(x) is concave we clearly have ~f(x) ≤ ~h(x).
Transforming ~h(x) back into the original scale we get h(x) = T^{-1}(~h(x)), a
majorizing function or hat for f, i.e. with f(x) ≤ h(x). Figure 1 illustrates
the situation for the univariate case by means of the normal distribution and
the transformation T(x) = log(x).

Figure 1: hat function for univariate normal density

The left hand side shows the transformed
density with three tangents. The right hand side shows the density function
with the resulting hat. (The dashed lines are simple lower bounds for the density
called squeezes in random variate generation. Their use reduces the number of
evaluations of f . Especially if the number of touching points is large and the
evaluation of f is slow the acceleration gained by the squeezes can be enormous.)
Rejection. The basic form of the multivariate rejection method is given by
algorithm Rejection().
Algorithm 1 Rejection()
1: Set-up: Construct a hat-function h(x).
2: Generate a random tuple X with density proportional to h(X) and a uniform random number U.
3: If U · h(X) ≤ f(X) return X, else go to 2.
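As a minimal illustration (not part of the original description), the rejection loop of Algorithm 1 can be sketched in C as follows; density(), hat() and sample_hat() are assumed user-supplied callbacks for f, h and the generator of step 2:

#include <stdlib.h>

/* user-supplied callbacks (hypothetical names, not from the paper) */
extern double density(const double *x, int n);   /* f(x)                      */
extern double hat(const double *x, int n);       /* hat h(x) >= f(x)          */
extern void   sample_hat(double *x, int n);      /* X with density prop. to h */

static double u01(void)                          /* uniform (0,1) from rand() */
{
    return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
}

/* generic multivariate rejection (Algorithm 1): fills x[0..n-1] */
void rejection(double *x, int n)
{
    do {
        sample_hat(x, n);                         /* step 2 */
    } while (u01() * hat(x, n) > density(x, n));  /* step 3: accept if U*h(X) <= f(X) */
}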
The main idea of this paper is to extend transformed density rejection as
described in [H-or95a] to the multivariate case.
2.2 Construction of a hat function
Tangents. Let p_i be points in D ⊆ R^n. In the multivariate case the tangents
of the transformed density ~f(x) at the points p_i are the hyperplanes given by
    ℓ_i(x) = ~f(p_i) + ⟨∇~f(p_i), x − p_i⟩,    (2)
where ⟨·,·⟩ denotes the scalar product.
Polyhedra. In [LH98] a hat function h(x) is constructed as the pointwise
minimum of these tangents. We have
    h(x) = T^{-1}( min_{i=1,…,N} ℓ_i(x) ).    (3)
The domains in which a particular tangent ℓ_i(x) determines the hat function
are simple convex polyhedra P i , which may be bounded or not (for details
about convex polyhedra see [Gr-u67, Zie95]). Then a sweep-plane technique is used for
generating random tuples in such a polyhedron with density proportional to the hat function.
To avoid lots of indices we write p, ℓ(x) and P without the index i if there
is no risk of confusion.
A sweep-plane algorithm. Let
    g = g(p) = −∇~f(p) / ‖∇~f(p)‖   if ∇~f(p) ≠ 0;    (4)
choose any g with ‖g‖ = 1 otherwise. (‖·‖ denotes the 2-norm.)
For a given x ∈ R we denote the hyperplane perpendicular to g through x by
    F(x) = { y ∈ R^n : ⟨g, y⟩ = x }    (5)
and its intersection with the polytope P with Q(x) = F(x) ∩ P. (For fixed g these sets
depend on x only; thus we write F(x) and Q(x), if there is no risk of confusion.) Q(x)
again is a convex simple polyhedron. Now we can move this sweep-plane F(x)
through the domain P by varying x. Figure 2 illustrates the situation.
Figure 2: sweep-plane F(x)

As can easily be seen from (2), (4) and (5), T^{-1}(ℓ(x)) is constant on Q(x)
for every x. Let
    α = ~f(p) − ⟨∇~f(p), p⟩   and   β = ‖∇~f(p)‖.    (6)
Then the hat function in P is given by
    h(y) = T^{-1}(ℓ(y)) = T^{-1}(α − β x),    (7)
where again x = ⟨g, y⟩. We find for the marginal density function of the hat
    h_g(x) = ∫_{Q(x)} T^{-1}(ℓ(y)) dy = T^{-1}(α − β x) · A(x),    (8)
where integration is done over F(x). A(x) denotes the (n − 1)-dimensional
volume of Q(x). It exists if and only if Q(x) is bounded.
To compute A(x) let v j denote the vertices of P and v
assume that the polyhedron P is simple. Then let t v j
n be the n nonzero
vectors in the directions of the edges of P originated from v j , i.e. for each k and
every
by modifying the method in [Law91] we find
a
The coefficients are given by
Y
and
a
Notice that b (x)
equations (9) and (10)
does not hold if P is not simple. For details see [LH98].
The generation from h_g is not easy in general. But for log-concave or
T_c-concave (see §4.8) densities f(x), h_g again is log-concave ([Pr'e73]) and
T_c-concave ([LH98]), respectively.
Generate random tuples. For sampling from the "hat distribution" we first
need the volume below the hat in all the polyhedra P i and in the domain D.
We then choose one of these polytopes randomly with probability proportional to
their volumes. By means of a proper univariate random number we sample from the
marginal distribution h_g and get an intersection Q(x) of P. At last we have to
sample from the uniform distribution on Q(x).
It can be shown (see [LH98]) that the algorithm works if
(1) the polyhedra P i are simple (see above),
(2) there exists a unique maximum of ℓ_i(x) in P_i (then α − βx is decreasing
and thus the volume below the hat is finite in unbounded polyhedra), and
(3) ⟨g, x⟩ is non-constant on every edge of P_i (otherwise ⟨g, t_{v_j,k}⟩ = 0 for
some vertex v_j and an edge t_{v_j,k}, and thus a coefficient in (10) and (11) is not defined).
Adaptive rejection sampling. It is very hard to find optimal points for
constructing these tangents ' i (x). Thus these points must be chosen by adaptive
rejection sampling (see [GW92]). Adapted to our situation it works in the
following way: We start with the vertices of a regular simplex and add a
new construction point whenever a point is rejected until the maximum number
N of tangents is reached. The points of contact are thus chosen by a stochastic
algorithm and it is clear that the multivariate density of the distribution of
the next point for a new tangent is proportional to h(x) \Gamma f(x). Hence with
tending towards infinity the acceptance probability for a hat constructed in
such a way converges to 1 with probability 1. It is not difficult to show that the
expected volume below the hat is 1 +O(N \Gamma2=n ).
Problems. Using this method we run into several problems.
We have to compute the polyhedra every time we add a point.
What must be done, if the marginal distribution (8) does not exist in the
initial (usually not bounded) polyhedra P i , or if the volume below the hat
is infinite (Q i (x) not bounded, ff \Gamma fi x not decreasing)?
Moreover the polyhedra P i typically have many vertices. Therefore the algorithm
is slow and hard to implement because of the following effects.
\Gamma The computation of the polyhedra (setup) is very expensive.
\Gamma The marginal density (8) is expensive to compute. Since it is different
for every polyhedron P i (and for every density function f ), we have to
use a slow black box method (e.g. [GW92, H-or95a]) for sampling from the
marginal distribution even in the case of log-concave densities.
\Gamma Q(x) is not a simplex. Thus we have to use the (slow) recursive sweep-
plane algorithm as described in [LH98] for sampling from the uniform
distribution over a (simple) polytope.
2.3 Simple cones
A better idea is to choose the polyhedra first as simple as possible, i.e. we choose
cones. (We describe in x2.4 how to get such cones.)
A simple cone C (with its vertex in the origin) is an unbounded subset of R^n
spanned by n linearly independent vectors t_1, …, t_n:
    C = { λ_1 t_1 + ⋯ + λ_n t_n : λ_k ≥ 0 }.    (12)
In opposition to the procedure described above we now have to choose a proper
point p in this cone C for constructing a tangent. In the whole cone the hat h
is then given by this tangent. The method itself remains the same.
Obviously the hat function is not continuous any more (because we first define
a decomposition of the domain and then compute the hat function over the
different parts. It cannot be made continuous by taking the pointwise minimum
of the tangents, since otherwise we cannot compute the marginal density h g
by
equation (8)). Moreover we have to choose one touching point in each part.
These disadvantages are negligible compared to the enormous speedup of the
setup and of the generation of random tuples with respect to this hat function.
Marginal density. The intersection Q(x) of the sweep plane F(x) with the
cone C is bounded if and only if F(x) cuts each of the sets {λ t_k : λ > 0},
x > 0, i.e. if and only if ⟨g, λ t_k⟩ = x has a solution λ > 0 for every k, hence if and only if
    ⟨g, t_k⟩ > 0   for all k = 1, …, n.    (13)
We find for the volume A(x) in (9) of the intersection Q(x)
    A(x) = a x^{n−1},    (14)
where (again)
    a = |det(t_1, …, t_n)| / ( (n−1)! ∏_{k=1}^{n} ⟨g, t_k⟩ ).    (15)
Notice that A(x) does not exist if condition (13) is violated, whereas the right
hand side in (14) is defined for all x ≥ 0.
If the marginal density exists, i.e. (13) holds, then by (8) and (6) it is given
by
    h_g(x) = a x^{n−1} T^{-1}(α − β x),   x ≥ 0.    (16)
Volume. The volume below the hat function in a cone C is given by
    H_C = ∫_0^∞ h_g(x) dx = ∫_0^∞ a x^{n−1} T^{-1}(α − β x) dx.    (17)
Notice that g and thus a, α and β depend on the choice of p. Choosing an
arbitrary p may result in a very large volume below the hat and thus in a very
poor rejection constant.
Intersection of sweep-plane. Notice that the intersection Q(x) is always
an (n − 1)-simplex if condition (13) holds. Thus we can use the algorithm in
[Dev86] for sampling from the uniform distribution on Q(x). The vertices
of Q(x) in R^n are given by
    v_k(x) = (x / ⟨g, t_k⟩) t_k,   k = 1, …, n.
Let U_1, …, U_{n−1} be iid uniformly [0,1] distributed random variates, U_0 = 0 and U_n = 1.
We sort these variates such that U_0 ≤ U_1 ≤ ⋯ ≤ U_n. Then we get a random point X
in Q(x) by (see [Dev86, theorems XI.2.5 and V.2.1])
    X = Σ_{k=1}^{n} (U_k − U_{k−1}) v_k(x).
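A C sketch of this procedure, assuming the vertex formula v_k(x) = x t_k / ⟨g, t_k⟩ reconstructed above (array layout, bounds and names are ours):

#include <stdlib.h>

static int cmp_double(const void *a, const void *b)
{
    double d = *(const double *)a - *(const double *)b;
    return (d > 0) - (d < 0);
}

/* Uniform point on the (n-1)-simplex Q(x) spanned by the cone vectors t_k
   (stored row-wise: t[k*n+i] is the i-th coordinate of t_k).
   gt[k] = <g, t_k> > 0 is assumed, i.e. condition (13) holds.             */
void sample_Q(double *X, double x, const double *t, const double *gt,
              int n, double (*uniform)(void))
{
    double U[64];                         /* assumes n <= 63               */
    for (int k = 1; k < n; k++) U[k] = uniform();
    U[0] = 0.0; U[n] = 1.0;
    qsort(U + 1, n - 1, sizeof(double), cmp_double);

    for (int i = 0; i < n; i++) X[i] = 0.0;
    for (int k = 1; k <= n; k++) {        /* spacings times vertices       */
        double w = (U[k] - U[k-1]) * x / gt[k-1];
        for (int i = 0; i < n; i++)
            X[i] += w * t[(k-1)*n + i];
    }
}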
The choice of p. One of the main difficulties of the new approach is the choice
of the touching point p. In opposite to the first approach where the polyhedron
is build around the touching point, we now have to find such a point so that
holds. Moreover the volume below the hat function over the cone should
be as small as possible.
Searching for such a touching point in the whole cone C or in domain D
(the touching point needs not to be in C) with techniques for multidimensional
minimization is not very practicable. Firstly the evaluation of the the volume
HC in (17) for a given point p is expensive and its gradient with respect to p is
not given. Secondly the domain of HC is given by the set of points where (13)
holds.
Instead we suggest to choose a point in the center of C for a proper touching
point for our hat. Let -
be the barycenter of the spanning vectors.
Let a(s), ff(s) and fi(s) denote the corresponding parameters in (16) for t.
Then we choose by minimizing the function
Z
The domain D_A of this function is given by all points s where ‖∇~f(s t̄)‖ ≠ 0 and
where A(x) exists, i.e. where condition (13) holds. It can easily be
seen that D_A is an open subset of (0, ∞).
To minimize j we can use standard methods, e.g. Brent's algorithm (see
e.g. [FMM77]). The main problem is to find DA . Although ~
f(x) is concave by
assumption, it is possible for a particular cone C that DA is a strict subset of
(0; 1) or even the empty set. Moreover it might not be connected. In general
only the following holds: Let (a; b) be a component of DA 6= ;. If If f 2 C 1 , i.e.
the gradient of f is continuous, then
lim
s&a
Roughly spoken, j is a U-shaped function on (a; b).
An essential part of the minimization is initial bracketing of the minimum,
i.e. finding three points s (a; b), such that j(s 1
This is necessary since the function term of j in (20) is also
defined for some s 62 DA (e.g. s ! 0). Using Brent's algorithm without initial
bracketing may (and occasionally does) result in e.g. a negative s.
Bracketing can be done by (1) search for a s 1 2 DA , and (2) use property
(21) and move towards a and b, respectively, to find an s 0 and an s 2 . (It is
obvious that we only find a local minimum of j by this procedure. But in all
the distributions we have tested, there is just one local minimum which therefore
is the global one.)
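The implementation uses Brent's algorithm [FMM77]; as an illustration only, the following C routine is a simpler golden-section stand-in that works on a bracket (s_0, s_1, s_2) found as described above:

/* Minimize a unimodal function eta over a bracket (s0, s2) that contains a
   minimum (e.g. found via s0 < s1 < s2 with eta(s1) < eta(s0), eta(s2)).
   Golden-section search: a simple stand-in for Brent's algorithm.          */
double golden_min(double (*eta)(double), double s0, double s2, double tol)
{
    const double r = 0.6180339887498949;          /* (sqrt(5) - 1) / 2 */
    double a = s0, b = s2;
    double x1 = b - r * (b - a), x2 = a + r * (b - a);
    double f1 = eta(x1), f2 = eta(x2);
    while (b - a > tol) {
        if (f1 < f2) { b = x2; x2 = x1; f2 = f1; x1 = b - r * (b - a); f1 = eta(x1); }
        else         { a = x1; x1 = x2; f1 = f2; x2 = a + r * (b - a); f2 = eta(x2); }
    }
    return 0.5 * (a + b);                         /* approximate minimizer s */
}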
For the special case where hg(s); - ti does not depend on s (e.g. for all multivariate
normal distributions) DA either is (0; 1) or the empty set. It is then
possible to make similar considerations like that in [H-or95a, theorem 2.1] for
the one dimensional case. Adapted to the multivariate case it would state, that
for the optimal touching point p, f(p) is the same for every cone C.
Condition violated. Notice that DA even may be the empty set, i.e. condition
fails for all s 2 (0; 1). By the concavity of ~
f(x) we know, that
construction point p. Furthermore hg; pi is bounded from
below on every compact subset of the domain D of the density f . Therefore
there always exists a partition into simple cones with proper touching points
which satisfy (13), i.e. the domains DA are not empty for all cones C.
We even can have
2.4 Triangulation
For this new approach we need a partition of R^n into simple cones. We get
such a partition by triangulation of the unit sphere S^{n−1}. Each cone C is then
generated by a simplex Δ ⊂ S^{n−1} (triangle in S^2, tetrahedron in S^3, and so on):
    C = { λ x : λ ≥ 0, x ∈ Δ }.    (22)
These simplices are uniquely determined by the vectors t_1, …, t_n of
their vertices. (They are the convex hull of these vertices in S^{n−1}.) It does
not matter that these cones are closed sets. The intersection of such cones might
not be empty but has measure zero.
For computing a in (15) we need the volumes of these simplices. To avoid DA
being the empty set, some of the cones have to be skinny. Furthermore to get
a good hat function, these simplices should have the same volume (if possible)
and they should be "regular", i.e. the distances from the center to the vertices
should be equal (or similar). Thus the triangulation should have the following
properties:
(C1) Recursive construction.
(C2) The volumes are easily computable for all simplices.
(C3) Edges of a simplex have equal length.
Although it is not possible to get such a triangulation for n - 3 we suggest an
algorithm which fulfils (C1) and (C2) and which "nearly" satisfies (C3).
Initial cones. We get the initial simplices as the convex hull in S^{n−1} of the
vectors
    σ_1 e_1, σ_2 e_2, …, σ_n e_n,    (23)
where e_i denotes the i-th unit vector in R^n (i.e. a vector where the i-th component
is 1 and all others are 0) and σ_i ∈ {−1, +1}. As can easily be seen, the
resulting partition of R^n is that of the arrangement of the hyperplanes x_i = 0.
Hence we have 2^n initial cones.
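A possible C sketch for enumerating these initial cones, one cone per sign vector (the array layout is an assumption of this sketch):

/* Initial cone number m (0 <= m < 2^n): it is spanned by sigma_i * e_i with
   sigma_i = +1 if bit i of m is 0 and sigma_i = -1 otherwise.
   t is filled row-wise: t[k*n+i] = i-th coordinate of the k-th spanning vector. */
void initial_cone(double *t, unsigned m, int n)
{
    for (int k = 0; k < n; k++) {
        for (int i = 0; i < n; i++)
            t[k*n + i] = 0.0;
        t[k*n + k] = ((m >> k) & 1u) ? -1.0 : 1.0;
    }
}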
Barycentric subdivision of edges. To get smaller cones we have to triangulate
these simplices. Standard triangulations of simplices which are used
for example in fixed-point computation (see e.g. [Tod76, Tod78]) are not appropriate
for our purpose. The number of simplices increases too fast for each
triangulation step. (In opposition to fixed point calculations, we have to keep
all simplices with all their parameters in the memory of the computer.)
Instead we use a barycentric subdivision of edges: Let t_1, …, t_n be the
vertices of a simplex Δ. Then use the following algorithm.
(1) Find the longest edge of Δ, say the edge between t_i and t_j.
(2) Let t_new = (t_i + t_j) / ‖t_i + t_j‖,
i.e. the barycenter of the edge projected to the sphere.
(3) Get two smaller simplices: Replace vertex t i by t new for the first simplex
and vertex t j by t new for the second one. We have
After making k of such triangulation steps in all initial cones we have 2 n+k
simplices.
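One subdivision step could be coded in C as follows (a sketch under the conventions above; the choice of the edge (i, j) — longest or "oldest" — is left to the caller):

#include <math.h>

/* One barycentric subdivision step: split the simplex/cone spanned by the
   rows t_1,...,t_n of t at the normalized midpoint of the edge (i,j).
   On return, t holds the first child (t_i replaced) and t2 the second
   child (t_j replaced).                                                  */
void split_edge(double *t, double *t2, int n, int i, int j)
{
    double norm = 0.0, tnew[64];          /* assumes n <= 64 */
    for (int d = 0; d < n; d++) {
        tnew[d] = t[i*n + d] + t[j*n + d];
        norm += tnew[d] * tnew[d];
    }
    norm = sqrt(norm);
    for (int d = 0; d < n; d++) tnew[d] /= norm;   /* project back to S^{n-1} */

    for (int k = 0; k < n; k++)
        for (int d = 0; d < n; d++)
            t2[k*n + d] = t[k*n + d];              /* copy for second child   */
    for (int d = 0; d < n; d++) {
        t[i*n + d]  = tnew[d];            /* first child: replace t_i  */
        t2[j*n + d] = tnew[d];            /* second child: replace t_j */
    }
}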
This triangulation is more flexible. Whenever we have a cone C, where D a is
empty (or the algorithm does not find an s 2 D a ) we can split C and try again
to find a proper touching point in both new cones. This can be done until we
have found proper construction points for all cones of the partition (see end of
x2.3). In practice this procedure stops, if too many cones are necessary. (The
computer runs out of memory.)
Notice that it is not a good idea to use barycentric subdivision of the whole
simplex (instead of dividing the longest edge). This triangulation exhibits the
inefficient behavior of creating long, skinny simplices (see remark in [Tod76]).
"Oldest" edge. Finding the longest of the
edges of a simplex is very
expensive. An alternative approach is to use the "oldest" edge of a simplex.
The idea is the following:
(1) Enumerate the 2n vertices of the initial cones.
(2) Whenever a new vertex is created by barycentric subdivision, it gets the
next number.
(3) Edges are indexed by the tuple (i; j) of the number of the incident vertices,
such that i ! j.
We choose the edge with the lowest index with respect to the lexicographic
order (the "oldest" edge). This is just the pair of lowest indices of the
vertices of the simplex.
As can easily be seen, the "oldest" edge is (one of) the longest edge(s) for the
first steps. Unluckily this does not hold for all simplices in
following triangulation steps. (But it is at least not the shortest one.)
Computational experiences with several normal distributions for some dimensions
have show, that this idea speeds up the triangulation enormously
but has very little effect on the rejection constant.
Setup. The basic version of the setup algorithm is as follows:
1. Create initial cones.
2. Triangulate.
3. Find touching points p if possible (and necessary).
4. Triangulate every cone without proper touching point.
5. Goto 3 if cones without proper touching points exist, otherwise stop.
2.5 Problems
Although this procedure works for our tested distributions, an adaptation might
be necessary for a particular density function f .
(1) The searching algorithm for a proper touching point in x2.3 can be im-
proved. E.g. DA is either [0; 1) or the empty set if f is a normal distribution.
(2) There is no criterion how many triangulation steps are necessary or usefull
for an optimal rejection constant. Thus some tests with different numbers of
trianglation steps should be made with density f (see also x5).
(3) It is possible to triangulate each cone with a "bad" touching point. But
besides the case where no proper touching point can be found, some touching
points may lead to an enourmous volume below the hat function. So this case
should also be excluded and the corresponding cones should be triangulated.
A simple solution to this problem is that an upper bound Hmax for the
volumes HC is provided. Each cone with HC ? Hmax has to be triangulated
further. Such a bound can be found by some empirical tests with the given
density f .
Another way is to triangulate all initial cones first and then let Hmax be a
multiple (e.g. 10) of the 90th percentile of the HC of all created cones.
Problems might occur when the mode is on the boundary of the support
(Then we set
can be seen as a concave
An example for such a situation is when f(x) is a normal density on
a ball B and vanishes outside of B.
In such a case there exists a cone C such that f- t: - ? 0g does not intersect
suppf and the algorithm is in trouble. If C ∩ suppf = ∅ we simply can
remove this cone. Otherwise an expensive search for a proper touching point is
necessary.
Restrictions. The above observations - besides the fact that no automatic
adaption is possible - are a drawback of the algorithm for its usage as black-box
algorithm. Nevertheless the algorithm is suitable for a large class of log-concave
densities and it is possible to include parameters into the code to adjust the
algorithm for a given density easily. Of course some tests might be necessary.
Besides, the algorithm does not produce wrong random points but simply does
not work, if no "good" touching points can be found for some cones C.
2.6 Log-concave densities
The transformation T(x) = log(x) yields ~f(x) = log(f(x)); if ~f
is concave, we say f is log-concave. We have T^{-1}(x) = e^x and thus we find
for the marginal density function in (16)
    h_g(x) = a x^{n−1} e^{α − β x},    (27)
i.e. (up to a constant factor) that of a gamma distribution with parameters n and β.
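The implementation uses the gamma generator of [AD82]; for integer shape n (always the case here, n being the dimension) the following elementary C sketch based on sums of exponential variates would work as well, although it is slower for large n:

#include <math.h>
#include <stdlib.h>

static double uniform01(void)             /* U(0,1), never exactly 0 */
{
    return (rand() + 1.0) / ((double)RAND_MAX + 2.0);
}

/* Sample X with density proportional to x^{n-1} e^{-beta x}  (gamma(n, beta)).
   For integer n this is the sum of n independent Exp(beta) variates.          */
double gamma_int(int n, double beta)
{
    double prod = 1.0;
    for (int k = 0; k < n; k++)
        prod *= uniform01();
    return -log(prod) / beta;             /* -log(U_1 ... U_n) / beta */
}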
The volume below the hat for log-concave densities in a cone C is now given by
    H_C = ∫_0^∞ a x^{n−1} e^{α − β x} dx = a e^{α} (n−1)! / β^{n}.    (28)
To minimize this function it is best to use its logarithm:
    log H_C(s) = log a(s) + α(s) − n log β(s) + log((n−1)!).    (29)
For the normal distribution with density proportional to exp(−‖x‖²/2) we have
~f(x) = −‖x‖²/2. With p = s t̄, where t̄ is the center of
the cone C with ‖t̄‖ = 1, we simply find by (6) α(s) = s²/2 and β(s) = s.
Since a(s) does not depend on s we find for (29) the function s²/2 − n log(s) + constant.
But even for the normal distribution with an arbitrary covariance matrix, this
function becomes much more complicated.
3 The algorithm
The algorithm tdrmv() consists of two main parts: the construction of a hat
function h(x) and the generation of random tuples X with density proportional
to this hat function. The first one is done by the subroutine setup(), the second
one by the routine sample().
Algorithm 2 tdrmv(): Generate a random tuple for a given log-concave density
Input: density f
1: call setup(). = Construct a hat-function h(x).
2: repeat
3: X ← call sample(). = Generate a random tuple X with density prop. to h(X).
4: Generate a uniform random number U.
5: until U · h(X) ≤ f(X).
6: return X.
To store h(x), we need a list of all cones C. For each of these cones we need
several data which we store in the object cone. Notice that the variables p, g,
ff, fi, a and HC depend on the choice the touching point p and thus on s. Some
of the parameters are only necessary for the setup.
object 1 cone
parameter                 variable        definition
spanning vectors          t_1, …, t_n
center of cone            t̄
construction point        p
location of p             s               (setup only)
sweep plane               g               see (4)
marginal density          α, β            see (6)
coefficient               a               see (15)
determinant of vectors    det
volume under hat          H_C, H^cum_C    see (17) and (28)
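In C such a record could look as follows; this is a sketch, the field names and the fixed maximal dimension are our assumptions and not the data structure of the actual implementation:

#define NMAX 12   /* maximal dimension handled by this sketch */

/* one record of the list of cones */
typedef struct cone {
    double t[NMAX][NMAX];   /* spanning vectors t_1,...,t_n                 */
    double tbar[NMAX];      /* center of the cone                           */
    double p[NMAX];         /* construction (touching) point                */
    double s;               /* location of p on the center ray (setup only) */
    double g[NMAX];         /* direction of the sweep plane, see (4)        */
    double alpha, beta;     /* parameters of the marginal density, see (6)  */
    double a;               /* coefficient of A(x), see (15)                */
    double det;             /* determinant of the spanning vectors          */
    double H, Hcum;         /* volume below the hat and cumulated volume    */
    struct cone *next;      /* simple linked list of cones                  */
} cone;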
Remark. To make the description of the algorithm more readable, some standard
techniques are not given in details.
3.1
The routine setup() consists of three parts: (H1) setup initial cones, (H2)
triangulation of the initial cones and (H3) calculation of parameters.
(H1) is simple (see x2.4). (H2) is done by subroutine split(). The main
problem in (H3) is how to find the parameter s (i.e. a proper construction
point). This is done by subroutine find(). Minimizing (29) is very expensive.
Notice that for a given s we have to compute all parameters that depend on s
before evaluating this function. Since it is not suitable to use the derivative of
this function, a good choice for finding the minimum is to use Brent's algorithm
(e.g. [FMM77]). To reduce the cost for finding a proper s, we do not minimize
for every cone. Instead we use the following procedure:
(1) Make some triangulation steps as described in x2.4.
(2) Compute s for every cone C.
(3) Continue with triangulation. When a cone is split by barycentric subdivision
of the corresponding simplex, both new cones inherit s from the old simplex.
Our computational experiences with various normal distributions show, that
the costs for setup reduces enormously without raising the rejection constant
too much. Using this procedure it might happen that s does not give a proper
touching point (or HC is too big; see end of x2.4) after finishing all
triangulation steps. Thus we have to check s for every cone and continue with
triangulation in some cones if necessary.
3.2 Sampling
The subroutine sample() consists of four parts: (S1) select a cone C, (S2) find
a random variate proportional to the marginal density h g
(27), (S3) generate a
uniform random tuple U on the standard simplex (i.e.
and compute tuple on the intersection Q(x) of the
sweeping plane with cone C. (S3) and (S4) is done by subroutine simplex().
4 Possible variants
4.1 Subset of R n as domain
Our experiments have shown that the basic algorithm works even for densities
with support suppf ⊊ R^n. Since the hat h(x) has support R^n,
the rejection constant might become very big.
Subroutine 3 Construct a hat function
Input: level of triangulation for finding s, level of minimal triangulation
1: for all tuples (σ_1, …, σ_n) ∈ {−1, +1}^n do
2: Append new cone to list of cones with σ_1 e_1, …, σ_n e_n as its spanning
vectors.
3: repeat
4: for all cone C in list of cones do
5: call split() with C.
Update list of cones.
7: until level of triangulation for finding s is reached
8: for all cone C in list of cones do
9: call find() with C.
10: repeat
11: for all cone C in list of cones do
12: call split() with C.
13: Update list of cones.
14: until minimum level of triangulation is reached
15: repeat
16: for all cone C in list of cones where s unknown do = (13) violated
17: call split() with C and list of cones.
call find() with both new cones.
19: Update list of cones.
20: until no such cone was found
21: for all cone C in list of cones do
22: Compute all parameters of C.
Total volume below hat
24: for all cone C in list of cones do
Used for O(0)-search algorithm
27: return list of cones, H tot .
Subroutine 4 split(): Split a cone and update list of cones
Input: cone C, list of cones
1: Find lowest indices i; j of all vectors of C.
2: Find highest index m of all vectors (of triangulation).
3: t_{m+1} ← (t_i + t_j) / ‖t_i + t_j‖.
4: Append new cone C 0 to list and copy vectors and s of C into C 0 .
5: Replace t i by t m+1 in C and replace t j by t m+1 in C 0 .
6: Replace det by det / ‖t_i + t_j‖ in C and C'.
7: return list of cones.
Subroutine 5 find a proper touching point
Input: cone C
Bracketing a minimum
1: Search for a s 1 2 DA . return failed if not successful.
2: Search for s 0 , s 2 (Use property (21)). return failed if not successful.
3: Find s using Brent's algorithm (Use (29)). return failed if not successful.
Subroutine 6 Generate a random tuple with density proportional to hat
Input: H tot , list of cones
1: Generate a uniformly [0; H tot ] distributed random variate U .
2: Find C such that H^cum_{C_pred} < U ≤ H^cum_C.
(C_pred is the predecessor of C in the list of cones.)
3: Generate a gamma(n, β) distributed random variate G.
= Generate a uniformly distributed point in Q(G) and return the tuple
4: X ← call simplex() with C and G.
5: return X.
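Step 2 can be implemented by binary search on the cumulated volumes, as in the following C sketch; the implementation described in §5.1 uses an indexed search table instead:

/* Select a cone with probability H_C / H_tot, given the cones stored in an
   array with nondecreasing cumulated volumes Hcum[0..N-1], Hcum[N-1] = H_tot.
   U is uniform on (0, H_tot).                                                 */
int select_cone(const double *Hcum, int N, double U)
{
    int lo = 0, hi = N - 1;
    while (lo < hi) {
        int mid = (lo + hi) / 2;
        if (Hcum[mid] < U) lo = mid + 1;
        else               hi = mid;
    }
    return lo;   /* smallest index with Hcum[index] >= U */
}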
Subroutine 7 Generate a uniform distributed tuple on simplex
Input: cone C, x (location of sweeping plane)
= uniformly distributed random variates U_i in the simplex
1: Generate iid uniform [0,1] random variates U_i, i = 1, …, n−1.
2: U_n ← 1.
3: Sort U_1, …, U_{n−1}; set U_0 ← 0.
4: for i = n, n−1, …, 1 do
5: U_i ← U_i − U_{i−1}.
= uniformly distributed point X in Q(x)
6: X ← x · Σ_{i=1}^{n} U_i t_i / ⟨g, t_i⟩.
7: return X.
Pyramids. If the given domain D is a proper subset of R n (that is, we give
constraints for suppf ), the acceptance probability can be increased when we
restrict the domain of h accordingly to the domain D. (The domain is the set
of points where the density f is defined; obviously suppf ' D. Notice that we
have to provide the domain D for the algorithm but the support of f is not
known.)
Thus we replace (some) cones by pyramids. Notice that the base of such
a pyramid must be perpendicular to the direction g. Hence we first have to
choose a construction point p and then compute the height of the pyramid.
The union of these pyramids (and of the remaining cones) must cover D.
Whenever we get a random point not in the domain D we reject it. It is clear
that continued triangulation decreases the volume between D and enclosing set.
Polytopes. We only deal with the case where D is an arbitrary polytope
which are given by a set of linear inequalities.
Height of pyramid. The height is the maximum of hg; xi in C " D. Because
of our restriction to polytopes this is a linear programming problem. Using the
spanning vectors t as basis for the R n , it can be solved by means of the
simplex algorithm in at most k pivot steps (for a simple polytope), where k is
the number of constraints for D.
Marginal density and volume below hat. The marginal distribution is a
truncated gamma distribution with domain [0; u], where u is the height of the
pyramid C. Instead of (28) and (29) we find for pyramids
Z ux \Gamman \Gamma(n; fiu) (30)
and
where \Gamma(n;
R x
\Gammat dt is proportional to the incomplete gamma function
and can be computed by means of formula (3.351) in [GR65].
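For the log-concave case T(x) = log(x) the integral in (30) can be evaluated with the upper incomplete gamma function Γ(n, z) = ∫_z^∞ t^{n−1} e^{−t} dt; the following display is our sketch of this computation, not a quotation of the original formula:
\[
H_C \;=\; \int_0^{u} a\,x^{n-1} e^{\alpha-\beta x}\,dx
      \;=\; a\,e^{\alpha}\,\beta^{-n}\bigl(\Gamma(n)-\Gamma(n,\beta u)\bigr).
\]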
Computing the height u(s) is rather expensive. So it is recommended to use
instead of the exact function (31) for finding a touching point in pyramid C.
Computational experiments with the standard normal distribution have shown,
that the effect on the rejection constant is rather small (less than 5%).
4.2 Density not differentiable
For the construction of the hat function we need a tangent plane for every x 2 D.
Differentiability of the density is not really required. Thus it is sufficient to have
a subroutine that returns the normal vector of a tangent hyperplane (which is
However for densities f which are not differentiable the function in (29)
might have a nasty behavior. However notice that f must be continuous in the
interior of suppf , since log ffi f is concave.
4.3 Indicator Functions
If f is the indicator function of a convex set, then we can choose an
arbitrary point in the convex set as the mode (as origin of our construction) and
set g = t̄, the center of the cone C (see (4) in §2.2). Notice that the marginal
density in (16) now reduces to h_g(x) = a x^{n−1}. None of the parameters α and
β depends on the choice of the touching point p. Of course we have to provide
a compact domain for the density.
Using indicator functions we can generate uniformly distributed random
variates of arbitrary convex sets.
4.4 Mode not in Origin
It is obvious that the method works, when the mode m is an arbitrary point
in R^n. If the mode is unknown we can use common numerical methods for
finding the maximum of f , since T (f(x)) is concave (see e.g. [Rao84]).
Notice that the exact location of the mode is not really required. The algorithm
even works if the center for the construction of the cones is not close to
the mode. Then we just get a hat with a worse rejection constant.
4.5 Add mode as construction point
Since we have only one construction point in each cone, the rejection constant
is bounded from below. Thus only a few steps to triangulate the S^{n−1} make
sense. To get a better hat function we can use the mode m of f as an additional
construction point. The hat function is then the minimum of f(m) and the
original hat. The cone is split into two parts by a hyperplane F(b) with different
marginal densities, where b is given by α − β b = T(f(m)). Its marginal density
is then given by
Notice that we use the same direction g for the sweep plane in both parts. We
have to compute the volume below the hat for both parts which are given by
a x
4.6 More construction points per cone
A way to improve the hat function is to use more than one (or two) construction
points. But this method has some disadvantages and it is not recommended to
use it.
Figure 3: Two construction points in a cone (with overlapping region)
The cones are divided in several pieces of a pyramid (see figure 3). The
lower and upper base of these pieces must be perpendicular to the corresponding
direction g. These vectors g are determined by the gradients of the transformed
density at the construction points in this pieces. Thus these g (may) differ and
hence these pieces must overlap. This increases the rejection constant. Moreover
it is not quite clear how to find such pieces. For the univariate case appropriate
methods exist (e.g. [DH94]). But in the multivariate case these are not suitable.
Also adaptive rejection sampling (introduced in [GW92]) as used in [H-or95b,
LH98] is not a really good choice. The reason is quite simple. The cones are
fixed and the construction points always are settled in the center of these cones.
Thus using adaptive rejection sampling we select the new construction points
due to a distribution which is given by the marginal density of
this marginal density is not zero at the existing construction points.
4.7 Squeezes
We can make a very simple kind of squeezes: Let x
Compute the minima of the transformed density at Q(x i ) for all i. Since ~
f is
concave these minima are at the vertices of these simplices. The squeeze s i (x)
for x
denotes the minimum of ~
f(x) in Q(x i ). The setup of these squeezes
is rather expensive and only useful, if many random points of the same distribution
must be generated.
-concave densities
A family T_c of transformations that fulfil conditions (T1)-(T4) is introduced in
[H-or95a]. Let c ≤ 0. Then we set
    T_0(x) = log(x)   and   T_c(x) = −x^c   for c < 0.
It can easily be verified that condition (T4) (i.e. the volume below the hat is bounded)
holds if and only if −1/n < c ≤ 0.
To ensure the negativity of the transformed hat we always have to choose
the mode m as construction point (see x4.5).
In [H-or95a] it was shown that if a density f is T_c-concave then it is T_{c_1}-concave
for all c_1 ≥ c. The case c = 0, i.e. T(x) = log(x), is already described in
§2.6.
For the case c < 0 the marginal density function (16) is now given by
    h_g(x) = a x^{n−1} (β x − α)^{1/c}   for x > b,
where b is given by α − β b = T_c(f(m)). To our knowledge no special generator
for this distribution is known. (The part for x > b looks like a beta-prime
distribution (see [JKB95]), but α, β > 0.)
By assumption (fi x
\Gamman. Hence it can easily be
seen that the marginal density is T c -concave. Therefore we can use the universal
generator ([H-or95a]).
Computational Experience
5.1 A C-implementation.
A test version of the algorithm was written in C and is available via anonymous
ftp [Ley98].
It should handle the following densities:
• f is log-concave but not constant on its support.
• Domain D is either R^n or an arbitrary rectangle [a_1, b_1] × ⋯ × [a_n, b_n].
• The mode m is arbitrary. But if D ≠ suppf then m must be an interior
point of suppf not "too close" to the boundary of suppf.
We used two lists for storing the spanning vectors and the cones (with pointers
to the list of vectors). For the setup we have to store the edges (i;
computing the new vertices. This is done temporarily in a hash table, where
the first index i is used as the hash index.
The setup step is modified in the case of a rectangular domain. If the mode
is near to the boundary of D we use the nearest point on the boundary (if
possible a vertex) for the center to construct the cones. If this point is on the
boundary we easily can eliminate all those initial cones, that does not intersect
D. If this point is a vertex of the rectangle there remains only one initial cone.
For finding the mode of f we used a pattern search method by Hooke and
Jeeves [HJ61, Rao84] as implemented in [Kau63, BP66], since it could deal
with both unbounded and bounded support of f without giving explicit con-
straints. For finding the minimum of (29) we use Brent's algorithm as described
in ([FMM77]). The implementation contains some parameters to adjust these
routines to a particular density f .
For finding a cone C in subroutine sample() we used an O(1) search algorithm
with a search table. (Binary search is slower.) For generating the gamma
distributed random number G we used the algorithm in [AD82] for the case
of unbounded domain. When D is a rectangle, we used transformed density
rejection ([H-or95a]) to generate from the truncated gamma distributions. Here
it is only necessary to generate a optimal hat function for the truncated gamma
distribution with shape parameters n and 1 with domain (0; umax ), where umax
is the maximal value of height \Delta fi for all cones. The optimal touching points for
this gamma distribution are computed by means of the algorithm [DH94].
The code was written for testing various variants of the algorithm and is not
optimized for speed. Thus the data shown in the tables below give just an idea
of the performance of the algorithm.
We have tested the algorithm with various multivariate log-concave distributions
in some dimensions. All our tests have been done on a PC with a P90
processor running Linux and the GNU C compiler.
5.2 Basic version: unbounded domain, mode in origin
Random points with density proportional to hat function. The time
for the generation of random points below the hat has shown to be almost linear
in dimension n. Table 1 shows the average time for the generation of a single
point. For comparison we give the time for generation of n normal distributed
points using the Box/Muller method [BM58] (which gives a standard multinormal
distributed point with density proportional to exp(−½ Σ_{i=1}^{n} x_i²)).
For computing the hat function we only used initial cones for the standard
multinormal density.
hat function 14.6 17.1 21.3 24.9 30.2 34.6 41.5 45.7 55.6
multinormal 7.2 10.8 14.4 18.0 21.6 25.2 28.8 32.4 36.0
Table 1: average time for the generation of one random point (in μs)
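For reference, the Box/Muller baseline used in Table 1 can be sketched in C as follows (u01() is a placeholder uniform generator, not the one of the actual test program):

#include <math.h>
#include <stdlib.h>

static double u01(void) { return (rand() + 1.0) / ((double)RAND_MAX + 2.0); }

/* Fill z[0..n-1] with independent standard normal variates using the
   classical Box/Muller transformation [BM58].                          */
void boxmuller(double *z, int n)
{
    const double twopi = 6.283185307179586;
    for (int i = 0; i < n; i += 2) {
        double r   = sqrt(-2.0 * log(u01()));
        double phi = twopi * u01();
        z[i] = r * cos(phi);
        if (i + 1 < n) z[i + 1] = r * sin(phi);
    }
}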
Random points for the given density. The real time needed for the generation
of a random point for a given log-concave density depends on the rejection
constant and the costs for computing the density. Table 2 shows the acceptance
probabilities and the times needed for the generation of standard multinormal
distributed points. Notice that these data do not include the time for setting
up the hat function.
Setup. When find() is called after triangulation has been done, the time
needed for the computation of the hat function depends linearly on the number
number of cones
time (-s) 23.7 27.9 38.2 49.8 73.8 99.3 142 262 575
acceptance
Table 2: acceptance probability and average time for the generation of standard
multinormal distributed points
of cones. (Thus find() is the most expensive part of the setup().) Table 3
shows the situation for the multinormal distribution with density proportional
to
It demonstrates the effects of continuing
barycentric subdivision of the "oldest" edge (see x2.4) on the number of cones,
the acceptance probability, the costs for generating a random point proportional
to the hat function (i.e. without rejection) and proportional to the given density.
Furthermore it shows the total time (in ms) for the setup (i.e. for computing
the parameters of the hat function) (in ms) and the time for each cone (in -s)
(The increase for large n in the time needed for generating points below the hat
is due to effects of memory access time.)
subdivisions
cones
acceptance
hat (-s) 24.8 24.8 24.9 25.0 25.2 25.3 25.7 26.1 27.1 27.6 28.4
density (-s) 94.2 73.0 59.9 52.1 45.6 42.9 40.1 39.3 39.5 39.6 40.4
setup/cone (-s) 700 706 738 720 713 714 727 756 762 763 764
Table 3: time for computing the hat function for the multinormal distribution
If we do not run find() for every cone of the triangulation but use the
method described in x3.1 we can reduce the costs for the construction of the hat
function. Table 4 gives an idea of this reduction for the multinormal distribution
with proportional to
It shows the time for
constructing the hat function subject to the number of cones for which find()
is called.
Due to Table 4 the acceptance probability is not very bad, if we run find()
only for the initial cones. But this is not true in general. It might become extremely
poor if the level sets of the density are very "skinny". Table 5 demonstrates
the effect on the density proportional to
2.
At last table 6 demonstrates that the increase in the time for constructing
the hat function for increasing dimension n is mainly due to the increase of
find() in subdivision
cones
acceptance
setup (ms) 66.2 76.8 100.2 141.6 224.7 393.6 744
setup/cone (total) (-s)
Table 4: time for computing the hat function for the multinormal distribution with
"inherited" construction points
find() in subdivision 1 2 3 4
acceptance
Table 5: acceptance probability for the multinormal distribution with "inherited"
construction points
number of cones. Notice that we start with 2 n cones. Furthermore we have to
make consecutive subdivisions to shorten every edge of a simplex that
defines an initial cone. Thus the number of cones increases exponentially.
acceptance
setup/cone (-s) 540 616 714 811 927 1011 1170 1250 1421
Table 6: time for computing the hat function for the multinormal distribution
(subdivisions of the initial cones)
If the covariance matrix of the multinormal distribution is not a diagonal
matrix and the ratio of the highest and lowest eigenvalue is large, then we cannot
use initial cones only and we have to make several subdivisions of the cones.
Because of the above considerations the necessary number of cones explodes
with increasing n. Thus in this case this method cannot be used for large n.
(Suppose we have to shorten every edge of each simplex, then we have
cones if but we need 2
Tests. We ran a - 2 -test with the density proportional to exp(\Gamma
to validate the implementation. For all other densities we compared the
observed rate of acceptance to the expected acceptance probability.
Comparison with algorithm [LH98]. The code for algorithm [LH98] is
much longer (and thus contains more bugs). The setup is much slower and it
needs 11 750 μs to generate one multinormal distributed random point in dimension
4 (versus 38 μs in Table 2 for tdrmv()).
5.3 Rectangular domain
Normal densities restricted to an arbitrary rectangle have a similar performance
as the corresponding unrestricted densities, except for the acceptance probability,
which is worse since the domain of the hat h is a superset of the domain of the
density f.
5.4 Quality
The quality of non-uniform random number generators using transformation
techniques is an open problem even for the univariate case (see e.g. [H-or94]
for a first approach). It depends on the underlying uniform random number
generators. The situation is more serious in the multivariate case. Notice that
this new algorithm requires more than n+2 uniform random numbers for every
random point. We cannot give an answer to this problem here, but it should be
clear that e.g. RANDU (formerly part of IBM's Scientific Subroutine Package,
and now famous for its devastating defect in three dimensions: its consecutive
points lie on just fifteen parallel planes; see e.g. [LW97]) may
result in a generator of poor quality.
5.5 Some Examples
We have tested our algorithm in dimensions
proportional to
where a i ? 0. The domain was R n and some rectangles. We also used densities
proportional to f i (U x U is an orthonormal transformation and b a
vector, to test distributions with non-diagonal correlation matrix and arbitrary
mode.
The algorithm works well for densities f 3 , f 4 and f 5 both with
and D being a rectangle enclosing the support of f i . Although some of these
densities are not C 1 , the find() routine works. Problems arise if the level sets
of the density have "corners", i.e. the direction g is unstable when we vary the touching
point a little bit. Then there are some cones (that contain these "corners") with huge
volume H_C and further triangulation is necessary. If the dimension is high (n ≳ 5)
too many cones might be necessary. The optimization algorithm for finding the
mode fails if we use a starting point outside the support of f 5 .
The code has some parameters for adjusting the algorithm to the given
density. For example, it requires some testing to get the optimal number of
cones and the optimal level of subdivisions for calling find().
5.6 Résumé
The presented algorithm is a suitable method for sampling from log-concave
(and T -concave) distributions. The algorithm works well for all tested log-concave
densities if the dimension is low (n ≲ 5) or if the correlation is not too high.
Restrictions of these densities to compact polytopes are possible. The setup
time is small for small dimension but increases exponentially in n. The speed
for generating random points is quite fast even for n ≥ 6. Due to the large
number of cones for high dimension the program requires a lot of computer
memory (typically 2-10 MB).
Although the developed algorithm is not a real black box method, it is easily
adjustable for a large class of log-concave densities. Examples for which
the algorithm works are the multivariate normal distribution and the multivariate
Student distribution (with a suitable transformation T ), with arbitrary
mean vector and covariance matrix, conditioned to an arbitrary compact polytope.
However, for higher dimensions the ratio of the highest and lowest eigenvalue of the
covariance matrix should not be "too big".
Acknowledgments
The author wishes to note his appreciation for help rendered by Jörg Lenneis.
He has given lots of hints for the implementation of the algorithm. The author
also thanks Gerhard Derflinger and Wolfgang Hörmann for helpful conversations
and their interest in his work.
--R
Generating gamma variates by a modified rejection technique.
Box and M.
Remark on algorithm 178.
Principles of Random Variate Generation.
Random variate generation for multivariate densities.
The optimal selection of hat functions for rejection algorithms.
Random variable generation using concavity properties of transformed densities.
Computer methods for mathematical computations.
Table of Integrals.
Convex Polytopes.
Adaptive rejection sampling for Gibbs sampling
Universal generators for correlation induction.
"Direct search"
The quality of non-uniform random numbers
A rejection technique for sampling from T
A universal generator for bivariate log-concave distributions
Continuous Univariate Distributions
Multivariate Statistical Simulation.
Polytope Volume
A sweep plane algorithm for generating random tuples.
Inversive and linear congruential pseudorandom number generators in empirical tests.
Theory and Applications.
On computer generation of random vectors by transformations of uniformly distributed vectors.
The Computation of Fixed Points and Applications
Improving the convergence of fixed-point algorithms
Efficient generation of random variates via the ratio-of-uniforms method
Lectures on Polytopes
--TR
Multivariate statistical simulation
On computer generation of random vectors by transformation of uniformly distributed vectors
A rejection technique for sampling from <italic>T</italic>-concave distributions
Inversive and linear congruential pseudorandom number generators in empirical tests
Random variate generation for multivariate unimodal densities
A sweep-plane algorithm for generating random tuples in simple polytopes
"Direct Search" Solution of Numerical and Statistical Problems
Generating gamma variates by a modified rejection technique
Remark on algorithm 178 [E4] direct search
Algorithm 178: direct search
Computer Methods for Mathematical Computations
--CTR
G. Leobacher , F. Pillichshammer, A method for approximate inversion of the hyperbolic CDF, Computing, v.69 n.4, p.291-303, December 2002
Wolfgang Hörmann, Algorithm 802: an automatic generator for bivariate log-concave distributions, ACM Transactions on Mathematical Software (TOMS), v.26 n.1, p.201-219, March 2000
W. Hörmann , J. Leydold, Random-number and random-variate generation: automatic random variate generation for simulation input, Proceedings of the 32nd conference on Winter simulation, December 10-13, 2000, Orlando, Florida
Seyed Taghi Akhavan Niaki , Babak Abbasi, Norta and neural networks based method to generate random vectors with arbitrary marginal distributions and correlation matrix, Proceedings of the 17th IASTED international conference on Modelling and simulation, p.234-239, May 24-26, 2006, Montreal, Canada
sampling with the ratio-of-uniforms method, ACM Transactions on Mathematical Software (TOMS), v.26 n.1, p.78-98, March 2000 | rejection method;multivariate log-concave distributions |
290382 | Mixed-Mode BIST Using Embedded Processors. | In complex systems, embedded processors may be used to run software routines for test pattern generation and response evaluation. For system components which are not completely random pattern testable, the test programs have to generate deterministic patterns after random testing. Usually the random test part of the program requires long run times whereas the part for deterministic testing has high memory requirements.In this paper it is shown that an appropriate selection of the random pattern test method can significantly reduce the memory requirements of the deterministic part. A new, highly efficient scheme for software-based random pattern testing is proposed, and it is shown how to extend the scheme for deterministic test pattern generation. The entire test scheme may also be used for implementing a scan based BIST in hardware. | Introduction
Integrating complex systems into single chips or implementing
them as multi-chip modules (MCMs) has become
a widespread approach. A variety of embedded processors
and other embedded coreware can be found on the
market, which allows to appropriately split the system
functionality into both hardware and software modules.
With this development, however, system testing has become
an enormous challenge: the complexity and the restricted
accessibility of hardware components require sophisticated
test strategies. Built-in self-test combined with
the IEEE 1149 standards can help to tackle the problem at
low costs [10].
For conventional ASIC testing, a number of powerful
BIST techniques have been developed in the past [1 - 3, 5,
example, it has been shown
that combining random and efficiently encoded deterministic
patterns can provide complete fault coverage while
This work has been supported by the DFG grant "Test und Synthese
schneller eingebetteter Systeme" (Wu 245/1-2).
keeping the costs for extra BIST hardware and the storage
requirements low [13, 14, 32]. In the case of embedded
systems such a high quality test is possible without any
extra hardware by just using the embedded processor to
generate the tests for all other components.
Usually, this kind of functional testing requires large
test programs, and a memory space not always available
on the system. In this paper it will be shown how small
test programs can be synthesized such that a complete
coverage of all non-redundant stuck-at faults in the combinational
parts of the system is obtained. The costs for extra
BIST hardware in conventional systems testing are reduced
to the costs for some hundred bytes of system memory
to store the test routines. The proposed BIST approach
can efficiently exploit design-for-testability structures of
the subcomponents. As shown in Figure 1 during serial
BIST the embedded processor executes a program which
generates test patterns and shifts them into the scan regis-
ter(s) of the component(s) to be tested. Even more effi-
ciently, the presented approach may be used to generate
test data for input registers of pipelined or combinational
subsystems.
Figure 1: Serial BIST approach. The embedded processor feeds test data
(random & deterministic patterns) into the scan inputs and observes the
scan outputs of the components under test.
The structure of the test program can be kept very sim-
ple, if only random patterns have to be generated, since
then some elementary processor instructions can be used
[12, 21, 25, 28].
Even a linear feedback shift register (LFSR) can be emulated
very efficiently: Figure 2 shows as an example a
modular LFSR and the corresponding program (for simplicity
the C-code is shown) to generate a fixed number of
state transitions.
void transition (int m, int n,
                 unsigned int polynomial,
                 unsigned int *state)
/* performs m transitions of modular LFSR of degree n */
{
  int i;
  for (i=0; i<m; i++)
  {
    if (*state >> (n-1))      /* feedback bit (MSB) set? */
    {
      *state <<= 1;
      *state ^= polynomial;
    }
    else
      *state <<= 1;
  }
}
Figure 2: Modular LFSR and corresponding program for
generating state transitions.
But usually not all the subcomponents of a system will
be random pattern testable, and for the remaining faults deterministic
test patterns have to be applied. For this pur-
pose, compact test sets may be generated as described in
[16, 18, 22, 27] and reproduced by the test program, or a
hardware-based deterministic BIST scheme is emulated by
the test software [13 - 15, 32]. This kind of mixed-mode
testing may interleave deterministic and random testing or
perform it successively. In each case, the storage requirements
for the deterministic part of the test program are directly
related to the number of undetected faults after random
pattern generation. There is a significant trade-off between
the run time of the random test and the memory requirements
of the mixed-mode program. Assume a small improvement
of the random test method which leads to an increase
of the fault coverage from 99.2% to 99.6%. This halves
the number of undetected faults and thus the storage requirements.
Overall, the efficiency of a mixed-mode
test scheme can be improved to a much higher degree
by modifying its random part rather than its deterministic
part.
In this paper a highly efficient software-based random
BIST scheme is presented which is also used for generating
deterministic patterns. The rest of the paper is organized
as follows: In the next section, different random pattern
test schemes to be emulated by software are evaluated,
and in section 3 the extension to deterministic testing is
described. Subsequently, in section 4, a procedure for optimizing
the overall BIST scheme is presented, and section
5 describes the procedure for generating the mixed-mode
test program. Finally, section 6 gives some experimental
results based on the INTEL 80960CA processor as an example.
2 Emulated Random Pattern Test
Test routines exploiting the arithmetic functions of a
processor can produce patterns with properties which are
sufficient for testing random pattern testable circuits [12,
25], even if they do not completely satisfy all the conditions
for randomness as stated, e.g., in [11]. However, for
other circuits, in particular for circuits considered as random
pattern resistant, arithmetic patterns may not perform
as well. Linear feedback shift registers (LFSRs) corresponding
to primitive feedback polynomials and cellular
automata are generally considered as stimuli generators
with good properties for random testing [9, 17, 20]. But
the generated sequences still show some linear dependen-
cies, such that different primitive polynomials perform differently
on the same circuit. In some cases, the linear dependencies
may support fault detection, for other circuits
they perform poorly. In the following, the fault coverage
obtained by several LFSR-based pattern generation
schemes will be discussed with some experimental data.
2.1 Feedback Polynomial
In contrast to hardware-based BIST, in a software-based
approach the number and the positions of the feedback taps
of the LFSR have no impact on the costs of the BIST im-
plementation. Thus, for a given length the achievable fault
coverage can be optimized without cost constraints.
Assuming a test per scan scheme as shown in Figure 3
the sensitivity of the fault coverage to the selected feed-back
polynomial has been studied by a series of experiments
for the combinational parts of the ISCAS85 and
ISCAS89 benchmark circuits [4, 6].
Figure 3: Scan-based BIST (an LFSR with feedback feeds the scan path of the CUT).
Table 1: Absolute and normalized (w. r. t. worst LFSR) percentage of undetected non-redundant faults after 10,000 patterns.
(Columns: Circuit, PI, F, Degree, LFSR1 ... LFSR6, Average; table data not reproduced.)
Fault simulation of 10,000 random patterns was performed
for each circuit using several different feedback
polynomials, all of the same degree. Some typical results
are shown in Table 1. The first four columns contain the
circuit name, the number of inputs, the number of non-redundant
faults, and the selected degree of the feedback
polynomial. 1 The remaining columns show the characteristics
for six different LFSRs. The first entry reports the
percentage of undetected non-redundant faults, and the second
entry normalizes this number to the corresponding
number for the worst LFSR (in %). The worst and best
performing LFSR are printed in bold, respectively. The
last column gives the average over all of the LFSRs.
It can be observed that there is a big variance in the performance
of different LFSRs of the same degree. For s641,
e.g., the best LFSR reduces the number of undetected
faults down to 27% of the faults left undetected by the
worst polynomial.
2.2 Multiple-Polynomial LFSRs
One explanation for the considerable differences in fault
coverage observed in section 2.1 is given by the fact, that
linear dependencies of scan positions may prevent certain
necessary bit combinations in the scan patterns independent
of the initial state of the LFSR [2]. For different
LFSRs the distribution of linear dependencies in the scan
chain is different and, depending on the structure of the
circuit, may have a different impact on the fault coverage.
As shown in Figure 4 the impact of linear dependencies
can be reduced if several polynomials are used. In this
small example the LFSR can operate according to two different
primitive feedback polynomials P0(X) and P1(X), which are selected by the input of
the multiplexer. For any given initial state the
LFSR produces a scan pattern (a 0 , ..., a 7 ), such that,
depending on the selected polynomial, the equations shown in Figure 4
hold for its components.
(Footnote 1: The degrees of the polynomials have been selected such that they
are compatible with the requirements for the deterministic test
described in section 3.)
Figure 4: Scan-based BIST with multiple-polynomial LFSR (a multiplexer selects
between the feedback polynomials; the scan pattern bits are a 0 , ..., a 7 , and the
circuit contains stuck-at-0 faults at the nodes o 1 and o 2 ).
For polynomial P0(X) there is a linear relation involving a 3 and the other
scan positions feeding the AND-gate which prevents the
combination (1, 1, 1) at the inputs of the AND-gate. This
implies that the polynomial P0(X) can never produce a test
for the stuck-at-0 fault at node o 2 . In contrast to that, for
polynomial P1(X) the same input positions are linearly
independent and produce all possible nonzero bit combinations
and thus a test for the considered fault. Similarly, the
stuck-at-0 fault at node o 1 cannot be tested using polynomial
P1(X), whereas polynomial P0(X) can provide a test.
Using both polynomials, each for a certain number of
patterns, increases the chance of detecting both faults.
Such a multiple-polynomial LFSR can be implemented
efficiently in hardware by trying to share parts of the feed-back
for several polynomials. A software emulation is
also very simple, since the basic procedure to simulate an
LFSR has to be modified only slightly. To control the
selection of feedback polynomials several schemes are
possible. The first is shown in Figure 5, assuming N
random patterns to be generated by p different polynomials
P 1 , ..., P p ; LFSR(P i ) denotes the LFSR operation
corresponding to feedback polynomial P i .
initialize (LFSR);
for (i = 1; i <= p; i++)
  generate ⌈N/p⌉ patterns by LFSR(P i );
Figure 5: Successive multiple-polynomial scheme (SUC).
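A minimal C sketch of scheme SUC, reusing the transition() routine of Figure 2, is
given below. The helper apply_pattern() is only a placeholder for the system-specific
test port access, and m denotes the number of LFSR transitions performed per test
pattern; the sketch shows the control structure of the scheme, not the original test
program.
extern void transition(int m, int n, unsigned int polynomial, unsigned int *state);
extern void apply_pattern(unsigned int pattern);   /* placeholder: test port access */

void suc_random_test(int N, int p, int m, int n,
                     const unsigned int poly[], unsigned int seed)
{
    unsigned int state = seed;                 /* initialize (LFSR)            */
    int i, j;
    for (i = 0; i < p; i++)                    /* polynomials used one by one  */
        for (j = 0; j < N / p; j++) {
            transition(m, n, poly[i], &state); /* next pattern with P_i        */
            apply_pattern(state);              /* hand it to the scan chain    */
        }
}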
The polynomials are applied successively to generate
contiguous subsequences of ⌈N/p⌉ random patterns; the
scheme will therefore be referred to as scheme SUC. For
one polynomial the scheme degenerates to the conventional
single polynomial scheme. The possibility to
switch between different distributions of linear dependencies
is paid by the disadvantage that some patterns may
occur repeatedly up to p times. Hence, an overall increase
of the fault coverage cannot be expected, but experiments
have shown that there is indeed an improvement for some
circuits. Table 2 lists the results for the same set of circuits
as studied in the previous section.
Table 2: Absolute and normalized (w. r. t. worst and best
single LFSR) percentage of undetected non-redundant
faults for scheme SUC after 10,000 patterns (table data not reproduced).
For each circuit 10,000 patterns were simulated using p
polynomials. For each experiment the percentage
of undetected non-redundant faults is reported (1st
line), as well as the corresponding normalized numbers for
the worst (2nd line) and for the best single polynomial
(3rd line) of the same degree (in %).
Applying the successive scheme for example to the circuit
c2670 with reduces the number of undetected
faults down to 69.58% compared with the worst single
polynomial. Even more important is that the scheme also
outperforms the best single polynomial and the number of
remaining target faults for ATPG is less than 75%, i.e.
25% percent of the faults left by the best single polynomial
are additionally covered by this scheme.
The randomness of the sequence can be further in-
creased, if the polynomials are not used successively, but
selected randomly for each test pattern. This random selection
can be implemented by a second LFSR as shown in
Figure
6 and will be referred to as scheme RND.
Figure 6: Hardware scheme for the random selection of feedback polynomials (RND):
LFSR2 selects the feedback polynomial of LFSR1, which feeds the scan chain of the CUT.
The selection between p different feedback polynomials
for LFSR1 is controlled by ⌈log2 p⌉ bits of the state register
of LFSR2. For a software implementation of the structure
of Figure 6, two additional registers are required for
storing the feedback polynomial and the state of LFSR2.
LFSR1 and LFSR2 can be emulated by the same proce-
dure, and the complete routine to generate a sequence of N
random patterns is shown in Figure 7.
initialize (LFSR1);
initialize (LFSR2);
for (i = 0; i < N; i++)
{
select P based on state of LFSR2;
generate 1 pattern by LFSR1(P);
perform 1 state transition of LFSR2;
}
Figure 7: Software routine for the random pattern
generation scheme of Figure 6 (RND).
Table 3 shows the percentage of undetected non-redundant
faults and the corresponding normalized numbers obtained
by the scheme RND.
Table 3: Absolute and normalized (w. r. t. worst and best
single LFSR) percentage of undetected non-redundant
faults for scheme RND after 10,000 patterns (table data not reproduced).
For the randomly selected polynomials, there is a
higher chance of pattern repetitions, but randomly switching
between different distributions of linear dependencies
may improve the quality of the patterns. For some cir-
cuits, this results in an improvement of fault coverage, so
that the set of faults which remain for deterministic testing
is further reduced.
2.3 Multiple-Polynomial, Multiple-Seed LFSRs
Another way of improving the efficiency of a random
test is repeatedly storing a new seed during pattern generation
as investigated for instance in [23]. This technique
can be combined with the use of multiple polynomials as
shown in Figure 8.
Figure 8: Multiple-polynomial, multiple-seed LFSR (LFSR2 provides both the polynomial
selection and the seed for LFSR1, which feeds the scan chain of the CUT).
As for the scheme RND, ⌈log2 p⌉ bits of the state register
of LFSR2 are used to drive the selection between p different
feedback polynomials of degree k for LFSR1. The
remaining k bits provide the seed for LFSR1. In the sequel
this scheme will be referred to as the scheme RND 2 . The
structure of the corresponding test program is shown in
Figure
9.
initialize (LFSR2);
for (i = 0; i < N; i++)
{
select seed S and polynomial P
based on state of LFSR2;
initialize LFSR1 with S;
generate 1 pattern by LFSR1(P);
perform 1 state transition of LFSR2;
}
Figure 9: Test program for the multiple-polynomial, multiple-seed LFSR (RND 2 ).
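A corresponding C sketch of scheme RND 2 (with transition() and apply_pattern() as in
the SUC sketch above) may help to relate Figure 9 to the assembly program of section 5;
the constant POLY_BITS and the parameter layout are assumptions made only for this
illustration.
#define POLY_BITS 2                  /* example: p = 2^POLY_BITS = 4 polynomials */

extern void transition(int m, int n, unsigned int polynomial, unsigned int *state);
extern void apply_pattern(unsigned int pattern);

void rnd2_random_test(int N, int m, int k,
                      const unsigned int poly1[], unsigned int poly2,
                      unsigned int seed2)
{
    unsigned int lfsr2 = seed2;                        /* initialize (LFSR2)    */
    int i;
    for (i = 0; i < N; i++) {
        unsigned int id    = lfsr2 & ((1u << POLY_BITS) - 1);  /* polynomial P  */
        unsigned int lfsr1 = lfsr2 >> POLY_BITS;               /* k-bit seed S  */
        transition(m, k, poly1[id], &lfsr1);           /* generate 1 pattern    */
        apply_pattern(lfsr1);
        transition(1, k + POLY_BITS, poly2, &lfsr2);   /* 1 transition of LFSR2 */
    }
}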
Again, in this scheme patterns may occur repeatedly,
but in addition to the advantage of randomly changing the
distribution of linear dependencies this scheme is also able
to generate the all zero-vector which is often needed for
complete fault coverage.
Table 4 gives the results for scheme RND 2
(percentage of undetected non-redundant faults and the corresponding
normalized numbers as in Tables 2 and 3).
Table 4: Absolute and normalized (w. r. t. worst and best
single LFSR) percentage of undetected non-redundant
faults for scheme RND 2 after 10,000 patterns (table data not reproduced).
As expected, the fault coverage does not increase for all
circuits, but there are circuits where this technique leads
to significant improvements. For circuits s838.1 and
s9234 the best results are obtained compared with all the
experiments before.
3 Software-Based Deterministic BIST
The structure of the multiple-polynomial, multiple-seed
random BIST scheme of Figure 8 is very similar to the deterministic
BIST scheme based on reseeding of multiple-
polynomial LFSRs proposed in [13, 14], see Figure 10.
Figure 10: Deterministic BIST scheme based on a multiple-polynomial LFSR by [14]
(a polynomial id and a seed configure the LFSR, which feeds the m-bit scan chain of the CUT).
A deterministic pattern is encoded as a polynomial identifier
and a seed for the respective polynomial. During test
mode the pattern can be reproduced by emulating the
LFSR corresponding to the polynomial identifier, loading
the seed into the LFSR and performing m autonomous
transitions of the LFSR. After the m-th transition the scan
chain contains the desired pattern which is then applied to
the CUT.
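In a software implementation this decoding step again reduces to a few lines around
the transition() routine of Figure 2. The following sketch is only an illustration;
the table layout and the helper apply_pattern() are assumptions, as before.
extern void transition(int m, int n, unsigned int polynomial, unsigned int *state);
extern void apply_pattern(unsigned int pattern);

struct det_entry {                   /* one stored deterministic pattern        */
    unsigned int poly_id;            /* identifier of the feedback polynomial   */
    unsigned int seed;               /* seed encoding the test cube             */
};

void deterministic_test(const struct det_entry tab[], int num_patterns,
                        int m, int k, const unsigned int poly[])
{
    int i;
    for (i = 0; i < num_patterns; i++) {
        unsigned int state = tab[i].seed;                   /* load the seed    */
        transition(m, k, poly[tab[i].poly_id], &state);     /* m transitions    */
        apply_pattern(state);     /* scan chain now holds the decoded pattern   */
    }
}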
To calculate the encoding, systems of linear equations
have to be solved. For a fixed feedback polynomial
of degree k with coefficients c 0 , ..., c k-1 the LFSR produces
an output sequence (a i ) i>=0 satisfying the feedback
equation a i+k = c k-1 a i+k-1 + ... + c 1 a i+1 + c 0 a i (mod 2). The
LFSR-sequence is compatible with a desired test pattern t
if a i = t i holds for all specified bits t i of t. Recursively
applying the feedback equation provides a system of
linear equations in the seed variables a 0 , ..., a k-1 . If no solution
can be found for the given polynomial, the next
available polynomial is tried, and in [14] it has been
shown that already for 16 polynomials there is a very high
probability of success that a deterministic pattern with s
specified bits can be encoded into an s-bit seed.
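One convenient way to set up these equations in software is to simulate the LFSR
symbolically: every element of the sequence is represented by a k-bit mask recording
its dependence on the seed variables a 0 , ..., a k-1 . The sketch below only illustrates
this construction step; the coefficient encoding (bit j of the polynomial word holds c j ),
the mapping of sequence positions to scan cells and the array interfaces are assumptions
for the illustration, and the solution of the resulting system, e.g. by Gaussian
elimination over GF(2), is omitted.
/* Builds one GF(2) equation per specified bit of the test cube (assumes k <= 32).
   eq_lhs[e] is a mask over the seed variables, eq_rhs[e] the required value.   */
void build_equations(int m, int k, unsigned int polynomial,
                     const char care[], const char value[],
                     unsigned int eq_lhs[], char eq_rhs[], int *num_eqs)
{
    unsigned int window[32];          /* masks of the last k sequence elements  */
    int i, j, e = 0;
    for (i = 0; i < m; i++) {
        unsigned int mask;
        if (i < k) {
            mask = 1u << i;           /* a_i is itself a seed variable          */
        } else {
            mask = 0;                 /* a_i = XOR of the tapped predecessors   */
            for (j = 0; j < k; j++)
                if (polynomial & (1u << j))
                    mask ^= window[(i - k + j) % k];
        }
        window[i % k] = mask;
        if (care[i]) {                /* specified bit -> one linear equation   */
            eq_lhs[e] = mask;
            eq_rhs[e] = value[i];
            e++;
        }
    }
    *num_eqs = e;
}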
Hence, if p different polynomials are available and the
polynomial identifier is implemented as a "next bit", the
seeds and the next bits for a deterministic test set T of N patterns with a maximum
number of specified bits s max require a certain amount S(T) of storage,
which grows with both N and s max . Minimizing
S(T) requires both minimizing the maximum number of
care bits s max and the number of patterns N. In [15] an
ATPG-algorithm was presented which generates test patterns
where the number of specified bits s max is mini-
mized. In a mixed-mode BIST approach the number N of
patterns is highly correlated to the number of faults left
undetected after random testing.
4 Synthesizing the BIST Scheme
Since the efficiency of a mixed-mode BIST scheme
strongly depends on the number of hard faults to be covered
by deterministic patterns, a major concern in synthesizing
the BIST scheme is optimizing the random test.
The experimental data of section 2 show that significant
variances in the fault efficiency achieved by different
LFSR schemes exist, and that there is no universal
scheme or polynomial working for all of the circuits. In
the sequel, a procedure is presented for determining an optimized
LFSR scheme. The selection of the LFSRs is
guided, such that the fault efficiency is maximized while
satisfying the requirements for an efficient encoding of deterministic
patterns for the random pattern resistant faults.
Assuming a table of primitive polynomials is available, the
proposed procedure consists of 4 steps (a code sketch follows the list):
(1) Perform ATPG to eliminate the redundant faults and to
estimate the maximum number of specified bits, s max ,
to be expected in the test cubes for the hard faults.
(2) Select M polynomials of degree s max randomly, and
perform fault simulation with the corresponding shift
register sequences. Rank the polynomials according to
the fault coverage achieved.
(3) Select the P best polynomials and store the highest
fault coverage and the corresponding LFSR as
BEST_SCHEME.
(4) Using these polynomials, simulate the schemes
SUC, RND, and RND 2 . Update BEST_SCHEME to
the best solution obtained so far.
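In pseudo-C the procedure is little more than a loop around the fault simulator; in the
sketch below, coverage() and rank_by_coverage() are placeholders for the fault simulation
runs, so only the control flow of steps 2 to 4 is shown (step 1, the ATPG run that
determines s max, is assumed to have been done).
enum scheme { SINGLE, SUC, RND, RND2 };

extern double coverage(enum scheme s, const unsigned int poly[], int p);
extern void   rank_by_coverage(const unsigned int cand[], int M,
                               unsigned int ranked[], int P);

void select_bist_scheme(const unsigned int cand[], int M, int P,   /* P <= 16  */
                        enum scheme *best_s, int *best_p, double *best_fc)
{
    unsigned int ranked[16];
    enum scheme s;
    int p;
    rank_by_coverage(cand, M, ranked, P);      /* step 2: rank the M candidates */
    *best_s = SINGLE; *best_p = 1;
    *best_fc = coverage(SINGLE, ranked, 1);    /* step 3: best single LFSR      */
    for (p = 2; p <= P; p++)                   /* step 4: multi-poly schemes    */
        for (s = SUC; s <= RND2; s++) {
            double fc = coverage(s, ranked, p);
            if (fc > *best_fc) {
                *best_fc = fc; *best_s = s; *best_p = p;
            }
        }
}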
The number M is mainly determined by a limit of the
computing time to be spent. The number P is also restricted
by the computing time available, but in addition
to that each LFSR requires two registers of the processor
for pattern generation. So, the register file of the target
processor puts a limit on P, too.
Table
5 shows the results achieved by this procedure for
the same set of circuits as studied in section 2. For the
same degrees as used in section 2 sequences of 10,000
random patterns were applied.
Table 5: Best schemes and relation to best and worst single
polynomial solution (columns: Circuit, Scheme, p, FE, UF, UF best, UF worst; data not reproduced).
The second and third column show the best scheme and
the corresponding number of polynomials p, column 4
provides the fault efficiency FE (percentage of detected
non-redundant faults). The percentage of faults left undetected
by the best scheme is reported in column UF.
UF best normalizes this solution to the number obtained by
the best single polynomial, UF worst refers to the worst
single polynomial.
Table
5 indicates that the search for an appropriate random
test scheme can reduce the number of remaining
faults significantly. The procedure needs M runs of fault simulation for the
polynomial ranking plus additional runs for the candidate schemes, but may decrease the storage
amount needed for deterministic patterns considerably.
These savings in memory for the mixed-mode test program
are particularly important, if the test program has to
be stored in a ROM for start-up and maintenance test.
5 Generating Mixed-Mode Test Programs
Test programs implementing the random test schemes
and the reseeding scheme for deterministic patterns were
generated for the INTEL 80960CA as a target processor.
Its large register set made a very compact coding possible.
Since the part of the test program which generates the
deterministic patterns is a superset of instructions required
for implementing any of the random schemes, only the
example for the most complex random scheme is shown.
The mixed-mode test program of Figure 11 generates
random test patterns by multiple-polynomial, multiple-
seed LFSR emulation, and switches to the reseeding
scheme afterwards.
The program of Figure 11 requires 27 words in memory,
assuming that all LFSRs fit into registers.
steps1 equ . ; number of steps for lfsr1
steps2 equ . ; number of steps for lfsr2
steps_det equ . ; number of steps for deterministic test
len1 equ . ; position of msb of lfsr1
len2 equ . ; position of msb of lfsr2
testport equ . ; address of testport
no_poly_bits equ . ; number of bits for polynomial choice
mask equ . ; define mask
start dq startvector ; define startvector for lfsr2
poly dq polynomials ; define polynomials for lfsr1
seeds dq seedvectors ; define seeds for det. test
seed_offset equ seeds - start ; define offset for seed table
begin: lda testport, r10 ; load address of testport
lda steps_det, r11 ; load loopcounter for lfsr1 in det. mode
lda steps1, r12 ; load loop counter for lfsr1
lda start, r14 ; load startvector address for lfsr1
ld (r14), r6 ; load startvector for lfsr2
ld 4(r14), r7 ; load polynomial for lfsr2
l0: mov r6, r4 ; initialize lfsr1 with contents of lfsr2
and mask, r4, r15 ; compute poly-id
ld 8(r14)[r15*4], r5 ; polynomial for lfsr1
lda no_poly_bits, r15 ; load number of bits for poly-id
l1: shro no_poly_bits, r4, r4 ; shift poly-bits
lda steps2, r13 ; load loop counter for lfsr1
l2: st r4, (r10) ; write testpattern to testport
mov r4, r8
shlo 1, r8, r4 ; shift left
bbc len2, r8, l3 ; branch if msb of lfsr2 equal zero
xor r4, r5, r4 ; xor
l3: subi r13, 1, r13 ; decrement loop counter
cmpibne r13, 0, l2 ; branch not equal zero
mov r6, r8
shlo 1, r8, r6 ; shift left
bbc len1, r8, l4 ; branch if msb of lfsr1 equal zero
xor r6, r7, r6 ; xor
l4: subi r12, 1, r12 ; decrement loop counter
cmpibg r12, r11, l0 ; branch if r12 > steps_det
ld seed_offset(r14)[r12*4],r6 ; load seed
cmpibne r12,0,l0
Figure
11: Mixed-mode BIST program.
Keeping all LFSRs in registers is always possible for random pattern generation, but encoding
deterministic patterns may lead to LFSR lengths
exceeding the 32-bit register width. In this case, the program of Figure 11
has to be modified in a straightforward way, and requires
more memory. Table 6 gives the relation between memory
requirements and LFSR lengths.
Table 6: LFSR length and memory requirements (in words) for the
mixed-mode test program (table data not reproduced).
In addition to the program size, memory has to be reserved
for storing the polynomials and the seeds in order to
decode the deterministic patterns. The experimental results
of the next section show that these data form by far the
major part of the memory requirements.
6 Experimental Results
The described strategy for generating mixed-mode test
programs was applied to all the benchmark circuits. For
each circuit a total of 28
runs of fault simulation were performed to determine the
best random scheme. Tables 7 and 8 show the results.
Table 7: Circuit characteristics and best random scheme
(columns: Circuit, PI, Degree, Best Scheme, p; table data not reproduced).
The selected random schemes and their characteristic
data are reported in Table 7. Columns 2 and 3 list the
number of primary inputs PI and the degree of the poly-
nomials. The best random scheme and the number of polynomials
are reported in the subsequent columns.
Table
8 shows the detailed results. The number of non-redundant
faults for each circuit is given in column 2. The
efficiency of the random scheme is characterized again by
the fault efficiency FE, the percentage of undetected non-redundant
faults UF and the normalized numbers for UF
with respect to the best (UF best ) and the average (UF average )
single polynomial solution in columns 3 through 6.
Circuit   F    FE     UF     UF best  UF average
s838.1   931  76.48  23.52   71.1     65.75
Table 8: Fault efficiency and percentage of undetected non-redundant
faults for the best random schemes after
10,000 patterns (remaining rows not reproduced).
The reduction of the remaining faults obtained by the
best random test scheme is significant. For instance, the
circuit c7552 is known to be very random pattern resis-
tant, and a single polynomial solution in the average leads
to a fault efficiency of 95.79% leaving 4.21% of the faults
for deterministic encoding. For the same circuit, the RND 2
scheme achieves a fault efficiency of 98.87%, and only
1.13% or, absolutely, 84 faults are left. This corresponds
to a reduction of the remaining faults down to 27%.
For circuits s820 and s1423 a careful selection of the
random scheme even makes the deterministic test super-
fluous. Finally, it should be noted that for the larger cir
cuits already a small relative reduction means a considerable
number of faults which are additionally covered by the
random test and need not be considered during the deterministic
test. For example for circuit s38417 a reduction
down to 85.75% and 92.26%, respectively, means that additional
313 and 158, respectively, faults are eliminated
during random test.
Table
9 shows the resulting number of test patterns required
for the random pattern resistant faults and the
amount of test data storage (in bits) for the best random
scheme compared to a random test using an average single
polynomial. This includes the storage needed for the poly-
nomials, the initial LFSR states for the random test and
the encoded deterministic test set. Since the goal of this
work was to determine the impact of the random test on
the test data storage, a standard ATPG tool was selected to
perform the experiments [24]. For all circuits the fault efficiency
is 100% after the deterministic test.
              Deterministic patterns       Test data storage (bits)
Circuit       Best scheme  Avg. polynomial  Best scheme  Avg. polynomial
s420.1            22            34              503           776
s1238              7            21              198           431
s5378             22            31              759           883
Table 9: Number of deterministic patterns and storage requirements
for the complete test data (in bits); remaining rows not reproduced.
The results show that an optimized random test in fact
considerably reduces the number of deterministic patterns
and the overall test data storage. This is particularly true
for the circuits known as random pattern resistant. E.g. for
circuit c7552 the number of deterministic patterns is reduced
from 92 to 51 and the reduction in test data storage
is about 5K. For circuit s38417 the best scheme eliminates
137 deterministic patterns, which leads to a reduction
in test data storage of more than 14K. As shown in Table
10, already with standard ATPG the proposed technique
requires less test data storage than an approach based on
storing a compact test set (cf. [16, 18, 22, 27]).
              Deterministic patterns      Test data storage (bits)
Circuit       Best scheme  Compact test    Best scheme  Compact test
s420.1            22            43              503          1505
s5378             22           104              759         22256
Table 10: Amount of test data storage for the proposed
approach and for storing a compact test set (remaining rows not reproduced).
It can be expected, that the test data storage for the
presented approach could be reduced even further, if an
ATPG tool specially tailored for the encoding scheme were
used as described in [15].
7 Conclusion
A scheme for generating mixed-mode test programs for
embedded processors has been presented. The test program
uses both new, highly efficient random test schemes and a
new software-based encoding of deterministic patterns.
It has been shown that the careful selection of primitive
polynomials for LFSR-based random pattern generation
has a strong impact on the number of undetected faults,
and a multiple-polynomial random pattern scheme provides
significantly better results in many cases. The quality
of the random scheme has the main impact on the
overall size of a mixed-mode test program. As an example,
for the processor INTEL 80960CA test programs were
generated, and for all the benchmark circuits a complete
coverage of all non-redundant faults was obtained.
--R
Test Embedding in a Built-in Self-Test Environment
Exhaustive Generation of Bit Patterns with Applications to VLSI Self-Testing
A Neutral Netlist of 10 Combinational Benchmark Designs and a Special Translator in
Combinational Profiles of Sequential Benchmark Circuits
A New Pattern Biasing Technique for BIST
BIST Hardware Generator for Mixed Test Scheme
Multichip Module Self-Test Provides Means to Test at Speed
Shift Register Sequences
Test Generation Based On Arithmetic Operations
Generation of Vector Patterns Through Reseeding of Multiple-Polynomial Linear Feedback Shift Registers
Pattern Generation for a Deterministic BIST Scheme
"Compaction of Test Sets Based on Symbolic Fault Simulation"
Cellular Automata-Based Pseudorandom Number Generators for Built-In Self-Test
"Cost-Effective Generation of Minimal Test Sets for Stuck-at Faults in Combinational Logic Circuits"
Accumulator Built-In Self Test for High-Level Synthesis
"ROTCO: A Reverse Order Test Compaction Technique"
A Multiple Seed Linear Feed-back Shift Register
Advanced Automatic Test Generation and Redundancy Identification Techniques
Synthesis of Mapping Logic for Generating Transformed Pseudo-Random Patterns for BIST
Minimal Test Sets for Combinational Circuits
Circuits for Pseudo-Exhaustive Test Pattern Generation
Test Using Unequiprobable Random Patterns
Multiple Distributions for Biased Random Test Patterns
Decompression of Test Data Using Variable-Length Seed LFSRs
--TR
--CTR
Sybille Hellebrand , Hua-Guo Liang , Hans-Joachim Wunderlich, A Mixed Mode BIST Scheme Based on Reseeding of Folding Counters, Journal of Electronic Testing: Theory and Applications, v.17 n.3-4, p.341-349, June-August 2001
Hua-Guo Liang , Sybille Hellebrand , Hans-Joachim Wunderlich, Two-Dimensional Test Data Compression for Scan-Based Deterministic BIST, Journal of Electronic Testing: Theory and Applications, v.18 n.2, p.159-170, April 2002
Rainer Dorsch , Hans-Joachim Wunderlich, Reusing Scan Chains for Test Pattern Decompression, Journal of Electronic Testing: Theory and Applications, v.18 n.2, p.231-240, April 2002
Liang Huaguo , Sybille Hellebrand , Hans-Joachim Wunderlich, A mixed-mode BIST scheme based on folding compression, Journal of Computer Science and Technology, v.17 n.2, p.203-212, March 2002 | embedded systems;deterministic BIST;BIST;random pattern testing |
290837 | Code generation for fixed-point DSPs. | This paper examines the problem of code-generation for Digital Signal Processors (DSPs). We make two major contributions. First, for an important class of DSP architectures, we propose an optimal O(n) algorithm for the tasks of register allocation and instruction scheduling for expression trees. Optimality is guaranteed by sufficient conditions derived from a structural representation of the processor Instruction Set Architecture (ISA). Second, we develop heuristics for the case when basic blocks are Directed Acyclic Graphs (DAGs). | INTRODUCTION
Digital Signal Processors (DSPs) are receiving increased attention recently due to
their role in the design of modern embedded systems like video cards, cellular telephones
and other multimedia and communication devices. DSPs are largely used
in systems where general-purpose architectures are not capable of meeting domain
specific constraints. In the case of portable devices, for example, the power consumption
and cost may make the usage of general-purpose processors prohibitive.
The same is true when high-performance arithmetic processing is required to implement
dedicated functionality at low cost, as in the case of specific communications
and computer graphics applications. (A preliminary version of parts of this paper was presented
in [Araujo and Malik 1995] at the 1995 ACM/IEEE International Symposium on System Synthesis,
France, September 13-15, 1995, and in [Araujo et al. 1996] at the 1996 ACM/IEEE Design
Automation Conference, June 3-7. Authors' addresses: G. Araujo, Institute of Computing,
University of Campinas (UNICAMP), Cx.Postal 6176, Campinas, SP, 13081-970, Brazil, and
S. Malik, Department of Electrical Engineering, Princeton University, Olden St., Princeton, NJ,
08544, USA.) The increasing usage of these processors has
revealed a new set of code generation problems, which are not efficiently handled
by traditional compiling techniques. These techniques make implicit assumptions
about the regular nature of the target architecture and microarchitecture. This is
rarely the case with DSPs, where irregularities in the microarchitecture are the very
basis for the efficient computation of specialized functions. Due to hard on-chip
memory constraints and hard real-time performance requirements, the code generated
for these processors has to meet very high quality standards. Since existing
compilation techniques are not up to this task, the vast majority of the code is
written directly in assembly language. This research is part of a project directed
towards developing compilation techniques that are capable of generating quality
code for such processors (http://ee.princeton.edu/spam). The implementation of
these techniques forms the compiling infrastructure used in this work, which is
called the SPAM compiler.
There is a large body of work done in code generation for general purpose pro-
cessors. Code generation is, in general, a hard problem. Instruction selection for
expressions subsumes Directed Acyclic Graph (DAG) covering, which is an NP-complete
problem [Garey and Johnson 1979]. Bruno and Sethi [1976] and Sethi
[1975] showed that the problem of optimal code generation for DAGs is NP-complete
even for a single register machine. It remains NP-complete for expressions in which
no shared term is a subexpression of any other shared term [Aho et al. 1977a]. An
efficient solution for a restricted class of DAGs has been proposed in [Prabhala and
1980]. Code generation for expression trees has a number of O(n) solutions,
where n is the number of nodes in the tree. These algorithms offer solutions for the
case of stack machines [Bruno and Sethi 1975], register machines [Sethi and Ullman
1970] [Aho and Johnson 1976] [Appel and Supowit 1987] and machines with
specialized instructions [Aho et al. 1977b]. They form the basis of code generation
for single issue, in order execution, general-purpose architectures.
The problem of generating code for DSPs and embedded processors has not received
much attention though. This was probably due to the small size of the
programs running on these architectures, which enabled assembly programming.
With the increasing complexity of embedded systems, programming such systems
without the support of high-level languages has become impractical. Many of the
problems associated to code generation for DSP processors were first brought to
light by Lee in [Lee 1988] [Lee 1989], a comprehensive analysis of the architecture
features of these processors. Code generation for DSP processors has been studied
in the past, but only more recently a number of interesting work has tackled
some of its important problems. Marwedel [1993] proposed a tree-based mapping
technique for compiling algorithms into microcode architectures. [Liem et al. 1994]
uses a tree-based approach for algorithm matching and instruction selection, where
registers are organized in classes and register allocation is based on a left-first algo-
rithm. Datapath routing techniques have also been proposed [Lanner et al. 1994]
to perform efficient register allocation. Wess [1990] proposed the usage of Normal
Form Schedule for DSP architectures, and offered a combined approach for register
allocation and instruction selection using the concept of trellis diagrams [Wess
1992]. [Kolson et al. 1996] recently proposed an interesting exact approach for
register allocation in loops for embedded processors. An overview of the current
Code Generation for Fixed-Point DSPs \Delta 3
research work on code generation for DSP processors, and embedded processors in
general, can be found in [Marwedel and Goosens 1995].
In this paper, we propose an optimal two phase algorithm which performs instruction
selection, register allocation and instruction scheduling for an expression tree
in polynomial time, for a class of DSPs. The architecture model here (described in
Section 2) is of a programmable highly-encoded Instruction Set Architecture (ISA),
fixed-point DSP processor. Formally speaking, this class is an extension of the machine
models discussed in Coffman and Sethi [1983]. In the first pass (Section 3),
we perform instruction selection and register allocation simultaneously, using the
Aho-Johnson algorithm [Aho and Johnson 1976]. The second pass, described in
Section 4, is an O(n) algorithm that takes an optimally covered expression tree
with n nodes, and schedules instructions such that no memory spills are required.
A memory spill is an operation where the contents of a particular register is saved
in memory due to a lack of available registers for some operation, and reloaded
from memory after that operation is finished. Observe that a memory store operation
required by the architecture topology is not considered a memory spill. The
proposed algorithm uses the concept of the Register Transfer Graph (RTG) that is
a structural representation of the datapath, annotated with ISA information. We
show that if the RTG of a machine is acyclic, then optimal code is guaranteed for
any program expression tree written for that machine. In this case the DSP is
said to have an acyclic datapath. Since DAG code generation is NP-complete, we
develop heuristics for the case of acyclic datapaths (Section 5) which again uses the
RTG concept. In Section 6 we show the results of applying these ideas to benchmark
programs. Section 7 summarizes our major contributions and suggests some
open problems.
2. ARCHITECTURAL MODEL
DSP processors are irregular architectures, when compared with their general purpose
counterparts. This section analyzes the main architecture features which distinguish
DSPs from general purpose processors with respect to basic block code
generation. It is not the purpose of this section to give a detailed and extensive
analysis of these features. A comprehensive analysis of DSP architectures can be
found in [Lee 1988][Lee 1989] [Lapsley et al. 1996].
DSPs can be classified according to the type of data they use as fixed-point DSPs
and floating-point DSPs. In applications running on a fixed-point DSP, users are
responsible for scaling the result of the integer operations. This is automatically
done in floating-point DSPs. Floating point units are extremely costly in terms of
silicon area and clock cycles. For this reason, a large number of the systems based
on DSPs uses fixed-point DSPs. In this case, the acronym DSP will be assumed
from now on to mean fixed-point DSP.
DSPs have on-chip data memory, based on fast static RAMs and on-chip non-volatile
program ROM. Unlike general purpose architectures, DSPs are not designed
with cache or virtual memory systems, since data and program streams usually fit
into the available on-chip memories. Because on-chip memories are fast and cache
misses are not an issue, some DSPs are designed as memory-register architectures
[Texas Instruments 1990]. In order to achieve the bandwidth required by its appli-
cations, other DSPs architectures provide multiple memory banks [Motorola 1990].
Since performance is an important factor for DSP applications, DSP instructions
are usually designed to be fetched in a single machine cycle. In order to achieve
this, instructions are encoded so as to minimize the number of bits they require.
In some architectures [Texas Instruments 1990] this is done by means of data memory
pages, where instructions need only to carry the offset of the data within the
current page in order to access it.
The goal of the design of a DSP datapath is to implement those functional units
which can speed up costly operations that frequently occur in the processor application
domain. A common example of such units is Multiply and Accumulate
(MAC). Due to design requirements, DSP designers frequently constrain the inter-connectivity
between registers and functional units. There are two main reasons for
this. First, the desired functionality usually requires a particular datapath topol-
ogy. Second, broad interconnectivity translates into datapath buses and/or muxes,
which results in increased cost and instruction performance degradation.
A large number of DSPs are heterogeneous register architectures. These are architectures
which contain multiple register files, and instructions that require operands
and store the resulting computation in different register files (hence the name het-
erogeneous). In general-purpose architectures, instructions usually do not restrict
the registers they use, provided they come from the same register file (hence operand
registers are homogeneous). This considerably simplifies the code generation prob-
lem, since it decouples the tasks of instruction selection from register allocation.
Due to this property, we say that general-purpose architectures are homogeneous
register architectures.
Example 1. An example of a DSP architecture is the TI TMS320C25 Digital
Signal Processor (DSP) [Texas Instruments 1990], which will be considered the
target architecture for the rest of this paper. This processor is part of the TI
TMS320 family of processors, which accounts for a large share of all commercial DSP
processors in use today. The TMS320 family is composed of fixed-point processors
(TMS320C1x/C2x/C5x/C54x) which are heterogeneous architectures, and also by
a number of floating-point homogeneous architecture DSPs (TMS320C3x/C4x).
The TMS320C25 processor contains an ISA with specialized memory-register and
register-register instructions. It has three separate register-files (a, p and t) containing
a single register each.
3. OPTIMAL INSTRUCTION SELECTION AND REGISTER ALLOCATION
In homogeneous register architectures the selection of an instruction has no connection
whatsoever with the types of registers that the instruction uses. Selecting
instructions for heterogeneous register architectures usually requires allocating register
from specific register-files as operands for particular instructions. The strong
binding between instruction selection and register allocation indicates that these
tasks must be performed together [Araujo and Malik 1995].
Consider, for example, the Intermediate Representation (IR) patterns in Figure 1
corresponding to a subset of the instructions in the TMS320C25 ISA. In Figure 1
each instruction is associated to a tree-pattern whose nodes are composed of operations
(PLUS,MINUS,MUL), registers (a,p,t), constants (CONST) and memory
references (m).
Code Generation for Fixed-Point DSPs \Delta 5
a m
a
a
a
add: apac:
mpy: mpyk: p
a
pac:
sacl:
a
a
CONST
a
a
spac:
a
lac: m
lack: CONST
Fig. 1. IR patterns for the TMS320C25 processor
Instruction   Operands   Destination   Cost   Three-Address Form
add m         a, m       a             1      a / a + [m]
apac          a, p       a             1      a / a + p
spac          a, p       a             1      a / a - p
lack k        k          a             1      a / k
pac           p          a             1      a / p
Table 1. Partial ISA of the TMS320C25 processor
These tree-patterns are represented using three-address form in Table 1. Three-address
is a standard compiler representation for instructions, where the destination
of the instruction, its two operands (hence the name three address) and the operation
it performs are present. Any reference in square brackets is associated to a
memory position. Table 1 also lists the cost associated to each instruction.
Notice that the instructions implicitly define the registers they use. For example,
the instruction apac can only take its operands from registers a and p, and always
computes the result back into a. Observe also that operations which transfer data
through the datapath like lac m (load register a from memory position m), and
pac (move register p into register a) can be represented each as a single node,
corresponding to the source register of the transfer operation. The associated cost
in this case is only the cost of moving the data from the source register into the
destination register. Since registers in DSP architectures are a scarce resource, the
final code quality is very sensitive to the cost of routing data through the datapath.
3.1 Problem Definition
Optimal instruction selection combined with register allocation is the problem of
determining the best cover of an expression tree such that the cost of each pattern
match depends not only on the number of cycles of the associated instruction, but
also on the number of cycles required to move its operands from the location they
6 \Delta Guido Araujo and Sharad Malik
currently are to the location where the instruction requires them to be.
3.2 Problem Solution
A solution for this problem is to use a variation of the Aho-Johnson algorithm [Aho
and Johnson 1976] such that at each node we keep not only all possible costs for
matches at that node, but also all possible costs resulting from matching the node
and moving the result from where it is originally computed into any other reachable
location in the datapath.
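To make the cost bookkeeping concrete, the sketch below shows the shape of one bottom-up
step of such a dynamic-programming pass in C. The location set mirrors the TMS320C25
model (registers a, p, t and memory); match_cost() and move_cost() are placeholders for
the pattern matcher and for the data-routing costs of the target ISA, so the sketch only
illustrates the cost recurrence, not the actual implementation used in the SPAM compiler.
#include <limits.h>

enum location { LOC_A, LOC_P, LOC_T, LOC_MEM, NUM_LOC };

struct node {
    struct node *left, *right;
    int cost[NUM_LOC];          /* minimal cost with the result in location l  */
};

extern int match_cost(struct node *n, enum location dest);  /* -1: no match     */
extern int move_cost(enum location from, enum location to); /* finite routing cost */

void compute_costs(struct node *n)
{
    enum location l, src;
    if (n == NULL) return;
    compute_costs(n->left);                  /* costs of the children first     */
    compute_costs(n->right);
    for (l = 0; l < NUM_LOC; l++)
        n->cost[l] = INT_MAX;
    for (src = 0; src < NUM_LOC; src++) {    /* where a match leaves the result */
        int c = match_cost(n, src);
        if (c < 0) continue;                 /* no instruction computes into src */
        for (l = 0; l < NUM_LOC; l++) {      /* plus the cost of routing it to l */
            int total = c + move_cost(src, l);
            if (total < n->cost[l])
                n->cost[l] = total;
        }
    }
}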
Tree-grammar parsers have been used as a way to implement code-generators [Aho
et al. 1989] [Fraser et al. 1993] [Tjiang 1993]. They combine dynamic programming
and efficient tree-pattern matching algorithms [Hoffman and O'Donnell 1992] for
optimal instruction selection. We have implemented combined instruction selection
and register allocation using the olive [Tjiang 1993] code-generator generator.
olive is based on the techniques proposed in iburg [Fraser et al. 1993]. It takes
as input a set of grammar rules where tree-patterns are described in a prefixed
linearized format. The IR patterns from Table 1 were converted into the olive
description of Figure 2, by rewriting each instruction three-address representation
into that format. Notice that the instruction destination registers are now associated
to grammar non-terminals, and that these are represented by lower case letters
in
Figure
2.
Fig. 2. Partial olive specification for the TMS320C25 processor (instruction numbers and names
on the right are not part of the specification)
Rules 1 to 3 and 4 to 5 correspond to instructions that take two operands and
store the final result in a particular register (a and p respectively). Rule 6 describes
an immediate load into register a. Rules 7 to 10 are associated to data transference
instructions and bring the cost of moving data through the datapath into the total
cost of a match. We should point out that, for sake of simplicity, we do not
represent in Figure 2 all patterns corresponding to commutative operations. For
example, instruction add m can be specified in two different ways: PLUS(a,m) and
PLUS(m,a). Nevertheless, we will consider for the remainder of this paper that all
commutative forms of any operation pattern are available whenever required.
If we do not consider instruction scheduling and the associated spills at this
point, then the algorithm proposed above is optimal. This follows from the fact
that this algorithm is a variation of the provably optimal Aho-Johnson dynamic
programming algorithm [Aho and Johnson 1976].
Code Generation for Fixed-Point DSPs \Delta 7
4. SCHEDULING
Optimal instruction selection and register allocation for an expression tree is not
enough to produce optimal code. For optimal code the instructions must be scheduled
in such a way that no memory spills are introduced. Notice that memory
positions allocated in the previous phase are not considered spills. They result
from the optimal selection of memory-register instructions in the ISA and not from
the presence of resource conflicts.
Aho and Johnson [1976] showed that, by using dynamic programming, optimal
code can be generated in linear time for a wide class of architectures. The schedule
they propose is based on their Strong Normal Form Theorem. This theorem guarantees
that any optimal code schedule for an expression tree, for a homogeneous
register architectural model, can always be transformed into Strong Normal Form
(SNF). A code sequence is in SNF if it is formed by a set of code sub-sequences
separated by memory storages, where each code sub-sequence is determined by a
Strongly Contiguous (SC) schedule. A code sequence is a SC schedule if it is formed
as follows: at every selected match m, with child subtrees T 1 and T 2 , continuously
schedule the instructions corresponding to subtree T 1 followed by the instructions
corresponding to T 2
, and finally the instruction corresponding to pattern m. Wess
[1990] used SNF as a heuristic to schedule instructions for the TMS320C25 DSP.
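For reference, an SC schedule is simply a post-order emission of the matched tree; a
minimal sketch in C, with emit() standing for the code emitter and a deliberately
simplified node representation, is:
struct tnode {
    struct tnode *child1, *child2;   /* NULL for missing children              */
    const char   *insn;              /* instruction selected by the matcher    */
};

extern void emit(const char *insn);  /* placeholder for code emission          */

void sc_schedule(struct tnode *u)
{
    if (u == NULL) return;
    sc_schedule(u->child1);          /* all of the first subtree ...           */
    sc_schedule(u->child2);          /* ... then all of the second subtree ... */
    if (u->insn != NULL)
        emit(u->insn);               /* ... then the match selected at u       */
}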
4.1 Problem Definition
SC schedules are not an efficient way to schedule instructions for heterogeneous
register set architectures. They produce code sequences whose quality is extremely
dependent on the order in which the subtrees are evaluated. Consider for example the IR
tree of Figure 3(a). The expression tree was optimally matched using the approach
proposed in Section 3 and the target ISA. It takes variables at memory positions m 0
to m 4 and stores the resulting computation into one variable at memory position m 6 ,
using m 5 as temporary storage.
The code sequences generated for three different schedules and their corresponding
three-address representations are shown in Figure 3(b-d). Memory position m 7
was used whenever a spill location was required by the scheduler. For the code of
Figure
3(b) the left subtree of each node was scheduled first followed by its right
subtree and then the instruction corresponding to the node operation. The opposite
approach was used to obtain the code of Figure 3(c). Neither the SC schedules in
Figure
3(b) and (c), nor any SC schedule will ever produce optimal code. This is
obtained using a non-SC schedule that first schedules the addition of m 2 and m 3 and
then the rest of the tree, as in Figure 3(d). Notice that this schedule is indeed
an SNF schedule, since first the subtree corresponding to m 2 + m 3 is contiguously
scheduled followed by a storage operation into memory position m 5
, and by another
code sequence resulting from a SC schedule of the rest of the tree.
From Figure 3 we can verify how the appropriate SNF schedule minimizes spilling.
For example, if the tree of Figure 3(a) is scheduled using left-first, the result of
operation m 0 * m 1 is first stored in p and then moved into a. Just after that,
register a has to be used to route the result of m 2 + m 3 into position
m 5 . But a still contains a live result (the result of m 0 * m 1 ). In this case, the
code-generator has to emit code to spill the value of a into memory and recover it later.
Fig. 3. (a) Matched IR tree for the TMS320C25; (b) SNF Left-first schedule; (c) SNF Right-first
schedule; (d) Optimal schedule
This would not be required if the scheduler had first stored m 2 + m 3 into m 5 ,
before loading a with the result of m 0 × m 1 .
Problems like the one illustrated above are very common in DSP architectures.
The obvious question it raises is: does there exist a guaranteed SNF schedule such
that no spilling is required? We will prove that such a schedule exists, under certain
conditions that depend exclusively on the ISA of the target processor. But before
doing so, let us define the problem formally: given an optimally covered expression
tree for a heterogeneous register architecture, determine an instruction schedule
that does not introduce any spill code.
4.2 Problem Solution
This section is divided as follows. In Section 4.2.1 we state and prove a sufficient
condition that a heterogeneous register architecture has to satisfy in order to
enable spill free schedules. In Section 4.2.2 we introduce the concept of Register
Transfer Graph (RTG) and show how it impacts the code generation task. Finally,
we prove the existence of an optimal linear time scheduling algorithm for a class of
DSP architectures which have acyclic RTGs.
Let T be an expression tree with unary and binary operations. Let L be a function
which maps nodes in T to the set R ∪ M, where R = {r 1 , . . . , r N } is a set of N
registers and M is the set of memory locations. Let u be the root of an expression
tree, with v 1 and v 2 the children of u. Consider that, after allocation is performed,
registers L(v 1 ) = r 1 and L(v 2 ) = r 2 are assigned to v 1 and v 2 , respectively. Let
T 1 and T 2 be the subtrees rooted at v 1 and v 2 , as in Figure 4. From now on the
terms expression tree and allocated expression tree will be used interchangeably,
with the context distinguishing whether the tree is allocated or not.
4.2.1 Allocation Deadlock
Definition 1. An expression tree contains an allocation deadlock iff the following
conditions are true: (a) L(v 1 ) = r 1 and L(v 2 ) = r 2 ; (b) r 1 ≠ r 2 ; and (c) there
exist nodes w 1 in T 1 and w 2 in T 2 , with w 1 ≠ v 1 and w 2 ≠ v 2 ,
such that L(w 1 ) = r 2 and L(w 2 ) = r 1 .
The above definition can be visualized in Figure 4. This is the situation when two
sibling subtrees T 1
and T 2
contain each at least one node allocated to the same
register as the register assigned to the root of the other sibling tree. Using this
definition it is possible to propose the following result.
Fig. 4. Allocation deadlock in an expression tree
Theorem 1. Let T be an expression tree. If T does not have a spill free schedule
then it contains at least one subtree which has an allocation deadlock.
Proof. Assume that all nodes u in T are such that T u is free of allocation
deadlocks and that no valid schedule exists for T . According to Definition 1, T u does
not have an allocation deadlock when:
(a) L(v 1 ) ∈ M (or, symmetrically, L(v 2 ) ∈ M). In this case, an SNF schedule exists if subtree T 1
(respectively T 2 ) is scheduled first, followed by the other subtree.
(b) L(v 1 ) = L(v 2 ). This case cannot happen since no non-unary operator of an
expression tree takes its two operands simultaneously from the same location.
(c) No nodes w 2 in T 2 exist for which L(w 2 ) = r 1 . In this case, it is possible to schedule T 1
first, followed by T 2 and the instruction corresponding to node u. This is a valid schedule
because just after the schedule of T 1 is finished only register r 1 is live, and therefore, since
no register r 1 exists in T 2 , no resource conflict will occur when this subtree is scheduled (Figure 5(a)).
(d) No nodes w 1 in T 1 exist for which L(w 1 ) = r 2 . This is symmetric to the previous case.
Schedule T 2 first, followed by T 1 and the instruction corresponding to u (Figure 5(b)).
(e) No nodes w 1 in T 1 with L(w 1 ) = r 2 and no nodes w 2 in T 2 with L(w 2 ) = r 1 exist. This case
is trivial; any SC schedule results in a spill free schedule (Figure 5(c)).
Since the above conditions can be applied to any node u, T will have a valid
schedule that is free of memory spilling code. This contradicts the initial assumption.
Fig. 5. Trees without allocation deadlock
Corollary 1. Let T be an expression tree. If T has no subtree containing an
allocation deadlock then it must have a spill free schedule. Moreover this schedule
can be computed using the proof of Theorem 1.
Proof. Directly from the theorem above.
4.2.2 The RTG Model and Theorem
Definition 2. The RTG is a directed labeled graph where each node represents a
location in the datapath architecture where data can be stored. Each edge in the
RTG from node r i to node r j is labeled after those instructions in the ISA that
take operands from location r i and store the result into location r j .
The nodes in the RTG represent two types of storage: register files and single
registers. Register file nodes represent a set of locations of the same type which can
store multiple operands. A datapath single register (or simply single register) is a
register file of unitary capacity. Register file nodes are distinguished from single
register nodes by means of a double circle. Because of its uniqueness, memory is
not described in the RTG. Arrows are used instead to represent memory operations.
An incoming (outgoing) arrow pointing to (from) an RTG node r is associated to
a load (store) operation from (into) memory. Notice that the RTG is a labeled
graph where each edge has labels corresponding to the instructions that require
that operation. In other words, if both instructions p and q take one operand in r i
and store its result into r j , then the edge from r i to r j will have at least two labels,
p and q. We say that an architecture RTG is acyclic if it contains no cycles; note that
a self-loop is not considered an RTG cycle. As a consequence, any register-transfer cycle
in an acyclic RTG architecture has to go through memory.
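A minimal sketch of the RTG model (Python; the register names and the three-transfer example are made-up fragments in the spirit of the TMS320C25 discussion, not the paper's full instruction table) builds the labeled graph and checks acyclicity while ignoring self-loops:

from collections import defaultdict

def build_rtg(transfers):
    # transfers: iterable of (src, dst, instr) register-transfer triples.
    # Memory is not a node, so loads/stores simply do not appear as edges.
    rtg = defaultdict(lambda: defaultdict(list))
    for src, dst, instr in transfers:
        rtg[src][dst].append(instr)   # label the edge with the instruction
    return rtg

def is_acyclic(rtg):
    # True if the RTG has no cycle other than self-loops.
    WHITE, GRAY, BLACK = 0, 1, 2
    color = defaultdict(int)

    def dfs(u):
        color[u] = GRAY
        for v in rtg[u]:
            if v == u:               # a self-loop is not an RTG cycle
                continue
            if color[v] == GRAY:     # back edge: register-only cycle found
                return False
            if color[v] == WHITE and not dfs(v):
                return False
        color[u] = BLACK
        return True

    for u in list(rtg):
        if color[u] == WHITE and not dfs(u):
            return False
    return True

# Made-up fragment: t feeds the multiplier into p, p moves to a,
# and a can accumulate onto itself (a self-loop).
example = build_rtg([("t", "p", "mpy"), ("p", "a", "pac"), ("a", "a", "add")])
assert is_acyclic(example)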
Example 2. Consider, for example, the partial olive description in Figure 2. The
RTG of
Figure
6 was formed from that description. The numbers in parentheses
on the right side of Figure 2 are used to label each edge of the graph. Not all ISA
instructions of the target processor are represented in the description of Figure 2,
and therefore not all edges in the RTG of Figure 6 are labeled. Notice that the
RTG of the TMS320C25 architecture is acyclic. Other DSP processors also have
acyclic RTGs, like the processors TMS320C1X/C2X/C5X and the Fujitsu FDSP-4.
This paper proposes a solution for code generation for acyclic RTG architectures.
Unfortunately, other known DSPs like the ADSP-2100 and the Motorola 56000
have cyclic RTGs. Nevertheless, as it will be shown later, code generation for these
processors can also benefit from the results of this work.
Fig. 6. The TMS320C25 architecture has an acyclic RTG
Theorem 2. If an architecture RTG is acyclic, then for any expression tree
there exists a schedule that is free of memory spills.
Proof. Let T be an expression tree rooted at u, and v 1 and v 2 its children, such
that L(v 1 ) = r 1 and L(v 2 ) = r 2 . Let T 1 and T 2 be the subtrees rooted at
nodes v 1 and v 2 . Let P k be the subtrees of T with root p k for which the
result of operation p k is stored into memory (i.e. L(p k ) ∈ M). Define Q i
(dark areas in Figure 7) as the subtrees formed in T i after removing all nodes from
subtrees P k . We will show that if the RTG is acyclic, an optimal schedule can
always be determined by properly ordering the schedules for P k (e.g. P 1 , P 2 , . . .)
and Q i . Here we have to address two cases: (a) Assume that T has no
allocation deadlock. Therefore, from Corollary 1, T has an optimal schedule. (b)
Now consider that an allocation deadlock is present in T , and that it is caused by
registers r 1 and r 2 , as shown in Figure 7. Assume also that there exist paths from
r 2 to r 1 in the processor RTG. Observe now that for each node in T 2 (Figure 7)
allocated to r 1 , e.g. w 2 , the path that goes from w 2 to its ancestor v 2 (allocated to
r 2 ) necessarily passes by a node allocated to memory, e.g. p 2 . This comes from
the fact that any path from r 1 to r 2 has to traverse memory, given that the RTG
is acyclic and that it contains paths from r 2 to r 1 . Notice that one can recursively
schedule the subtrees P 2 and P 4 in T 2 for which the root was allocated to memory, and
that this corresponds to emitting in advance all instructions that store results in
T 2 . Once this is done, only memory locations are live and the remaining subtree Q 2
contains no instruction that uses r 1 . The nodes that remain to be scheduled are
those in subtrees T 1 and Q 2 . Therefore, the tree T 1 ∪ Q 2 ∪ {u} can now be scheduled
using Corollary 1 and no spill will be required. Notice that the same result will be
obtained if one first recursively schedules all subtrees P 1 , . . . , P 4 (white areas in
Figure 7), followed by applying Corollary 1 to schedule subtree Q 1 ∪ Q 2 ∪ {u}.
Fig. 7. The RTG Theorem
Based on the proof of Theorem 2 above, an algorithm can be designed which
computes the best schedule for an expression tree in any acyclic RTG architecture.
We have designed such an algorithm and named it OptSchedule.
Theorem 3. Algorithm OptSchedule is optimal and has running time O(n),
where n is the number of nodes in the subject tree T .
Proof. The first part is trivial since OptSchedule implements the proof of Theorem
2. Also from Theorem 2, the algorithm divides T into a set of disjoint subtrees
and recursively schedules each of them. Therefore, every node
in T is visited only once. Hence, the algorithm running time is O(n).
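The paper does not reproduce OptSchedule itself; the sketch below (Python) follows the proofs of Theorems 1 and 2. The node fields loc (allocated location, with the string "mem" standing for a memory location), instr, left, right and a done flag are assumptions made for this illustration.

def nodes_using(t, reg):
    # All not-yet-emitted nodes of subtree t allocated to register reg.
    if t is None:
        return []
    hits = [t] if (t.loc == reg and not t.done) else []
    return hits + nodes_using(t.left, reg) + nodes_using(t.right, reg)

def emit_memory_subtrees(t, emit):
    # Emit, in advance, every maximal subtree whose root result is stored
    # into memory (the P_k of Theorem 2).
    if t is None or t.done:
        return
    if t.loc == "mem":
        opt_schedule(t, emit)
    else:
        emit_memory_subtrees(t.left, emit)
        emit_memory_subtrees(t.right, emit)

def opt_schedule(u, emit):
    # Spill-free schedule of an allocated expression tree for an
    # acyclic-RTG target (sketch of OptSchedule).
    if u is None or u.done:
        return
    a, b = u.left, u.right
    if a is not None and b is not None:
        if a.loc != "mem" and b.loc == "mem":
            a, b = b, a                 # evaluate the memory-valued child first
        elif a.loc != "mem" and b.loc != "mem":
            if nodes_using(b, a.loc) and nodes_using(a, b.loc):
                # Allocation deadlock: pre-emit the memory-rooted subtrees;
                # on an acyclic RTG they separate one conflicting register
                # from its sibling's root (Theorem 2).
                emit_memory_subtrees(a, emit)
                emit_memory_subtrees(b, emit)
            if nodes_using(b, a.loc) and not nodes_using(a, b.loc):
                a, b = b, a             # Theorem 1, case (d): other side first
    opt_schedule(a, emit)
    opt_schedule(b, emit)
    emit.append(u.instr)
    u.done = True

A production implementation would precompute the per-register occupancy of each subtree to reach the O(n) bound of Theorem 3; the sketch recomputes it for clarity.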
Remark 1. If the RTG is acyclic for a particular architecture, then optimal sequential
code is guaranteed for any expression tree compiled from programs running
on that architecture. Unfortunately, this is not true for those architectures which
do not have acyclic RTGs. Nevertheless, expression trees in those architectures can
also benefit from this work. Observe from Corollary 1 that if an expression tree is
free of allocation deadlocks then it can be optimally scheduled. This is valid for
any expression tree generated from any architecture, no matter if this architecture
has an acyclic RTG or not. Consider for example that a path is added from p to t
in the RTG of Figure 6. This creates a cycle in the architecture RTG, which does
not go through memory. On the other hand, any expression tree which does not use
this new path is free of allocation deadlocks, and therefore can still be optimally
scheduled. Such expression trees could be identified by a simple modification of the
instruction selection algorithm. The question of how many of these trees exist in a
typical program is still open though.
5. HEURISTIC FOR DAGS
Instruction selection for an expression DAG requires DAG covering, which is known
to be NP-complete [Garey and Johnson 1979]. In practical solutions to this problem
heuristics have been proposed which divide the DAG into its component trees by
selecting an appropriate set of trees. However, this dismanteling of the DAG into
component trees is not unique and there are several ways in which this can be
done. Traditionally, the heuristic employed in the case of homogeneous register
architectures is to disconnect multiple fanout nodes of the DAG [Aho et al. 1988].
Dividing a DAG into its component trees requires disconnecting (or breaking)
edges in the DAG. For the code generation task, breaking a DAG edge between
nodes u and v implies the allocation of temporary storage to save the result of
operation u while this is not consumed by operation v. This storage location is
traditionally the memory but it can, in general, be any location in the datapath.
The key idea proposed here is a heuristic which uses architectural information from
the RTG in the selection of component trees of a DAG, such that the resulting
code has minimal spills. Consider for example the DAG of Figure 8. Notice that
two different approaches can be used to decompose this DAG into its component
trees, depending on which edge (e 1 or e 2 ) is selected to break. From now on, we
will represent a broken edge by a line segment transversal to the subject edge. As one
can see in Figure 8(b), one extra instruction is generated when the dismanteling
heuristic is based on breaking edge e 2
instead of e 1
. Incidentally, the code in
Figure
8(a) is also the best sequential code one can generate from the subject
DAG. Observe from the architectural description in Table 1, that the multiplication
operation requests its operands from memory (m) and t, and that the addition
operation always produces its result in the accumulator a.
Notice also in Figure 6 that to bring any data from a to register t one has to go
through m. From Figure 8 one can see that the result of the addition operation
has to be stored into a and must be moved to m or t in order to be
used as an operand of the multiplication operation. But to move data from a to
t one has to go through memory (m). Suppose the memory position selected to
store this temporary result is m 5 . Hence, by breaking DAG edge e 1 one is just
assigning in advance a memory operation which will appear on that edge, during
the instruction selection phase of the code generation. Notice that the existence of a
register-transfer path which always goes through memory whenever data is moved
from a to t is a property of the target datapath. Similarly, the register-transfer
path from a to p must also pass through memory.
Fig. 8. (a) Breaking edge e 1 ; (b) breaking edge e 2
Notice also that when edge e 2 is broken, pattern PLUS(a,m) (instruction add m 4 )
cannot be used to match the addition of m 4 with the result of m 2 + m 3 in the
accumulator a. In this case, instruction lac m 5 in Figure 8(b) has to be issued in
order to bring the data from m 5 back to the accumulator, adding a new instruction
to the final code.
Fig. 9. Expression DAG after partial register allocation was performed and natural and pseudo-natural
edges identified by their corresponding lemmas
5.1 Problem Solution
The heuristic we propose to address the problem just described is divided into four
phases. In the first phase (Section 5.1.1) partial register allocation is done for those
datapath operations which can be clearly allocated before any code generation task
is performed in the DAG. During the second phase (Section 5.1.2), architectural
information is employed to identify special edges in the DAG which can be broken
without introducing any loss of optimality for the subsequent tree mapping
stages. In the third phase (Section 5.2) edges are marked and disconnected from
the DAG. Finally component trees are scheduled and optimal code generated for
each component tree (Section 5.2).
5.1.1 Partial Register Allocation. A general property of heterogeneous register
architectures is that the results of specific operations are always stored in well defined
datapath locations. This does not imply total register allocation because data
has to be routed through the datapath to locations required by other instructions.
Take for example operations add and mul in the target processor. Notice that
they implicitly define the primary storage resources that are used for the operation
result. In this case (observe Table 1), no register allocation task is required
to determine that registers a and p are respectively used to store the immediate
result of operations add and mul. Thus, partial allocation can be performed well
in advance, even before the task of breaking the edges of the expression DAG takes
place. Again, observe that this is only possible if an operation always uses the same
register file to store its immediate result. Consider for example the expression DAG
of
Figure
9. Notice that partial register allocation can be immediately performed
for registers a and p.
5.1.2 Natural Edges. We saw before in Figure 8 that some edges have specific
properties originating from the target architecture, which allow us to disconnect
them from the DAG without compromising optimality. These edges, termed natural
edges, are defined as follows.
Definition 3. If the instruction selection matching of edge (u; v) always produces
a sequence of data transfer operations in the datapath which pass through memory,
edge (u; v) is referred to as a natural edge.
Fig. 10. Natural edges are identified by a single line segment: (a) (u, v) is natural; (b) (u, v) is
natural if r i has no self-loop in the RTG
Given an expression DAG D and a target architecture which has an acyclic
RTG, it can be shown that a number of edges in D are natural edges. In order to
do that, let us state a set of lemmas.
Let r 1 and r 2 be a pair of registers in the datapath of an acyclic RTG architecture.
Also let L, as before, be a function which maps nodes in D into the set of
datapath locations R ∪ M , where R is the set of registers in the datapath and M
the set of memory positions.
Lemma 1. Let r 1 and r 2 be registers in the architecture RTG, such that there
exists no path from r 1 to r 2 . Then any edge (u, v) in D for which L(u) = r 1
and L(v) = r 2 is a natural edge.
Proof. Given that a path from registers r 1
to r 2
will be traversed whenever
instruction selection is performed on edge (u; v), then a memory operation will
always be selected during instruction selection on (u; v), and therefore (u; v) is a
natural edge (Figure 10(a)).
Lemma 2. Edges (u, v) for which L(u) = L(v) = r i are natural
edges only if no self-loop exists on register node r i in the RTG representation of the
target architecture (Figure 10(b)).
Proof. If an architecture has an acyclic RTG, then any loop in the RTG (which
is not a self-loop) will traverse memory. Thus, if register r i has no self-loop in the
RTG, then any loop starting at r i will go through memory. Therefore, a memory
operation will be selected whenever instruction selection is performed on edge (u; v).
Hence (u; v) is a natural edge.
Notice that the task of breaking natural edges does not introduce any new operations
into the DAG because, as the name implies, during the instruction selection
phase a memory operation is naturally selected due to constraints in the architecture
datapath topology. As a result, no potential optimality is lost by breaking
natural edges.
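A sketch of how Lemmas 1 and 2 could be checked mechanically (Python; rtg is the adjacency structure sketched earlier, and loc_u, loc_v are the registers assigned to the edge's endpoints by partial allocation, both assumed to be registers here):

def reachable(rtg, src, dst):
    # True if dst is reachable from src using register-to-register RTG
    # edges only (memory is not a node, so memory routes do not count).
    seen, stack = set(), [src]
    while stack:
        u = stack.pop()
        if u == dst:
            return True
        if u in seen:
            continue
        seen.add(u)
        stack.extend(rtg.get(u, {}))
    return False

def is_natural_edge(rtg, loc_u, loc_v):
    if loc_u != loc_v:
        # Lemma 1: no RTG path from loc_u to loc_v forces a memory transfer.
        return not reachable(rtg, loc_u, loc_v)
    # Lemma 2: same register on both ends; natural when that register has
    # no self-loop, since any other loop in an acyclic RTG goes via memory.
    return loc_u not in rtg.get(loc_u, {})

Pseudo-natural edges (Lemmas 3 and 4 below) can be detected with the same reachability information by inspecting the two operand edges of a node together.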
Example 3. Consider each one of the lemmas above and the RTG of Figure 6.
Observe the expression DAG of Figure 9 after natural edges have been identified.
(1) From Lemma 1 we can see that when r 1 = a and r 2 = t (there is no RTG path from a to t),
every edge (u, v) such that L(u) = r 1 and L(v) = r 2 is a natural edge.
(2) Consider now Lemma 2. First take the situation when r i = p. From the
RTG of Figure 6 observe that register p has no self-loop. Since the RTG is
acyclic, any DAG edge (u, v) such that L(u) = L(v) = p is a natural edge.
Now consider the case when r i = a. Register a in Figure 6 contains a self-loop
and thus nothing can be said regarding these edges.
5.1.3 Pseudo-Natural Edges. In the following two lemmas we show that DAG
edges can sometimes interact such that one edge out of a set of two edges must
result in storage in memory. The edges in this set are called pseudo-natural edges.
Lemma 3. Consider operation v and its operand nodes u and w in Figure 11(a).
If the partial register allocation of these operations is such that L(u) = L(w) = r i ,
then (u, v) and (w, v) are pseudo-natural edges.
Proof. Notice that no binary operation v can take both its operands simultaneously
from the same register. We have to consider here two situations:
Fig. 11. The selected pseudo-natural edges are identified by a double line segment: (a) One of
the edges uses a loop in the RTG; (b) One of the edges goes through memory.
(a) If node r i has a self-loop in the architecture RTG, one of the edges, e.g. (u, v),
could be matched by an instruction which takes one operand from r i .
On the other hand, when this same instruction matches the other edge, i.e.
(w; v), it will make use of a register which is contained in an RTG loop (not
a self-loop) that goes from r i back to r i . Similarly as in Lemma 2, matching
(w; v) will introduce a sequence of transfer operations which necessarily goes
through memory, making (w; v) and (u; v) pseudo-natural edges.
(b) If no self-loop node r i exists in the architecture RTG, then both edges are
natural edges according to Lemma 2.
Lemma 4. Consider operation v and its operand nodes u and w of Figure 11(b).
Let the partial register allocation of these nodes be such that L(u) = L(w) = r j .
If all RTG paths between each pair of nodes are such that only one path
does not go through memory, then (u, v) and (w, v) are pseudo-natural edges.
Proof. The proof is trivial and follows from the fact that since operation v
cannot take both of its operands from the same register r j at the same time, it has
to use two paths in the RTG to bring data from register r j . Since only one path
from r j to r i does not go through memory, then the other path has to pass through
memory.
Based on the lemmas above, we need to decide which edge between (u; v) and
(w; v) is to be disconnected from the DAG. Loss of optimality might occur depending
on which edge is selected. The selected pseudo-natural edge is identified using
a double line segment to distinguish it from natural edges. Unlike natural edges,
breaking pseudo-natural edges might result in compromising the optimality of code
generation for the component trees. However, there is a good chance that this
might not happen in actual practice.
Example 4. Consider Lemmas 3 and 4 above and the RTG of Figure 6: Observe
the expression DAG of Figure 9 after pseudo-natural edges have been identified.
(3) Lemma 3 is satisfied for the case when r i = a.
(4) In this case, if r j = p, only one path exists in the RTG from p to
a which does not go through memory.
After rules 1-4 of Examples 3 and 4 are applied, the expression DAG of Figure 9
results. Each marked edge in Figure 9 has on its side the number corresponding to
a rule used from Examples 3 and 4.
5.2 Dismanteling Algorithm
The task of dismanteling an expression DAG may potentially introduce cyclic Read
After Write RAW dependencies between the resulting tree components leading to
an impossible schedule. A similar problem was also encountered in [Aho et al. 1977a]
and [Liao et al. 1995] when the authors studied the problem of scheduling worm-
graphs derived from DAGs in single-register architectures. Consider, for example,
Fig. 12. (a) Cyclic RAW dependency; (b) Constraining the tree scheduler
the reconvergent paths from nodes u to v and the component trees T 1
and T 2
of
Figure
12(a). Dismanteling the DAG of Figure 12(a) requires that at least one of
the edges of the multiple fanout nodes u and T 2
be disconnected. Assume that edges
have been selected as the edges to break. In this case, nodes u, v
and tree T 1
can be collapsed into a single component tree T 3
, dismanteling the DAG
into trees T 3
and T 4
. When an edge between two nodes is broken, a RAW edge is
introduced (dashed lines in Figure 12), in order to guarantee that the original data-dependencies
are preserved by the scheduler. In this case, the resulting RAW edges
form a cycle between component trees T 3
and T 4
, which results in an infeasible
schedule for the component trees.
Notice that dismanteling is also possible if edge (T 2 , w) is broken instead
(Figure 12(b)). When this occurs, RAW edge (u, T 2 ) is brought into the
resulting component tree (T 3 ). As a consequence, the potential optimality of the
tree scheduler algorithm OptSchedule can not be guaranteed anymore, since now
it has to satisfy the constraint imposed by the new RAW edge inside T 3 . A possible
solution to this problem is to modify the tree scheduler algorithm such that it can
satisfy any RAW constraint inserted into the tree. Unfortunately, this is a very
difficult task for which an efficient solution seems not to exist. Hence, we have to
dismantle the DAG such as to avoid inserting RAW edges into the component trees.
From the two situations analyzed above, we can conclude that edges on both
reconvergent paths have to be disconnected in order to guarantee proper scheduling
of operations inside component trees and between component trees. An algorithm
which dismantles the DAG should disconnect edges by using as many natural and
pseudo-natural edges as possible. We have designed such an algorithm, which we
call Dismantle.
The Dismantle algorithm starts by first breaking all natural edges, since breaking
these edges adds no cost to the total cost of the final code. After that Dismantle
proceeds identifying reconvergent paths. It traverses paths in the DAG looking for
edges marked as pseudo-natural edges. If a pseudo-natural edge can be used to
break an existing reconvergent path the edge is broken. Otherwise the outgoing
edge which starts the reconvergent path at the corresponding multiple fanout node
is broken. These edges are marked with a black dot in Figure 13. At this point
all reconvergent paths in the expression DAG have been disconnected. Additional
edges are then broken such that no node ends up with more than one outgoing edge
(these edges are also marked with black dots). The resulting DAG after applying
algorithm Dismantle is shown in Figure 13. It decomposes the original DAG into
five expression trees. Finally, these expression trees are scheduled and
code is generated for each expression tree.
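Dismantle is described only in prose above; a sketch of its overall control flow might look as follows (Python; the DAG encoding, the natural and pseudo-natural edge sets, and the reconvergent_paths helper are assumptions of this illustration):

def dismantle(dag, natural, pseudo_natural, reconvergent_paths):
    # dag: dict node -> list of successors; returns the set of broken edges.
    broken = set(natural)                  # free breaks: they add no cost
    for path in reconvergent_paths(dag):   # each path is a list of edges
        if any(e in broken for e in path):
            continue                       # already disconnected
        cheap = [e for e in path if e in pseudo_natural]
        if cheap:
            broken.add(cheap[0])           # prefer a pseudo-natural break
        else:
            broken.add(path[0])            # else break the outgoing edge at
                                           # the multiple-fanout node
    # Finally, leave at most one unbroken outgoing edge per node.
    for u, succs in dag.items():
        live = [(u, v) for v in succs if (u, v) not in broken]
        for extra in live[1:]:
            broken.add(extra)
    return broken

Each broken edge also introduces the corresponding RAW edge between the resulting component trees, which the tree scheduler must respect.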
Fig. 13. Resulting component trees after dismanteling
6. EXPERIMENTAL RESULTS
DSPstone [Zivojnovic et al. 1994] is a benchmark designed to evaluate the code
quality generated by compilers for different DSP processors. DSPstone is divided
into three benchmark suites: Application, DSP-kernel and C-kernel. The Application
benchmark consists of the program adpcm, a well-known speech encoding
algorithm. The DSP-kernel benchmark consists of a number of code fragments,
which cover the most often used DSP algorithms. The C-kernel suite aims to test
typical C program statements. The DSPstone project was supported by a number
of major DSP manufacturers (Analog Devices, AT&T, Motorola, NEC and Texas
Instruments). We used this benchmark for experimental evaluations.
Scheduling Algorithms
Tree Origin Left-first Right-first OptSchedule
real update 5 5 5
3 dot product 8 8 8
6 iir one biquad
Table 2. Number of cycles to compute expression trees using the Left-first, Right-first and
OptSchedule schedules
6.1 Expression Trees
We have applied algorithm OptSchedule to expression trees extracted from programs
in the DSP-kernel benchmark. The metric used to compare the code was the number
of cycles it takes to compute the expression tree.
Observe from Table 2 that algorithm OptSchedule produces the best code when
compared with the two SC schedules, which is expected since we have proved its opti-
mality. Notice that although SC schedules can sometimes produce optimal code,
they can also generate poor quality code, as is the case for expression tree 6. We
can also verify that the same expression tree generates different code quality when
different SC schedules are used. The structure of the expression tree dictates the
best SC schedule, and this structure is a function of the way the programmer writes
the code.
6.2 DAG Types Distribution
Expression DAGs were classified in trees, leaf DAGs and full DAGs. Leaf DAGs
are DAGs for which only leaf nodes have outdegree greater than one. We classify
a DAG as a full DAG if it is neither a tree nor a leaf DAG. As one can see from
Table
3, the classification revealed that of all basic blocks analyzed 56% were trees,
DSP kernel Basic Blocks Trees Leaf DAGs DAGs
real update
dot product
iir one biquad 1
convolution
lms
Table 3. Types of DAGs found in typical digital signal processing algorithms
DAG DAG Hand-written Standard Dismantle
Origin Type Code Heuristic Heuristic
complex update F
matrix 1x3 L 5 5 0% 5 0%
iir one biquad F 15 17 13%
convolution
lms F 7 9 28% 8 14%
Table 4. Experiments with DAGs - Leaf DAG (L); Full DAG (F)
38% leaf DAGs and 6% full DAGs. From the set of benchmarks in Table 3 we have
noticed that the majority of the basic blocks found in these programs are trees
and leaf DAGs. Another experiment was performed, this time using the DSPstone
application benchmark adpcm. As before, basic blocks were analyzed to determine
the frequency of trees, leaf DAGs and DAGs. In this case, 94% of the basic blocks
in this program were found to be trees, 3% leaf DAGs and 3% full DAGs. Although
dynamic counting of basic blocks is required in order to provide information on the
impact on execution time, one can reasonably argue that a large portion of this
program execution time is spent in processing expression trees. Thus, tree-based
code generation is very suitable for this application domain.
6.3 Expression DAGs
In
Table
4 we list a series of expression DAGs extracted from programs in the
DSP-kernel benchmark. We have selected the largest DAG found in each kernel for
the purpose of comparison with hand-written code. Hand-written assembly code
(or assembly reference code) for each DSP-kernel program is available from the
DSPstone benchmark suite [Zivojnovic et al. 1994].
Compiled code was generated for each DAG and the resulting number of cycles
for a single loop execution reported in Table 4. Compiled code was also generated
using a standard heuristic, which dismantles the DAG by breaking all edges at
multiple fanout nodes (column Standard Heuristic). Table 4 shows the number
of processor cycles and the overhead with respect to hand-written code. Notice
that the overhead is due only to the DAG dismanteling technique. The average
overhead when comparing the compiled (Dismantle Heuristic) and the assembly
reference code was 7%. Leaf nodes are treated the same way in both heuristics.
They are simply duplicated into different nodes - one for each outgoing edge. As a
consequence, both heuristics have the same performance for the case of leaf DAGs.
The average overhead (Dismantle Heuristic) for the case of full DAGs was higher
(11%) than for the case of leaf DAGs (4%). The discrepancy is due to the existence
of memory-register and immediate instructions in the processor ISA, which can have
zero cost multiple fanout operands when these are memory references or constant
values. Although the heuristic gains may seem very small, every byte matters,
since DSPs have restricted on-chip memory size, which makes the generation of high
quality code the most important goal for the compiler.
7. CONCLUSION
With the increasing demand for wireless and multimedia systems, it is expected
that the usage of DSPs will continue to grow. In spite of this, research on compiling
techniques for DSPs has not received adequate attention. These devices
continue to offer new research challenges which originate from the need for high
quality code at low cost and power consumption.
We have proposed an optimal O(n) instruction selection, register allocation, and
instruction scheduling algorithm for expression trees, for a class of heterogeneous
register DSP architectures which have acyclic RTGs. We then extend this by
proposing heuristics for the case when basic blocks are DAGs. This approach is
based on the concept of natural and pseudo-natural edges and seeks to use architectural
information to help in the task of dismanteling the expression DAG into a
forest of trees.
The question on how to generate good code for architectures which have cyclic
RTGs remains open though. As it was mentioned before, expression trees generated
in these architectures can also benefit from this optimality provided they are free
of any allocation deadlock. An interesting question which follows from that is how
many expression trees with this property are generated in programs running on
these architectures. More work is under way in order to answer this and other
questions.
ACKNOWLEDGMENTS
This research was supported in part by the Brazilian Council for Research and Development
(CNPq) under contract 204033/87-0, and by the Institute of Computing
(unicamp), Brazil.
--R
Code generation using tree matching and dynamic programming.
Optimal code generation for expression trees.
Code generation for expressions with common subexpressions.
Code generation for machineswith multireg- ister operations
Generalizations of the Sethi-Ullman algorithm for register allocation
Optimal code generation for embedded memory non-homogeneous register architectures
Using register-transfer paths in code generation for heterogeneous memory-register architectures
Code generation for one-register machine
Instruction sets for evaluating arithmetic expressions. Journal of the ACM.
Engineering a simple
Computers and Intractability.
Pattern matching in trees.
Data routing: a paradigm for efficient data-path synthesis and code generation
DSP Processor Fundamentals: Architectures and Features.
Programmable DSP architectures: Part I.
Programmable DSP architectures: Part II.
Instruction selection using binate covering.
Marwedel and Goosens
Efficient computation of expressions with common subexpressions.
Complete register allocation problems.
The generation of optimal code for arithmetic expressions. Journal of the ACM 17.
Digital Signal Processing Applications with the TMS320 Family.
An olive twig.
On the optimal code generation for signal flow computation.
Automatic instruction code generation based on trellis diagrams.
Circuits and Systems
register allocation;scheduling;code generation
290839 | Estimation of lower bounds in scheduling algorithms for high-level synthesis. | To produce efficient designs, a high-level synthesis system should be able to analyze a variety of cost-performance tradeoffs. The system can use lower-bound performance estimate methods to identify and prune inferior designs without producing complete designs. We present a lower-bound performance estimate method that is not only faster than existing methods, but also produces better lower bounds. In most cases, the lower bound produced by our algorithm is tight. Scheduling algorithms such as branch-and-bound need fast and effective lower-bound estimate methods, often for a large number of partially scheduled dataflow graphs, to reduce the search space. We extend our method to efficiently estimate the completion time of partial schedules. This problem is not addressed by existing methods in the literature. Our lower-bound estimate is shown to be very effective in reducing the size of the search space when used in a branch-and-bound scheduling algorithm. Our methods can handle multicycle operations, pipelined functional units, and chaining of operations. We also present an extension to handle conditional branches. A salient feature of the extended method is its applicability to speculative execution as well as C-select implementation of conditional branches. |
High-level synthesis takes an abstract behavioral specification of a digital system and finds a
register-transfer level structure that realizes the given behavior. Usually, there are many different
structures that can be used to realize a given behavior. One of the main goals of a synthesis
system is to find the structure that best meets the constraints, such as limitations on the number
of functional units, registers, power, while minimizing some other parameters like the number of
time steps. Operation scheduling and datapath construction are the core of high-level synthesis in
obtaining efficient designs in terms of area and speed. Scheduling datapath operations into the best
time steps is a task whose importance has been recognized in many systems [12, 14, 15, 16]. Since
scheduling is an intractable problem, most high-level synthesis systems use heuristics to find a good
schedule. In the absence of good lower-bound estimates, it is difficult to evaluate the performance
of heuristics.
For a synthesis system to produce efficient designs, it should have the capability to analyze
different cost-performance trade-offs. So, a scheduler has to explore the design space with a variety
of resource constraints. Instead of producing schedules with each and every resource constraint,
a scheduler can use estimation to identify and prune inferior designs. Furthermore, estimation of
lower bounds can be used to evaluate a heuristic solution. For an estimation tool to be useful, it has
to be much faster than the actual scheduler and the lower bounds it produces should be as tight as
possible. We have proposed an efficient estimation technique for the lower-bound performance. We
tested our estimation method on a number of benchmarks and compared our results with those of
some other known methods in the literature [17, 20]. Our method is faster than the methods in [17]
and [20]. Our method produces better lower bounds than both of them in many cases. In most
cases, the lower bound is tight.
Many scheduling algorithms such as the branch and bound methods [3] and multi-schedule
methods [2] search through the design space by constructively scheduling operations, one step
at a time. During the search process, schedules for a subset of operations in the DFG will be
produced and evaluated to check if they can lead to a complete schedule with a target upper-bound
performance. Such scheduling methods need a method to estimate lower bounds on the completion
time of partial schedules. Since the number of partial schedules is generally very high in a design
space search process, this estimation should be faster than the estimation for the entire DFG.
In this paper, we have proposed a fast and effective lower-bound estimation method for partially
scheduled DFGs. It is an extension of our method for the lower-bound estimation of the entire
DFGs. In our approach we defined some very useful data structures that need to be computed only
once for a given DFG before the exploration. Using those data structures, our method can compute
a lower bound for a partial schedule in O(k) time where k is the number of ready and unfinished
operations (defined later in this paper) at the partial schedule. The methods in [5, 17, 20, 21]
are originally proposed for estimation for the entire DFG and do not address the estimation from
partial schedules. They are too slow to be used for partial schedules. For example, if the methods
in [17], [20] are used for partial schedules, they take O(c 2 respectively to compute
lower bound for a partial schedule where n is the number of operations to be scheduled and c is
the critical path length.
We implemented our method and the method in [17] separately into a branch and bound
scheduling algorithm and tested on a number of benchmarks. The results show that our method is
at least 20 times faster and equally effective in reducing the size of the search space. Our method
can be used in any scheduling algorithm that schedules one step at a time. We used this method in
an optimal dynamic programming scheduling algorithm [1] we developed, and it drastically reduced
the size of the search space. We could obtain optimal schedules in very short computational times.
Our methods can handle multi-cycle operations, pipelined functional units and chaining of
operations. We extended our methods to handle conditional branches. The extended method is
applicable to speculative execution as well as C-select execution of operations in the conditional
branches. To our knowledge, no other estimation method in the literature can support speculative
execution. In the next section, a brief overview of previous works for lower bound estimation is
presented. In section 3, our model and terminology is defined. The method for the estimation of
lower bounds for the entire DFG is presented in section 4. The computational complexity of our
method as compared to Sharma's method [20] is analyzed in the same section. Our lower-bound
estimation method for partial schedules is presented in section 5. Extensions to handle conditional
branches and chaining are explained in section 6. Experimental results are presented in section 7
and conclusions are in section 8 .
2 Previous Work
There are several methods proposed in the literature for the lower-bound estimation of cost as
well as performance. Jain et al [7] proposed a mathematical model for predicting the area-delay
curve. Their lower-bound method is very fast, but is too trivial and does not consider precedence
constraints at all. The technique proposed by Fernandez and Bussell [4] computes the minimum
number of operations which must be scheduled in each sub-interval of time steps. It then derives
maximum increase in total execution time over all intervals for not having enough processors to
accommodate all the operations in that interval. Their method considers only homogeneous resources
and can be applied only to multiprocessor schedules. Their method has been extended to
high-level synthesis by Sharma et al [20]. They compute the increase in the length of each interval
due to concentration of each type of operations in that interval. They also address lower bounds
on area cost including interconnect cost. Their method has a computational complexity of O(nc 2 )
where n is the number of nodes in the DFG and c is the critical path length. The method proposed
by Ohm et al [10] estimates lower bounds on functional units as well as registers. Their technique
for functional unit estimation is a refinement of the basic technique of [20]. It is not applicable to
lower-bound performance estimation. The complexity of their method is O( n(c 2 +n+ e) ) where e
is the number of edges in the DFG. The method proposed in [17] uses a relaxation technique of the
ILP formulation of the scheduling problem for the lower-bound estimation of performance under
resource constraints. It has a computational complexity of O(n is the number of
time steps and produced lower bounds as good as [20] on many benchmarks. The method in [21]
is similar to [17] in that it relaxes the precedence constraints and solves the relaxed problem using
a slack driven list scheduling algorithm. Hu et al [5] proposed a method to estimate lower bounds
on iteration time and functional unit cost for functional pipelined DFGs. The complexity of their
method is O(nck 2 ) where k is the initiation latency. A recursive technique is proposed in [8] for
lower-bound performance estimation. Our method for the lower-bound performance estimation of
the entire DFG has a complexity of O(n + c 2 ).
3 Model and Definitions
A DFG (Data Flow Graph) G = (V, E) is a (directed acyclic) graph representation of a behavioral
description where the set of nodes V represents the set of operations and the set of edges E denotes
the set of dependencies (precedence constraints) between the operations. For any two operations
x and y, an edge (x, y) in E means that operation x should be finished before operation y can start. A node x
is called a predecessor of y (and y, a successor of x) if there is a directed path from x to y in G
using the arcs in E. An operation without any predecessors is called an input operation and an
operation without any successors is called an output operation. Associated with each operation x
in V , there is a single type indicating that a functional unit of that type should be used to execute
that operation. Resource constraints are given by a set of ordered tuples ⟨t, D(t), N(t)⟩, where
D(t) is the delay (number of time steps or clock cycles) an operation of type t takes to complete and
N(t) is the number of available resources of type t; in this paper, resources refer to single-function
functional units only. If a resource type t is pipelined, one instance of that resource type can be
assigned to more than one operation in an overlapping fashion. The latency with which they can
overlap is denoted by δ t . Clearly, δ t ≤ D(t).
Let d x denote the delay of an operation x, which is equal to the delay of the type of resource
that executes x. For all v in V , ASAP(v) (as soon as possible) is the earliest time-step at which
v can be scheduled to start execution, assuming unlimited resources. For all v in V , we define
MSAT(v) (Minimum Steps After This operation) as the minimum number of time steps that any
schedule of G is going to take after the completion of operation v, assuming unlimited resources.
The critical path length is the minimum number of time-steps that any schedule of G is going to
take, assuming unlimited resources. It can be computed as max v {ASAP(v) + d v − 1}, with the
maximum taken over all output operations v.
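A minimal sketch of these definitions (Python; the encoding of the DFG as topologically ordered operation lists with per-operation delays is an assumption of the illustration):

def asap_msat(ops, delay, preds, succs):
    # ops: operations in topological order; delay[v]: delay d_v of v;
    # preds[v] / succs[v]: immediate predecessors / successors of v.
    asap, msat = {}, {}
    for v in ops:                                  # top-down pass
        asap[v] = 1 if not preds[v] else max(asap[u] + delay[u] for u in preds[v])
    for v in reversed(ops):                        # bottom-up pass
        msat[v] = 0 if not succs[v] else max(msat[u] + delay[u] for u in succs[v])
    # Critical path length: maximum over the output operations.
    c = max(asap[v] + delay[v] - 1 for v in ops if not succs[v])
    return asap, msat, c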
4 Lower-bound estimation for performance
The intuitive idea behind our lower-bound estimation is as follows. For each resource type t, we
group the operations of that type into three non-overlapping intervals and compute a lower-bound
as the sum of the lengths of those intervals. The final lower bound is the maximum among all
resource types and all possible groupings of operations of each type.
Let P t be the number of operations of type t in the DFG. If there are σ A (i, t) operations of type
t with an ASAP value less than or equal to i, then there are at least P t − σ A (i, t) operations
that cannot be scheduled in the first i time steps. Similarly, if there are σ M (j, t) type t operations
with an MSAT value less than j, then there are at least P t − σ M (j, t) operations that cannot
be scheduled to execute in the last j time steps. Thus, there are at least P t − σ A (i, t) − σ M (j, t)
operations that cannot either be scheduled in the first i time steps or be scheduled to execute in
the last j steps of any schedule. The three intervals considered are: the first i steps (interval I 1 ), the last
j steps (interval I 3 ), and an interval between the two (interval I 2 ) that does not overlap with the
other two. The lengths of intervals I 1 and I 3 are i and j respectively. The length of I 2 depends
on the minimum number of type t operations in that interval as well as the number of available
resources of type t. The number of operations in I 2 depends on the ASAP and MSAT values of
operations which are determined by data dependencies. Thus, the lower-bound estimation takes
into account both precedence constraints and resource constraints.
Note that it takes at least i time steps before any of the operations in I 2 can start execution
and at least j steps after the last operation has finished execution. The lengths of I 1 and I 3 are
independent of the set of operations scheduled in those intervals. Thus, for the purposes of lower-bound
estimation, the three intervals are non-overlapping. We denote the minimum number of
type t operations in interval I 2 by q(i, j, t) and the minimum length of I 2 due to type t operations
by h(i, j, t). As explained above, q(i, j, t) = P t − σ A (i, t) − σ M (j, t), and the value of h(i, j, t) can be
computed from q(i, j, t) as follows. Let k = q(i, j, t) and r = N(t). These k operations can be
scheduled into ⌈k/r⌉ stages. If type t resources are not pipelined, each stage takes D(t) time steps,
where D(t) is the delay of a type t resource. If they can be pipelined with a latency δ t , each stage
except the last takes δ t steps. The last stage in either case takes D(t) steps. Hence,
h(i, j, t) = (⌈k/r⌉ − 1) δ t + D(t) if type t resources are pipelined, and h(i, j, t) = ⌈k/r⌉ D(t) otherwise.
Lemma: A lower bound on the completion time of any schedule of the given DFG, τ, is
given by τ = max (t, i+j ≤ c) {h(i, j, t) + i + j}, where c is the critical path length.
Proof: The above discussion implies that i + h(i, j, t) + j is a lower bound for a given i, j and
type t. The expression for τ is the best lower bound among all i, j and t. The condition i + j ≤ c
makes sure that the intervals are non-overlapping. 2
Figure 1. An example for the lower-bound estimation for the entire DFG (the ordered pair next to each node shows its ASAP and MSAT values)
Figure
1 shows an example for lower-bound computation. The ordered pair next to each node
indicates its ASAP and MSAT values respectively. It is assumed that addition takes one time step
and multiplication takes two. There are one adder and one non-pipelined multiplier. All values of
σ A and σ M for multiplication and addition are shown in the figure. For example, the multiplication
operations 1 and 2 each have an ASAP value less than or equal to 2; hence, σ A (2, ×) is 2. Similarly,
the addition operations 6, 7, 8 and 9 each have an MSAT value less than 2; hence, σ M (2, +) is 4. For
the example DFG, the maximum value for τ is obtained when i = 0 and j = 2.
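A direct (unoptimized) rendering of the bound of the lemma above could look as follows (Python; the argument encoding is an assumption, and the triple loop is written for clarity rather than for the O(n + c 2 ) complexity derived next):

import math

def dfg_lower_bound(ops, typ, asap, msat, delay, count, latency=None):
    # typ[v]: resource type of v; delay[t] = D(t); count[t] = N(t);
    # latency[t] = delta_t for pipelined types (absent otherwise).
    c = max(asap[v] + delay[typ[v]] - 1 for v in ops)   # critical path
    best = c
    for t in set(typ[v] for v in ops):
        members = [v for v in ops if typ[v] == t]
        for i in range(c + 1):
            for j in range(c + 1 - i):                  # enforce i + j <= c
                sigma_a = sum(1 for v in members if asap[v] <= i)
                sigma_m = sum(1 for v in members if msat[v] < j)
                q = len(members) - sigma_a - sigma_m    # q(i, j, t)
                if q <= 0:
                    continue
                stages = math.ceil(q / count[t])
                if latency and t in latency:            # pipelined resource
                    h = (stages - 1) * latency[t] + delay[t]
                else:
                    h = stages * delay[t]
                best = max(best, i + h + j)             # tau = max over i, j, t
    return best

Precomputing the tables σ A (i, t) and σ M (j, t) once, as in the complexity analysis that follows, removes the per-interval scans.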
Complexity analysis :
The ASAP values can be computed in a top-down fashion starting from the input
operations as follows. If v is an input operation, ASAP(v) = 1. Otherwise,
ASAP(v) = max u {ASAP(u) + d u }, where the maximum is taken over the immediate predecessors u
of v. The MSAT values are similarly computed in a
bottom-up fashion starting from the output operations as follows. If v is an output operation,
MSAT(v) = 0. Otherwise, MSAT(v) = max u {MSAT(u) + d u }, where the maximum is taken over the
immediate successors u of v. Let n be the number
of operations in the DFG. The number of edges in a DFG will grow linearly in n since the number
of inputs of each operation is generally bounded by a small number such as two. Hence, the ASAP
and MSAT values can be found in O(n) time. The number of resource types is generally bounded
by a small number. For any i and t, σ A (i, t) and σ M (i, t) can be found recursively as follows:
σ A (i, t) = σ A (i − 1, t) + A(i, t), where A(i, t) is the number of type t operations with ASAP value i.
Similarly, σ M (i, t) = σ M (i − 1, t) + M(i − 1, t), where M(i, t) is the number of type t operations
with MSAT value i; σ M (1, t) is the number of type t operations with MSAT value 0. The values for
A and M can be found during the computation of ASAP
and MSAT values without affecting its complexity. Hence, computing σ A and σ M takes O(c) time,
where c is the critical path length. Finally, computing the lower bound using these values takes O(c 2 )
time because there are O(c 2 ) intervals. Thus, the complexity of our algorithm to estimate the lower
bound of the entire DFG is O(n + c 2 ).
The method in [20] is similar to our method in that it estimates the length of each interval
of time steps. It computes the required computation cycles of each type as the sum of minimum
overlaps of all operations of that type in each interval. Then the difference in the required and
available computation cycles of each type is divided by the number of available functional units of
that type to get any increase in the length of that interval. In their method, for each interval the
minimum overlap of each operation has to be determined. Hence, it has a complexity of O(nc 2 ).
Our method computes only the number of operations in each interval in constant time using the
precomputed data structures σ A and σ M , thus having a complexity of only O(n + c 2 ).
5 Estimating a lower bound for a partially scheduled DFG
Scheduling algorithms such as branch and bound methods need to compute lower bounds on the
completion times from a large number of partially scheduled DFGs. The methods in [20] or [17]
are proposed for the purpose of estimating the lower bound for the entire graph. If those methods
are used for estimating lower bounds for partial schedules, the time spent in estimation itself may
be so high that the advantage of estimation is nullified. They take O(nc 2 ) and O(n 2 c) time, respectively,
to compute the lower bound for each partial schedule. Our method in the previous section also takes
O(n + c 2 ) time. In this section, we present an extension to that method such that the lower-bound from
a partial schedule can be computed more efficiently. Our method takes O(k) time, where k is the
number of ready and unfinished operations (defined later in this section) at the partial schedule.
In the rest of the paper, we call a partial schedule a configuration. If a configuration R is
the result of scheduling r time steps, any unscheduled operation at R can only be scheduled at a
time step greater than r. We call r the depth of R denoted by depth(R). Let f(v) denote the
time step at which the operation v is scheduled to start execution. An operation x is said to be
ready at R if it is not scheduled yet and if all its predecessors are scheduled and finished at R, i.e.
f(y) + d y − 1 ≤ depth(R) for every y such that y is a predecessor of x. Note that d y is the delay of
y. The set of ready operations at R is denoted by ready(R). A multi-cycle operation x is said to
be unfinished at R if it is scheduled to start execution at a time step less than or equal to depth(R),
but is not finished at depth(R), i.e. f(x) ≤ depth(R) ≤ f(x) + d x − 1. The set of unfinished
operations at R is denoted by unfinished(R). The number of unscheduled operations of type t at
configuration R is denoted by unsch(R; t).
The basic idea behind the estimation for the partial schedules is as follows. At a partial schedule,
a subset of operations is already scheduled so as to satisfy precedence constraints as well as resource
constraints. So, instead of considering all possible values for i and j (to divide operations into
intervals), we can consider the following special case for each resource type t. For the unscheduled
portion of the DFG, we find I = max{i | σ A (i, t) = 0} and J = max{j | σ M (j, t) = 0}. Intuitively,
I is the number of time steps after the current step before any type t unscheduled operation can
start execution. And, J is the number of time steps that any complete schedule from the current
configuration takes after the last type t operation has finished executing. The steps in computing
a lower bound from a configuration R for a resource type t are as follows.
1. Compute I and J .
2. q(I, J, t) ← unsch(R, t). (Since σ A (I, t) = σ M (J, t) = 0.)
3. Compute h(I; J; t) from q(I; J; t). (As explained in the previous section)
Since h(I, J, t) is a lower bound on the number of time steps to schedule the
remaining operations of type t, the quantity depth(R) + I + h(I, J, t) + J is a lower bound
on the completion time of any schedule from R. The maximum of these lower bounds over all
resource types t (only if there are any unscheduled operations of type t) gives a lower bound on the
completion time of any schedule that configuration R can lead to.
The only non-trivial step in computing the lower bound is the computation of I and J (Step
1). The most important merit of our algorithm is that it computes I and J in a very efficient way
as described in Figure 2. For each node u and each operation type t, α(u, t) is defined as the
minimum number of time steps that any type t successor of u can start after the starting time of
u. The value of α(u, t) is set to infinity if u has no successors of type t.
The value for I is 0 if there is a type t operation in ready(R). Otherwise, it is given by min(a, b),
where a is the minimum of α(u, t) over all u in ready(R) and b is the minimum of α(u, t) − e u over
all u in unfinished(R), where e u = depth(R) − f(u) + 1 is the number of time steps of u finished at R.
The value for J is given by min(a, b), where a is the minimum of β(u, t) over all u in ready(R) and
b is the minimum of β(u, t) over all u in W, where W is the set of operations in unfinished(R) with a type t successor.
Figure 2. Computing the values for I and J
For each node u and each operation type t, β(u, t) is defined as the minimum number of time steps that any schedule of the
given DFG is going to take after the completion of all type t successors of u. The value for β(u, t)
can be computed as min v {MSAT(v)} such that v is a type t successor of u. If u has no successors
of type t, then: if u is a type t operation, β(u, t) = MSAT(u);
otherwise, it is set to infinity. Note
that type. Therefore, the values of I and J will never be infinity.
The formulas for the computation of I and J are based on the following lemma.
Lemma: For any unscheduled operation x at configuration R, x is either a member of ready(R)
or there exists a y in (ready(R) ∪ unfinished(R)) such that y is a predecessor of x.
Proof: Let x be an unscheduled operation. If x is not in ready(R), then there is a predecessor p
of x that is not scheduled or scheduled but not finished executing. Among all such p, let q be the
farthest from x, i.e., the length of the longest path from q to x is maximum among all p.
(i) If q is scheduled but not finished executing, q is in unfinished(R).
(ii) If q is not scheduled: if q is not in ready(R), then there is a predecessor q_1 of q that is not scheduled
or is in unfinished(R). Note that q_1 is a predecessor of x also. And, q_1 is farther than q from x,
which is a contradiction. Hence, q is in ready(R). □
Figure 3. An example for the estimation of lower-bound completion time of partial schedules
Figure
3 shows an example for the estimation of lower-bound completion time of a partially
scheduled DFG. It is assumed that the scheduling of the first step is finished. There is one adder
and one multiplier both with a delay of one time-step. The lower-bound completion time is 7
time-steps. If the target performance is 6 time-steps, the lower-bound estimation suggests that the
selection of operations 2 and 3 in the first time-step is wrong.
The method in this section is especially useful for a class of scheduling algorithms that compute
the lower bound for a large number of configurations during the design space exploration. The
matrices α and β for all the operations in the DFG and all resource types are computed only once
before the design space exploration. This can be done by computing the transitive closure. For a
directed graph, the transitive closure can be computed using depth-first search in O(n(e+n)) where
e is the number of edges in the graph [18]. As already explained, e grows linearly in n in a DFG
since the number of inputs of each operation is generally bounded by a small number such as two.
Thus, α and β can be computed in O(n²) time. Note that these values for only the operations in
ready(R) and unfinished(R) are used in computing the values of I and J. Hence, a lower bound
for any configuration R can be computed in O(k) time, where k is the number of operations in
ready(R) and unfinished(R).
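The β values (and, analogously, the α values) can be precomputed with one depth-first search per node, which matches the O(n(e+n)) transitive-closure cost quoted above. The following C sketch is hypothetical: the data layout (Dfg, msat, op_type) is ours, and the special case for a type t node with no type t successor follows the reconstruction given earlier.

```c
#include <limits.h>

/* Hypothetical data layout: adjacency lists for the DFG plus per-operation
 * type and MSAT value. */
typedef struct {
    int n, num_types;
    int **succ;        /* succ[u] = array of direct successors of u */
    int  *num_succ;    /* number of direct successors of u          */
    int  *op_type;     /* operation type of each node               */
    int  *msat;        /* MSAT value of each node                   */
} Dfg;

static void visit(const Dfg *g, int u, int root, int *seen, int **beta)
{
    for (int i = 0; i < g->num_succ[u]; i++) {
        int v = g->succ[u][i];
        if (seen[v]) continue;
        seen[v] = 1;
        int t = g->op_type[v];
        if (g->msat[v] < beta[root][t])
            beta[root][t] = g->msat[v];       /* beta(root,t) = min MSAT(v) */
        visit(g, v, root, seen, beta);
    }
}

/* One DFS per node: O(n(e+n)) overall. */
void compute_beta(const Dfg *g, int **beta, int *seen)
{
    for (int u = 0; u < g->n; u++) {
        for (int t = 0; t < g->num_types; t++) beta[u][t] = INT_MAX; /* "infinity" */
        for (int v = 0; v < g->n; v++) seen[v] = 0;
        visit(g, u, u, seen, beta);
        /* Special case as reconstructed above: a type-t node with no
         * type-t successor uses its own MSAT value. */
        if (beta[u][g->op_type[u]] == INT_MAX)
            beta[u][g->op_type[u]] = g->msat[u];
    }
}
```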
Another major advantage of our method is that it introduces very little memory overhead. The
only overhead is to store the matrices α and β. When there are a large number of partial schedules,
the memory requirement is dominated by the amount of information stored at each configuration.
Any scheduling algorithm taking full advantage of our lower-bound estimation needs to store very
little information at each configuration.
6 Extensions
6.1 Conditional Branches
We use the same approach of dividing each type of operations into three non-overlapping intervals.
As explained in section 4, the lengths of the first and the third intervals are independent of resource
constraints. The length of the second interval is a function of the total resource requirement of
the operations that should be scheduled in that interval. If there are no conditional branches, the
resource requirement is equal to the number of operations. In the presence of conditional branches,
however, more than one operation can share one resource in the same time-step. Effectively, an
operation requires only a fraction of the resource. If an operation can share resources with at most
/* Computes weights of all operations in the CDFG */
{
Partition all the operations into conditional blocks
For each operation x in the CDFG {
b ← block of x
n ← number of blocks that have a type t operation and are mutually exclusive with the block b
weight(x) ← 1/(n + 1)
}
}

Figure 4. Outline of the procedure to compute the weights of operations
n other operations in the same time-step, its minimum resource requirement is 1/(n + 1). We refer
to this quantity as the weight of that operation. For any given resource type t, the minimum total
resource requirement in an interval can be computed as the sum of weights of all type t operations
in that interval. Given the weights of individual operations, the computation of the sum of weights
of operations in each interval is similar to the computation of the number of operations with no
impact on the complexity. For partial schedules, we use the sum of weights of the unscheduled
operations of each type t in place of unsch(R; t) at each configuration R. Thus, the only increase
in complexity with our extension to conditional branches is due to the computation of the weights
of operations.
Figure
4 shows an outline of the procedure to compute the weights of individual operations.
We partition all the operations into blocks such that all the operations with the same conditional
behavior are placed into the same block. Since all the operations in a block have the same control
behavior, the concept of mutual exclusiveness between operations can be easily extended to blocks.
If x is a type t operation in the block b, then at any given control step x can share a resource with
at most one other type t operation from any other block that is mutually exclusive with b. Hence,
if there are n blocks that are mutually exclusive with b and that have a type t operation, then the
weight of x is 1/(n + 1).
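A small C sketch of this weight rule is given below; block_of, mutually_exclusive and has_type_t are assumed helpers over the conditional-block partition, not routines from the paper.

```c
/* Hypothetical sketch of the weight computation outlined in Figure 4. */
double op_weight(int x, int t, int num_blocks, const int *block_of,
                 int (*mutually_exclusive)(int, int),
                 int (*has_type_t)(int, int))
{
    int b = block_of[x];
    int n = 0;
    for (int b2 = 0; b2 < num_blocks; b2++)
        if (b2 != b && mutually_exclusive(b, b2) && has_type_t(b2, t))
            n++;                      /* blocks x may share a resource with */
    return 1.0 / (n + 1);             /* minimum resource requirement of x  */
}
```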
The method in [19] to handle conditional branches is an extension of [20]. In their approach, for
each interval the operations from only one conditional path are considered so as to maximize the
minimum resource requirement in that interval. Since the conditional path analysis is performed for
each interval, their method is very slow. When used in scheduling algorithms for partial schedules,
the actual time spent in estimation itself can outweigh the advantage of the resultant pruning. In
comparison, our method computes the weights of operations only once and the complexity of the
remaining steps remains unchanged. Their method is based on distribute-join representation of
CDFGs which is a C-select implementation. In C-select implementation, the operations in conditional
branches cannot be executed until the corresponding condition is resolved. Many scheduling
algorithms in the recent literature allow execution of branch operations before the corresponding
conditional [24, 23, 26, 27]. This is known as speculative execution and is shown to produce faster
schedules on many benchmarks [27]. Our estimation method can support both C-select implementation
and speculative execution. In C-select implementation, the control precedences are treated
the same way as data dependencies are considered in computing ASAP and MSAT values of op-
erations. In speculative execution, control dependencies are ignored while computing ASAP and
MSAT values.
6.2 Chaining
Chaining of operations is handled by dividing time steps into time-units and extending the definitions
of MSAT and ASAP values in terms of time-units. Let the length of each time-step be T
time-units. Let δ_v denote the delay of an operation v in terms of time-units. If two operations u and
v are chained, the functional unit executing u cannot be freed until v is finished [19]. Therefore, if
v spans across time-steps, this may result in under-utilization of resources. To avoid this, we follow
the same assumption as in [19] that an operation v can be chained at the end of operation u only if
there is enough time for v to be finished in the same time step in which u has finished execution.
This condition is imposed by checking that δ_u mod T ≠ 0 and that (δ_u mod T) + δ_v ≤ T.
Let A(v) and M(v) be the ASAP and MSAT values of an operation in terms of time-units. The
A and M values can be recursively computed similar to the computation of ASAP and MSAT . For
any (u, v) ∈ E, the earliest time-unit at which the execution result of u is available for v is A(u) + δ_u.
However, if v cannot be chained to u, v can start execution only at the beginning of the next time
step. Hence, A(v) is the maximum, over all predecessors u of v, of A(u) + δ_u when v can be chained
to u, and of the beginning of the next time step after A(u) + δ_u when v cannot be chained to u.
Similarly, M(v) is the maximum, over all successors w of v, of the corresponding expression in terms
of M(w) and δ_w, where the case distinction is whether w can be chained to v.
From the A and M values, the corresponding ASAP and MSAT values are derived. The lower-bound
is then computed using the ASAP and MSAT values as explained in section 4.
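The chaining test and the resulting earliest-start rule can be sketched as the following C helpers. The mod-T condition is the one reconstructed above and should be read as an assumption; the rounding to the next time step follows directly from the text.

```c
/* Hypothetical helpers for the chaining rules, in time-units.  T is the
 * time-step length; delta_u, delta_v are operation delays. */
int can_chain(int delta_u, int delta_v, int T)
{
    int rem = delta_u % T;                  /* where u ends inside a step  */
    return rem != 0 && rem + delta_v <= T;  /* v must finish in that step  */
}

/* Earliest start of v (in time-units) imposed by a predecessor u. */
int earliest_start_from(int A_u, int delta_u, int delta_v, int T)
{
    int avail = A_u + delta_u;              /* result of u becomes available */
    if (can_chain(delta_u, delta_v, T))
        return avail;                       /* chained: start immediately    */
    return ((avail + T - 1) / T) * T;       /* else: start of next time step */
}
```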
7 Experimental Results
We implemented our methods in C language on a SUN Sparc-2 workstation. We tested them using
a number of benchmarks in the literature. The benchmarks we used are the AR Filter [6], the fifth-order
elliptic Wave Filter [13], twice unfolded Wave Filter, the complex Biquad recursive digital
Filter [13], the sixth-order elliptic Bandpass Filter [13], Discrete Cosine Transform [11], and Fast
Discrete Cosine Transform [9]. For the Biquad Filter example, we used three time steps for the
multiplier and one for the adder (we used the same resource type, adder, to do addition, subtraction
and comparison). For all the other examples, we used two time steps for multiplication and one for
addition.
7.1 Lower bound estimation of partial schedules in branch and bound methods
As mentioned in section 1 , branch and bound scheduling methods rely on estimating lower bounds
for partial schedules to keep the design space from exploding. Generally, lower bounds need to
be estimated for a large number of partial schedules (configurations). If the time spent in lower-bound
estimation itself is too high, it will have a big negative impact on the over-all time taken
by the scheduling algorithm. We implemented a branch and bound scheduling algorithm [3] and
tested on the benchmarks. We first find a schedule using a list scheduling algorithm. We use that
performance as an upper bound in the branch and bound algorithm and search the design space
exhaustively for an optimum schedule. From each partial schedule, we estimate a lower bound for
the schedule completion time. If it exceeds the upper-bound, the partial schedule cannot lead to a
complete schedule with the target performance and it is not explored further.
We separately measured the time spent in lower-bound estimation using our method of section 5
and Rim's method [17]. The results are reported in table 1. Our method is at least 20 times faster
in all cases. As a measure of the effectiveness of the lower-bound estimation in reducing the size
of the search space, we also measured the number of configurations visited using each method
separately. These results are also reported in table 1. Both the methods are equally effective. Since
their estimation is very slow, the CPU time taken using their method is more than the time taken
without using any lower-bound estimation in a few cases. However, in a majority of the cases, the
search space exploded without lower-bound estimation, thus showing the necessity of estimation in
                      Resources   CPU time                Configurations
                                  Our method   Rim [17]   Our method   Rim [17]
AR Filter             2 3         0.35         8.2        1769         3497
Once unrolled         2 3         0.13         2.9        770          723
Twice unrolled
Filter                2 3         4.0          81.2       33182        27767
Filter
Fast Discrete         1 1         0.14         3.1        688          892

Table 1. CPU time and number of configurations in branch and bound algorithm
branch and bound scheduling algorithms. Our method is more suitable than the existing methods
to be used in such scheduling algorithms. We incorporated this method in a dynamic programming
scheduling (DPS) algorithm [1] that we developed and obtained excellent results.
7.2 Lower-bound estimation for the entire DFGs
We tested all the benchmarks with different resource constraints for pipelined multiplier (latency
is 1) and non-pipelined multiplier. For all the cases, our lower bound is compared with an optimal
solution we obtained using our DPS algorithm [1]. In tables 2 and 3, we present the lower bounds
for some of the cases as obtained by our method of section 4 . The column DPS in the tables shows
the number of steps in an optimal solution. The lower bound is tight in 156 out of 198 cases. In 22
more cases, the difference is only one step. We also implemented the algorithms by Rim [17] and
Sharma [20], and compared the results with ours. Our method gives better lower bounds than [17]
in nine cases. In five cases, our lower bounds are better than [20]. In [17], the lower bounds by one
more method, Jain [7], are also reported. Those are copied into the second-to-last column (Jain) of our
                              Resources   DPS (optimum solution)   Our lower bound   Difference   Rim [17]   Jain [7]   Sharma [20]
AR Filter                     1 3
Twice unrolled Wave Filter    3 2         50                       50                0            50
Fast
(*) Complex multiplication takes 3 time steps
(†) Our lower bound for this case is better than Rim's
(‡) Our lower bound for this case is better than Sharma's

Table 2. Lower bounds with non-pipelined multiplier
tables. For the benchmarks and cases not reported in our tables, our lower bounds are identical to
Rim's [17]. The average CPU times are 21 ms, 25 ms and 270 ms for our method, Rim's method
and Sharma's method respectively. Thus, our method is faster than the fastest non-trivial method
in the literature [17] and produces better lower bounds in more cases. Our method is one order of
magnitude faster than the method in [20] and still produces better results in some cases. The lower bounds
of [7] are far inferior to ours.
                                 Resources   DPS (optimum solution)   Our lower bound   Difference   Rim [17]   Jain [7]   Sharma [20]
Fast Discrete Cosine Transform   1 1         26                       26                0            26         -          26
(*) Complex multiplication takes 3 time steps

Table 3. Lower bounds with pipelined multiplier
7.3 Results for CDFGs
Table
4 shows the results for examples with conditional behavior - Maha from [14], Parker from
the High Level Synthesis Benchmark Suite, Kim from [25], Waka from [22] and MulT from [23]. The
resources column lists the number of adders, subtracters and comparators used in each case. All
additions, subtractions and comparisons are single-cycle. We present the number of time steps in
the schedules obtained by our DPS algorithm. The lower bounds in all but a few cases are tight. We
obtained schedules both with C-select implementation and by allowing speculative execution. In
C-select implementation, operations from mutually exclusive branches can always share resources.
However, since control precedences are strict precedences, critical path length may increase. In
table 4, Maha and Parker are two examples with a high degree of branching. In the C-select imple-
mentation, the advantage of conditional resource sharing is nullified by the increase in the critical
path length. The length of the schedules could not be reduced even by adding more resources. In
comparison, speculative execution gives much superior results and adding more resources reduces
schedule lengths.
Benchmark   Resources   # Time Steps: C-select   # Time Steps: Spec. Exec.
Maha        1,1,1       11                       †7
Maha        2,1,1       11                       †6
Maha        2,2,2       11                       5
Parker      1,1,1       11                       †6
Parker      2,2,1       11                       5
Parker      2,2,2       11                       5
† For these cases, the lower bound is one step less. All other lower bounds are tight.
Table
4. C-select and Speculative execution in Conditional Branch benchmarks
8 Conclusions and future research
We have presented simple and efficient techniques for estimating lower-bound completion time for
the scheduling problem. The proposed techniques can handle multi-cycle operations, pipelined
functional units, conditional branches and chaining of operations. Our method for the entire DFGs
is faster and produces better lower bounds than [17] and [20].
We have also presented an extension to our technique that is especially suitable for finding lower-bound
for partially scheduled DFGs. The extended method is very useful to keep the search space
from exploding in scheduling algorithms such as the branch and bound method. Existing methods in the
literature do not give any special consideration for computing the lower bounds for partial schedules.
We conducted extensive experiments using our method and the fastest non-trivial method known
in the literature [17] for the estimation of partial schedules in a branch and bound algorithm. Our
method is found to be at least 20 times faster than theirs while being equally effective in reducing
the size of the search space.
We are currently investigating estimation of lower bounds in the presence of loops and when
multi-function functional units are used. We are also investigating estimation of lower bounds with
additional constraints such as interconnect and storage.
--R
"Optimum Dynamic Programming Scheduling under Resource Constraints"
"A Multi-Schedule Approach to High-Level Synthesis"
"Some Experiments in Local Microcode Compaction for Horizontal Machines"
"Bounds on the Number of processors and Time for Multi-processor Optimal Schedules"
"Lower Bounds on the Iteration Time and the Number of Resources for Functional Pipelined Data Flow Graphs"
"Experience with the ADAM Synthesis System"
"Predicting system-level area and delay for pipelined and non-pipelined designs"
"A Recursive Technique for Computing Lower-Bound Performance of Schedules"
"A new approach to pipeline optimization"
"Comprehensive Lower Bound Estimation from Behavioral Descriptions"
"Personal Communication"
"Slicer: A State Synthesizer for Intelligent Silicon Compiler"
"A High Level Synthesis Technique Based on Linear Pro- gramming"
"MAHA: A Program for Datapath Synthesis"
"SEHWA: A Software Package for Synthesis of Pipelines for Synthesis of Pipelines from Behavioral Specifications"
"Lower-Bound Performance Estimation for the High-Level Synthesis Scheduling Problem"
"Algorithms in C"
"Estimation and Design Algorithms for the Behavioral Synthesis of ASICS"
"Estimating Architectural Resources and Performance for High-Level Synthesis Applications"
"Estimating Implementation Bounds for Real Time DSP Application Specific Circuits"
"A resource sharing and control synthesis method for conditional branches"
"Global Scheduling Independent of Control Dependencies Based on Condition Vectors"
"A Tree-Based Scheduling Algorithm For Control-Dominated Circuits"
"A Scheduling Algorithm For Conditional Resource Sharing"
"Global Scheduling For High-Level Synthesis Applications"
"A New Symbolic Technique for Control-Dependent Scheduling"
--TR
Experience with ADAM synthesis system
Algorithms in C
Global scheduling independent of control dependencies based on condition vectors
A tree-based scheduling algorithm for control-dominated circuits
Comprehensive lower bound estimation from behavioral descriptions
Global scheduling for high-level synthesis applications
MAHA
A Multi-Schedule Approach to High-Level Synthesis
Estimation and design algorithms for the behavioral synthesis of asics
A new approach to pipeline optimisation
--CTR
Shen Zhaoxuan , Jong Ching Chuen, Lower bound estimation of hardware resources for scheduling in high-level synthesis, Journal of Computer Science and Technology, v.17 n.6, p.718-730, November 2002
Helvio P. Peixoto , Margarida F. Jacome, A new technique for estimating lower bounds on latency for high level synthesis, Proceedings of the 10th Great Lakes symposium on VLSI, p.129-132, March 02-04, 2000, Chicago, Illinois, United States
Margarida F. Jacome , Gustavo de Veciana, Lower bound on latency for VLIW ASIP datapaths, Proceedings of the 1999 IEEE/ACM international conference on Computer-aided design, p.261-269, November 07-11, 1999, San Jose, California, United States
Margarida F. Jacome , Gustavo de Veciana, Lower bound on latency for VLIW ASIP datapaths, Readings in hardware/software co-design, Kluwer Academic Publishers, Norwell, MA, 2001
Margarida F. Jacome , Gustavo de Veciana , Viktor Lapinskii, Exploring performance tradeoffs for clustered VLIW ASIPs, Proceedings of the 2000 IEEE/ACM international conference on Computer-aided design, November 05-09, 2000, San Jose, California | high-level synthesis;lower-bound estimated;dynamic programming;scheduling |
291058 | Value speculation scheduling for high performance processors. | Recent research in value prediction shows a surprising amount of predictability for the values produced by register-writing instructions. Several hardware based value predictor designs have been proposed to exploit this predictability by eliminating flow dependencies for highly predictable values. This paper proposes a hardware and software based scheme for value speculation scheduling (VSS). Static VLIW scheduling techniques are used to speculate value dependent instructions by scheduling them above the instructions whose results they are dependent on. Prediction hardware is used to provide value predictions for allowing the execution of speculated instructions to continue. In the case of miss-predicted values, control flow is redirected to patch-up code so that execution can proceed with the correct results. In this paper, experiments in VSS for load operations in the SPECint95 benchmarks are performed. Speedups of up to 17% are shown when using VSS. Empirical results on the value predictability of loads, based on value profiling data, are also provided. | INTRODUCTION
Modern microprocessors extract instruction level
parallelism (ILP) by using branch prediction to break
control dependencies and by using dynamic memory
disambiguation to resolve memory dependencies [1].
However, current techniques for extracting ILP are still
insufficient. Recent research has focused on value
prediction hardware for dynamically eliminating flow
dependencies (also called true dependencies) [2], [3], [4],
[6], [7], [8], [9]. Results have shown that values produced
by register-writing instructions are potentially highly
predictable using various value predictors: last-value,
stride, context-based, two-level, or hybrid predictors. This
work illustrates that value speculation in future high
performance processors will be useful for breaking flow
dependencies, thereby exposing more ILP. This paper
examines ISA, hardware and compiler synergies for
exploiting value speculation. Results indicate that this
synergy enhances performance on difficult, integer
benchmarks.
Prior work in value speculation utilizes hardware-only
schemes (e.g. [2], [3]). In these schemes, the instruction
address (PC) of a register-writing instruction is sent to a
value predictor to index a prediction table at the beginning
of the fetch stage. The prediction is generated during the
fetch and dispatch stages, then forwarded to dependent
instructions prior to their execution stages. A value-
speculative dependent instruction must remain in a
reservation station (even while its own execution
continues), and be prevented from retiring, until verification
of its predicted value. The predicted value is compared
with the actual result at the state-update stage. If the
prediction is correct, dependent instructions can then
release reservation stations, update system states, and retire.
If the predicted value is incorrect, dependent instructions
need to re-execute with the correct value. Figure 1
Figure
1. Pipeline Stages of Hardware Value Speculation
Mechanism for Flow Dependent Instructions. The dependent
instruction executes with the predicted value in the same cycle as
the predicted instruction.
illustrates the pipeline stages for value speculation utilizing
a hardware scheme.
Little work has been done on software-based schemes to
perform value prediction and value speculation of
dependent instructions. In a related approach to a different
problem, the memory conflict buffer [1] was presented to
dynamically disambiguate memory dependencies. This
allows the compiler to speculatively schedule memory
references above other, possibly dependent, memory
instructions. Patch-up code, generated by the compiler,
ensures correct program execution even when the memory
dependencies actually occur. Speculatively scheduled
memory references improves performance by aggressively
scheduling references that are highly likely to be
independent of each other. Likewise, value-speculative
scheduling attempts to improve performance by
aggressively scheduling flow dependencies that are highly
likely to be eliminated through value prediction. Patch-up
code is used when values are miss-predicted. We apply this
scheme to value speculation and propose a combined
hardware and software solution, which we call value
speculation scheduling (VSS).
Hardware pipeline stages for the VSS scheme are shown in
Figure
2. Two new instructions, LDPRED and UDPRED,
are introduced to interface with the value predictor during
the execution stage. LDPRED loads the predicted value
generated by the predictor into a specified general-purpose
register. UDPRED updates the value predictor with the
actual result, resetting the device for future predictions after
a miss-prediction. Figure 3 shows an example of using
LDPRED and UDPRED to perform VSS.
In the original code sequence of Figure 3(a), instructions I1
to I6 form a long flow dependence chain, which must
execute sequentially. If the flow dependence from
Figure
2. Pipeline Stages of Value Speculation Scheduling
Scheme. Two new instructions, LDPRED and UDPRED,
interface with the value predictor during the execution stage.
(a) Original code
I3: LW R4 ← 0(R3)
I4: ADD R5 ← R4, 1
I5:
Next: .
(b) New code after value speculation of R4 (predicted instruction I3)
I3: LW R4 ← 0(R3)
I7: LDPRED R8 ← index // load prediction into R8
I5':
I8: BNE Patchup R8, R4 // verify prediction
Next: .
Patchup:
I9: UDPRED R4, index // update predictor with R4
I4: ADD R5 ← R4, 1
I5:
I10: JMP Next
Figure
3: Example of Value Speculation Scheduling.
instruction I3 to I4 is broken, via VSS, the dependence
height of the resulting dependence chain is shortened.
Furthermore, ILP is exposed by the resulting data
dependence graph. Figure 4 shows the data dependence
graphs for the code sequence of Figure 3 before and after
breaking the flow dependence from instruction I3 to I4.
Assume that the latencies of arithmetic, logical, branch,
store, LDPRED and UDPRED instructions are 1 cycle, and
that the latency of load instructions is 2 cycles. Then, the
schedule length of the original code sequence of Figure
4(a), instructions I1 to I6, is seven cycles. By breaking the
flow dependence from instruction I3 to I4, VSS results in a
schedule length of five cycles. Figure 4(b) illustrates the
schedule now possible due to reduced overall dependence
height and ILP exposed in the new data dependence graph.
This improved schedule length, from seven cycles to five
cycles, does not consider the penalty associated with miss-
prediction due to the required execution of patch-up code.
The impact of patch-up code on performance will be
discussed in section 3.
Figure
4. Data Dependence Graphs for Codes of Figure 3. The
numbers along each edge represent the latency of each instruction.
In 4(a), the schedule length is seven cycles. In 4(b), because of
exposed ILP and dependence height reduction, the schedule
length is reduced to five cycles.
In
Figure
3(b), the value speculation scheduler breaks the
flow dependence from instruction I3 to I4. Instructions I4,
I5 and I6 now form a separate dependence chain, allowing
their execution to be speculated during scheduling. They
become instructions I4' I5' and I6', respectively. An
operand of instruction I4' is modified from R4 to R8.
Register R8 contains the value prediction for destination
register R4 of the predicted instruction I3.
Instruction I7, LDPRED, loads the value prediction for
instruction I3 into register R8. When the prediction is
incorrect (R8-R4), instruction I9, UDPRED, updates the
value predictor with the actual result of the predicted
instruction, from register R4. Note that the resulting
UDPRED instruction is part of patch-up code and its
execution is only required when a value is miss-predicted.
To ensure correct program execution, the compiler inserts
the branch instruction, I8, after the store instruction, I6', to
branch to the patch-up code when the predicted value does
not equal the actual value. The patch-up code contains
UDPRED and the original dependent instructions, I4, I5
and I6. After executing patch-up code, the program jumps
to the next instruction after I8 and execution proceeds as
normal.
Each LDPRED and UDPRED instruction pair that
corresponds to the same value prediction uses the same
table entry index into the value predictor. Each index is
assigned by the compiler to avoid unnecessary conflicts
inside the value predictor. While the number of table
entries is limited, possible conflicts are deterministic and
can be factored into choosing which values to predict in a
compiler approach. A value predictor design, featuring the
new LDPRED and UDPRED instructions, will be described
in section 2.
By combining hardware and compiler techniques, the
strengths of both dynamic and static techniques for
exploiting ILP can be leveraged. We see several possible
advantages to VSS:
. Static scheduling provides a larger scheduling scope
for exploiting ILP transformations, identifying long
dependence chains suitable for value prediction and
then re-ordering code aggressively.
. Value-speculative dependent instructions can execute
as early as possible, before the predicted instruction on which
they depend.
. The compiler controls the number of predicted values
and assigns different indices to them for accessing the
prediction table. Only instructions that the compiler
deems are good candidates for predictions are then
predicted, reducing conflicts for the hardware.
. Patch-up code is automatically generated, reducing the
need for elaborate hardware recovery techniques.
. Instead of relying on statically predicted values (e.g.,
from profile data), LDPRED and UDPRED access
dynamic prediction hardware for enhanced prediction
accuracy.
. VSS can be applied to dynamically-scheduled
processors, statically-scheduled (VLIW)
processors, or EPIC (explicitly parallel instruction
computing) processors [14].
There is a drawback to VSS. Because static scheduling
techniques are employed, value-speculative instructions are
committed to be speculative and therefore always require
predicted values. Hardware only schemes can dynamically
decide when it is appropriate to speculatively execute
instructions. The dynamic decision is based on the value
predictor's confidence in the predicted value, avoiding
miss-prediction penalty for low confidence predictions.
The remainder of this paper is organized as follows: Section
2 examines the value predictor design for value speculation
scheduling. Section 3 introduces the VSS algorithm.
Section 4 presents experimental results of VSS. Section 5
concludes the paper and mentions future work.
2. VALUE PREDICTOR DESIGN
Microarchitectural support for value speculation scheduling
(VSS) is in the form of special-purpose value predictor
hardware. Value prediction accuracy directly relates to
performance improvements for VSS. Various value
predictors, such as last-value, stride, context-based, two-
level, and hybrid predictors [2], [3], [4], [6], [7], [9],
provide different prediction accuracy. Value predictors
with the most design complexity, in general, provide for the
highest prediction accuracy. In order to feature LDPRED
and UDPRED instructions for VSS, previously proposed
value predictors must be re-designed slightly.
Figure
5 shows the block diagram of a value predictor that
includes LDPRED and UDPRED instructions. In this value
predictor, there are three fundamental units, the current
state block, the old state block and the prediction hardware
block. The current state block may contain register values,
finite state machines, history information, or machine flags,
depending on the prediction method employed. The old
state block hardware is a duplicate of the current state block
hardware. Predictions are generated by the prediction
hardware with input from the current state block. Various
prediction mechanisms can be used. For example,
generating the prediction as the last value (last value
predictors [2], [3]). Or, generating the prediction as the
sum of the last value and the stride, which is the difference
between the most recent last values (stride predictors [4],
[6], [7], [9]). Also, two-level predictors [7] allow for the
prediction of recently computed values. For two-level
predictors, a value history pattern indexes a pattern history
table, which in turn is used to index a value prediction from
recently computed values. Two-level value prediction
hardware is based on two-level branch prediction hardware.
Figure
5. Block Diagram of Value Predictor featuring
LDPRED and UDPRED.
Both the LDPRED and UDPRED instructions contain an
immediate operand that specifies the value predictor table
index. In general (independent of the prediction hardware
chosen) the LDPRED instruction performs three actions.
The compiler assigned number indexes each action. First,
the prediction hardware generates the predicted value by
using input from the current state block. Second, current
state information is shifted to the old state block. Last, the
current state block is updated based on the predicted value
from the prediction hardware. Information used by the
prediction hardware is updated simultaneously with the
current state block update. Note that for the LDPRED
instruction, the predicted value is used to update the current
state block speculatively.
The compiler-assigned index also selects the table entry used
by the UDPRED instruction. When the value prediction is
incorrect, the patch-up basic block of Figure 3(b) must be
executed. The execution of UDPRED instructions only
occurs in patch-up code, or only when values are miss-
predicted. The UDPRED instruction causes the update of
both the current state block and the prediction hardware
with the actual computed value and the old state block.
If the compiler can ensure that each LDPRED/UDPRED
instruction pair is executed in turn (each prediction is
verified and value predictions are not nested) the old state
block requires only one table entry. The same table entry in
the old state block is updated by every LDPRED
instruction, and used by every UDPRED instruction, in the
case of miss-prediction.
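To make the LDPRED/UDPRED semantics concrete, here is a hypothetical C sketch for a stride-based current-state table with the single old-state entry just described. The entry layout, table size and routine names are our assumptions, not the paper's implementation.

```c
#include <stdint.h>

typedef struct { int64_t value; int64_t stride; } StrideEntry;

#define PRED_TABLE_SIZE 256
static StrideEntry cur[PRED_TABLE_SIZE];  /* current state block    */
static StrideEntry old_state;             /* single old-state entry */

/* LDPRED: generate the prediction, shift current state to the old state,
 * then speculatively update the current state with the predicted value. */
int64_t ldpred(int index)
{
    StrideEntry *e = &cur[index];
    int64_t pred = e->value + e->stride;  /* stride prediction                 */
    old_state = *e;                       /* current -> old                    */
    e->value  = pred;                     /* speculative current-state update  */
    return pred;
}

/* UDPRED: executed only in patch-up code on a miss-prediction; repair the
 * entry from the actual result and the saved old state. */
void udpred(int index, int64_t actual)
{
    cur[index].stride = actual - old_state.value;
    cur[index].value  = actual;
}
```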
Figure
6. Hybrid Predictor (Stride and Two-Level). Saturating
counters are compared to select between the prediction
techniques.
In the VSS scheme, a prediction needs to be generated for
each LDPRED instruction. There is no flag in the value
predictor to indicate if a value prediction is valid or not.
The goal of the value predictor is to generate as many
correct predictions as possible. In this paper, stride, two-level
and hybrid value predictors [7] are implemented to
find the design which provides the highest prediction
accuracy for use in the VSS scheme. Stride predictors
predict arrays and loop induction variables well. Two-level
predictors capture the recurrence of recently used values
and generate predictions based on previous patterns of
values. However, neither of them alone can obtain high
prediction accuracy for all programs, which exhibit
different characteristics. Therefore, hybrid value
predictors, consisting of both stride and two-level
prediction are designed to cover both of these situations.
Figure
6 shows such a hybrid predictor that obtains high
prediction accuracy. The selection between the stride
predictor and the two-level predictor is different from that
in [7]. Every table entry has a saturating counter in the
stride predictor and in the two-level predictor. The
saturating counter increments when its corresponding
prediction is correct, and decrements when its prediction is
incorrect. Both saturating counters and predictors are
updated for each prediction, regardless of which prediction
is actually selected. The hybrid predictor selects the
predictor with the maximum saturating counter value. In
the event of a tie, the hybrid predictor favors the prediction
from the two-level predictor. Prediction accuracy results
for the three value predictors will be presented in section 4.
3. VALUE SPECULATION SCHEDULING
Performance improvement for value speculation scheduling
(VSS) is affected by prediction accuracy, the number of
saved cycles (from schedule length reduction) and the
number of penalty cycles (from execution of patch-up
code). Suppose that after breaking a flow dependence,
value-speculative dependent instructions are speculated,
saving S cycles in overall schedule length when the
prediction is correct. Patch-up code is also generated and
requires P cycles. Prediction accuracy for the speculated
value is X. In this case, speedup will be positive if S > (1 - X)P
holds. For the example of Figure 3(b), VSS saves 2
cycles (from 7 cycles to 5 cycles) and the resulting patch-up
code contains 5 instructions, requiring 3 cycles in an ILP
processor. Therefore, for positive speedup, the prediction
accuracy must be greater than 33%. If the actual prediction
accuracy is less, performance will be degraded by VSS.
With these performance considerations in mind, an
algorithm for VSS is proposed in Figure 7.
The first step is to perform value profiling. The scheduler
must select highly predictable instructions to improve
performance through VSS. Results from value profiling
under different inputs and parameters have been shown to
be strongly correlated [5], [6]. Therefore, value profiling
can be used to select highly predictable instructions on
which to perform value speculation.
Value profiling can be performed for all register-writing
instructions. If profiling overhead is a concern, a filter may
be used to perform value profiling only on select
instructions. Select instructions may be those that reside in
critical paths (long dependence height) or those that have
long latency (e.g., load instructions). In [5], estimating and
convergent profiling are proposed to reduce profiling
overhead for determining the invariance of instructions.
Similar techniques could be applied for determining the
value predictability of instructions.
Next, the value speculation scheduler performs region
formation. Treegion formation [10] is the region type
chosen for our experiments. A treegion is a non-linear
region that includes multiple execution paths in the form of
a tree of basic blocks. The larger scheduling scope of
treegions allows the scheduler to perform aggressive
control and value speculation. A data dependence graph is
then constructed for each treegion. In step four, a threshold
of prediction accuracy is used to determine whether or not
to perform value speculation on each instruction. For each
instruction, the scheduler queries the value profiling
information to get the estimate of its predictability. If the
predictability estimate is greater than the threshold, value
prediction is performed. For aggressive scheduling, more
instructions can be speculated by choosing a low threshold.
Suggested values for the threshold are derived from
experimental results in section 4.
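A minimal C sketch of this selection step is shown below; Insn, is_load() and profile_accuracy() are stand-ins for the compiler's internal data structures and the value-profiling data.

```c
/* Hypothetical sketch of step 4: keep only instructions whose profiled
 * prediction accuracy exceeds the threshold. */
typedef struct Insn Insn;
int    is_load(const Insn *i);            /* load filter used in section 4 */
double profile_accuracy(const Insn *i);   /* fraction of correct values    */

int select_candidates(Insn **region, int n, double threshold,
                      Insn **out, int max_out)
{
    int count = 0;
    for (int i = 0; i < n && count < max_out; i++) {
        if (!is_load(region[i]))
            continue;                                 /* loads only           */
        if (profile_accuracy(region[i]) > threshold)  /* e.g. 0.70            */
            out[count++] = region[i];                 /* gets LDPRED/patch-up */
    }
    return count;
}
```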
When an instruction is selected for value prediction, a
LDPRED instruction is inserted directly after it. The
LDPRED instruction has an immediate value that is
assigned by the scheduler to be its chosen index into the
value predictor. A new register is also assigned as the
destination of the LDPRED instruction. Once the new
destination register has been chosen for the LDPRED
instruction, any dependent instruction(s) need to update
their source register(s) to reflect the new dependence on the
LDPRED instruction. Only the first dependent instruction
in a chain of dependent instructions needs to update its
register source, the remaining dependencies in the chain are
1. Perform Value Profiling
2. Perform Region Formation
3. Build Data Dependence Graph for Region
4. Select Instruction with Prediction Accuracy (based on Value Profiling) greater than a Threshold
5. Insert LDPRED after Predicted Instruction (selected instruction of step 4)
6. Change Source Operand of Dependent Instruction(s) to Destination Register of LDPRED
7. Insert Branch to Patch-up Code
8. Generate Patch-up Code (which contains UDPRED)
9. Repeat Steps 4 - 8 until no more Candidates Found
10. Update Data Dependence Graph for Region
11. Perform Region Scheduling
12. Repeat Steps 2 - 11 for each Region
Figure
7. Algorithm of Value Speculation Scheduling.
unaffected. Even though more than one chain of dependent
instructions may result from just one value prediction, only
one LDPRED instruction is needed for each value
prediction.
In step seven, a branch to patch-up code is inserted for
repairing miss-predictions. Only one branch per data value
prediction is required and the scheduler determines where
this branch is inserted. Once the location of the branch is
set, all instructions in all dependence chains between the
predicted instruction and the branch to patch-up code are
candidates for value-speculative execution. It is therefore
desirable to schedule any of these instructions above the
predicted instruction. Actual hardware resources will
restrict the ability to speculatively execute these candidates
for value speculation. Also, as all candidates for value
speculation are duplicated in patch-up code, their number
directly affects the penalty for miss-prediction. These
factors affect the scheduler's decision on where to place the
branch to patch-up code.
In step eight, patch-up code is created for repairing miss-
predictions. The patch-up code contains the UDPRED
instruction, a copy of each candidate for value-speculative
execution, and an unconditional jump back to the
instruction following the branch to patch-up code. The
UDPRED instruction uses the same immediate value,
assigned by the scheduler, as its corresponding LDPRED
instruction for indexing the value predictor. The other
source operand for the UDPRED instruction is the
destination register of the predicted instruction (the actual
result of the predicted instruction). The UDPRED
instruction index and the actual result are used to update the
value predictor.
Finally, in steps ten and eleven, the data dependence graph
is updated to reflect the changes and treegion scheduling is
performed. Because of the machine resource restrictions
and dependencies, not all candidates for value speculation
are speculated above the predicted instruction. Section 4
shows the results of using different threshold values for
determining when to do value speculation.
4. EXPERIMENTAL RESULTS
The SPECint95 benchmark suite is used in the experiments.
All programs are compiled with classic optimizations by the
IMPACT compiler from the University of Illinois [11] and
converted to the Rebel textual intermediate representation
by the Elcor compiler from Hewlett-Packard Laboratories
[12]. Then, the LEGO compiler, a research compiler
developed at North Carolina State University, is used to
insert profiling code, form treegions, and schedule
instructions [10]. After instrumentation for value profiling,
intermediate code from the LEGO compiler is converted to
C code. Executing the resultant C code generates value
profiling data.
For the experiments in value speculation scheduling (VSS),
load instructions are filtered as targets for value
speculation. Load instructions are selected because they are
usually in critical paths and have long latencies. Value
profiling for load instructions is performed on all programs.
Table
1 shows the statistics from these profiling runs. The
number of total profiled load instructions represents the
total number of load instructions in each benchmark, as all
load instructions are instrumented (profiled). The number
of static load instructions represents the number of load
instructions that are actually executed. The difference
between total profiled and static load instructions is the
number of load instructions that are not visited. The
number of dynamic load instructions is the total of each
Figure
8. Prediction Accuracy of Load Instructions under Stride, Two-Level, and Hybrid Predictors.
load executed multiplied by its execution frequency.
Stride, two-level, and hybrid value predictors are simulated
during value profiling to evaluate prediction accuracy for
each load instruction. Since the goal of this paper is to
measure the performance of VSS rather than the required
capacities of the hardware buffers, no indices conflicts
between loads are modeled. An intelligent index
assignment algorithm likely will produce results similar to
this, but development of such an algorithm is outside the
Benchmark        Total Profiled Load Instructions   Static Load Instructions   Dynamic Load Instructions
129.compress     96                                 72                         4,070,431
132.ijpeg        5,104                              1,543                      118,560,271
134.perl         6,029                              1,429                      4,177,141
147.vortex       16,587                             10,395                     527,037,054

Table 1. Statistics of Total Profiled, Static and Dynamic Load Instructions.
scope of this paper and left for future work. During value
profiling, after every execution of a load instruction, the
simulated prediction is compared with the actual value to
determine prediction accuracy. The value predictor
simulators are updated with actual values, as they would be
in hardware, to prepare for the prediction of the next use.
Each entry for the stride value predictor has two fields:
the stride and the current value. The prediction is always the
current value plus the stride. The stride equals the
difference between the most recent current values. The
stride value predictor always generates a prediction. No
finite state machine hardware is required to determine if a
prediction should be used.
The two-level value predictor design is as in [7], with four
data values and six outcome value history patterns in the
value history table of the first level. The value history
patterns index the pattern history table of the second level.
The pattern history table employs four saturating counters,
used to select the most likely prediction amongst the four
data values. The saturating counters in the pattern history
table increment by three, up to twelve, and decrement by
one, down to zero. Selecting the data value with the
maximum saturating counter value always generates a
prediction.
The hybrid value predictor of stride and two-level value
predictors utilizes the previous description illustrated earlier
in
Figure
6 of section 2. In the hybrid design, the saturating
counters, used to select between stride and two-level
prediction, also increment by three, up to twelve, and
decrement by one, down to zero.
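The selection and counter-update rules just described can be sketched as follows (hypothetical C; the stride and two-level predictions themselves are assumed to be produced by the component predictors):

```c
#include <stdint.h>

/* Per-entry selection counters: +3 saturating at 12 on a correct prediction,
 * -1 with a floor of 0 otherwise; ties favour the two-level component. */
typedef struct { int stride_ctr; int twolevel_ctr; } HybridEntry;

int64_t hybrid_select(const HybridEntry *e,
                      int64_t stride_pred, int64_t twolevel_pred)
{
    return (e->stride_ctr > e->twolevel_ctr) ? stride_pred : twolevel_pred;
}

static int bump(int ctr, int correct)
{
    if (correct) { ctr += 3; if (ctr > 12) ctr = 12; }
    else         { ctr -= 1; if (ctr < 0)  ctr = 0;  }
    return ctr;
}

/* Both components are updated on every prediction, whichever was selected. */
void hybrid_update(HybridEntry *e, int64_t actual,
                   int64_t stride_pred, int64_t twolevel_pred)
{
    e->stride_ctr   = bump(e->stride_ctr,   stride_pred   == actual);
    e->twolevel_ctr = bump(e->twolevel_ctr, twolevel_pred == actual);
}
```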
Figure
8 shows the prediction accuracy of load instructions
under stride, two-level, and hybrid predictors. The
prediction accuracy of the two-level predictor is higher than
that of the stride predictor for all benchmarks except
129.compress and 132.ijpeg. However, the average
prediction accuracy for the stride predictor is higher than
that for the two-level predictor because of the large
performance difference in 129.compress. Examining the
value trace for 129.compress shows many long stride
sequences that are not predicted correctly by the history-based
two-level predictor. The hybrid predictor, capable of
leveraging the advantages of each prediction method, has
the highest prediction accuracy, at 63% on average across
all benchmarks.
Figures 9 and 10 show the prediction accuracy distribution for
load instructions using the hybrid predictor. Figure 9 is the
distribution for static loads and Figure 10 is the distribution
for dynamic loads. For 124.m88ksim, 90% of dynamic
load instructions have prediction accuracy of at least 90%. For
129.compress, 80% of dynamic load instructions have
prediction accuracy of at least 90%. For 124.m88ksim, 45% of the
static loads have prediction accuracy of at least 90%, representing
most of the dynamic load instructions. For 129.compress,
70% of the static loads have prediction accuracy of at least 90%.
These loads are excellent candidates for VSS. Such high
prediction accuracy results in low overhead due to the
execution of patch-up code. However, for benchmarks
099.go and 132.ijpeg respectively, only 15% and 25% of
Figure
9. Prediction Accuracy Distribution for Static Load
Instructions Using Hybrid Predictor.
Figure
10. Prediction Accuracy Distribution for Dynamic
Load Instructions Using Hybrid Predictor.
dynamic load instructions have prediction accuracy above
50%. Therefore, they will not gain much performance
benefit from VSS.
The VSS algorithm of Figure 7 is performed on the
programs of SPECint95. Prediction accuracy threshold
values of 90%, 80%, 70%, 60% and 50% are evaluated.
The number of candidates for value-speculative execution
is limited to three for each value prediction. This parameter
was varied in our evaluation, with the value of three
providing good results.
For the evaluation of speedup, a very long instruction word
architecture machine model based on the Hewlett-Packard
Laboratories PlayDoh architecture [13] is chosen.
One cycle latencies are assumed for all operations
(including LDPRED and UDPRED) except for load (two
cycles), floating-point add (two cycles), floating-point
subtract (two cycles), floating-point multiply (three cycles)
and floating-point divide (three cycles). The LEGO
compiler statically schedules the programs of SPECint95.
The scheduler uses treegion formation [10] to increase the
scheduling scope by including a tree-like structure of basic
blocks in a single, non-linear region. The compiler
performs control speculation, which allows operations to be
scheduled above branches. Universal functional units that
execute all operation types are assumed. An eight universal
unit (8-U) machine model is used. All functional units are
fully pipelined, with an integer latency of 1 cycle and a load
latency of 2 cycles. Program execution time is measured by
using the schedule length of each region and its execution
profile weight. The effects of instruction and data cache are
ignored, and perfect branch prediction is assumed in an
effort to determine the maximum potential benefits of VSS.
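As a sketch of this metric (hypothetical C; the paper's treegions contain multiple execution paths, so the actual accounting is presumably per path rather than per region):

```c
/* Execution time estimated as schedule length of each region weighted by
 * its profiled execution count; names and per-region granularity are ours. */
double weighted_exec_time(const int *sched_len, const double *exec_weight,
                          int num_regions)
{
    double cycles = 0.0;
    for (int r = 0; r < num_regions; r++)
        cycles += (double)sched_len[r] * exec_weight[r];
    return cycles;   /* speedup = cycles(without VSS) / cycles(with VSS) */
}
```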
Figure
11 shows the execution time speedup of programs
scheduled with VSS over without VSS. Five different
prediction accuracy thresholds are used to select which load
operations are value speculated.
The maximum speedup for all benchmarks is 17% for
147.vortex. As illustrated in Figure 10, 147.vortex has
many dynamic load operations that are highly predictable.
While 147.vortex does not have the highest predictability
for load operations, the sheer number, as illustrated in
Table
1, results in the best performance. Benchmarks
124.m88ksim and 129.compress also show impressive
speedups, 10% and 11.5% respectively, using a threshold of
50%. Speedup for 124.m88ksim actually goes up, even as
the prediction threshold goes down, from 90% to 50%.
This result can be deduced from the distribution of dynamic
loads. For 124.m88ksim, there is a steady increase in the
number of dynamic loads available as the threshold
decreases from 90% to 50%. There is a tapering off in
speedup though, as more miss-predictions are seen near a
threshold of 50%. For 129.compress, the step in the
distribution of dynamic loads from 80% to 70% is reflected
in a corresponding step in speedup. Performance gains for
126.gcc are more reflective of the large number of dynamic
load operations than of their predictability. Penalties for
miss-prediction at the lower thresholds reduce speedup for
126.gcc. Benchmark 130.li, with a distribution of dynamic
loads similar to 126.gcc, has lower performance due to
fewer dynamic loads. Benchmark 134.perl clearly suffers
Figure
11. Execution Time Speedup for VSS over no VSS.
Prediction accuracy threshold values of 90%, 80%, 70%, 60% and 50% are used.
from not having many dynamic loads. Benchmarks 099.go
and 132.ijpeg do not have good predictability for load
operations.
Based on these performance results, a predictability
threshold of 70% appears to be a good selection. From the
distribution of predictability for dynamic loads in Figure
10, a threshold of 70% includes a large majority of the
predictable dynamic loads. Choosing a threshold of
predictability lower than 70% results in a tapering off in
performance for some benchmarks. This is due to both a
higher penalty for miss-prediction and saturation of
functional unit resources, resulting in fewer saved execution
cycles.
5. CONCLUSIONS AND FUTURE WORK
This paper presents value speculation scheduling (VSS), a
new technique for exploiting the high predictability of
register-writing instructions. This technique leverages
advantages of both hardware schemes for value prediction
and compiler schemes for exposing ILP. Dynamic value
prediction is used to enable aggressive static schedules in
which value dependent instructions are speculated. In this
way, VSS can be thought of as a static ILP transformation
that relies on dynamic value prediction hardware. The
results for VSS presented in this paper are impressive,
especially when considering that only load operations were
considered for value speculation. Future work will include
the study of heuristics for selecting register-writing
operations in critical paths. Available functional unit
resources and remaining data dependencies affect the
ability to improve the static schedule and the penalty for
patch-up code. VSS should also be applied to operations
other than loads based on their predictability and potential
benefit to speedup. How many candidates for value-
speculative execution (dependent instructions between the
predicted instruction and the branch to patch-up code) to
allow is also an important parameter. In general, better
heuristics for deciding when to speculate values and how
many VSS candidates to allow (directly affecting the
amount of patch-up code) will be studied.
6.
ACKNOWLEDGMENTS
This work was funded by grants from Hewlett-Packard,
IBM, Intel and the National Science Foundation under
MIP-9625007.
We would like to thank Bill Havanki, Sumedh Sathaye,
Sanjeev Banerjia, and other members in the Tinker group.
We also thank the anonymous reviewers for their valuable
comments.
7.
--R
"Dynamic Memory Disambiguation Using the Memory Conflict Buffer,"
"Value Locality and Load Value Prediction,"
"Exceeding the Dataflow Limit via Value Prediction,"
"The Predictability of Data Values,"
"Value Profiling,"
"Can Program Profiling Support Value Prediction?,"
"Highly Accurate Data Value Prediction using Hybrid Predictors,"
"The Effect of Instruction Fetch Bandwidth on Value Prediction,"
"Speculative Execution based on Value Prediction,"
"Treegion Scheduling for Wide-Issue Processors,"
"The Superblock: An Effective Technique for VLIW and Superscalar Compilation,"
"Analysis Techniques for Predicated Code,"
"HPL PlayDoh Architecture Specification: Version 1.0,"
"Intel, HP Make EPIC Disclosure,"
--TR
The superblock
Dynamic memory disambiguation using the memory conflict buffer
Value locality and load value prediction
Analysis techniques for predicated code
Exceeding the dataflow limit via value prediction
The predictability of data values
Value profiling
Can program profiling support value prediction?
Highly accurate data value prediction using hybrid predictors
Treegion Scheduling for Wide Issue Processors
--CTR
Dean M. Tullsen , John S. Seng, Storageless value prediction using prior register values, ACM SIGARCH Computer Architecture News, v.27 n.2, p.270-279, May 1999
Tarun Nakra , Rajiv Gupta , Mary Lou Soffa, Value prediction in VLIW machines, ACM SIGARCH Computer Architecture News, v.27 n.2, p.258-269, May 1999
Mikio Takeuchi , Hideaki Komatsu , Toshio Nakatani, A new speculation technique to optimize floating-point performance while preserving bit-by-bit reproducibility, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Daniel A. Connors , Wen-mei W. Hwu, Compiler-directed dynamic computation reuse: rationale and initial results, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.158-169, November 16-18, 1999, Haifa, Israel
Huiyang Zhou , Jill Flanagan , Thomas M. Conte, Detecting global stride locality in value streams, ACM SIGARCH Computer Architecture News, v.31 n.2, May
Compiler controlled value prediction using branch predictor based confidence, Proceedings of the 33rd annual ACM/IEEE international symposium on Microarchitecture, p.327-336, December 2000, Monterey, California, United States
Youfeng Wu , Dong-Yuan Chen , Jesse Fang, Better exploration of region-level value locality with integrated computation reuse and value prediction, ACM SIGARCH Computer Architecture News, v.29 n.2, p.98-108, May 2001
Lucian Codrescu , D. Scott Wills , James Meindl, Architecture of the Atlas Chip-Multiprocessor: Dynamically Parallelizing Irregular Applications, IEEE Transactions on Computers, v.50 n.1, p.67-82, January 2001
Martin Burtscher , Amer Diwan , Matthias Hauswirth, Static load classification for improving the value predictability of data-cache misses, ACM SIGPLAN Notices, v.37 n.5, May 2002
Chao-ying Fu , Jill T. Bodine , Thomas M. Conte, Modeling Value Speculation: An Optimal Edge Selection Problem, IEEE Transactions on Computers, v.52 n.3, p.277-292, March | value speculation;instruction level parallelism;VLIW instruction schedulings;value prediction |
291061 | An empirical study of decentralized ILP execution models. | Recent fascination for dynamic scheduling as a means for exploiting instruction-level parallelism has introduced significant interest in the scalability aspects of dynamic scheduling hardware. In order to overcome the scalability problems of centralized hardware schedulers, many decentralized execution models are being proposed and investigated recently. The crux of all these models is to split the instruction window across multiple processing elements (PEs) that do independent, scheduling of instructions. The decentralized execution models proposed so far can be grouped under 3 categories, based on the criterion used for assigning an instruction to a particular PE. They are: (i) execution unit dependence based decentralization (EDD), (ii) control dependence based decentralization (CDD), and (iii) data dependence based decentralization (DDD). This paper investigates the performance aspects of these three decentralization approaches. Using a suite of important benchmarks and realistic system parameters, we examine performance differences resulting from the type of partitioning as well as from specific implementation issues such as the type of PE interconnect.We found that with a ring-type PE interconnect, the DDD approach performs the best when the number of PEs is moderate, and that the CDD approach performs best when the number of PEs is large. The currently used approach---EDD---does not perform well for any configuration. With a realistic crossbar, performance does not increase with the number of PEs for any of the partitioning approaches. The results give insight into the best way to use the transistor budget available for implementing the instruction window. | Introduction
To extract significant amounts of parallelism from sequential
programs, instruction-level parallel (ILP) processors often
perform dynamic scheduling. The hardware typically
collects decoded instructions in an instruction window, and
executes instructions as and when their source operands become
available. In going from today's modest issue rates
to 12- or 16-way issue, centralized dynamic schedulers face
complexity at all phases of out-of-order execution [2] [10].
The hardware needed to forward new results to subsequent
instructions and to identify ready-to-execute instructions
from the instruction window limits the size of the hardware
window. It is very important, therefore, to decentralize the
dynamic scheduling hardware.
The importance of decentralization is underscored in recently
developed processors/execution models such as the
MIPS R10000 [20] and R12000, the DEC Alpha 21264 [7],
the multiscalar model [4] [14], the superthreading model [17],
the trace processing model [13] [15] [19], the MISC (Multiple
Instruction Stream Computer) [18], the PEWs (Parallel Execution
model [6] [11], and the multicluster model
[3]. All of these execution models split the dynamic instruction
window across multiple processing elements (PEs) so
as to do dynamic scheduling and parallel execution of in-
structions. Dynamic scheduling is achieved by letting each
execute instructions as and when their operands become
available.
An important issue pertaining to decentralization is the criterion
used for partitioning the instruction stream among
the PEs. Three types of decentralization approaches have
been proposed based on the criterion they use for this parti-
tioning: (i) Execution unit Dependence based Decentralization
(EDD), (ii) Control Dependence based Decentralization
(CDD), and (iii) Data Dependence based Decentralization
(DDD). The first category groups instructions that use the
same execution unit-such as an adder or multiplier-into
the same PE. Examples are the R10000, R12000, and Alpha
21264. The second category groups control-dependent instructions
into the same PE. The multiscalar, superthread-
ing, and trace processing models come under this category.
The last category groups data-dependent instructions into
the same PE. Examples are the MISC, PEWs, and multi-cluster
models.
Each of the three categories has different hardware requirements
and trade-offs. This paper reports the results of a
set of experiments that were conducted to provide specific,
quantitative evaluations of different trade-offs. We address
the following specific questions:
- What kind of programs benefit from each kind of partitioning?
- How well does performance scale with each decentralization approach?
- How much benefit would there be if a crossbar is used to interconnect the PEs?
The question of how to select the best decentralization approach
to use at each granularity of parallelism is an important
one, and we discuss how this might be accomplished. Of
more immediate concern is the question of whether it is even
worth attempting to use such decentralization techniques for
more than a few PEs. While we do not yet know the exact
shape these execution models will take in the future,
we show that if the right choices are made, these decentralization
approaches can provide reasonable improvements in
instruction completion rate without much of an impact on
the cycle time.
The rest of this paper is organized as follows. Section 2 provides
background and motivation behind decentralization
of the dynamic scheduling hardware. It also describes the
three decentralization approaches under investigation. Section
3 describes our experimentation methodology. Section
4 presents detailed simulation results of the different decentralization
approaches. In particular, it examines the impact
of increasing the number of PEs, and the effects of two different
PE interconnection topologies. Section 5 presents a
discussion of the results, and the conclusions of this paper.
2 Decentralized ILP Execution Models
Programs written for current instruction set architectures
are generally in control-driven form, i.e., control is
assumed to step through instructions in a sequential or-
der. Dynamically scheduled ILP processors convert the total
ordering implied in the program into a partial ordering
determined by dependences on resources, control, and
data. This involves identifying instructions that are mutually
resource-independent, control-independent, and data-
independent. In order to scale up the degree of multiple is-
sue, resources that are in high demand are decentralized. To
allow more reordering and parallel execution of instruc-
tions, constraints due to resource dependences are overcome
(i) by replicating resources such as the fetch unit, the decode
unit, the physical registers, and the execution units (EUs)
(i.e., functional units), and (ii) by providing multiple banks
of resources such as the Dcache, as shown in Figure 1(a).
2.1 Decentralizing the Dynamic Scheduler
On inspecting the block diagram of Figure 1(a), we can see
that the important structures that remain to be decentralized
are the dynamic scheduler (DS), the register rename
unit, and the memory address resolution unit.¹
Figure 1: Generic Organization of Dynamically Scheduled ILP Processors. (a) Centralized Scheduler; (b) Decentralized Scheduler
Incidentally,
these are the most difficult parts to decentralize because
they deal with inter-instruction dependences, which preclude
decentralization by mere replication. Of these parts,
the DS is the hardest to decentralize because it often needs
to handle all of the active instructions that are simultaneously
present in the processor. Detailed studies with 0.8-m,
0.35-m, and 0.18-m CMOS technology [10] also confirm
that a centralized DS does not scale well. Thus, it is important
to decentralize the DS. Many researchers have proposed
decentralizing the DS with the use of multiple PEs,
each having a set of EUs, as shown in Figure 1(b).
In the decentralized processor, the dynamic instruction stream
is partitioned across the PEs, which operate in parallel. The
¹ The complexity of these structures can be partly reduced by off-loading part of their work to special hardware that is not in the critical path of program execution [8] [9] [19].
instructions assigned to PEs can have both control dependences
and data dependences between them. A natural
question that arises at this point is: on what basis should instructions
be distributed among the decentralized PEs? The
criterion used for partitioning the instruction stream is very
important, because an improper partitioning could in fact
increase inter-PE communication, and degrade performance!
True decentralization should not only aim to reduce the demand
on each PE, but also aim to minimize the demand
on the PE interconnect by localizing a major share of the
inter-instruction communication occurring in the processor
to within the decentralized PEs.
The three current approaches for grouping instructions into
PEs revolve around three important constraints to execute
instructions in parallel- (i) execution unit dependences,
(ii) control dependences, and (iii) data dependences. We
shall look at each of the three decentralization approaches.
For the ensuing discussion, we use the example control flow
graph (CFG) and code shown in Figure 2. This CFG consists
of three basic blocks A, B, and C, with block B control-dependent
on the conditional branch in A, and block C
control-dependent on the conditional branch in B. We shall
assume that the control flow predictor has selected blocks
A, B, and C to be a trace.
Figure 2: Example Control Flow Graph and Code (basic blocks A, B, and C with conditional branches I4, I9, and I12)
2.2 Execution unit Dependence based Decentralization
In this type of decentralization, instructions are assigned to
PEs based on the EU that it will execute on. Thus, instructions
that are resource dependent on a particular EU execute
in the same PE. An artifact of this arrangement is that
instructions wait near where its EU dependence is resolved.
Interestingly, one of the pioneering dynamic scheduling schemes,
implemented in IBM 360/91 [16], had incorporated this type
of decentralization in 1967 itself! Very recently the MIPS
R10000 and R12000 processors also use this approach [20].
A potential advantage of the EDD approach is that each PE
need have only one or a few types of execution units. Another
advantage is that instruction partitioning is straight-forward
and static in nature when only a single PE has an
EU of a particular type. In such a situation, dynamic instances
of a given static instruction always get assigned to
the same PE. When multiple PEs have an EU of a particular
type, then there is a choice involved in allocating
instructions that require that EU type. One option in that
situation is to do a static allocation by a compiler or by off-line
hardware. Another option is to do dynamic allocation
(as in Alpha 21264 [7]), perhaps based on the queue lengths
in each of the concerned PEs. With either option, a ready
instruction may sometimes have to wait for its allotted EU
to become free, although another EU of the same type (in
another PE) is free. Furthermore, if the processor performs
speculative execution, then recovery actions arising from incorrect
speculations will necessitate selective discarding of
instructions from different PEs. The main shortcoming with
the EDD approach, however, is that generally the result
from a PE may be needed in any other PE, necessitating a
global interconnect between the PEs, which does not scale
well [10].
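To make the EDD steering rule concrete, here is a small Python sketch of ours (not taken from the paper; the EU classes and the 8-PE division are illustrative). Each instruction is routed to a PE that owns its EU type, and ties are broken by the current queue length, as in the dynamic option described above.

```python
# Hypothetical sketch of EDD-style instruction steering (not the paper's code).
# Each PE owns execution units of one class; among the candidate PEs the least
# loaded one is chosen, mirroring the dynamic option described in the text.

EU_CLASS = {"add": "integer", "mul": "integer", "ld": "load_store",
            "st": "load_store", "br": "branch", "fadd": "fp"}

# Illustrative division of 8 PEs by EU class.
PES_BY_CLASS = {"integer": [0, 1, 2], "load_store": [3, 4, 5],
                "branch": [6], "fp": [7]}

def edd_assign(opcode, queue_len):
    """Return the PE for an instruction, given per-PE queue occupancies."""
    candidates = PES_BY_CLASS[EU_CLASS[opcode]]
    return min(candidates, key=lambda pe: queue_len[pe])

if __name__ == "__main__":
    queues = [4, 1, 3, 0, 2, 5, 1, 0]
    print(edd_assign("add", queues))   # -> 1 (least-loaded integer PE)
    print(edd_assign("ld", queues))    # -> 3
```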
2.3 Control Dependence based Decentralization
(CDD)
In the second decentralization approach, a contiguous portion
of the dynamic instruction stream is assigned to the
same PE. Thus, instructions that are control-dependent on
the same conditional branch are generally assigned to the PE
to which the branch has been assigned, and instructions wait
near where their control dependences will be resolved. Examples
for this approach are the multiscalar execution model [4]
[14], the superthreading model [17], and the trace processing
model [13] [15] [19].² Control-dependence-based decentralization
fits well with the control-driven program specification
typically adopted in current ISAs. Because control-dependent
instructions tend to be grouped together in the
program executable, partitioning of instructions among the
PEs can be easily done by statically partitioning the CFG.
Furthermore, no regrouping of instructions is needed at instruction
commit time.
CDD hardware implementations proposed so far, such as the
multiscalar processor, the superthreading processor, and the
trace processors, all organize the PEs as a circular queue as
shown in Figure 3. The circular queue imposes a sequential
order among the PEs, with the head pointer indicating the
oldest active PE. Programs execute on these processors as
follows. Each cycle, if the tail PE is idle, the control flow
predictor (CFP) predicts the next task in the dynamic instruction
stream, and invokes it on the tail PE; a task is
a path or subgraph contained in the CFG of the executed
program. For instance, if a CDD processor uses trace-based
tasks, then blocks A, B, and C of our example code (which
forms a trace) are assigned to a single PE. After invocation,
the tail pointer is advanced, and the invocation process continues
at the new tail in the next cycle. The successor task
in our example code will be the one starting at the predicted
target of the conditional branch in block C. Thus, the CFP
steps through the CFG, distributing tasks (speculatively)
² Multiprocessors also partition instructions based on control de-
pendences. However, their partitioning granularity is generally much
coarser (several hundreds of instructions or more per task). Fur-
thermore, multiple tasks in a multiprocessor do not share the same
register space.
to the PEs. When the head PE completes its task, its instructions
are committed, and the head pointer is advanced,
causing that PE to become idle. When a task misprediction
is detected, all PEs between the incorrect speculation point
and the tail PE are discarded in what is known as a squash.
Figure 3: Block Diagram of an 8-PE CDD Processor
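A minimal sketch of the circular-queue control just described is given below (our illustration; tasks are opaque strings and the control flow predictor is stubbed out). It models task invocation at the tail PE, in-order commit at the head PE, and a squash from a misspeculated PE to the tail.

```python
from collections import deque

class CDDRing:
    """Toy model of the circular queue of PEs in a CDD processor (illustrative)."""

    def __init__(self, num_pes):
        self.active = deque()               # (pe, task) pairs, oldest (head) first
        self.free = deque(range(num_pes))   # idle PEs, next tail candidate first

    def invoke(self, task):
        """Assign a predicted task to the tail PE, if one is idle."""
        if not self.free:
            return None                     # all PEs busy: invocation stalls
        pe = self.free.popleft()
        self.active.append((pe, task))
        return pe

    def commit_head(self):
        """Head PE finished: commit its task in program order and free the PE."""
        pe, task = self.active.popleft()
        self.free.append(pe)
        return task

    def squash_from(self, bad_pe):
        """Discard the misspeculated PE and all younger PEs, up to the tail."""
        older = deque()
        while self.active and self.active[0][0] != bad_pe:
            older.append(self.active.popleft())
        while self.active:                  # bad_pe and everything younger
            pe, _ = self.active.popleft()
            self.free.append(pe)
        self.active = older

if __name__ == "__main__":
    ring = CDDRing(4)
    for t in ["task A", "task B", "task C", "task D"]:
        ring.invoke(t)
    ring.squash_from(2)                     # misprediction detected in PE 2
    print([t for _, t in ring.active])      # -> ['task A', 'task B']
```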
Whereas trace processors consider a trace (a single path consisting
of multiple basic blocks) as a task, multiscalar processors
consider a subgraph of the control flow graph as a
task, thereby embedding alternate flows of control in a task.
These processors also differ in terms of how the instructions
of a task are fetched. Whereas trace processors fetch all instructions
of a task in a single cycle and supply them to a
PE, multiscalar processors let all of the active PEs parallelly
fetch their instructions, one by one. Architectural support is
provided in them to facilitate the hardware in determining
data dependences.
Studies [5] [19] have shown that in CDD processors, most
of the register operands are produced in the same PE or a
nearby PE, so that a unidirectional ring-type PE interconnect
is quite sufficient. Each PE typically keeps a working
copy of the register file, which also helps to maintain precise
state at task boundaries.
2.4 Data Dependence based Decentralization
(DDD)
In the third approach of decentralization, data dependences
are used as the basis of partitioning. That is, instructions
that are data dependent on an instruction are typically dispatched
to the PE to which the producer instruction has
been dispatched. Mutually data-independent instructions
are most likely dispatched to different PEs. Thus, instructions
wait near where their data dependences will be resolved.
The MISC (Multiple Instruction Stream Computer) [18], the
PEWs execution model [6] [11], the dependency-based model
given in [10], and the multicluster model [3] come under this
category. As data dependences dictate most of the communication
occurring between instructions, the DDD approach
attempts to minimize communication across multiple PEs.
Because the instructions in a PE are mostly data-dependent,
it becomes less important to do run-time scheduling within
each PE [10]. However, partitioning of instructions in a
DDD processor is generally harder than that in a CDD pro-
cessor. This is because programs are generally written in
control-driven form, which causes individual strands of data-dependent
instructions to be often spread over a large segment
of code. Thus, the hardware has to first construct the
data flow graph (DFG), and then do the instruction parti-
tioning, as shown in Figure 4. Notice that if programs were
specified in data-driven form, then data-dependence-based
partitioning would have been easier. To reduce the hardware
complexity, the DFG corresponding to a path (or trace) can
be generated by off-line hardware, and stored in a special
i-cache for later re-use.
Figure 4: Register Data Flow Graph (RDFG) of Trace ABC in Figure 2
The DDD hardware implementations proposed so far, such
as the PEWs [6] [11], the dependence-based model in [10],
and the multicluster [3], differ in terms of how the PEs are
interconnected. PEWs uses a unidirectional ring-type con-
nection, whereas the MISC and dependence-based model of
[10] use a crossbar. When a crossbar is employed, all PEs are
of the same proximity to each other, and hence the instruction
partitioning algorithm becomes straightforward. However,
as discussed earlier, a crossbar does not scale well.
In the multicluster execution model, the ISA-visible registers
are partitioned across the PEs. An instruction is assigned a
PE based on its source and destination (ISA-visible) regis-
ters. Thus, its partitioning is static in nature. In the PEWs
execution model, the partitioning is done dynamically. In
order to reduce the burden on the partitioning hardware
and the complexity on the instruction pipeline, the DFG
corresponding to a path is built by off-line hardware, and
stored in a special i-cache [11]. Alternately, architectural
support can be provided to permit the compiler to convey
the DFG and other relevant information to the hardware.
2.5 Comparison
We have seen three approaches for partitioning instructions
amongst decentralized processing elements. Table 1 succinctly
compares the different attributes and hardware features
of the three decentralization approaches. From the
implementation point of view, CDD and EDD potentially
have an edge, because of the static nature of their partition-
ing. CDD implementations have a further advantage due
Attribute EDD CDD DDD
Basis for partitioning Resource usage Control dependence Data dependence
Execution unit types in a PE Only a few EU types All EU types All EU types
Logical ordering among PEs
Partitioning granularity Instruction Task Instruction
Time at which partitioning is done Static/Dynamic Static/Dynamic Static/Dynamic
Complexity of dynamic partitioning hardware Moderate Moderate High
Table 1: Comparison of Different Decentralization Approaches
to partitioning at a higher level. Instead of having a 16-
way instruction fetch mechanism that fetches and decodes
instructions every cycle from an i-cache or a trace cache,
the instruction fetch mechanism (including the i-cache) can
be distributed across the PEs, as is done in the multiscalar
processor [4] [14].
3 Experimental Methodology
The previous section presented a detailed description and
comparison of three decentralization approaches. Next, we
present a detailed simulation-based performance evaluation
of these three decentralization approaches.
3.1 Experimental Setup
The setup consists of 3 execution-driven simulators, based
on the MIPS-II ISA, that simulate the 3 decentralization
approaches in detail. The simulators do cycle-by-cycle sim-
ulation, including execution along mispredicted paths. The
simulators are equivalent in every respect except for the instruction
partitioning strategy. In particular, the following
aspects are common for all of the simulators.
Instruction Fetch Mechanism: All execution models use
a common control flow predictor to speculate the outcome of
multiple branches every cycle. This high-level predictor, an
extension of the tree-level predictor given in [1], considers a
tree-like subgraph of the dynamic control flow graph as the
basis of prediction. A tree of depth 4, having up to 8 paths,
is used. The predictor predicts one out of these 8 paths using
a 2-level PAg predictor. Each tree-path (or trace) is allowed
to have a maximum of 16 instructions. The first level table
(Subgraph History Table) of the predictor has 1024 entries,
is direct mapped, and uses a pattern size of 6. The second
level table (Pattern History Table) entries consist of 3-bit
up/down saturating counters.
A 128 Kbyte trace cache [12] is used to store recently seen
traces. The trace cache is 8-way set-associative, has 1 cycle
access time, and a block size of 16 instructions. All traces
starting at the same address map to the same set in the
trace cache. Every cycle, the fetch mechanism can fetch
and dispatch up to 16 instructions.
Data Memory System: All execution models use the same
memory system, with an L1 data cache and a perfect L2
cache (so as to reduce the memory requirements of the simu-
lators). The L1 data cache is 64 Kbytes, 4-way set-associative,
32-way interleaved, non-blocking, 16 byte blocks, and 1 cycle
access latency. Memory address disambiguation is performed
in a decentralized manner using a structure called
arcade [11], which has the provision to execute memory references
prior to doing address disambiguation.
Instruction Retirement: All of the investigated execution
models retire (i.e., commit) instructions in program order,
one trace at a time, so as to support precise exceptions.
PE Interconnection Topology: Three types of PE inter-connects
are modeled in the simulators: a unidirectional
ring, a bi-directional ring, and a crossbar. The rings take
1 cycle for each adjacent PE-to-PE transfer. The crossbar
takes log₂(p) cycles for all PE-to-PE transfers, where p is the
number of PEs.
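The forwarding latencies of the three modeled interconnects can be summarized by a small helper function (ours; the cycle counts follow the numbers stated above).

```python
import math

def forward_latency(src_pe, dst_pe, num_pes, topology):
    """Cycles to forward a value between PEs under the modeled interconnects.

    'uni_ring': 1 cycle per hop in the forward direction.
    'bi_ring':  1 cycle per hop along the shorter direction.
    'crossbar': log2(p) cycles for any PE-to-PE transfer.
    """
    if topology == "crossbar":
        return math.ceil(math.log2(num_pes))
    hops_fwd = (dst_pe - src_pe) % num_pes
    if topology == "uni_ring":
        return hops_fwd
    if topology == "bi_ring":
        return min(hops_fwd, (src_pe - dst_pe) % num_pes)
    raise ValueError("unknown topology: " + topology)

print(forward_latency(1, 6, 8, "uni_ring"))   # -> 5
print(forward_latency(1, 6, 8, "bi_ring"))    # -> 3
print(forward_latency(1, 6, 8, "crossbar"))   # -> 3
```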
Parameters for the Study:
- Maximum Fetch Size (f): 16 instructions.
- PE issue width (d): the maximum number of instructions executed from a PE per cycle is fixed at 3 (because higher values gave only marginal improvements). Thus, each PE has 3 EUs.
- PE issue strategy: the default strategy is to use out-of-order execution within each PE.
The experiments involve varying 3 parameters: the partitioning strategy, the number of PEs (p), and the PE interconnect.
3.2 Benchmarks and Performance Metrics
Table 2 gives the list of SPEC95 integer programs that we
use, along with the input files we use. The compress95
Benchmark   Input File    Average Trace Length   Path Prediction
gcc         stmt.i        13.06                  81.78%
go          9stone21.in   14.29                  70.17%
li          test.lsp      12.28                  91.04%
vortex      vortex.raw    13.59                  94.98%
Table 2: Benchmark Statistics
program is based on the UNIX compression utility, and
performs a compression/decompression sequence on a large
buffer of data. The gcc program is a version of the GNU
C compiler. It has many short loops, and has poor instruction
locality. The go program is based on the internationally
ranked Go program, "The Many Faces of Go". The li
Benchmark   Percentage of Instrs Using an EU Type      EU→EU Communication
Program     Integer   Load/Store   Branch              Int→Int   Int→Load/Store   Int→Branch   Load/Store→Int
gcc         43.2%     36.1%        20.7%               26.2%     32.6%            13.9%        11.2%
go          52.4%     32.2%        15.4%               34.9%     28.9%            9.0%         14.5%
li          26.6%     48.7%        24.7%               14.3%     37.8%            6.3%         7.6%
vortex      28.7%     52.4%        18.9%               13.5%     47.6%            7.5%         8.7%
Table 3: Distribution of Instructions based on Execution Unit Used
program is a lisp interpreter written in C. The m88ksim program
is a simulator for the Motorola 88100 processor, and
the vortex program is a single-user object-oriented database
program that exercises a system kernel coded in integer C.
The programs are compiled for a MIPS R3000-Ultrix platform
with a MIPS C (Version 3.0) compiler using the optimization
flags specified with the SPEC benchmark suite.
The benchmarks are simulated to completion or up to 500
million instructions, whichever occurred first.
Table 2 also gives some execution statistics, such as the number
of instructions simulated, the average tree-path (trace)
length, and the path prediction accuracy. From these statis-
tics, we can see that gcc and go have very poor control flow
predictability, primarily arising from poor instruction local-
ity, which causes too many conflicts in the first level table
of the predictor.
For measuring performance, execution time is the sole metric
that can accurately measure the performance of an integrated
software-hardware computer system. Accordingly,
our simulation experiments measure the execution time in
terms of the number of cycles required to execute a fixed
number of instructions. While reporting the results, the execution
time is expressed in terms of instructions per cycle
(IPC). Notice that the IPC figures include only the committed
instructions and do not include nops. We also measure
register traffic to get more insight into the behavior
of the different decentralization approaches.
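For reference, the reported metric is simply committed, non-nop instructions over simulated cycles; a trivial helper of ours illustrates the bookkeeping.

```python
def ipc(committed_instructions, committed_nops, cycles):
    """IPC as reported in this study: committed instructions excluding nops,
    divided by the number of simulated cycles."""
    return (committed_instructions - committed_nops) / cycles

# Illustrative numbers only.
print(round(ipc(500_000_000, 20_000_000, 160_000_000), 2))   # -> 3.0
```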
3.3 Partitioning Algorithms Simulated
EDD: In the EDD system, each PE has execution units
(EUs) of a particular type. To decide how many PEs should
have EUs of a particular type, we measured the percentage
of instructions that use each EU type. Table 3 gives these
percentages. Based on the percentage of instructions using a
particular EU, we used the following EU assignments. When
the system has a single PE, all 3 EUs of that PE can execute
any type of instruction. When the system has 2 PEs, the
first PE houses 3 Integer/FP EUs, and the second PE houses
3 Load/Store/Branch EUs. When the system has 4 or more
PEs, the division of PEs is as in Table 4. PEs having EUs
Table 4: Division of PEs for EDD Scheme (columns: Number of PEs; Integer PEs; Load/Store PEs; Branch PEs; FP PEs)
of the same kind are placed adjacent to each other. The
set of PEs with the Load/Store EUs is placed immediately
after the set of PEs with the Integer EUs, because there is a
significant amount of traffic from integer EUs to Load/Store
EUs (cf. Table 3). The instruction partitioning strategy has
a dynamic component in that when an instruction can be
assigned to multiple PEs, it is assigned to the candidate PE
having the least number of instructions.
CDD: For studying the CDD partitioning approach, we
connect the PEs in a circular queue-like manner. Two different
task sizes, namely 8 and 16, are used. In the first case,
called CDD-8, a trace of up to 8 instructions is fetched in a
cycle and assigned to the PE at the tail of the PE circular
queue. In the second case, called CDD-16, a trace of up to 16
instructions is fetched in a cycle and assigned to the tail
PE.
DDD: For studying the DDD partitioning approach, we use
two different partitioning algorithms. The first algorithm,
called DDD-M (DDD-Multicluster), follows the multicluster
approach depicted in [3]. A subset of the ISA-visible registers
is assigned to each PE such that each ISA-visible register
has the notion of a home-PE. For our studies, the n-th
PE was considered the home-PE for a contiguous block of g/p registers, where g
is the number of general-purpose
registers and p is the number of PEs. The assignment of
instructions to PEs is done as depicted in Table 5.
Table 5: Instruction Assignment for DDD-M Partitioning Scheme
(Columns: Number of Source Registers; Number of Dest. Registers; PE to which Instruction is Assigned.
The listed assignments are the home-PE of the dest. register, the home-PE of the source register, the home-PE of the dest. register, the home-PE of the 1st source register, and, if 2 or more source registers and the destination register are the same, the home-PE of that register, else the home-PE of the dest. register.)
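Since Table 5 is only partially legible here, the following sketch of ours shows the spirit of the DDD-M rule under stated assumptions: each PE is the home of a contiguous block of architectural registers, and an instruction is steered to the home-PE of its destination register when it has one, otherwise to the home-PE of its first source register. The register count and the fallback for register-free instructions are assumptions, not the paper's exact table.

```python
NUM_REGS = 32   # assumed number of ISA-visible general-purpose registers

def home_pe(reg, num_pes):
    """Home PE of an architectural register: contiguous register blocks per PE."""
    regs_per_pe = max(1, NUM_REGS // num_pes)
    return min(reg // regs_per_pe, num_pes - 1)

def ddd_m_assign(dest_regs, src_regs, num_pes):
    """Simplified DDD-M steering: prefer the destination register's home PE,
    fall back to the first source register's home PE (assumption, see text)."""
    if dest_regs:
        return home_pe(dest_regs[0], num_pes)
    if src_regs:
        return home_pe(src_regs[0], num_pes)
    return 0   # assumed default for instructions without register operands

if __name__ == "__main__":
    print(ddd_m_assign(dest_regs=[13], src_regs=[4], num_pes=8))   # -> 3
    print(ddd_m_assign(dest_regs=[],   src_regs=[30], num_pes=8))  # -> 7
```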
The second DDD algorithm, called DDD-P (DDD-PEWs),
makes better use of data dependence information. It uses
off-line hardware to construct the register data flow graph
(RDFG) for each trace (tree-path) when the trace is encountered
for the first time. Once the RDFG of a trace is formed,
data dependence chains (or strands) are identified
in the RDFG. Some dependence strands may have communication
between them. Once the strands are identified, a
relative PE assignment is made for the strands, with a view
to reduce the communication latency between strands.
Figure 5: IPC without Nops for Varying Number of PEs, with Unidirectional Ring PE Interconnect
That
is, if there is flow of data from one strand to another, the
strands are given a relative PE assignment such that the
consumer strand's PE is the one immediately following the
producer strand's PE. Strands that do not have data dependences
with any other strands of the trace are marked
as relocatable. At the time of instruction dispatch, the dispatch
unit decides the PE placement for each strand based
on its dependences to data coming from outside the trace
and the relative PE placement decided statically by the off-line
hardware. A 2-cycle penalty (stall) is imposed when a
trace is seen for the first time in order to form the RDFG
and the relative PE assignments. If the PE assigned to an
instruction is full, the instruction is assigned to the closest
succeeding PE having an empty slot.
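The relative placement of strands can be illustrated with a short sketch (ours; strand identification and the trace RDFG are assumed to be given as inputs, and producers are assumed to appear before their consumers within a trace). A strand with a known producer is placed in the PE immediately following the producer strand's PE, independent producer strands get fresh PEs, and strands without intra-trace dependences are marked relocatable for the dispatch unit to place.

```python
def assign_strand_pes(strands, edges, num_pes):
    """Relative PE assignment for the strands of one trace (sketch).

    strands: strand ids in program order (producers precede their consumers).
    edges:   (producer_strand, consumer_strand) register-flow pairs; if a strand
             has several producers, only the last listed one is used here.
    Returns a dict strand -> PE (None for relocatable strands, which the
    dispatch unit places later) and the set of relocatable strands.
    """
    producer_of = {c: p for p, c in edges}
    produces = {p for p, _ in edges}
    pe_of, relocatable, next_free = {}, set(), 0
    for s in strands:                              # program order
        if s in producer_of:                       # consumer: PE after its producer
            pe_of[s] = (pe_of[producer_of[s]] + 1) % num_pes
        elif s in produces:                        # independent producer strand
            pe_of[s] = next_free % num_pes
            next_free += 1
        else:                                      # no intra-trace dependences
            relocatable.add(s)
            pe_of[s] = None
    return pe_of, relocatable

if __name__ == "__main__":
    pes, reloc = assign_strand_pes(["s0", "s1", "s2"], [("s0", "s1")], num_pes=4)
    print(pes, reloc)   # s1 lands one PE after s0; s2 is relocatable
```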
4 Performance Results
4.1 IPC with Unidirectional Ring
Our first set of studies focuses on comparing the performance
of different partitioning algorithms as the number of
PEs is varied, and a unidirectional ring is used to connect
the PEs. Figure 5 plots the IPC values obtained with the
default parameters (out-of-order scheduling within each PE) for each
benchmark. The values of p that we consider are {1, 2, 4, 8,
12, 16}. Each graph in Figure 5 corresponds to a particular
benchmark program, and has 3 plots, one corresponding to
each decentralization approach.
EDD: First of all, the EDD approach does not perform well
at all with a ring-type PE interconnect, as expected. This
is because the EDD approach is unable to exploit localities
of communication, which is very important when using a
ring topology to interconnect the PEs. The performance
increases slightly as the number of PEs is increased to 2,
but thereafter it is downhill.
DDD: The performance of the two DDD partitioning algorithms
is quite different. The performance of the DDD-M
algorithm is generally poor, and similar to the performance
of the EDD algorithm simulated. To get good performance
from the DDD-M approach, an optimizing compiler needs
to rename register specifiers considering the idiosyncrasies
of the DDD-M execution model; otherwise, very little of the
data dependence localities are likely to be captured. For the
DDD-P approach, performance generally keeps increasing as
the number of PEs (p) is increased from 1 to 8. For these
values of p, the DDD-P algorithm performs the best among
the investigated partitioning algorithms. This is because
DDD-P is better able to exploit localities of communication
when instructions are spread across a moderate number of
PEs. The most striking observation is that the performance
of DDD-P starts dropping when the number of PEs is increased
beyond 8. This drop in performance is because some
data-dependent instructions are getting allocated to distant
PEs, resulting in large delays in forwarding register values
between these distant PEs. One reason for the spreading
of data-dependent instructions is that the RDFG formation
and the instruction partitioning are done on an individual
trace basis. If a knowledge of the subsequent traces is available
and made use of while partitioning instructions, then a
better placement of instructions can be made.
CDD: The performance of both CDD schemes keeps increasing
steadily as the number of PEs is increased from 1
to 16. This is because of two reasons: (i) available parallelism
increases with instruction window size, and (ii) most
register instances have a short lifetime [5] [19], resulting in
very little communication of register values between non-adjacent
PEs. As the number of PEs is increased beyond 8,
the CDD approach starts performing better than the DDD
approaches; both DDD and EDD begin to perform worse
in this arena! Notice, however, that for three of the benchmarks
(compress95, li, and m88ksim), the performance of
DDD-P with 4 PEs is better than the performance of CDD-16
with 16 PEs. And for the remaining three benchmarks,
the performance of a 4 PE DDD-P processor is not much
lower than that of a 16 PE CDD-16 processor. This highlights
the importance of developing DDD algorithms that
can perform better distribution of instructions over a large
number of PEs.
4.2 IPC with Bi-directional Ring
To investigate if the unidirectional nature of the ring was
the cause of the drop in DDD-P's performance for higher
values of p, we also experimented with a bi-directional ring.
Table
6 tabulates the IPC values obtained for 12 PE DDD-
P with the unidirectional PE interconnect and with the bi-directional
PE interconnect. (We simulated the bi-directional
ring configuration for 12 PEs, because the performance of
DDD-P starts dropping at p = 12.)
The data in Table 6 indicate that a bi-directional ring does
little to improve the performance of DDD-P when p = 12
(except for m88ksim which registers a modest improvement
from 3.13 to 3.46).
Benchmark Program   IPC with Unidirectional Ring   IPC with Bi-directional Ring
gcc                 2.27                           2.27
go                  1.82                           1.82
li                  3.15                           3.17
vortex              3.59                           3.64
Table 6: Unidirectional Ring and Bi-directional Ring PE Interconnects
4.3 IPC with Crossbar
The results presented so far were obtained with ring-type interconnections
between the PEs. Next, we investigate how
the decentralization approaches scale when the PEs are interconnected
by a realistic crossbar. Figure 6 plots the IPC
values obtained when the PEs are interconnected by a log₂(p)-cycle
crossbar. A comparison of the data in Figures 5 and
6 shows that with a crossbar interconnect, the performance
of EDD has improved slightly for some of the benchmarks.
For DDD-P, the performance has decreased (compared to
that with ring interconnect) for lower values of p, and remains
more or less the same as before for higher values of p.
For CDD, the performance with a crossbar is consistently
lower than the performance with a ring. In fact, contrary to
the case with a ring-type interconnect, the performance of
CDD-16 with a realistic crossbar decreases as the number of
PEs is increased. Overall, the results with a realistic cross-bar
show the performance of DDD-P to be slightly better
than that of CDD-16 for most benchmarks.
4.4 Register Traffic
In order to get a better understanding of the IPC results
seen so far, we next analyze the register traffic occurring in
the decentralized processors when different partitioning algorithms
are used. Figure 7 plots the distribution
of register results based on the number of PEs they had to
travel. For each benchmark, distributions are given for the
EDD, CDD-16, and DDD-P partitioning algorithms.
The curves for EDD indicate a significant amount of register
traffic between distant PEs. For both CDD and DDD, the
amount of register traffic between PEs steadily decreases as
PE distance increases. For DDD-P, the traffic dies down to
almost zero as register values travel about 7 PEs, which explains
why using a bi-directional ring does not yield much of a
performance improvement. However, a noticeable fraction
of register values travel up to 5-6 hops, which affects the performance
of the DDD-P scheme. One of the reasons for this
is that the DDD-P scheme forms the DFGs for each trace
independently, and assigns instructions of a trace to the PEs
without considering the DFGs of the subsequent traces that
need the values produced by this trace. For CDD, register
traffic dies down to almost zero as register values
travel about 3 PEs. This is because most register instances
have a short lifetime [5] [19], which explains why the performance
of CDD with a ring-type interconnect continues to
increase as the number of PEs is increased.
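The distance statistics behind Figure 7 amount to counting, for every forwarded register value, how many PEs it travels on the unidirectional ring, i.e., (consumer - producer) mod p; a few lines of ours illustrate this.

```python
from collections import Counter

def hop_histogram(forwardings, num_pes):
    """Distribution of register-result travel distances on a unidirectional ring.

    forwardings: iterable of (producer_pe, consumer_pe) pairs.
    """
    return Counter((dst - src) % num_pes for src, dst in forwardings)

# One forwarding each at distances 0, 1, 3, and 2 hops.
print(hop_histogram([(0, 0), (0, 1), (2, 5), (7, 1)], num_pes=8))
```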
5 Discussion and Conclusions
The central idea behind decentralized execution models is to
split the dynamic execution window of instructions amongst
smaller, parallel PEs. By keeping each PE relatively small,
the circuitry needed to search it when forwarding newly produced
values is greatly reduced, thus reducing the impact of
dynamic scheduling on clock speed. By allocating dependent
instructions to the same PE as much as possible, communication
localities can be exploited, thereby minimizing
global communication within the processor. We examined
three categories of decentralized execution models, based on
the type of dependence they use as the basis for instruction
partitioning. These categories are (i) Execution unit
Dependence based Decentralization (EDD), (ii) Control Dependence
based Decentralization (CDD), and (iii) Data Dependence
based Decentralization (DDD).
The detailed performance results that we obtained, on an
ensemble of well-known benchmarks, lead us to two important
conclusions. First, the currently used approach-
EDD-does not provide good performance even when the
instruction window is split across a moderate number of PEs
and when a crossbar is used to connect the PEs. Second,
when a unidirectional ring is used to interconnect the PEs,
the DDD-P approach provides the best IPC values when a
moderate number of PEs is used.
Figure 6: IPC without Nops with a Realistic (log₂ p cycle) Crossbar PE Interconnect
This is due to its ability
to exploit localities of communication between instructions.
When a large number of PEs is used, the performance of
DDD-P starts dropping, and the CDD approach begins to
perform better. This is because of the inability of the implemented
DDD-P algorithm to judiciously partition complex
data dependence graphs across a large number of PEs.
Nevertheless, the performance of the implemented DDD-P
algorithm with 4 PEs is comparable to or better than the
performance of the implemented CDD with 16 PEs.
Although the results presented in this paper help in understanding
the general trends in the performance of different
decentralization approaches, the study of this topic is not
complete by any means. There are a variety of execution
model-specific techniques (both at the ISA-level and at the
microarchitectural level) that need to be explored for each of
the decentralized execution models before a conclusive verdict
can be reached. In addition, it is important to investigate
the extent to which factors such as value prediction,
instruction replication, and multiple flows of control introduce
additional wrinkles to performance. Finally, it would
be worthwhile to explore the possibility of a good blending
of the CDD and DDD models by using a DDD-P processor
(i.e., a cluster of DDD-P PEs) as the basic PE in a CDD
processor. Such a processor can attempt to exploit data independences
at the lowest level of granularity and control
independences at a higher level.
Acknowledgements
This work was supported by the US National Science Foundation
(NSF) through a Research Initiation Award (CCR
9410706), a CAREER Award (MIP 9702569), and a research
grant (CCR 9711566). We are indebted to the reviewers for
their comments on the paper and to Dave Kaeli for help in
getting the SPEC95 programs compiled for the MIPS-Ultrix
platform.
--R
"Control Flow Prediction with Tree-like Subgraphs for Superscalar Processors,"
"Understanding Some Simple Processor- Performance Limits,"
"The Multicluster Architecture: Reducing Cycle Time Through Partitioning,"
"The Multiscalar Architecture,"
"Register Traffic Analysis for Streamlining Inter-Operation Communication in Fine-Grain Parallel Processors,"
"PEWs: A Decentralized Dynamic Scheduler for ILP Processing,"
"The Alpha 21264: A 500 MHz Out-of-Order Execution Microprocessor,"
"Exploiting Fine Grained Parallelism Through a Combination of Hardware and Software Techniques,"
"Complexity-Effective Superscalar Processors,"
"Complexity- Effective PEWs Microarchitecture,"
"Trace Cache: a Low Latency Approach to High Bandwidth Instruction Fetching,"
"Trace Processors,"
"Mul- tiscalar Processors,"
"Multiscalar Execution along a Single Flow of Control,"
"An Efficient Algorithm for Exploiting Multiple Arithmetic Units,"
"The Superthreaded Archi- tecture: Thread Pipelining with Run-Time Data Dependence Checking and Control Speculation,"
"MISC: A Multiple Instruction Stream Computer,"
"Improving Superscalar Instruction Dispatch and Issue by Exploiting Dynamic Code Sequences,"
"The MIPS R10000 Superscalar Micro- processor,"
--TR
Exploiting fine-grained parallelism through a combination of hardware and software techniques
MISC
Register traffic analysis for streamlining inter-operation communication in fine-grain parallel processors
The multiscalar architecture
Multiscalar processors
Control flow prediction with tree-like subgraphs for superscalar processors
Trace cache
Improving superscalar instruction dispatch and issue by exploiting dynamic code sequences
Exploiting instruction level parallelism in processors by caching scheduled groups
Complexity-effective superscalar processors
Trace processors
The multicluster architecture
Understanding some simple processor-performance limits
The MIPS R10000 Superscalar Microprocessor
Multiscalar Execution along a Single Flow of Control
The Alpha 21264
The Superthreaded Architecture
--CTR
D. Morano , A. Khalafi , D. R. Kaeli , A. K. Uht, Realizing high IPC through a scalable memory-latency tolerant multipath microarchitecture, ACM SIGARCH Computer Architecture News, v.31 n.1, March
Aneesh Aggarwal , Manoj Franklin, Scalability Aspects of Instruction Distribution Algorithms for Clustered Processors, IEEE Transactions on Parallel and Distributed Systems, v.16 n.10, p.944-955, October 2005
Ramadass Nagarajan , Karthikeyan Sankaralingam , Doug Burger , Stephen W. Keckler, A design space evaluation of grid processor architectures, Proceedings of the 34th annual ACM/IEEE international symposium on Microarchitecture, December 01-05, 2001, Austin, Texas
Joan-Manuel Parcerisa , Julio Sahuquillo , Antonio Gonzalez , Jose Duato, On-Chip Interconnects and Instruction Steering Schemes for Clustered Microarchitectures, IEEE Transactions on Parallel and Distributed Systems, v.16 n.2, p.130-144, February 2005
Rajeev Balasubramonian, Cluster prefetch: tolerating on-chip wire delays in clustered microarchitectures, Proceedings of the 18th annual international conference on Supercomputing, June 26-July 01, 2004, Malo, France
Balasubramonian , Sandhya Dwarkadas , David H. Albonesi, Dynamically managing the communication-parallelism trade-off in future clustered processors, ACM SIGARCH Computer Architecture News, v.31 n.2, May | speculative execution;instruction-level parallelism;hardware window;control dependence;dynamic scheduling;decentralization;execution unit dependence;data dependence |
291129 | Metric details for natural-language spatial relations. | Spatial relations often are desired answers that a geographic information system (GIS) should generate in response to a user's query. Current GIS's provide only rudimentary support for processing and interpreting natural-language-like spatial relations, because their models and representations are primarily quantitative, while natural-language spatial relations are usually dominated by qualitative properties. Studies of the use of spatial relations in natural language showed that topology accounts for a significant portion of the geometric properties. This article develops a formal model that captures metric details for the description of natural-language spatial relations. The metric details are expressed as refinements of the categories identified by the 9-intersection, a model for topological spatial relations, and provide a more precise measure than does topology alone as to whether a geometric configuration matches with a spatial term or not. Similarly, these measures help in identifying the spatial term that describes a particular configuration. Two groups of metric details are derived: splitting ratios as the normalized values of lengths and areas of intersections; and closeness measures as the normalized distances between disjoint object parts. The resulting model of topological and metric properties was calibrated for 64 spatial terms in English, providing values for the best fit as well as value ranges for the significant parameters of each term. Three examples demonstrate how the framework and its calibrated values are used to determine the best spatial term for a relationship between two geometric objects. | Figure
1: Geometric interpretations of the 19 line-region relations that can be realized from the
9-intersection (Egenhofer and Herring 1991).
The only other topological invariant used here is the concept of the number of components. A
component is a separation of any of the nine intersections (Egenhofer and Franzosa 1995). The
number of components of an intersection is denoted by #(A ∩ B). For example, for line-region
relation LR 14 the relevant intersection has two components, whereas for LR 10 it has only one.
The 19 line-region relations can be arranged according to their topological neighborhoods
(Egenhofer and Mark 1995a) based on the knowledge of the deformations that may change a
topological relation by pulling or pushing the line's boundary or interior (Figure 2). The
topological neighborhoods establish similarities that were shown to correspond to groupings
people frequently make when using a particular natural-language term (Mark and Egenhofer
1994b). For example, the term crosses was found to correspond to the five relations located in the
diagonal from the lower left to the upper right of the conceptual neighborhood diagram (LR 8 to
LR 14 in Figure 2). Such groupings of the 9-intersection relations in the conceptual neighborhood
diagram may serve as a high-level measure to define the meaning of natural-language spatial
relations. However, topology per se may be insufficient as the only measure, particularly in
border-line cases where small metric changes have a significant influence on topology.
Figure
2: The conceptual neighborhood graph of the nineteen line-region relations (Egenhofer
and Mark 1995a).
The following sections define two metric concepts, splitting and nearness, that apply to
topological relations and may enhance each of the nineteen topological relations to distinguish more
details.
3 Splitting
Splitting determines how a region's interior, boundary, and exterior are divided by a line's interior
and boundary, and vice versa. To describe the degree of a splitting, the metric concepts of the
length of a line and the area of a region are used. In the context of topological relations between
lines and regions, length applies to the line's interior, any non-empty intersection with a line's
interior, or their components; and to region boundaries, any non-empty intersection between a
region's boundary and a line's exterior, or their components. Area applies to the interior of
regions, the intersections between a line's exterior and a region's interior or exterior, and their
components. Among the entries of the 9-intersection for a line and a region, there are six
intersections that can be evaluated with a length or an area (Table 1). Only the three intersections
involving the line's boundary cannot be evaluated with a length or area measure, because these
intersections are 0-dimensional (i.e., points).
        R°                  ∂R                   R⁻
L°      length(L° ∩ R°)     length(L° ∩ ∂R)      length(L° ∩ R⁻)
L⁻      area(L⁻ ∩ R°)       length(L⁻ ∩ ∂R)      area(L⁻ ∩ R⁻)
Table 1: Area and length measures applied to the nine intersections of the line's interior (L°),
boundary (∂L), and exterior (L⁻) with the region's interior (R°), boundary (∂R), and exterior (R⁻).
To normalize these lengths and areas, each of them is put into perspective with the line and the
region: The two area intersections are compared with the area of the region, resulting in two
splitting measures. Another eight splitting measures are obtained by comparing the four length
intersections with the length of the line, and the length of the region's perimeter.
3.1 Inner Area Splitting
Inner area splitting describes how the line's interior divides the region's interior. With this
separation a one-dimensional object splits a two-dimensional object into two (or more) parts such
that parts of the region's interior are on one side of the line, and others are located on the opposite
side of the line (Figure 3). Inner area splitting only applies to a subset of the 19 region-line
relations. Those relations for which the line's interior intersects with the region's interior
(L° ∩ R° ≠ ∅), but the line's boundary is outside of the region's interior (∂L ∩ R° = ∅),
always have a value for inner area splitting. In addition, inner area splitting may apply if the line's
interior intersects with the region's boundary and interior (L° ∩ ∂R ≠ ∅ and L° ∩ R° ≠ ∅)
and the line's boundary intersects with the region's interior (∂L ∩ R° ≠ ∅). In such situations it
is necessary that there are more components in the interior-interior intersection than there are
components of the intersection between the line's boundary and the region's interior, i.e.,
#(L° ∩ R°) > #(∂L ∩ R°).
Figure
3: Inner area splitting: the line's interior divides the region's interior into parts on two
opposite sides (more complex configurations may have multiple separations on
either side of the line).
A normalized measure of this property is the inner area splitting ratio (IAS), the smaller sum
of the areas on either side of the line (left and right are chosen arbitrarily and their choice does not
influence the measure) over the total area of the region (Eq. 2). The range of IAS is 0 < IAS ≤ 0.5.
It would reach 0 if the interior-interior intersection between the line and the region
was empty, and is 0.5 if the line separates the region's interior into areas that total the same size on
the left-hand side and the right-hand side.
IAS = min(area(leftComponents(L⁻ ∩ R°)), area(rightComponents(L⁻ ∩ R°))) / area(R)   (Eq. 2)
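As an illustration only, the inner area splitting ratio can be computed for simple configurations with the Shapely library (our sketch, not part of the paper). It assumes the line cuts the region's interior into exactly two pieces, so the left and right sums reduce to the two piece areas.

```python
from shapely.geometry import LineString, Polygon
from shapely.ops import split

def inner_area_splitting(line, region):
    """IAS for the simple case where the line splits the region's interior
    into exactly two components (sketch only)."""
    pieces = split(region, line).geoms
    if len(pieces) != 2:
        raise ValueError("sketch handles only a single left/right split")
    return min(p.area for p in pieces) / region.area

region = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
line = LineString([(1, -1), (1, 5)])         # vertical cut at x = 1
print(inner_area_splitting(line, region))    # -> 0.25 (= min(4, 12) / 16)
```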
3.2 Outer Area Splitting
Outer area splitting occurs if the line's interior interacts with the exterior of the region such that it
produces separations of the exterior between the interior of the line and the boundary of the region.
This involves a one-dimensional object that splits a two-dimensional object (the region's exterior)
into two (or more) two-dimensional parts: (1) parts of the region's exterior that are bounded
because they are completely surrounded by the line's interior and the region's boundary, and (2)
parts of the region's exterior that are unbounded (Figure 4).
Figure
4: Outer area splitting: the line's interior divides the region's exterior into bounded and
unbounded areas (more complex configurations may have multiple areas that are
bounded by the same line).
Outer area splitting requires that the line's interior intersects with the region's exterior
(L° ∩ R⁻ ≠ ∅) and that the line's boundary is located in the region (∂L ∩ R⁻ = ∅). Outer area
splitting also may apply to configurations for which line interiors intersect with both the region's
interior and boundary (L° ∩ R° ≠ ∅ and L° ∩ ∂R ≠ ∅) and whose line boundaries intersect
with the region's exterior (∂L ∩ R⁻ ≠ ∅). For these situations, it is necessary that the region's
exterior contains more components of the line's interior than of the line's boundary
(#(L° ∩ R⁻) > #(∂L ∩ R⁻)). A normalized measure of outer area splitting is the outer area splitting
ratio (OAS), the ratio of the sum of the bounded exterior areas, which are the parts
of the exterior that are enclosed by the line's interior and the region's boundary, over the region's
area (Eq. 3). It is greater than zero such that the larger the bounded area, the larger the splitting
ratio. It would reach 0 if the bounded area was non-existent (i.e., either an empty intersection
between the line's interior and the region's exterior, or an insufficient number of components in the
intersection between the line's interior and the region's exterior).
OAS = area(boundedComponents(L⁻ ∩ R⁻)) / area(R)   (Eq. 3)
3.3 Inner Traversal Splitting
The region's interior separates the line's interior into inner and outer line segments. This involves a
two-dimensional object splitting a one-dimensional object into two one-dimensional parts (or sets
of parts): line parts that are inside the closure of the region, and line parts outside of the region (Figure 5).
Figure
5: Inner traversal splitting: the region's interior divides the line into parts of inner and
outer segments (more complex configurations may have multiple inner and outer
segments for a line).
Inner traversal splitting applies to relations in which the line's interior is located at least partially
in the region's interior (L° ∩ R° ≠ ∅). A normalized measure for the traversal is the inner
traversal splitting ratio (ITS) between the length of the inner parts of the line and the length of the
total line (Eq. 4). Its range is 0 < ITS ≤ 1. ITS would be 0 if the interior-interior intersection
between the line and the region was empty. The greatest value is reached if the line's interior is
completely contained in the region's interior.
ITS = length(L° ∩ R°) / length(L)   (Eq. 4)
3.4 Entrance Splitting
While the inner traversal splitting normalizes the common interiors with respect to the line's length,
the entrance splitting compares the length of the common interiors to the length of the region's
boundary. It applies under the same conditions as the inner traversal splitting. Its measure, called
the entrance splitting ratio (ENS), captures how far the line enters into the region (Eq. 5). All
values of the entrance splitting ratio are greater than zero, but no upper bound exists.
ENS = length(L° ∩ R°) / length(∂R)   (Eq. 5)
3.5 Outer Traversal Splitting
While the inner traversal splitting describes how much of the line is in the region's interior, the
outer traversal splitting refers to the part of the line that is in the region's exterior. Outer traversal
splitting applies to relations in which the line's interior is located at least partially in the region's
exterior (L° ∩ R⁻ ≠ ∅). A normalized measure for the traversal is the outer traversal splitting
ratio (OTS) between the length of the outer parts of the line and the length of the total line (Eq. 6).
OTS = length(L° ∩ R⁻) / length(L)   (Eq. 6)
3.6 Exit Splitting
Analogous to the pair of inner traversal splitting and entrance splitting, the outer traversal splitting has
a dual, the exit splitting. It captures how far the line exits the region, and applies under the same
conditions as the outer traversal splitting. The exit splitting ratio (EXS) normalizes the length of the
line's interior that lies in the region's exterior with respect to the length of the region's boundary
(Eq. 7). It is greater than 0 and has no upper bound.
EXS = length(L° ∩ R⁻) / length(∂R)   (Eq. 7)
3.7 Line Alongness
The region's boundary interacts with the line's interior such that it separates the line into two sets
of line parts: line segments that are outside of the region's boundary (i.e., either in the region's
interior or exterior), and line segments that are contained in the boundary. This separation makes a
one-dimensional object splitting another one-dimensional object into two or more one-dimensional
parts (
Figure
6).
Figure
Line alongness: the region's boundary separates the line's interior into parts of
outer and inner segments (more complex configurations may have multiple
components in the intersection between the region's boundary and the line's
interior).
In order to consider line alongness, the line's interior must intersect with the region's boundary (L° ∩ ∂R ≠ ∅). As the measure for the separation, we introduce the line alongness ratio (LA) as the ratio between the length of all line parts contained in the boundary, and the total length of the line (Eq. 8). The range of the line alongness ratio is 0 ≤ LA ≤ 1. LA is 0 if the line intersects the region's boundary exclusively in 0-dimensional components, and it reaches 1 if the line is completely contained in the region's boundary (L ⊆ ∂R).
LA = length(L° ∩ ∂R) / length(L)    (8)
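Under the same assumptions as the earlier sketch (shapely, hypothetical geometries), the line alongness ratio amounts to intersecting the line with the region's boundary ring:

# Sketch of the line alongness ratio (Eq. 8); region.exterior is the boundary of R.
from shapely.geometry import LineString, Polygon

region = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
line = LineString([(0, 4), (4, 4), (5, 5)])          # partly runs along the boundary

shared = line.intersection(region.exterior)          # L ∩ ∂R (isolated points add length 0)
la = shared.length / line.length                     # LA, Eq. 8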
3.8 Perimeter Alongness
The line's interior separates the region's boundary into two sets of objects, one that coincides with the line's interior, and another that is disjoint from the line's interior. The separation is such that a one-dimensional object splits another one-dimensional object into two (or more) one-dimensional objects. The perimeter alongness can be measured for relations in which the line's interior intersects with the region's boundary (L° ∩ ∂R ≠ ∅). The perimeter alongness is measured by the ratio between the length of coinciding parts between the line's interior and the region's boundary and the perimeter, called the perimeter alongness ratio (PA) (Eq. 9). The range of the perimeter alongness ratio is 0 ≤ PA < 1. PA is 0 if the interior-boundary intersection between the
line and the region consists exclusively of disconnected 0-dimensional components. PA would
reach the maximum of 1 if cycles were permitted as lines and such a cycle would coincide with the
region's boundary.
PA = length(L° ∩ ∂R) / length(∂R)    (9)
3.9 Perimeter Splitting
Perimeter splitting occurs if the line splits the region's boundary into two or more parts. This
involves two (or more) zero-dimensional or one-dimensional objects-the line's boundary or
interior-cutting another one-dimensional object (the region's boundary) (Figure 7).
Figure
7: Perimeter splitting: the line separates the region's boundary into segments (more
complex configurations may create multiple segments in the region's boundary).
Perimeter splitting requires that the line intersects the region's boundary such that the region's boundary is split into at least two components (#(∂R − L) ≥ 2). The perimeter splitting ratio (PS) is the ratio between the longest of these components and the region's perimeter (Eq. 10). Its range is 0 < PS < 1.
PS = max(length(components(L⁻ ∩ ∂R))) / length(∂R)    (10)
3.10 Length Splitting
While the perimeter splitting compares the length of the longest perimeter component with the total
length of the perimeter, the length splitting compares it with the length of the line. The metric
measure is the line splitting ratio (LS) (Eq. 11), which is greater than 0 without an upper bound.
LS = max(length(components(L⁻ ∩ ∂R))) / length(L)    (11)
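A companion sketch for the perimeter splitting and length splitting ratios, under the same assumptions; it relies on the overlay operation splitting the region's boundary at the points where the line crosses it, so the separated boundary components can be measured individually.

# Sketch of PS (Eq. 10) and LS (Eq. 11): longest boundary component cut off by the line.
from shapely.geometry import LineString, Polygon

region = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
line = LineString([(1, -1), (1, 5)])                 # cuts the region into two parts

boundary = region.exterior
pieces = boundary.difference(line)                   # boundary components split by the line
parts = list(getattr(pieces, "geoms", [pieces]))     # MultiLineString or a single LineString
longest = max(p.length for p in parts)

ps = longest / boundary.length                       # PS, Eq. 10
ls = longest / line.length                           # LS, Eq. 11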
3.11 Comparison of the Splitting Ratios
Each splitting ratio applies to several different topological relations. Figure 8 shows how the
criteria for the ten splitting ratios map onto the conceptual neighborhood graph of the line-region
relations (Egenhofer et al. 1993). Each constraint covers a contiguous area.
Figure
8: The relations that qualify for inner area splitting (IAS), outer area splitting (OAS),
inner traversal splitting (ITS), entrance splitting (ENS), outer traversal splitting
(OTS), exit splitting (EXS), line alongness (LA), perimeter alongness (PA), line
splitting (LS), and perimeter splitting (PS). Black, gray, and white indicate that the
metric measure applies always, sometimes, and never, respectively.
4 Closeness
Unlike splitting, which requires coincidence and describes how much is in common between two
objects, closeness describes how far apart disjoint parts are. The object parts involved are the
boundary and the interior of the line, and the boundary of the region. There is no need to consider
the region's interior, since it is delineated by its boundary, and therefore no additional information
could be found by considering it in addition to the region's boundary.
Closeness involves considerations of distances among points and lines. For the configurations
considered, there are four types of closeness measures of interest (the metric axioms for distances
apply, i.e., there is a null element, distances are symmetric, and the triangle inequality holds):
(1) the distance between a line's boundary and the region's boundary if the line's boundary is
located in the exterior of the region;
(2) the distance between a line's boundary and the region's boundary if the line's boundary is
located in the interior of the region;
(3) the distance of the shortest path between a line's interior and the region's boundary if the
line's interior is located in the exterior of the region; and
(4) the distance of the shortest path between a line's interior and the region's boundary if the
line's interior is located in the interior of the region.
The closeness measures are not completely orthogonal, since depending on the shape of the
line or the region, they may have the same values. For instance, for the configuration in Figure 9a,
the distance from the region's boundary to the line's boundary (i.e., its two endpoints as defined in
Section 2) is the same as the distance from the region's boundary to the line's interior, since the
line's boundary is the line's closest part to the region's boundary; however, in Figure 9b, the same
parameters have different values because the line's interior is closer to the region's boundary than
the line's boundary.
Figure
9: Two configurations with (a) identical and (b) different values for the distance
measures from the line's boundary and interior to the region's boundary.
Distances are commonly defined between points; however, the closeness measures require
distance measures between a point and a line, or between two lines.
Definition 1: The distance between a point p and the boundary of a region (∂R) is defined as the length of the shortest path from p to ∂R (Eq. 12).
dist(p, ∂R) = dist(p, r), r ∈ ∂R, such that ¬∃ q ∈ ∂R with dist(p, q) < dist(p, r)    (12)
Therefore, there is no other point on the region's boundary that would be closer to p.
Definition 2: The distance between the interior of a line (L°) and the boundary of a region (∂R) is defined as the length of the shortest path from L° to ∂R (Eq. 13).
dist(L°, ∂R) = min( dist(p, q) | p ∈ L°, q ∈ ∂R )    (13)
Therefore, there are no other parts in the line's interior that would be closer to any point on the region's boundary.
4.1 Outer Closeness
The outer closeness describes the remoteness of the region's boundary ∂R from p, a boundary point of a line located in the exterior of the region (Figure 10a). Outer closeness only applies to those line-region relations with at least one point of the line's boundary being located in the region's exterior (∂L ∩ R⁻ ≠ ∅). A purely quantitative measure for the remoteness would be the distance BE between the region's boundary and the line's boundary point(s) in the region's exterior (Figure 10b). It is the shortest connection between the line's boundary and the region, i.e., there exists no other point in the region's boundary that would be closer to the line's boundary (Eq. 14).
BE = min( dist(p, ∂R) | p ∈ (∂L ∩ R⁻) )    (14)
Since this measure is only applicable if ∂L ∩ R⁻ ≠ ∅, BE can never be 0.
Figure
10: Outer closeness: (a) the line's boundary in the region's exterior, (b) the remoteness
measure BE from the region's boundary to the line's boundary, and (c) the
region's outer buffer zone as an equi-distant enlargement of the region.
While the actual distance between the two boundaries is a precise measure, it varies significantly with the scale of the representation. For instance, a scaling by a factor of 2 would make any two objects twice as remote. A variety of dimension-independent measures could be thought of, such as the proportion by which the line would have to be extended, or shrunken, so that its boundary coincides with the region's boundary. We selected two outer closeness measures: (1) the outer line closeness as the ratio between the distance from the line's boundary to the region's boundary and the line's length (Figure 10b), and (2) the outer area closeness as the ratio between the area made up by an equi-distant enlargement of the region (also known as a buffer zone; Laurini and Thompson 1992) and the actual area (Figure 10c).
We define the outer area closeness measure (OAC) in terms of the area of the region R and the area made up by the buffer zone, denoted by D_BE(R). It is of width BE and extends into the region's exterior (Eq. 15). OAC is greater than 0 with no upper bound, and would be 0 if BE were 0. The normalization area(D_BE(R))/(area(D_BE(R)) + area(R)) would produce values between 0 and 1; however, the distribution would be non-linear, particularly for area(R) ≪ area(D_BE(R)).
OAC = area(D_BE(R)) / area(R)    (15)
The outer line closeness measure (OLC) is defined in terms of BE, the distance from the line's boundary to the region's boundary, and the line's length (Eq. 16). Its values are greater than 0 without an upper bound. It would be 0 if BE were 0.
OLC = BE / length(L)    (16)
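The outer closeness measures lend themselves to the same kind of sketch; the buffer zone D_BE(R) is approximated here with shapely's buffer operation, and the geometries are again hypothetical.

# Sketch of BE (Eq. 14), OAC (Eq. 15), and OLC (Eq. 16), assuming shapely.
from shapely.geometry import LineString, Point, Polygon

region = Polygon([(0, 0), (4, 0), (4, 4), (0, 4)])
line = LineString([(6, 2), (9, 3)])                      # both endpoints in the exterior

endpoints = [Point(line.coords[0]), Point(line.coords[-1])]
outside = [p for p in endpoints if not region.contains(p)]
be = min(p.distance(region.exterior) for p in outside)   # BE, Eq. 14

buffer_zone = region.buffer(be).difference(region)       # outer buffer zone D_BE(R)
oac = buffer_zone.area / region.area                     # OAC, Eq. 15
olc = be / line.length                                   # OLC, Eq. 16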
4.2 Inner Closeness
Analogous to the outer closeness, the inner closeness captures the remoteness of the line's boundary, located in the interior of the region (criterion: ∂L ∩ R° ≠ ∅), from the region's boundary (Figure 11a). The mere distance between the boundaries of the region and the line is captured by a quantitative measure BI (Eq. 17). This distance is greater than 0, because the line's boundary must be located in the region's interior. If both boundary points of the line are inside R, then BI is the distance of the boundary point closest to the region's boundary.
Figure
11: Inner closeness: (a) the line's boundary in the region's interior and (b) the region's
inner buffer zone as an equi-distant reduction of the region.
BI = min( dist(p, ∂R) | p ∈ (∂L ∩ R°) )    (17)
The inner area closeness (IAC) is then defined as the ratio between the area made up by an equi-distant reduction of the region and the actual area (Figure 11b). The buffer zone D_BI(R) has the width BI and is taken from the region's boundary into the region's interior (Eq. 18). Its range is 0 < IAC < 1.
IAC = area(D_BI(R)) / area(R)    (18)
The inner line closeness (ILC) refers to the relative amount the line has to be extended or
shortened to coincide with the region's boundary. The increment is normalized with respect to the
line's actual length (Eq. 19).
ILC = BI / length(L)    (19)
4.3 Outer Nearness
The outer nearness describes how far the line's interior is from the region's boundary (Figure 12a). It only applies to one line-region relation, namely the one with the line's boundary and interior completely contained in the region's exterior (L ⊂ R⁻). The quantitative measure for outer nearness, IE, is the length of the shortest connection between the line and the region (Figure 12b). It is always greater than zero, because L must be completely contained in R's exterior (Eq. 20).
IE = dist(L°, ∂R)    (20)
Figure
12: Outer nearness: (a) the line is completely contained in the region's exterior, (b) the
remoteness measure IE from the region's boundary to the line's interior, and (c) the
region's outer buffer zone as an equi-distant enlargement of the region.
The outer area nearness (OAN) is then defined as the ratio between the area made up by an equi-distant enlargement of the region of width IE, denoted by D_IE(R), and the actual area of the region R (Eq. 21). OAN's values are greater than 0, with no upper bound. OAN would be 0 if IE were 0.
OAN = area(D_IE(R)) / area(R)    (21)
The outer line nearness (OLN) normalizes the length by which the line would have to be
extended or shortened such that its boundary would coincide with the region's boundary, with
respect to the length of the initial line (Eq. 22). The values of the outer line nearness are greater
than 0 and increase linearly with the length of IE.
OLN = IE / length(L)    (22)
4.4 Inner Nearness
Complementary to the outer nearness, the inner nearness describes how far the line's interior, located in the interior of the region (criterion: L ⊂ R°), is from the region's boundary (Figure 13a). This distance, II, is greater than zero, because the line must be completely contained in the region's interior (Eq. 23).
II = dist(L°, ∂R)    (23)
Figure
13: Inner nearness: (a) the line completely contained in the region's interior and (b) the
region's inner buffer zone as an equi-distance reduction of the region.
The inner area nearness (IAN) is then defined as the ratio between the area made up by a buffer zone of width II, denoted by D_II(R), that extends from the boundary into the region's interior, and the region's area (Figure 13b). Its range is 0 < IAN < 1 (Eq. 24).
IAN = area(D_II(R)) / area(R)    (24)
The inner line nearness (ILN) captures by how much the line would have to be extended in
order to intersect with the region's boundary. It is measured as the ratio between the distance to the
region's boundary and the length of the line (Eq. 25). The values of the inner line nearness must be
greater than zero.
ILN = II / length(L)    (25)
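For a line wholly inside the region, the inner nearness measures can be sketched with a negative buffer (an equi-distant shrinking of the region), under the same assumptions as the previous sketches.

# Sketch of II (Eq. 23), IAN (Eq. 24), and ILN (Eq. 25), assuming shapely.
from shapely.geometry import LineString, Polygon

region = Polygon([(0, 0), (6, 0), (6, 6), (0, 6)])
line = LineString([(2, 3), (4, 3)])                  # contained in the region's interior

ii = line.distance(region.exterior)                  # II, Eq. 23: shortest path from L to ∂R
inner_zone = region.difference(region.buffer(-ii))   # band of width II inside the boundary
ian = inner_zone.area / region.area                  # IAN, Eq. 24
iln = ii / line.length                               # ILN, Eq. 25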
4.5 Comparison of the Closeness Measures
From the criteria for the closeness measures, one can derive which topological relations may be
refined by the corresponding measures (Figure 14). Except for the six topological relations in the
bottom triangle of the neighborhood graph, all relations have at least one closeness measure. Those
six relations without a closeness measure are such that both line boundaries coincide with the region's boundary; therefore, the distances from the line's parts to the region's boundary are all zero and no refinements can be made to these relations.
5 Parsing and Translating a Graphical Relation into a Verbal
Expression
With the two sets of parameters we can perform a detailed analysis of a simple spatial configuration
with a line and a region, capturing the configuration's topology and analyzing it according to its
metric properties. This per se would provide the basis for a computational comparison of two or
more spatial configurations for similarity (Bruns and Egenhofer 1996). Here we pursue a different
path by mapping the parsed configuration onto a natural-language term that would best describe the
spatial relation between the two geometric objects. For the time being, any semantic or
presentational aspects (Mark et al. 1995) are ignored in this mapping.
The mappings from the topological and metric measures onto corresponding natural-language
terms are based on results from human-subject experiments (Shariff 1996). A total of sixty-four
English-language terms were tested, for which subjects sketched a road with respect to a given
outline of a park such that the sketch would match the corresponding natural-language term that
describes the spatial relation. By analyzing the sketches' topological relations and their splitting and
closeness measures, we obtained the mappings from the geometry of a configuration onto the
corresponding, significant parameters and their values. Significant parameters were distinguished
from non-significant ones through a cluster analysis (Shariff 1996). The criterion for a parameter
to be considered significant for a specific spatial term was that its standard score was greater than
one (i.e., the mean of such a parameter is at least one standard deviation higher than the mean of
the entire data set). To demonstrate how the model developed here can be used for such
translations, we give three examples in which the spatial relation of a geometric configuration is
translated into a natural-language spatial term.
5.1 Example 1
Figure
15 shows a configuration in which a line (e.g., a road) crosses the boundary of a region
(e.g., a park). Based on the topology (LR 18), the applicable metric parameters for splitting and
closeness are found in Figures 8 and 14, respectively.
Figure
14: The relations that qualify for inner area closeness (IAC), inner line closeness (ILC),
outer area closeness (OAC), outer line closeness (OLC), inner area nearness (IAN),
inner line nearness (ILN), outer area nearness (OAN), and outer line nearness
(OLN).
The human-subject tests found that only a subset of these parameters-inner traversal splitting,
outer traversal splitting, inner area closeness, and outer area closeness-are significant for the
terms that are represented by LR 18. Table 2 shows a sample of eight terms-ends at, ends in,
ends just inside, ends outside, enters, goes into, goes out, and goes to-that apply to LR 18,
together with the significant parameters. For each parameter, the mean value (i.e., the best fit) and
the range of values is given. The value range of a metric parameter refers to the minimum and
maximum value obtained from the subjects' sketches. The goal is now to determine which of these terms are a better match for the particular configuration, and which do not convey the meaning of the configuration.
Topological Relation   Spatial Term       ITS mean (range)    OTS mean (range)    IAC mean (range)    OAC mean (range)
LR 18                  ends at            0.24 (0.02-0.65)    0.76 (0.36-0.98)    0.79 (0.19-0.98)    7.69 (2.09-20.73)
LR 18                  ends in            0.51 (0.17-0.91)    0.49 (0.09-0.83)    0.81 (0.18-0.99)    3.66 (0.59-9.94)
LR 18                  ends just inside   0.16 (0.05-0.71)    0.84 (0.29-0.95)    0.55 (0.05-0.89)    3.95 (0.86-10.27)
LR 18                  ends outside
LR 18                  enters
LR 18                  goes into
LR 18                  goes out
LR 18                  goes to            0.20 (0.04-0.57)    0.81 (0.44-0.96)    0.75 (0.18-0.99)    7.37 (1.63-12.04)
Table
2: Spatial terms of topological relation LR 18, with means and value ranges of their
significant parameters for splitting and closeness measures.
Table
3 summarizes for the four parameters how they are calculated and provides the values
obtained for the configuration in Figure 15.
inner traversal splitting:   ITS = length(L° ∩ R°) / length(L)
outer traversal splitting:   OTS = length(L° ∩ R⁻) / length(L)
outer area closeness:        OAC = area(D_BE(R)) / area(R)
inner area closeness:        IAC = area(D_BI(R)) / area(R)
Table
3: Calculating the inner traversal splitting, the outer traversal splitting, the outer area
closeness, and the inner area closeness for the configuration displayed in Figure 15.
By comparing these values with the calibrated model, the terms are ranked according to best fit.
The terms ends in, ends outside, enters, goes into, and goes out fall outside of the value ranges of
at least two parameters (Table 2) and, therefore, these terms are not considered for this
configuration. Among the remaining three terms, ends just inside is the best fit for three
parameters; goes to is the second best for three parameters, and ends at ranks third in three out of
four times. Therefore, the sentence "The road ends just inside the park" would be selected as the best fit, while valid alternatives would be "The road goes to the park" or "The road ends at the park."
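The selection procedure of this example can be phrased as a small computation: candidate terms whose calibrated ranges exclude the observed parameter values are eliminated, and the remaining terms are ranked by their distance to the calibrated means. The sketch below is illustrative only and is not the authors' implementation; the calibration numbers are the ITS and OTS entries of Table 2, and the observed values are hypothetical.

# Rank candidate spatial terms against calibrated means and ranges (illustrative only).
CALIBRATION = {
    "ends at":          {"ITS": (0.24, 0.02, 0.65), "OTS": (0.76, 0.36, 0.98)},
    "ends just inside": {"ITS": (0.16, 0.05, 0.71), "OTS": (0.84, 0.29, 0.95)},
    "goes to":          {"ITS": (0.20, 0.04, 0.57), "OTS": (0.81, 0.44, 0.96)},
}

def rank_terms(observed):
    ranked = []
    for term, params in CALIBRATION.items():
        if all(lo <= observed[name] <= hi for name, (mean, lo, hi) in params.items()):
            score = sum(abs(observed[name] - mean) for name, (mean, lo, hi) in params.items())
            ranked.append((score, term))
    return [term for _, term in sorted(ranked)]

print(rank_terms({"ITS": 0.15, "OTS": 0.86}))        # hypothetical configuration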
Figure 15: Does the line enter or end just inside the region?
5.2 Example 2
Figure 16 shows a configuration in which a line intersects a region such that it is close to the
region's boundary from the inside and farther from the region's boundary in the exterior. A sample
of terms that may fit this description are crosses, cuts through, goes through, runs into, and splits.
Figure 16: Does the line cross or cut through the region?
For the configuration's topological relation, LR 14, the human-subject tests found two metric
parameters to be significant: inner area splitting and outer area closeness. Table 4 displays the mean
and the value range for each parameter.
Topological Relation   Spatial Term   IAS mean (range)    OAC mean (range)
LR 14                  cuts through   0.32 (0.01-0.50)    1.75 (0.41-6.04)
LR 14                  runs into      0.13 (0.09-0.44)    3.42 (0.54-11.96)
Table
4: Spatial terms of topological relation LR 14, with means and value ranges of their
significant parameters for splitting and closeness measures.
For the configuration in Figure 16, the term runs into does not qualify, because the
configuration is not located within the range of the inner area splitting. From among the remaining
four spatial terms, splits comes closest to the mean values of inner area splitting and outer area
closeness; therefore, it is selected as the term to describe the configuration. The ranking of the
terms in between is more difficult, because they are subject to more subtle differences. Certainly,
crosses would be better to describe the scene than cuts through, since both parameters have values
that are closer to the mean of crosses than to the mean of cuts through. The term goes through, however, has a better match with the inner area splitting than both crosses and cuts through have; however, it ranks considerably lower in the outer area closeness.
inner area splitting:    IAS
outer area closeness:    OAC = area(D_BE(R)) / area(R)
Table
5: Calculating the inner area splitting and the outer area closeness for the configuration
displayed in Figure 16.
5.3 Example 3
The following characteristics describe the configuration in Figure 17, in which a line is outside of
the region, but follows the shape of the region. Candidate terms to describe this configuration are
bypasses, goes up to, and runs along (Table 6).
Figure
17: Does the line run along or bypass the region?
Topological Relation   Spatial Term   OAN mean (range)    OAC mean (range)
LR 1                   goes up to     0.33 (0.03-0.75)    0.39 (0.03-4.79)
LR 1                   runs along     0.33 (0.16-1.29)    1.06 (0.30-7.13)
Table 6: Spatial terms of topological relation LR 1, with means and value ranges of their significant parameters for splitting and closeness measures.
Based on the topological relation, LR 1, the significant parameters are outer area closeness and
outer area nearness. The term bypasses does not fall within the ranges of outer area nearness or outer area closeness (Table 6), and is therefore not considered. Both terms goes up to and runs
along have the same values for outer area nearness, but since runs along has a significantly lower
value for the outer area closeness, it is chosen as the better term to describe the configuration than
goes up to.
outer area closeness:   OAC = area(D_BE(R)) / area(R)
outer area nearness:    OAN = area(D_IE(R)) / area(R)
Table
7: Calculating the outer area closeness and the outer area nearness for the
configuration displayed in Figure 17.
6 Conclusions
This paper developed a computational model to describe the semantics of natural-language spatial
terms based on their geometry. The model is based on the 9-intersection topological model and
refines it with metric details in the form of splitting and closeness ratios. Splitting ratios describe
the proportion of an intersection with respect to the interior or boundary of the two objects. Their
normalized values all fall within the interval between 0 and 1 and grow linearly with the size of the
intersection. Closeness ratios specify distances between boundaries and interiors. For inclusion or
containment relations, the (inner) closeness ratios are normalized to range between 0 and 1, while
closeness ratios for disjoint relations are greater than zero with no upper limit. While this may
appear to be an inconsistency in the model, it is necessary to obtain measures that grow linearly
with the distance between the parts. The model was only developed for relations between a region
and a line, however, the concepts generalize to relations between other geometric types, such as
two regions or two lines.
Splitting and closeness measures can be implemented with standard GIS software. A prototype
implementation with the Arc/Info GIS, however, requires the separation of the two objects into
different layers (Shariff 1996). A method for computing the intersections necessary to determine
the topological relation, using the described by Mark and Xia (1994). In
order to determine the metric parameters, AMLs were written to compute intersections, lengths,
and areas. Although this method demonstrated the feasibility of implementing the required
operators with a commercial GIS, it was cumbersome, because Arc/Info does not support an object
concept, and performance was slow. The use of GIS data structures that support an object model,
and the integration of algorithms that are tailored to the operations necessary for efficient
implementations of the 9-intersection and the metric refinements, are subjects for future
investigations.
The model developed applies to a number of applications in the area of spatial reasoning, such
as similarity retrieval and intelligent spatial query languages. We demonstrated how to use the
model to generate natural-languages terms for simple spatial configurations. Based on a calibration
of the 9-intersection with splitting ratios and closeness ratios, using human-subjects experiments
for sixty-four English-language terms (Shariff 1996), we showed how a geometric configuration with a
linear and an areal object can be analyzed to determine the pertinent features of their spatial
relations. Values obtained from this method lead to the selection of appropriate natural-language
spatial terms for such spatial scenes.
While the splitting and closeness ratios as refinements of topology cover much of the critical
properties of the spatial relations, there are other parameters left that may make additional
contributions to better choices of natural-language terms. Further investigations-both
formalizations and human-subject tests-are necessary to develop a comprehensive and robust set
of definitions of the semantics of natural-languages spatial relations. Some of these considerations
were outlined in a larger-scale research plan (Mark et al., 1995). The most obvious aspect to study
is the influence of the meanings of the objects on the choice of the spatial terms. Whether the
objects of concern are roads and parks vs. hurricanes and islands may lead to different mappings
from topology and metric refinements onto the same spatial terms. With respect to geometry, the
current model abstracts away all influences of orientation. This is a valid approach for modeling all
those concepts and terms that are independent of orientation (such as those based primarily on containment, neighborhood, and closeness); however, orientation is another parameter that may be
critical for those relations expressing information about direction. For example, orientation may be
important to distinguish north from south (or above from underneath). Orientations are invariant
under translations and scaling, but they may change under rotation. The orientation of the objects
can be assessed in several different ways: (1) the global cardinal relation between two objects, i.e.,
a relation with respect to a fixed orientation framework; (2) the orientation of an individual object,
i.e., the cardinal relation between the object's major axis and a global reference frame; and (3) a
local relation, i.e., the cardinal direction with respect to the framework established by one of the
two objects' orientations. Similar to the metric properties, one could consider purely quantitative
measures, e.g., in the form of degrees. Since people usually do not make such a fine distinction,
coarser, qualitative models are necessary to formalize the properties of the three orientation
concepts.
--R
Similarity of Spatial Scenes.
Cognitive Distance in Intraurban Space.
Problems in Cognitive Distance: Implications for Cognitive Mapping.
Symbolic Projection for Image Information Retrieval and Spatial Reasoning.
A Calculus of Individuals Based on
Representing and Acquiring Geographic Knowledge.
On the Equivalence of Topological Relations.
Categorizing Binary Topological Relationships Between Regions
Modeling Conceptual Neighbourhoods of Topological Line-Region Relations
Naive Geography.
A Critical Comparison of the 4-Intersection and 9-Intersection Models for Spatial Relations: Formal Analysis
Qualitative Spatial Reasoning about Distances and Directions in Geographic Space.
Using Orientation Information for Qualitative Spatial Reasoning.
Language and Spatial Cognition-An Interdisciplinary Study of the Prepositions in English
VIsual TRAnslator: Linking Perceptions and Natural Language Descriptions.
On the Robustness of Qualitative Distance- and Direction Reasoning
A System for Translating Locative Prepositions from English into French.
Modeling Spatial Knowledge.
Fundamentals of Spatial Information Systems.
The Image of a City.
Evaluating and Refining Computational Models of Spatial Relations Through Cross-Linguistic Human-Subject Testing
Calibrating the Meanings of Spatial Predicates from Natural Language: Line-Region Relations
Modeling Spatial Relations Between Lines and Regions: Combining Formal Mathematical Models and Human Subjects Testing.
Research Initiative 13 Report on the Specialist Meeting: User Interfaces for Geographic Information Systems.
Interaction with Geographic Information: A Commentary.
Determining Spatial Relations Between Lines and Regions in Arc/Info Using the 9-Intersection Model
Mental Representations of Spatial and Nonspatial Relations.
Human Factors in GeographicalInformation Systems.
The Measurement of Cognitive Distance: Methods and Construct Validity.
The Geometry of Environmental Knowledge.
Cognitive Aspects of Human-Computer Interaction for Geographic Information Systems: An Introduction
An Algorithm to Determine the Directional Relationship Between Arbitrarily-Shaped Polygons in the Plane
The Child's Conception of Space.
A Spatial Logic Based on Regions and Connection.
A Model of the Human Capacity for Categorizing Spatial Relations.
On the Internal Structure of Perceptual and Semantic Categories.
Principles of Categorization.
Natural Language Spatial Relations: Metric Refinements of Topological Properties.
Parts: A Study in Ontology.
Algebraic Topology.
How Language Structures Space.
--TR
Representing and acquiring geographic knowledge
An algorithm to determine the directional relationship between arbitrarily-shaped polygons in the plane
Qualitative Representation of Spatial Knowledge
Language and Spatial Cognition
Cognitive Aspects of Human-Computer Interaction for Geographic Information Systems
Human Factors in Geographical Information Systems
The Geometry of Environmental Knowledge
Using Orientation Information for Qualitative Spatial Reasoning
Natural-language spatial relations
--CTR
Josef Benedikt , Sebastian Reinberg , Leopold Riedl, A GIS application to enhance cell-based information modeling, Information SciencesInformatics and Computer Science: An International Journal, v.142 n.1, p.151-160, May 2002
Tiago M. Delboni , Karla A. V. Borges , Alberto H. F. Laender, Geographic web search based on positioning expressions, Proceedings of the 2005 workshop on Geographic information retrieval, November 04-04, 2005, Bremen, Germany
Haowen Yan , Yandong Chu , Zhilin Li , Renzhong Guo, A Quantitative Description Model for Direction Relations Based on Direction Groups, Geoinformatica, v.10 n.2, p.177-196, June 2006
Salvatore Rinzivillo , Franco Turini, Knowledge discovery from spatial transactions, Journal of Intelligent Information Systems, v.28 n.1, p.1-22, February 2007
Guoray Cai, Contextualization of Geospatial Database Semantics for Human---GIS Interaction, Geoinformatica, v.11 n.2, p.217-237, June 2007 | geographic information systems;topological relations;spatial relations;metric refinements;GIS |
291275 | Performance Analysis of Stochastic Timed Petri Nets Using Linear Programming Approach. | Stochastic timed Petri nets are a useful tool in performance analysis of concurrent systems such as parallel computers, communication networks, and flexible manufacturing systems. In general, performance measures of stochastic timed Petri nets are difficult to obtain for practical problems due to their sizes. In this paper, we provide a method to compute efficiently upper and lower bounds for the throughputs and mean token numbers for a large class of stochastic timed Petri nets. Our approach is based on uniformization technique and linear programming. |
Stochastic Timed Petri Nets (STPN) are Petri nets where transitions have firing delays.
Since the last decade, they have been receiving increasing interest in the modeling and
performance analysis of discrete event systems. Such a tool is particularly useful for modeling
systems which exhibit concurrent, asynchronous or nondeterministic behaviors, such
as parallel and distributed systems, communication networks and flexible manufacturing
systems. The reader is referred to the extensive survey of [36] on theoretical analyses
and applications of Petri nets. Applications to the performance evaluation of parallel and
distributed machines (hardware components) and parallel and distributed computations
(software components) can also be found in [3] and the special issue of J. of Parallel and
Distributed Computing (Vol. 15, No. 3, July 1992).
Most literature of STPN is on Stochastic Petri Nets (SPN) [29, 35], where transition firing times are mutually independent exponentially distributed random variables, and their extensions: Generalized Stochastic Petri Nets (GSPN) [2], where immediate transitions (i.e. those without firing delay) are allowed, and Extended Stochastic Petri Nets (ESPN) [28], where transitions are allowed to generate random numbers of tokens upon firings. Numerical analysis of such nets is based on the analysis of the embedded Markov
chains. Decomposition techniques are proposed, see e.g. [19, 34] and references therein.
Analytical solutions exist in product-form for equilibrium distributions for special cases of
SPN, see [15] and references therein.
There also exist analyses of stochastic timed Petri nets without Markovian assump-
tions. Most of them provide performance bounds, see [10, 11, 17, 18, 25]. Others analyze
stability conditions [4, 9]. The reader is referred to [5] for a survey on recent results on
quantitative analysis of STPN, including approximations and simulations.
Although there exist various quantitative analysis techniques and some software tools
(e.g. GreatSPN [23] and SPNP [27]) for STPN, the applications of STPN are most often
limited to small size problems. This is mostly due to the time and space complexity of
numerical analysis algorithms and of simulations.
In this paper, we provide a new method to compute efficiently upper and lower bounds for linear functions of the throughputs and mean token numbers in general Markovian Petri nets. Our approach is based on uniformization technique and linear programming. The STPN models under consideration are closely related to GSPN models defined in [24], with in addition the possibility of randomly generating tokens upon transition firings.
Uniformization technique is one of the most useful techniques for analyzing continuous
time Markov chains [31]. In [32], such a technique was used to establish linear equality
constraints among the expectation of state variables in queueing networks. This allowed
the authors to bound the performance measures, both above and below, by solving a linear
program. Similar approaches were taken to determine lower bounds on achievable performance
of control policies in multiclass queueing networks [13], optimal control policies for
Klimov's problem [14], and stability regions of queueing networks and scheduling policies
[33]. In these studies, linear or nonlinear programming were used to obtain bounds.
The method of linear programming has already been used in operational analysis for
deriving bounds in non-Markovian STPN [17, 18, 25]. Since no statistical assumptions are
made on the distributions of firing times, such bounds are usually loose. Several techniques
were proposed for the improvement of such bounds in special cases of Petri nets [20, 21].
In our work, we consider Markovian STPN. We show that, like in [32, 13], the Markovian
assumption allows us to establish a set of linear equality constraints among the
expectation of state variables in the Petri nets, such as token numbers in the places and
indicator functions of whether transitions are enabled. More precisely, we analyze the
evolution of state variables in steady state and write out evolution equations using the
uniformization technique. Taking the quadratic forms of these equations allows us to establish
the linear constraints. Exploiting further structural and probabilistic properties of
the Petri nets, we obtain an augmenting set of linear equalities and inequalities, some of
which are similar to those in [25]. Upper and lower bounds of performance measures are
then obtained by solving the linear program.
The paper is organized as follows. In Section 2, we define the STPN models under
consideration as well as the notation. In Section 3, we derive the linear equalities based on
the uniformization technique. In Section 4, we establish other linear constraints based on
the behavioral properties and probabilistic laws. In Section 5, we provide the summary of
the linear programming formulation. In Section 6, we present applications of our technique.
Finally, in Section 7, we conclude with remarks on the extensions of our results.
Notation
A Petri net can be viewed as a directed graph N = (P ∪ T, E), where the set of vertices is the union of the set of places P and the set of transitions T. The set of arcs E is composed of two subsets E' and E''. The arcs of E' are either of the form (p, t) or of the form (t, p) with p ∈ P and t ∈ T. We shall denote by
  •p = { t ∈ T : (t, p) ∈ E' } the set of transitions that precede place p,
  p• = { t ∈ T : (p, t) ∈ E' } the set of transitions that follow place p,
  •t = { p ∈ P : (p, t) ∈ E' } the set of places that precede transition t,
  t• = { p ∈ P : (t, p) ∈ E' } the set of places that follow transition t.
The arcs of E'' are inhibitor arcs connecting places to transitions. For any t ∈ T, let °t be the set of places from which there is an inhibitor arc to t, and for any p ∈ P, let p° be the set of transitions to which there is an inhibitor arc from p. Denote by η_{p,t} the weight of the inhibitor arc from place p to transition t, p ∈ °t.
The net N is strongly connected if there is a path from any place/transition to any place/transition.
For all p ∈ P, define the following set of transitions:
Tokens circulate in the Petri net. This circulation takes place when transitions are fired. When transition t ∈ T is fired, μ_{p,t} tokens are consumed at each place p ∈ •t, and σ_{t,p} tokens are created at each place p ∈ t•. Variables μ_{p,t} and σ_{t,p} are considered as the weights of the arcs of E'.
An example of the Petri net is illustrated in Figure 1. It contains 7 places, p1, ..., p7. Some of its transitions are immediate transitions. Places p1 and p6 have initial marking 1, whereas the others have initial marking 0. There are two inhibitor arcs, represented by arcs ended with a circle.
When the weights of the arcs are upper bounded by 1, N is called an ordinary net, as opposed to a weighted net.
In this paper, we will consider a more general case where the numbers of tokens created by firing completions are random variables. When transition t ∈ T is fired for the n-th time, σ_{t,p}(n) tokens are created at each place p ∈ t•. For all t ∈ T and p ∈ t•, {σ_{t,p}(n)}_n is assumed to be a sequence of independent and identically distributed (i.i.d.) random variables. The sequences of random variables {σ_{t1,p}(n)}_n and {σ_{t2,p}(n)}_n are, however, in general dependent for t1 ≠ t2. Let σ̄_{t,p} be the expectation of σ_{t,p}(n).
For all t ∈ T, σ_{t,p1}(n) and σ_{t,p2}(n) can be dependent if p1 ≠ p2. For example, this is the case when transition t creates one token in exactly one of its output places after each
Figure 1: An example of Petri net.
firing. Two cases will be considered: independent token generation and selective token generation. In the case of independent token generation, we assume that for any t ∈ T, the sequences of random variables {σ_{t,p}(n)}_n, p ∈ t•, are (statistically) independent. In the case of selective token generation, however, the sequences of random variables {σ_{t,p}(n)}_n, p ∈ t•, are dependent in such a way that for all n, at most one of the output places has tokens created, so that σ_{t,p1}(n) σ_{t,p2}(n) = 0 for any p1 ≠ p2. A special case of selective token generation is the routing mechanism where a token is generated at one and only one of the output places after each firing (see below the discussions on immediate transitions).
There are two special classes of ordinary Petri nets, referred to as state machines and marked graphs. A state machine is an ordinary Petri net without inhibitor arcs such that for each transition t, both •t and t• are singletons. A marked graph is an ordinary Petri net without inhibitor arcs such that for each place p, both •p and p• are singletons.
Firings of transitions are timed, i.e., each firing takes a certain amount of time before completion. The token consumptions in places of •t and token creations in places of t• occur simultaneously at the end of a firing of transition t. Throughout the paper we will assume that all the firing times are independent random variables. The firing times of transition t ∈ T are i.i.d. random variables of exponential distribution with parameter λ_t.
In the GSPN framework, Petri nets can have immediate transitions, i.e. transitions whose firing times are zero. In this case, immediate transitions have higher firing priorities, see [24]. Using algorithms of [26], these immediate transitions can be eliminated without changing the performance behavior of the net.
Of particular interest are immediate transitions which play roles of synchronization and/or routing. More precisely, in this case, we assume that for any immediate transition t, t is the only output transition of all its input places, i.e. p• = {t} for all p ∈ •t. Further,
we assume that for any immediate transition t, t, and
ffl either oe t;p 0
ffl or oe t;p
with a harmless abuse of notation, the index ffl p denotes the
unique transition preceding place p.
In the Appendix, we present a direct transformation technique which removes this kind of immediate transitions without changing the firing behavior of the other transitions.
Thus, we will assume throughout this paper that the Petri net N has no immediate transition, so that all parameters λ_t are finite.
A transition t is enabled to fire when there are at least μ_{p,t} tokens at each place p ∈ •t and there are at most η_{p,t} − 1 tokens at each place p ∈ °t. We adopt the single-server semantics for the transitions. A firing can start only if the transition is enabled and the previous firing has completed. It is assumed that firings are started as soon as possible. The case of infinite-server semantics will be discussed in Section 7.
A firing of transition t is preempted when the transition is disabled (i.e. at least one place p ∈ •t has strictly less than μ_{p,t} tokens, or at least one place p ∈ °t has more than or equal to η_{p,t} tokens) before the firing time expires. The firing is resumed as soon as the transition becomes enabled. The disabling of a transition is due both to competitions with other transitions having common input places (some tokens in these places can be consumed by other transitions during the firing of the transition), and to token arrivals in input places of inhibitor arcs. The firing mechanism described here is called (cf. [1]) race policy with age memory. Note that for the case of exponential distributions of firing times, the race policies with or without age memory have stochastically the same performance behavior due to the memorylessness property of exponential distributions. However, in Section 7, when we consider the case where firing times have general distributions, the race policy under consideration will be that with age memory.
The state of the system is characterized by the marking X(τ) = (X_p(τ), p ∈ P), where X_p(τ) is the number of tokens in place p at time τ. The process X(τ) is assumed to be left-continuous so that X_p(τ) is the number of tokens in place p just before time τ. The initial marking is the marking at time 0.
The Markovian Petri net described above will be denoted by
Throughout this paper we will assume that the Petri net is live. Moreover, we assume that the net is stable in the sense that X(τ) converges to a stationary variable X (of dimension |P|) when τ goes to infinity. Moreover, we assume that the first and second moments of X are finite, i.e. E[X_p] < ∞ and E[X_p^2] < ∞ for all p ∈ P. Under these assumptions it is easy to see (using Hölder's inequality) that E[X_{p1} X_{p2}] < ∞ for all p1, p2 ∈ P.
Let e_t(τ) be the indicator function of whether transition t is enabled at time τ (or, more precisely, just before time τ). Let e_t be the stationary version of e_t(τ), and q_t = E[e_t]. Denote by x_p = E[X_p] the mean number of tokens in place p ∈ P, and y_{p,t} = E[X_p e_t]. The corresponding vectors are denoted by x, y and q. Denote by θ_t the (asymptotic) throughput of transition t ∈ T, i.e. the number of completed firings of transition t per unit of time, and by θ the corresponding vector.
In the sequel, we provide a method of computing upper and lower bounds of L(x, y, q, θ) for any arbitrarily fixed linear function L. Our approach is based on linear programming. The upper (resp. lower) bound is obtained by maximizing (resp. minimizing) the objective function L under linear constraints.
3 Uniformization and Linear Equalities
We will use the uniformization technique to derive linear equalities between the variables x, y, q and θ. We will consider the Petri net N where each transition t ∈ T is continuously firing with i.i.d. exponentially distributed firing times of parameter λ_t. When a firing is completed at transition t ∈ T, there are two possibilities. If t is enabled, then tokens are consumed in places •t and are created in places t•. Otherwise, if t is disabled when the firing is completed, nothing happens, and this firing completion corresponds to a fictive firing completion.
Let {τ_n} be the sequence of time epochs of, real or fictive, firing completions in N. It is clear that {τ_n} is distributed according to a Poisson process with parameter Λ = Σ_{t∈T} λ_t. Let F_{τ_n} denote the σ-field generated by the events up to time τ_n.
Let A_t(n) be the indicator function such that A_t(n) = 1 if and only if the n-th, real or fictive, firing completion occurs at transition t ∈ T. Clearly, for any t ∈ T, {A_t(n)} is a sequence of i.i.d. random variables, independent of e_t(τ_n), such that P(A_t(n) = 1) = λ_t/Λ.
Since for any fixed t ∈ T, the random variables (σ_{t,p}(n), p ∈ t•) are i.i.d. in n, we can assume with no loss of generality that the numbers of tokens created in places t• at time τ_n are σ_{t,p}(n), provided t is enabled at time τ_n.
We assume without loss of generality that the system is in steady state so that, owing to the PASTA (Poisson arrivals see time averages) property (cf. e.g. [6]), (X(τ_n), e(τ_n)) has the same law as (X, e).
The throughput of transition t ∈ T can be computed as follows. In the system, transitions are fired, either really or fictively, at the rate of Λ. At each firing completion epoch τ_n, the firing occurs at transition t ∈ T with probability λ_t/Λ. Therefore, (real or fictive) firing completions occur at transition t at the rate of λ_t. Since these firing completions are independent of e_t, we have
θ_t = λ_t q_t,   t ∈ T.   (1)
The following evolution equation, which relates X_p(τ_{n+1}) to X_p(τ_n), the indicators A_t(n) and e_t(τ_n), and the token counts σ_{t,p}(n) and μ_{p,t}, is essential; it holds for all p ∈ P:   (2)
Taking the conditional expectation with respect to F_{τ_n} yields (3). In the steady state, E[X_p(τ_{n+1})] = E[X_p(τ_n)], so that by taking expectation in (3), we obtain the following flow balance equalities: for all p ∈ P,
Σ_{t∈•p} λ_t σ̄_{t,p} q_t = Σ_{t∈p•} λ_t μ_{p,t} q_t.   (4)
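As a small illustration (ours, not taken from the paper), the flow balance equalities (4) translate directly into one LP row per place; the data structures pre, post and lam below are hypothetical encodings of μ, σ̄ and the firing rates.

# Build one flow-balance row per place: sum_t lam[t]*(post[p][t] - pre[p][t]) * q_t = 0.
def flow_balance_rows(places, transitions, pre, post, lam):
    rows = []
    for p in places:
        coeff = {t: lam[t] * (post[p].get(t, 0.0) - pre[p].get(t, 0.0)) for t in transitions}
        rows.append(coeff)                           # interpreted as sum_t coeff[t]*q_t = 0
    return rows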
Calculating the second moments from (2) yields (5). In the steady state, E[X_p^2(τ_{n+1})] = E[X_p^2(τ_n)] and E[e_t(τ_n) X_p(τ_n)] = y_{p,t}. Hence, by taking expectation in (5), we obtain the following second moment condition (6).
More generally, for any p1, p2 ∈ P we compute the expectation of the product of the numbers of tokens from (2). Assume first that the token generations of all transitions t ∈ T are statistically independent, i.e., the random variables σ_{t,p}(n), p ∈ t•, are independent. Then:
After some simple algebra, we obtain
In the steady state,
Thus, by taking expectation in (7) we obtain the following population covariance condition:
Note that when p1 = p2, relations (6) and (8) are identical.
Assume now that the token generations of some transitions are selective. Let T' ⊆ T be the subset of transitions which have selective token generations. Then, for any t ∈ T' and any p1 ≠ p2, σ_{t,p1}(n) σ_{t,p2}(n) = 0. Therefore, by a similar computation we obtain (9).
Observe that equality (8) can be considered as a special case of (9). Indeed, if T' = ∅, then both equalities coincide.
4 Other Constraints
In this section, we derive other linear constraints on the variables x, y and q. Except for (24), the linear constraints established in this section require no Markovian assumption and hold for general stochastic Petri nets.
4.1 Behavioral Properties
Liveness. Since we assume that the net is live, we have that for any τ, at least one transition is enabled, so that Σ_{t∈T} e_t(τ) ≥ 1. (10)
As a consequence, by taking expectations, Σ_{t∈T} q_t ≥ 1. (11)
Conflicting transitions. For all t ∈ T, set by convention μ_{p,t} = 0 for p ∉ •t and η_{p,t} = ∞ for p ∉ °t. For any pair of transitions t1, t2 such that t2 requires at least as many tokens as t1 in every input place (and tolerates at most as many tokens in every inhibitor place), t2 is enabled only if t1 is enabled, so that e_{t2}(τ) ≤ e_{t1}(τ) for all τ. Hence, q_{t2} ≤ q_{t1}. (12)
If the two transitions are in equal conflict, i.e. μ(t1) = μ(t2) and η(t1) = η(t2), then the above relation implies that q_{t1} = q_{t2}.
Boundedness. For all p ∈ P, let b_p ≥ 0 and B_p ≤ ∞ be the minimum and maximum numbers, respectively, of tokens in place p. Then, trivially, b_p ≤ x_p ≤ B_p. (13)
As a consequence, for any place p ∈ P such that B_p < ∞, we obtain (14).
The bounds (13) can be extended to a set of places S ⊆ P. Let b_S ≥ 0 and B_S ≤ ∞ be the minimum and maximum of the total numbers of tokens in places of S. Then, trivially, b_S ≤ Σ_{p∈S} x_p ≤ B_S. (15)
Cycle population conservation. A special case of (15) is when the subset of places consists of a cycle, i.e., there is a set of places and transitions forming a directed circuit in which each place of the subset has exactly one input and one output transition belonging to the circuit. Since the net is live and stable, the sum of tokens in these places is constant. Denote by C any cycle in N and by N_C the (constant) token population of C. It then follows that Σ_{p∈C} x_p = N_C.
Reachable markings. Let C = (C_{p,t}) be the incidence matrix of the net, with C_{p,t} = σ̄_{t,p} − μ_{p,t} for (p, t) ∈ P × T. It is well-known (see e.g. [36]) that any reachable marking X from the initial marking M can be written as
X^T = M^T + C H,   (19)
where the superscript T denotes the transpose operator, and the (column) vector H corresponds to the firing sequence to reach X (or, more precisely, the vector of numbers of firings of each transition in order to reach X). Let X in (19) be the random variable of the marking in the stationary regime. Then, by taking expectation in (19) we obtain
x^T = M^T + C u,   (20)
where u = (u_t, t ∈ T) are newly introduced unknown variables. Rewriting (20) in scalar form yields
x_p = M_p + Σ_{t∈•p} σ̄_{t,p} u_t − Σ_{t∈p•} μ_{p,t} u_t,   p ∈ P.   (21)
4.2 Constraints Derived from Probability Theory
Sample path comparisons. Since for any t ∈ T, e_t ≤ 1 almost surely (a.s.), the enabling rate is bounded by one: q_t ≤ 1. (22)
For the same reason, we have X_p e_t ≤ X_p a.s., so that y_{p,t} ≤ x_p. (23)
Another consequence is
A t
so that
According to the relation
so that, by taking the expectation, we obtain
Probabilistic inequalities. According to Chernoff's inequality, we get for all n ≥ 1, (26), where ∧ denotes the "min" operation.
For bounded places
Therefore,
or, equivalently,
Consider any transition t ∈ T such that all of its incoming places are bounded, i.e., B_p < ∞ for all p ∈ •t. Using the fact that
we obtain that
where the last inequality comes from relations (27, 29). Hence, we obtain an enabling lower bound.
Applying again Chernoff's inequality to (30) yields
where the last inequality comes from relations (27, 29). Thus, we obtain an enabling upper bound.
Note that in (32), the "min" operator is nonlinear. However, linear inequalities can be generated by taking either operand of any of the "min" operators.
Consider now an arbitrary bounded place p with bound B p . Then for any t 2 T ,
Thus,
Similarly,
where the last inequality comes from (28). Thus,
Single entry transitions. For all
so that
Little's Law. According to Little's law (see e.g. [40]), for all p ∈ P, x_p = r_p R_p, where r_p is the input rate of tokens at place p, and R_p is the mean token sojourn time at place p. Since R_p is lower bounded by the minimum of the firing times of the output transitions of p, we obtain
or, equivalently,
4.3 Subnet Throughputs
Like in [21], we derive bounds on throughputs of transitions by comparing throughputs
of N with those in the subnets (when they are considered in isolation) of N . We will
consider in particular two special classes of subnets: strongly connected state machines
(SCSM) and strongly connected marked graphs (SCMG).
Let N = (P ∪ T, E) be an arbitrary Petri net, and N' = (P' ∪ T', E') a subnet of N, where P' ⊆ P, T' ⊆ T, and E' is the restriction of E to {P' ∪ T'} × {P' ∪ T'}. Assume that the transitions of T' (resp. arcs of E', places of P') have the same sequences of firing times (resp. weights, initial markings) in both nets. Assume further that none of the places of P' is connected with transitions of T \ T' in the original net N by non-inhibitor arcs, i.e. in N there is no non-inhibitor arc between a place of P' and a transition of T \ T'.
Let θ'_t denote the throughput of transition t ∈ T' when the subnet N' is considered in isolation. The following theorems show that, under some conditions, the throughputs of these transitions in N' are upper bounds of the throughputs of the same transitions in the original net.
Theorem 1. If N' is a strongly connected marked graph, then for any transition t in N', θ_t ≤ θ'_t.
Proof. Due to the fact that in the original net N none of the places of P' is connected with transitions of T \ T' by non-inhibitor arcs, the subnet is connected with the rest of
the system only through transitions of T'. As N' is a strongly connected marked graph, no transitions in N' are in conflict. Moreover, in N, the firing mechanism is race policy with age memory. Thus, for any transition t ∈ T', the only effect that tokens in places of P \ P' have is delaying the firings of the transitions of T'. Thus, by the monotonicity property of marked graphs [11], we conclude that θ_t ≤ θ'_t for all t ∈ T'.
Theorem 2. Assume that N' is a strongly connected state machine such that, for any two transitions t1 and t2 of N', if t1 and t2 are in conflict in N', then t1 and t2 are in equal conflict in N, i.e. •t1 = •t2. Then for any transition t in N', θ_t ≤ θ'_t.
Proof. The proof is similar to that of Theorem 1. Note first that the subnet N' is connected with the rest of the system only through transitions of T' or inhibitor arcs. Under the assumption of the theorem, any two transitions which are in (equal) conflict in N' are in equal conflict in N. Moreover, in N, the firing mechanism is race policy with age memory. Thus, in N, tokens in the places of ∪_{t∈T'} °t do not change the winners of the firing races among transitions of T'. In other words, the only effect that tokens in places outside the subnet have is delaying the firings of transitions of T'. Thus, by the monotonicity property of state machines [8], we conclude that θ_t ≤ θ'_t for all t ∈ T'.
Note that the above two theorems hold for any arbitrarily fixed sequences of firing times. No Markovian assumption is needed.
Efficient computational algorithms for computing the throughput of state machines have been proposed in the queueing literature, see e.g. [16, 22, 38, 39].
The computation of the throughput of SCMGs has been investigated in [11], where various computable upper bounds were proposed. The exact value of the throughput can be obtained by simulation using matrix multiplications in the (max, +) algebra, see [7].
5 Summary of the Linear Programming Formulation
Theorem 3. Let N be an arbitrary Markovian timed Petri net, and L an arbitrary linear function defined on the nonnegative state variables x, y, q, θ of the net. Let α and β be the optimal values of the linear programming problems max L(x, y, q, θ) and min L(x, y, q, θ), respectively, subject to the linear constraints of Table 1, where u = (u_t, t ∈ T) are additional nonnegative variables. Then β ≤ L(x, y, q, θ) ≤ α.
Recall that in Table 1, inequalities containing the "min" operator are nonlinear; linear inequalities are generated by taking either operand of any of the "min" operators.
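To make the formulation concrete, the following sketch (ours, not the authors' tool) bounds the throughput of a toy two-transition cycle with one token and exponential rates lam1, lam2, using a handful of the Table 1 constraints (flow balance, cycle population, enabling bounds for 1-bounded places, liveness, and 0 ≤ q_t ≤ 1) together with SciPy's linear-programming solver; the variable order is [q1, q2, x1, x2].

# LP bounds on the throughput theta_1 = lam1*q1 of a two-transition cycle (one token).
import numpy as np
from scipy.optimize import linprog

lam1, lam2 = 1.0, 1.0

# Equalities: flow balance lam1*q1 - lam2*q2 = 0; cycle population x1 + x2 = 1.
A_eq = np.array([[lam1, -lam2, 0.0, 0.0],
                 [0.0, 0.0, 1.0, 1.0]])
b_eq = np.array([0.0, 1.0])

# Inequalities (<=): enabling bounds q1 <= x1, q2 <= x2; liveness q1 + q2 >= 1.
A_ub = np.array([[1.0, 0.0, -1.0, 0.0],
                 [0.0, 1.0, 0.0, -1.0],
                 [-1.0, -1.0, 0.0, 0.0]])
b_ub = np.array([0.0, 0.0, -1.0])

bounds = [(0, 1), (0, 1), (0, None), (0, None)]      # 0 <= q_t <= 1, x_p >= 0
c = np.array([lam1, 0.0, 0.0, 0.0])                  # objective: throughput lam1*q1

lower = linprog(c, A_ub, b_ub, A_eq, b_eq, bounds).fun
upper = -linprog(-c, A_ub, b_ub, A_eq, b_eq, bounds).fun
print(lower, upper)                                  # both equal 0.5 for this symmetric net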
6 Applications
In this section we illustrate applications of the above techniques to performance analyses. We shall consider two applications, one in manufacturing systems, another in parallel
computing. Unless otherwise stated, the numerical results are obtained without linear inequalities
pertaining to boundedness of subsets of places and subnet throughputs.
6.1 Production Line
The first example is concerned with a production line with infinite supply, see Figure 2-(a). In the example, there are four servers, represented by circles. The first server has an infinite-capacity buffer with an infinite number of production requirements, represented by small dashed circles. The other three servers have finite-capacity buffers: 3, 2 and 4. A server starts a service only when the downstream buffer has at least one empty room. This corresponds to the so-called blocking before service.
The corresponding Petri net model is depicted in Figure 2-(b), where transitions represent the servers and the initial markings of the places on the bottom represent the buffer capacities.
We assume that the service times at server i are i.i.d. exponentially distributed with
parameter 4. The objective function in this problem is the total throughput
. The numerical results are presented and compared with the exact values.
In the experimentation, we have carried out computations for five sets of parameters λ_i.
The lower and upper bounds are given in the columns "l.b.", "u.b.1" and "u.b.2",
whereas the exact values are provided in the column "exact". The upper bounds in column
"u.b.1" are obtained by further using subnet throughput constraints. In columns "o.l.b."
and "o.u.b.", we also present the bounds computed by the linear programming approach based
on the linear constraints of Section 4 without the Markovian assumption (which implies in
particular that the linear equalities of Section 3 are not used).
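As a cross-check of the bounds reported in Table 2, the throughput of the production line can also be estimated by simulation. The sketch below uses a departure-time recursion for a tandem line with blocking before service; the service rates are placeholders (not the λ_i of the experiments), and we assume that each finite buffer capacity counts the job in service and that a downstream space is reserved when a service starts.

```python
import numpy as np

rng = np.random.default_rng(1)
mu  = [2.0, 1.0, 1.5, 1.0]   # placeholder service rates for servers 1..4
K   = [None, 3, 2, 4]        # buffer capacities at servers 2..4 (assumed to
                             # include the job in service)
N   = 200_000                # number of jobs pushed through the line

# Blocking before service: server i may start job n only when server i is free,
# job n has left the upstream server, and job n-K[i+1] has already left the
# downstream server (so that a downstream space can be reserved).
D = np.zeros((4, N + 5))     # D[i, n]: departure epoch of job n from server i
for n in range(1, N + 1):
    for i in range(4):
        upstream   = D[i - 1, n] if i > 0 else 0.0          # infinite supply at server 1
        own        = D[i, n - 1]
        downstream = D[i + 1, n - K[i + 1]] if i < 3 and n > K[i + 1] else 0.0
        start      = max(upstream, own, downstream)
        D[i, n]    = start + rng.exponential(1.0 / mu[i])

print("simulated throughput:", N / D[3, N])
```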
Table 1: Summary of Linear Constraints. The table collects the linear constraints used in
the linear programs, grouped as: flow balance, second moment, population, covariance,
liveness, conflicting transitions, boundedness, cycle population, reachable marking,
sample path comparisons (y_{p,t} ≤ x_p for every input place p of t), and enabling bound.
Figure 2: (a) Example of the production line (servers 1-4). (b) The corresponding Petri net model.
Table 2: Bounds on the throughput of the production line, for the five parameter sets
(columns "case", "l.b.", "exact", "u.b.1", "u.b.2", "o.l.b." and "o.u.b.").
Recall that the Petri net is a marked graph, so that according to [11] the throughput
is increasing in the firing rates of the transitions. Such a fact is clearly shown in the column
"exact" for cases 1, 2, 3 and 4. It is worthwhile noticing that the lower and upper bounds
in the columns "l.b.", "u.b.1" and "u.b.2" also reflect such monotonicity.
6.2 Cyclic Execution
Consider now the performance analysis of a parallel computing system. Parallel programs are
represented by directed acyclic graphs, referred to as task graphs, where vertices correspond
to tasks of a parallel program, and directed edges correspond to precedence relations
between tasks: a task can start execution only when all its predecessors have completed
execution. The tasks are assigned to the parallel processors for execution according to
some predefined rules.
In our example, parallel programs have the same structure, given by the task graph in
Figure 3-(a). These programs differ only in the running times of tasks, which are independently
and exponentially distributed random variables, with a (possibly different) parameter for each
of the six tasks. There are three identical processors. Tasks 1 and 2 are assigned to
processor 1, tasks 3 and 4 to processor 2, and tasks 5 and 6 to processor 3.
Figure 3: (a) Task graph of the parallel programs. (b) Cyclic execution of the parallel program.
On each processor, different instantiations of the same task are executed according to
the rule first come first serve (FCFS), i.e., task i of the n-th arrived program can start
execution only after task i of program n - 1 completes. Different tasks assigned to the
same processor are, however, executed according to the processor sharing (PS) discipline.
In our example, since only two different tasks are assigned to each processor, the processor
is shared by at most two tasks. A parallel program is considered completed if all its tasks
finish their execution.
We consider cyclic execution of the task graph, cf. Figure 3-(b). The cyclic execution
is defined in such a way that task t1 (resp. task t2) of program n + h can start execution
only after task t5 (resp. t6) of program n completes execution, n = 1, 2, .... The number
h is referred to as the height of the cyclic execution in the literature [30].
The representation of this parallel computing system by an STPN is illustrated in Figure 4.
The initial marking of place p1 (the same for place p2) corresponds to the height h.
Figure 4: Petri net representation of the cyclic execution.
Inhibitor arcs are used to model the PS mechanism. Transitions t11 and t12 (resp. t21
and t22, t31, t32 and t33, t41 and t42, t51 and t52, t61, t62 and t63) represent
task 1 (resp. tasks 2, 3, 4, 5, 6). The firing times of the transitions associated with task i
are exponentially distributed with the parameter of task i, i = 1, ..., 6. Two or three
transitions are used for each task in order to represent the situations where the execution
of a task is or is not shared with another one. Note that transitions t32 and t33 (resp. t62 and t63)
are never enabled simultaneously. The use of two additional transitions for task 3 (resp.
task 6) is due to the fact that in each program, task 4 (resp. task 5) is allowed to start
execution only when both tasks 1 and 2 (resp. tasks 3 and 4) have completed execution.
Thus, the execution of task 3 (resp. task 6) is not shared with task 4 (resp. task 5) if only
task 1 or only task 2 (resp. only task 3 or only task 4) has completed execution.
The objective function in this problem is still the total throughput, i.e. the sum of
transition throughputs. It is easy to see that this total throughput is equal to six times
the throughput of the parallel system in terms of the number of programs completed per
unit of time.
In Table 3, we provide numerical results for five different sets of parameters with fixed
height h = 4. The lower and upper bounds are given in the columns "l.b." and "u.b.",
whereas in columns "o.l.b." and "o.u.b." we also present the bounds computed by the linear
programming approach based on the linear constraints of Section 4 without the Markovian
assumption (which implies in particular that the linear equalities of Section 3 are not used).
Table 3: Bounds on the throughput of the cyclic execution, for the five parameter sets
(columns "case", "l.b.", "u.b.", "o.l.b." and "o.u.b.").
In Figures 5 and 6, we provide the curves of the bounds as functions of the height
of the cyclic execution in Cases 1 and 2.
Figure 5: Bounds as functions of the height of the cyclic execution for Case 1.
Figure 6: Bounds as functions of the height of the cyclic execution for Case 2.
7 Conclusions and Extensions
In this paper, we have established performance bounds for Markovian STPNs by taking a
linear programming approach. We first provided a set of linear equality constraints among
the expectations of state variables of the Petri nets, such as token numbers in the places
and indicator functions of transition enabling. We further obtained an augmenting set of
linear equalities and inequalities by exploiting structural and probabilistic properties of
the Petri nets. These linear constraints allowed us to compute upper and lower bounds
of performance measures by solving linear programs. We have applied this method to the
performance analysis of a manufacturing system and of a parallel system.
The constraints derived in Section 4 are not restricted to exponentially distributed
firing times. These inequalities can also be used in the operational analysis of timed Petri
nets.
In Theorems 1 and 2, we compared throughputs of transitions in a net and those in a
SCMG or a SCSM subnet (when it is considered in isolation). Similar inequalities can be
obtained using monotonicity results [11, 8, 12] for other subnets.
Throughout this paper, the transitions have single-server semantics. Our analysis can
be extended immediately to STPNs with bounded marking and infinite-server transitions.
Indeed, in such a case, each infinite-server transition can be replaced by K single-server
transitions, where K is the upper bound on the token numbers in the places. More precisely,
we replace each infinite-server transition t by K single-server transitions t_1, ..., t_K in
such a way that, when j ≤ K tokens are present in the input places, exactly the transitions
t_1, ..., t_j are enabled, so that the total firing rate matches that of the infinite-server
transition. An example of such a transformation is illustrated in Figure 7.
Figure 7: Transformation from an infinite-server transition to single-server transitions.
(a) An infinite-server transition. (b) Equivalent single-server transitions.
In our analysis, we assumed that the firing times of each transition are i.i.d. exponential
random variables with a fixed parameter. It is simple to extend the results to the case of
marking-dependent firing rates, i.e., the firing rate of a transition depends on the marking
of its input places, provided the number of different firing rates is bounded. As an example,
consider a transition t with a single input place. Let λ_t^1, λ_t^2 and λ_t^3 be the firing
rates of transition t when there are one token, two tokens, and three or more tokens in the
input place, respectively. We replace the transition by three transitions t_1, t_2 and t_3
with firing rates λ_t^1, λ_t^2 and λ_t^3, respectively, in such a way that at any time at
most one of the transitions is enabled, see Figure 8. The set of output places of each t_k
is the same as that of t, k = 1, 2, 3.
Figure 8: Transformation from marking-dependent firing rates to marking-independent firing
rates. (a) A transition with 3 firing rates. (b) Equivalent transitions with fixed firing rates.
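A minimal sketch of the marking-dependent-rate replacement just described, with illustrative rates; it only makes explicit that, for any positive marking of the input place, exactly one of the three replacement transitions is enabled, and that markings of three or more tokens are all handled by the third one.

```python
def enabled_replacement(tokens, rates=(0.5, 0.8, 1.2)):
    # rates[k-1] plays the role of the marking-dependent rate lambda_t^k;
    # the numerical values are placeholders.
    if tokens <= 0:
        return None                      # no replacement transition is enabled
    k = min(tokens, 3)                   # one, two, or "three or more" tokens
    return ("t%d" % k, rates[k - 1])     # the unique enabled transition and its rate

for m in range(6):
    print(m, enabled_replacement(m))
```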
Our approach can be extended to the case where firing times have phase-type distributions
[37]. A phase-type distribution can be considered as the distribution of the time that a
token takes to pass through an ordinary Markovian state machine with a single source and a
single sink transition. The firing times have exponential distributions for all transitions
except the source and the sink, which are immediate transitions; the sink transition
represents the absorbing state. Let transition t have a phase-type distribution which is
represented by a Markovian state machine N' with source transition t_0 and sink transition
t_s. We replace transition t in the original net by the subnet N', connecting the input
places of t to the source transition t_0 and the output places of t to the sink transition
t_s; a new place with initial marking 1 guarantees that at most one passage through N' is
in progress at a time. An example of the construction is illustrated in Figure 9 for an
Erlang distribution with 3 stages. Recall that the firing mechanism under consideration is
race policy with age memory. The reader can therefore easily check that when transitions
whose firing times have phase-type distributions are thus replaced by the corresponding
Markovian state machines, we obtain a stochastically equivalent STPN with exponential
firing times.
Figure 9: Transformation from phase-type firing times to exponential firing times. (a) A
transition with 3-stage Erlang-distributed firing times. (b) Equivalent transitions with
exponential firing times.
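A quick numerical illustration of the phase-type construction for the 3-stage Erlang case of Figure 9: a firing time is realized as the passage of a token through three exponential stages in series (the stage rate below is chosen arbitrarily).

```python
import numpy as np

rng = np.random.default_rng(2)

def erlang3_firing_time(stage_rate):
    # Sum of three exponential stages: the special case of the phase-type
    # construction in which the state machine is a simple chain of 3 stages.
    return sum(rng.exponential(1.0 / stage_rate) for _ in range(3))

samples = [erlang3_firing_time(3.0) for _ in range(100_000)]
print(np.mean(samples))   # close to 3 / 3.0 = 1.0, the mean Erlang(3, 3.0) firing time
```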
The performance measures considered in the paper are mostly the throughputs of
transitions and the expectations of X_p and X_p e_t. The same approach can be used to
obtain linear equalities among higher moments of the token numbers: for any m ≥ 1, linear
equalities analogous to those of Section 3 can be established among the moments of order
up to m of the token numbers, and similar equalities hold for the corresponding mixed
moments involving several places.
Finally, we remark that when the weights of the STPN are real numbers, all our
analyses go through straightforwardly and the same results hold.
Appendix
Elimination of Immediate Transitions
We present here a direct transformation technique which removes immediate transitions
playing the roles of synchronization and/or routing. We assume that for any such immediate
transition t, t is the only output transition of all its input places, i.e., p• = {t} for
every p ∈ •t. Further, we assume that for any immediate transition t, the weights of the
arcs adjacent to t satisfy a compatibility condition relating each output weight σ_{t,p'}
to the corresponding input weights, where, with a harmless abuse of notation, the index •p
denotes the unique transition preceding place p.
We show below that this kind of immediate transition can be removed from the net
without changing the firing behavior of the other transitions. Consider a net N = (P, T, E)
with initial marking M and weights μ, σ. Let t_0 be an immediate transition with input
places •t_0 = {p_1, ..., p_I} and output places t_0• = {p'_1, ..., p'_J}. Without loss of
generality, the input places are indexed so that p_1 realizes the minimum in the enabling
condition of t_0.
We construct a new net Ñ with initial marking M̃ and weights μ̃, σ̃.
The key idea is to create a place p_i^j for each pair consisting of an input place p_i and
an output place p_j of transition t_0. The set of input transitions of p_i^j is the union
of •p_i and •p_j. The set of output transitions of p_i^j is that of p_j. Such a
transformation is illustrated in Figure 10, where transition t_0 is an immediate transition.
Figure 10: Removal of immediate transitions. (a): A subnet containing an immediate
transition. (b): The subnet without the immediate transition.
The mathematical definition of Ñ, that is, its arc set, its initial marking M̃ and its
weights μ̃, σ̃, is obtained accordingly from this construction.
It is easily seen that if the sequences of the firing times are the same for the same
transitions in N and Ñ, the firing commencement and completion times are identical.
The detailed proof can be done by induction and is left to the interested reader.
Acknowledgements: The author is very grateful to Dr Alain Jean-Marie for constructive
comments and for efficient help in the computation of the numerical results.
--R
"The Effect of Execution Policies on the Semantics and Analysis of Stochastic Petri Nets"
"A Class of Generalized Stochastic Petri Nets for the Performance Analysis of Multiprocessor Systems"
Performance Models of Multiprocessor Systems
"Ergodic Theory of Stochastic Petri Nets"
"Annotated Bibliography on Stochastic Petri Nets"
Elements of Queueing Theory
"Parallel Simulation of Stochastic Petri Nets Using Recursive Equations"
"Recursive Equations and Basic Properties of Timed Petri Nets"
"Stationary Regime and Stability of Free-Choice Petri Nets"
"Estimates of Cycle Times in Stochastic Petri Nets"
"Comparison Properties of Stochastic Decision Free Petri Nets"
"Global and Local Monotonicities of Stochastic Petri Nets"
"Optimization of Multiclass Queueing Networks: Polyhedral and Nonlinear Characterizations of Achievable Performance"
"Achievable Region and Side Constraints"
"A Characterisation of Independence for Competing Markov Chains with Applications to Stochastic Petri Nets"
"A Computational Algorithm for Closed Queueing Networks with Exponential Servers"
"Ergodicity and Throughput Bounds of Petri Nets with Unique Consistent Firing Count Vector"
"Properties and Performance Bounds for Closed Free Choice Synchronized Monoclass Queueing Networks"
"A General Iterative Technique for Approximate Throughput Computation of Stochastic Marked Graphs"
"Embedded Product-Form Queueing Networks and the Improvement of Performance Bounds for Petri Net Systems"
"Computational Algorithms for Product Form Queueing Networks"
"Generalized Stochastic Petri Nets: A Definition at the Net Level and Its Implications"
"Operational Analysis of Timed Petri Nets and Application to the Computation of Performance Bounds"
"SPN: What is the Actual Role of Immediate Transitions?"
"SPNP: Stochastic Petri Net Package"
"Extended Stochastic Petri Nets: Applications and Analysis"
"Cyclic Scheduling on Parallel Processors: An Overview"
Markov Chain Models - Rarity and Exponentiality
"Performance Bounds for Queueing Networks and Scheduling Policies"
"Stability of Queueing Networks and Scheduling Policies"
"Complete Decomposition of Stochastic Petri Nets Representing Generalized Service Networks"
"Performance Analysis Using Stochastic Petri Nets"
"Queueing Networks with Multiple Closed Chains: Theory and Computational Algorithms"
"A Last Word on L = λW"
--TR
--CTR
Jinjun Chen , Yun Yang, Adaptive selection of necessary and sufficient checkpoints for dynamic verification of temporal constraints in grid workflow systems, ACM Transactions on Autonomous and Adaptive Systems (TAAS), v.2 n.2, p.6-es, June 2007 | mean token number;linear programming;throughput;stochastic timed petri net;uniformization;performance bound |
291356 | On Generalized Hamming Weights for Galois Ring Linear Codes. | The definition of generalized Hamming weights (GHW) for linear codes over Galois rings is discussed. The properties of GHW for Galois ring linear codes are stated. Upper and existence bounds for GHW of ZF_4 linear codes and a lower bound for GHW of the Kerdock code over Z_4 are derived. GHW of some ZF_4 linear codes are determined. | Introduction
For any code D, -(D), the support of D, is the set of positions where not all the
codewords of D are zero, and w s (D), the support weight of D, is the weight of
-(D). For an [n; k] code C and any r, where 1 - r - k, the r-th Hamming weight
is defined [7],[14] by
d r is an [n; r] subcode of Cg:
In [1],[4], and [5] different generalizations of GHW for nonlinear case were sug-
gested. In the present paper we consider the case of Galois ring (GR) linear codes.
Outline of the paper is following.
In Section 2, we discuss the definition and state some properties of GHW of GR
linear codes. In Section 3, we show that definition of GHW from [4] determines
performance of group codes in the wire tap channel of type II. In Section 4, we find
the number of different type subcodes in a Z 4 \Gammalinear code and state a connection
between their support sizes. In Section 5, we use results of Section 4 to get bounds
for GHW of Z 4 \Gammalinear codes. In Section 6, using results of sections 4 and 5, we
obtain a lower bound for GHW of the Kerdock code over Z 4 and complete weight
hierarchys of some short Z 4 \Gamma linear codes. We also show that though the minimum
Hamming weight of a GR linear code can not exceed the minimum Hamming weight
of an optimal linear codes, heigher weights of a GR linear code do can exceed
corresponding weights of an optimal linear code of the same length and dimension.
2. Definition and Basic Properties
Let R be a Galois ring, i.e., a finite commutative ring with identity e, whose set
of zero divisors has the form pR for a certain prime p. After Nechaev's paper
Galois rings have become easy to understand for a reader with a standard
background in finite fields. In particular, using the definition given, one can prove
[10] that and the characteristic of R
(the order of e in the group (R; +)) equals p m . Since fixing the numbers p m and q m
identifies R up to isomorphism, it may be also denoted as GR(q m
of R form the following chain:
and jR quotient ring is a Galois field of order q
The ring R is constructed as a degree s Galois extension of Z p m in much the same
way as one constructs the finite field from Z p . Note also that Galois rings encompass
finite fields and residue rings as boundary cases. Namely, GF (p s
Let C be a GR linear code of length n. Let D be any GR linear subcode of C.
The support of D is defined exactly as for codes over a Galois field, i.e., the support
of D is the set of not-all-zero symbol positions of D.
In [1], the following definition was proposed for the r-th generalized Hamming
weight of C
d r linear subcode of C; log q
One can see that this definition generalizes the definition of GHW for linear codes
over Galois fields.
L. A. Bassalygo proposed another definition for support and GHW [4]. Let A be
any code of length n over arbitrary alphabet and let B be any subcode of A. Define
the support of B as follows
an such that a i 6= b i g:
In other words, the support of B is the set of positions where not all the codewords
of B have the same symbols. Define the following function
is subcode of A and
We will consider only those points where the function FA changes its values, that
is, the following two sets
are defined as follows
and 1. It is clear that the function FA (M) is completely defined by
the set of pairs so on. These pairs are called generalized
Hamming weights.
Example: Consider the nonlinear code
This code has the following values
and the function FA (M) can be depicted as follows (the function FA (M) is defined
only for integer values of M , and we draw it everywhere just for visualization)
6 FA (M)
One can see that Bassalygo's definition works for both linear and nonlinear cases.
In the case of a linear code over GF (q), we have M i =M therefore we
can study only values ffi i . In the general case, we must consider pairs
In Corollary 1, we show that if C is a linear code over GR(q
i.e., the function FC (M) can change its values only at
see that in the GR linear case there is one-to-one
correspondence between the set fd and the sets and
knowledge of values d i allows us to find values
vice versa.
Note that for GHW of nonlinear codes one more definition, nonequivalent to
Bassalygo's one, was given in [5].
Theorem 1 Let C be a linear code of length n over GR(q
and if d r
Proof: Let D be a GR linear subcode of C such that w s
log q Dg. From linearity
of D it follows that 0g.
Then log q jD
d r\Gammat (C) ! d r (C) for some t - m.
To see that there exist a code A such that d r
one can consider the code with generator matrix [1;
Note that if C is a linear code of length n over GR(q
code, then jCjjC mn. In the sequel, we suppose
that ng is the set of positions of a code of length n.
Denote by \Phi the direct sum of two sets.
ng and C
I \Phi fx Ig.
I rg.
Proof: Let S I
I g, where x \Delta y denotes the
inner product of x and y. Then log q jS I j log q jS ?
I I be such that
I I is a linear subcode of C, we see
that d r (C) - jS I j -
I rg.
Let us establish inequality in the other direction. Let D be a linear subcode of
C such that w s
I rg.
Theorem 2 Let C be a GR linear code and let C ? be its dual code. Then fd r
times
times
times
Proof: It suffices to prove that if
then there are no more than m\Gammal values d t+1
that d t (C) ! d t+1 (C); d t+m\Gammal (C) ! d t+m\Gammal+1 (C), and d t+1
for some t.
At first we show that d t (C) -
ng n -(D)
I \Phi D. Hence, jC ?
I
1, we obtain d t (C) - jI
log
Next we show that if d
a subcode of C such that log q
ng n -(D) and Hence jC I
By Lemma 1, we obtain d
where
contradiction.
Corollary 2 Let g. Then
ng
l log q M i =M
there is not ffi j such that ffi ?
Good nonlinear codes can be constructed by mapping GR linear codes into codes
over GF (q) [10],[6]. Therefore it is natural to ask whether GR linear codes themselves
can have better characteristics than linear codes over Galois fields. The
following theorem shows that GR linear codes can not be better (from the point of
view of length, cardinality, and minimum Hamming distance) than optimal linear
codes, i.e. than linear codes having the largest cardinality for a given length and
distance.
Theorem 3 Let C be a GR linear code over GR(q
Hamming distance d. Then there exist a linear code D over GF (q m ) of length n
with minimum distance d such that jDj - jCj.
Proof: We consider the case of a code C over GR(q The case of an arbitrary
m can be proved by the same way. A generator matrix of C can be written in the
are r 1 \Theta n and r 2 \Theta n GR(q
C be a subcode of C with the generator matrix
Obviously, the code b
C has the same length and the same minimum distance as
C. Note that b
C contains only elements from the ideal pGR(q us define
exterior multiplication of an element ff 2 pGR(q
by an element a 2 GR(q
Analogously,
aff n ). Now taking into account (1),
it is easy to see that b
C is a vector space over GR(q
of dimension r Hence b
C is isomorphic to some [n; r
over GF (q). Let D be a code with a generator matrix of the code b
considered
as a matrix over GF (q 2 ). Then D is a linear code over GF (q 2 ) of length n with
minimum distance d and
Using the Nordstrom-Robinson code over Z 4 as an example, we shall show later
that a GR linear code can have the same length, cardinality, and minimum distance
as an optimal linear code, and the function F for such a GR linear code can exceed
the function F for the optimal linear code.
3. An Application to The Wire-Tap Channel of Type II
One of motivations for the introduction of a notion of GHW was cryptographical
one. Ozarow and Wyner [11] considered using linear codes on the wire-tap channel
of Type II. One of their schemes uses an [n; k] linear code, say C, over GF (q). The
code has q n\Gammak cosets, each representing a q-ary (n \Gamma k)-tuple. If the sender wants
to q\Gammaary symbols of information to the receiver, he selects a random
vector in the corresponding coset. The adversary has full knowledge of the code,
but not of the random selection of a vector in a coset.
Wei showed [14] that in the linear case if the adversary is allowed to tap s symbols
(of his choice) from the sender, he will obtain r q\Gammaary symbols of information, if
and only if s - d r (C ?
It is obvious that we can use a group code on the said channel. Let C be a group
code of length n over an additive group \Theta; '. The code C has ' n =jCj cosets.
As in the linear case, the number of a coset is transmitted information and the
sender transmits a random vector from the corresponding coset. Let C have values
let the code C be used on the
wire-tap channel of type II. We prove the following theorem.
Theorem 4 Let s symbols be taped from transmitter and
the number of cosets of C that can (equiprobably) contain the transmitted vector
(with known s taped symbols) is equal to or greater than ' n\Gammas
Proof: Let I be a set of taped positions. By L denote a set of vectors that contain
zeros on positions from I , i.e., Ig. Cosets of C that
contain vectors from L form a subgroup, say \Psi, of the quotient group of cosets of
C. Cosets of C that contain vectors with given values on positions from I form a
coset of \Psi. Hence w.l.o.g. we can assume that taped symbols are zeros. Denote
by D a set of codewords of C that belong to L. It is clear that any coset of C that
belongs to \Psi has exactly jDj vectors from L and hence
The best strategy for the adversary is to choose the set I such that the cardinality
of D would be maximal. We claim that jDj - M i . Indeed, suppose that there
exists a subcode B ' C such that jBj ? M i and
the definition of GHW the support weight of any subcode of cardinality larger than
M i is equal to or greater than ffi i+1 . A contradiction. Hence the transmitted vector
equiprobably belongs to one of j\Psij cosets of C, and j\Psij - ' n\Gammas
4. Subcodes of a Z 4 \Gammalinear code
Let C be a Z 4 \Gammalinear code of length n and cardinality 4 r1 2 r2 with a generator
matrix
are r 1 \Theta n and r 2 \Theta n Z 4 \Gammamatrices. We will say that C is an
C denote a subcode of C with the generator matrix [2G 1 ] and
by e
C denote a subcode of C with the generator matrix
Remark. Throughout the rest of the paper we use symbols - and - in this meaning
Note that b
C and e
C are [n; 0; r 1 ] and [n; 0; r
an [n; s of C and let
be a generator
matrix of D, where G 1 and G 2 are s 1 \Theta n and s 2 \Theta n Z 4 \Gammamatrices respectively. By
denote the set of subcodes [n; s of C. In
the sequel, we need the following propositions.
Proposition 1
Proof: Consider the number of ways of choosing s 1 Z 4 \Gammalinearly independent
codewords. Note that linear independence over Z 4 implies that none of these codewords
consists only of zero divisors of Z 4 . The total number of codewords of C
of them consist of zero divisors of Z 4 . Hence the first
codeword, say u 1 , can be chosen by 2 distinct ways. The second
codeword u 2 can not be equal to au 1
C . Indeed, if
then either 2u are
not Z 4 \Gammalinearly independent. Thus u 2 can be chosen by 2
ways. Analogously, if we have codewords then a codeword u t can
be chosen by 2 distinct ways. Thus the number of ways of
choosing s 1 Z 4 \Gammalinearly independent codewords equals
Now we have to choose s 2 Z 2 \Gammalinearly independent codewords that consist from
zero divisors of Z 4 . These codewords should also be Z 2 \Gammalinearly independent from
the s 1 codewords chosen earlier. The number of ways for choosing these codewords
equals
Using the same arguments, one can see that the number of distinct generator matrices
of any [n; s
To complete the proof, we multiply (2) by (3) and divide by (4).
In the linear case, any nonzero codeword of a linear code belongs to one and the
same number of subcodes of a given dimension. The situation is different in the
Z 4 \Gammalinear case. We should consider the following three cases.
Proposition 2 Let u 2 C; 2u 6= 0. Then u belongs to
subcodes from -(C; s
Proof: Recall that the generator matrix of C has the form
. To prove
the proposition, we assume that one of rows of the matrix G 1 equals u and use
arguments from the proof of Proposition 1.
Proposition 3 Let
subcodes from -(C; s
Proof: Suppose a codeword u 2 D; then either
D or u 2 e
D.
Consider the case u 2 b
D. The subcode D contains 2
w such that w 2 b
D. The code C contains 2 codewords v such that
C. Hence the number of subcodes D such that u 2 b
D equals the total number
of [n; s subcodes of C multiplied by the factor 2
Using Proposition 1, we
get
Consider the case u 2 e
D. From (5) it follows that the number of subcodes D
such that u 62 b
Assuming that one of rows of G 2 equals u and using arguments like ones from
Proposition 1, we get the number of subcodes D such that u 2 e
D. This number
equals
Summation of (5) and (6) completes the proof.
Proposition 4 Let u 2 e
C. Then u belongs to
subcodes from -(C; s
Proof: Suppose that u 2 D. From the conditions of the proposition it follows
that u 62 b
D. Hence the matrix G 1 can be formed by any s 1 Z 4 \Gammalinearly independent
codewords of C. Assuming that one of rows of the matrix G 2 equals u and using
arguments from the proof of Proposition 1, we complete the proof.
By wtL (u) denote the Lee weight of a vector u.
Proposition 5 ([3], [13]) For an [n; r
Using an approach suggested in [8], we can obtain a connection between support
weight of C and supports weights of its subcodes.
Proposition 6
C)
C)
are numbers from Propositions 2,3, and 4.
Proof: Using Proposition 5 and Propositions 2,3, and 4, we get (an abbreviation
s.t. means such
u2b Cn0
u2e Cnb C
u2b Cn0
u2e Cnb C
u2b Cn0
u2e Cnb C
wtL (u)C A
u2b Cn0
u2e Cnb C
wtL (u)C A
u2b Cn0
C)
It is known that if C is a Z 4 \Gammalinear code and C is its binary image under the Gray
map, then minimum Lee distance of C equals minimum Hamming distance of C [6].
Such good nonlinear binary codes as Kerdock, Delsarte-Goethals, and Preparata
codes can be constructed by Gray mapping Z 4 \Gammalinear codes [6]. Therefore it is
natural to ask what is the minimal length, N(r of an [n; r
code with minimum Lee distance dL . We confine ourselves to the case of [n;
codes and obtain an analog of the Griesmer bound for N(r; 0; dL ).
Theorem 5
dL
Proof: Let C be an [n; \Gammalinear code with minimum Lee distance dL . Note
that in this case b
Using Proposition 6, we
get
Obviously, w s ( b
Assuming w s ( b
From (7) and (8) it follows that
If 3ag is an [n;
\Upsilon . Otherwise wtL (2a) ! dL . Substituting the value for N(1; 0; dL )
in (9) completes the proof.
Note that the Nordstrom-Robinson code over Z 4 meets this bound.
5. Bounds on Higher Weights
We start with Plotkin type bound.
Let d s1 ;s 2
is an [n; s subcode of Cg.
Theorem 6 If C is an [n; r
d 2s (C) - n
Proof: Calculate in two ways the sum, say S, of codeword Lee weights of all
subcodes of C.
The total number of subcodes equals -(r According to Proposition
5 the sum of Lee weights of codewords of any [n; s subcode is greater than or
equal to 2 2s1+s2 d 2s1 +s2 (C). Hence
Let A be a 2 2r1+r2 \Theta n matrix whose rows are codewords of C. W.l.o.g. we can
consider the first column of A. Any a 2 Z 4 occurs 2 2r1+r2 \Gamma2 times in the column.
belongs to A 1 [n; s
of C and there are 2 \Delta 2 2r1+r2 \Gamma2 such vectors. If a = 2 then there are three different
cases.
C. Then v belongs to A 1 [n; s subcodes of C and there are
C. Then v belongs to A 2 [n; s
subcodes of C and there are 2 r1 \Gamma1 such vectors.
C. Then v belongs to
subcofdes of C and there are 2 r1 +r2 Hence
After computations we get
Setting respectively, we get
the assertion.
Let D be a Z 4 \Gammalinear code and C be its [n; r subcode such that w s
d r1 ;r 2
(D). Let fl r1 ;r 2
and fl r1 be upper bounds for support sizes of subcodes e
C and
C.
Remark. If we don't have an information on the structure of subcodes e
D and
D then we can say that
(D). Sometimes, if we know weight
distribution of support sizes of all subcodes of codes e
D and b
D, we can find exact
values for support sizes of subcodes e
C and b
C. (see next chapter).
Now we can get a Griesmer type bound.
Theorem 7 If 2 r1 +r2 \Gammas 1
d r1 ;r 2 (D) -
otherwise
d r1 ;r 2
d r1 ;r 2
(D)
Proof: From proposition 6 we have
(D)
C)
C)
C)
C)
It is easy to check that A 2 \GammaA 3 is always nonnegative when s
Replacing w s ( b
C) by fl r1 and w s ( e
C) by fl r1 ;r 2
(C) in the case when A
by d r1 ;r 2
(D) otherwise, we get the assertion.
As corollarys we have the following relations.
Although in general d r can be equal to d r+1 , d s1 ;s 2 +1 is always greater than d s1 ;s 2
Corollary 3
Proof: Inequality d s1 ;s 2 +1 (D) - d 0;s1 +s2+1 (D) is obvious. Assuming r
(like in Theorem 7)
D)
For linear binary codes we have [8]
s
D and b
are isomorphic to linear binary codes we have
D)
D) - d 0;s1 (D) the assertion follows.
Corollary 4
ae
oe
Proof: If r
1) and the assertion follows from assuming fl r1 ;r 2
(D) and Theorem 7.
Note that as an estimation for value d 0;t (D) we can use any lower bound for
generalized Hamming weights of binary liner codes. For example using the Griesmer
bound we get
d s1 ;0 (D);
where d(C) is minimum Hamming weight of C. Note that some codes have equalities
in (10) and (11) (see Remark 5).
Our last bound is an existence one.
Theorem 8 For any s 1 - r 1 and s there exists an [n; r
Z 4 \Gammalinear code with d s1 ;s 2
- d which satisfies
d
Proof: The number of [n; r codes that contain a given subcode [n; s
equals
d
is the number of [n; s codes with supports of size less than
or equal to d. Hence if
d
is less than the total number of [n; r
at least one of [n; r codes has a subcode [n; s greater
than or equal to d. Substitution valies for the function - gives us the assertion.
6. Determination of Weight Hierarchy
It is often that we know more about subcodes e
C and b
C, wich are isomorphic to
linear binary codes. In those cases we can get better bounds.
Consider the Kerdock code over Z 4 [6],[10]. In our notation the Kerdock code
Km is an [2 code, m is odd. The minimum Lee distance of Km is
and the minimum Hamming distance is 2 . Moreover, in [6]
compositions of all codewords and weight distribution of Km were found. Suppose
l t Km such that -(u) 6=
has one of the following compositions
l
l
be the number of codewords of weight i. Then
In [6], it was also shown that subcode b
Km is isomorphic to the first order Reed-Muller
code RM(1;m) of length 2 m . To find a lower bound for d r (Km ) we need
the following lemma.
Lemma 2 The support size of any r dimensional subcode, say C, of RM(1;m)
equals to either
Proof: If C contains the all-one codeword then w s that C does
not contain the all-one codeword.
It is known [14](Theorem 5) that d r
any r rows
(that are not all-one) in a standard generator matrix generate a subcode whose
support size equals d r (RM(1;m)). Any codeword of RM(1;m) can be consedered
as a boolean linear function of m variables, say v
affine transformation of v belongs to the automorphism group of RM(1;m)
and hence it does not change the support size of C. Let f 1
functions corresponding to basis vectors of C. Assume
that f 1 (v) has a term v 1 . Then the affine transform v 1
to the form v 1 and f 2 (v) to some f 0
(v). Since f 2 (v) is linearly
independent of f 1 (v) and f 1 (v)
2 (v) has a term v j
The affine transform v 2
2 (v) to the form v 2 . Continuing
this procedure, we get f 1 that is r rows from a standard
generator matrix.
Theorem 9
d
d r
d
we have (like in Theorem 7)
where C is an [n; r of Km such that w s
C and e
C are [n; 0; r 1 ] and [n; 0; r subcodes of Km respectively, and they
are isomorphic to [n; r 1 ] and [n; r subcodes of RM(1;m). Hence from the
Lemma 2 we get
From possible compositions of codewords of Km it follows that if V is an [n; 1; 0]
subcode of Km then w s (V Hence
(3
By denote the right side of (14). It is evident that d r (Km
It is easy to check that n(r
1). On the other hand, since w s (C) - w s ( e
C), we have
linear subcode of Km ;
ae d r=2 (RM(1;m); if r is even,
d (r+1)=2 (RM(1;m); if r is odd:
It is easy to check that
d r (RM(1;m)) at r - (m \Gamma 1)=2.
It is obvious that for Km
Indeed, it is enough to take a subcode C =! a ?= f0; a; 2a; 3ag generated by
a codeword a with the composition l 0
(note that in Km there are also subcodes of cardinality
4 generated by a couple of codewords consisting of zero divisors of Z 4 ),
and check that support weight of C meets the lower bound and
The Kerdock code over Z 4 can be also defined for the case of even m [6]. In this
case
Similar to the case when m is odd we get
Theorem
d
d
Remark. Theorems 9 and 10 were obtained for the first time in [2] [3]. Later and
independently the following estimates were obtained for the Kerdock codes [13]:
d
ae P 2r
and
d
It is easy to check that estimates (16)and (17) coincides with estimates from Theorems
m=2. For the r - m=2 estimates from the theorems are
better than estimates (16) and (17).
Minimum distance of a code is only a small part of information contained in the
spectrum of distance distribution of the code. In the same way, the number of
subcodes of a given dimension with a given support weight gives much more information
on a code than only minimum support weights of subcodes. For example
we used distribution of support weights of the first order Reed-Muller code in the
proof of Theorem 9 and 10. Papers [7],[12] are devoted to studying distributions of
support weights of linear codes. We shall find weight distribution of Z 4 \Gammalinear sub-codes
of cardinality 2 3 (weight distribution of subcodes of cardinality 2 2 is evident).
Let A 3
linear subcode of Km ;
Theorem 11
A 3
A 3
A 3
Proof: We should consider subcodes
Consider an [2
generated by codewords a; 2a 6= 0; -(a) 6=
Assuming using Proposition 6, we get that
and w s
3 be sets of [2 subcodes with support weights 7
It is easy to see that if w s (! a; b
Hence,
As it was mentioned if w s any codeword a 2
C; 2a 6= 0, has Hamming weight 3 . Hence the total number of
codewords of weight 3 in all the subcodes from S 1 equals
In Km there are 2 m+1 \Gamma4 codewords b; l 0 , that are Z 4 \Gammalinearly
independent form a given a 2 Km ; 2a 6= 0. According to (18), half of these codewords
half of them form subcodes
So, a given codeword a; 2a 6= 0, belongs to 2 subcodes from S 1 . Hence the
the total number of codewords of weight 3 in all the subcodes
from S 1 equals the half number of codewords of this weight (codewords a and 3a
always belong to one and the same subcode) multiplied by 2. So, using (19)
and (12), we get
subcode of Km with support contains the codeword
Assuming s using Proposition 3, we
get that the number of an [2 subcodes with support
According to Proposition 1 the total number of an [2 subcodes equals
1). Hence the number of [2 subcodes
with support weight 7
Since 2Km is isomorphic to RM(1;m), one can see that in 2Km there are
\Theta m
subcodes of Km with support weight 2 m and
\Theta m+1\Gamma
with support weight 7
The numbers jS 1
\Theta m+1\Gamma
\Theta m; jS 3 j, and A 2
\Theta mgive the numbers of
Z 4 \Gammalinear subcodes of cardinality 8 with support weights 7
Corollary 5 The values ffi 3 and M 3 for the Kerdock code Km are
Using Theorems 2 and 7, we can find the weight hierarchy of some short Z 4 -linear
codes.
Theorem 12 The Nordstrom-Robinson code (NR) over Z 4 has weight hierarchy
Proof: The Nordstrom-Robinson code is a self-dual code with d 1
According to (15), d 2 5. From this fact and Theorem 2 it follows that
8g. The claim of the theorem
follows from this hierarchy.
Remark. The optimal linear code, say B, over GF (4) of length 8 and dimension
4 has 4: The best possible function FB (M) for such a code and the
function FNR (M) have the following form (see the picture). One can see that
5. Thus the Nordstrom-Robinson
over Z 4 code is better, in some sense, than the optimal linear code over GF (4) of
the same length and dimension. From cryptographical point of view it means that
if we use the code B on the wire tap channel the adversary must tap 4 symbols
from transmitter to get 2 bits of information, whereas in the case of using the
Nordstrom-Robinson code the adversary must tap at least 5 symbols to get 2 bits
of information. Whether there exist other GR linear codes (or codes over another
ring or additive group) that are better (from the point of view of GHW) than
optimal linear codes over GF (q) is an open question.
Theorem 13 The Kerdock code of length 16 over Z 4 has weight hierarchy
Proof: From (15) and Corollary 5 it follows that fd 1 (K 4
5 be the Preparata code over Z 4 of length 16. Using the
6 F(M)
4\GammaThe Nordstrom-Robinson code over Z 4
3\GammaThe optimal [8; 4] linear code over GF (4)
MacWilliams identities for Z 4 \Gammalinear codes [6], we see that in P 5 there exists a
codeword a such that 2a 6= 0 and 4. Using this fact and taking into
account that minimum Hamming weight of P 4 equals 4, we get d 1
Now using Theorem 2, we get the hierarchy of K 4 .
Remark. Note that if we start with known values d 1 (Km ) and d 2 (Km
then in (10) and (11) we have equality for all d s1 ;0 and d s1 ;1 .
Using Theorems 2 and 13, we can also find weight hierarchy of the Preparata
code P 4
Theorem 14 The Kerdock code of length 32 over Z 4 has weight hierarchy
Proof: From (15) and Theorem 7 it follows that fd 1 (K 5
5 be the Preparata code over Z 4 of length 32. Using
the MacWilliams identities for Z 4 \Gammalinear codes [6], we see that in P 5 there exists a
codeword a such that 2a 6= 0 and 5. Using this fact and taking into account
that minimum Hamming weight of P 5 equals 4, we get d 1
Now using Theorem 2, we get fd 5 (K 5
From Theorem 6 it follows that d 4 (K 5 or 28. Suppose d 4 (K 5
contradiction. Thus d 4 (K 5 27. The claim of the theorem follows
from the hierarchy of the values d i (K 5 ).
Using Theorems 9 and 2, we can also find weight hierarchy of the Preparata code
--R
"Generalized Hamming Weights for Z4 -Linear Codes,"
"On Generalized Hamming Weights for Galois Ring Linear Codes,"
"On Generalized Hamming Weights for Galois Ring Linear Codes,"
"Supports of a Code,"
"Upper Bounds on Generalized Distances,"
"The Z4 - Linearity of Kerdock, Preparata, Goethals and Related Codes,"
"The Weight Distribution of Irreducible Cyclic Codes with Block Lengths n 1 ((q l \Gamma 1)=N ),"
"Generalization of the Griesmer Bound,"
The Theory of Error-Correcting Codes
"Kerdock Code in a Cyclic Form,"
"Wire-Tap Channel II,"
"The Effective Length of Subcodes,"
"On the Weight Hierarchy of the Kerdock Codes Over Z 4 ,"
"Generalized Hamming Weights for Linear Codes,"
--TR
--CTR
Cui Jie, Support weight distribution of Z
Manish K. Gupta , Mahesh C. Bhandari , Arbind K. Lal, On Linear Codes over$${\mathbb{Z}}_{2^{s}}$$, Designs, Codes and Cryptography, v.36 n.3, p.227-244, September 2005
Keisuke Shiromoto , Leo Storme, A Griesmer bound for linear codes over finite quasi-Frobenius rings, Discrete Applied Mathematics, v.128 n.1, p.263-274, 15 May | generalized Hamming weights;galois ring linear codes |
291364 | Time- and VLSI-Optimal Sorting on Enhanced Meshes. | AbstractSorting is a fundamental problem with applications in all areas of computer science and engineering. In this work, we address the problem of sorting on mesh connected computers enhanced by endowing each row and each column with its own dedicated high-speed bus. This architecture, commonly referred to as a mesh with multiple broadcasting, is commercially available and has been adopted by the DAP family of multiprocessors. Somewhat surprisingly, the problem of sorting m, (mn), elements on a mesh with multiple broadcasting of size $\sqrt n\times \sqrt n$ has been studied, thus far, only in the sparse case, where $m\in \Theta \left( {\sqrt n} \right)$ and in the dense case, where m(n). Yet, many applications require using an existing platform of size $\sqrt n\times \sqrt n$ for sorting m elements, with $\sqrt n | Introduction
With the tremendous advances in VLSI, it is technologically feasible and economically viable
to build parallel machines featuring tens of thousands of processors [4, 22, 39, 40, 42, 47].
However, practice indicates that this increase in raw computational power does not always
translate into performance increases of the same order of magnitude. The reason seems to
be twofold: first, not all problems are known to admit efficient parallel solutions; second,
parallel computation requires interprocessor communication which often acts as a bottleneck
in parallel machines.
In this context, the mesh has emerged as one of the platforms of choice for solving
problems in image processing, computer vision, pattern recognition, robotics, and computational
morphology, with the number of application domains that benefit from this simple
and intuitive architecture growing by the day [2, 4, 39, 40, 42]. Its regular and intuitive
topology makes the mesh eminently suitable for VLSI implementation, with several models
built over the years. Examples include the ILLIAC V, the STARAN, the MPP, and the
MasPar, among many others [4, 5, 11, 39]. Yet, the mesh is not for everyone: its large computational
diameter makes the mesh notoriously slow in contexts where the computation
involves global data movement.
To address this shortcoming, the mesh has been enhanced by the addition of various
types of bus systems [1, 2, 16, 18, 20, 23, 39, 43]. Early solutions involving the addition of
one or more global buses shared by all the processors in the mesh, have been implemented
on a number of massively parallel machines [2, 4, 19, 39]. Yet another popular way of
enhancing the mesh architecture involves endowing every row with its own bus. The resulting
architecture is referred to as mesh with row buses and has received a good deal of
attention in the literature. Recently, a more powerful architecture has been obtained by
adding one bus to every row and to every column in the mesh, as illustrated in Figure 1.
In [20] an abstraction of such a system is referred to as mesh with multiple broadcasting,
(MMB, for short). The MMB has been implemented in VLSI and is commercially available
in the DAP family of multicomputers [20, 38, 41]. In turn, due to its commercial
availability, the MMB has attracted a great deal of attention. Applications ranging from
image processing [21, 38, 41], to visibility and robotics [7, 14, 15, 37], to pattern recognition
[9, 10, 20, 25, 37], to optimization [17], to query processing [9, 13], and to other fundamental
problems [3, 6, 8, 9, 10, 16] have found efficient solutions on this platform and some of its
variants [18, 25, 29, 30].
As we shall discuss in Section 2, we assume that the MMB communicates with the
outside world via I/O ports placed along the leftmost column of the platform. This is
consistent with the view that enhanced meshes can serve as fast dedicated coprocessors for
present-day computers [28]. In such a scenario, the host computer passes data, in a systolic
fashion, to the enhanced mesh in batches of
n and so, in the presence of m elements to be
processed, the leftmost m
columns will be used to store the input. From now on, we will
not be concerned with I/O issues, assuming that the input has been pretiled in the leftmost
columns of the platform.
Sorting is unquestionably one of the most extensively investigated topics in computer
science. Somewhat surprisingly, thus far, the problem of sorting m, (m - n), elements on
an MMB of size
addressed only in the sparse case, where m 2 \Theta(
n) or
in the dense case, when m 2 \Theta(n). For the sparse case, Olariu et al. [35] have proposed a
time-optimal \Theta(log n) time algorithm to sort
elements stored in one row of an MMB of
size
n. Yet, many applications require using an existing platform of size
for sorting m elements, with
For example, in automated visual inspection
one is interested in computing a similarity measure coefficient for a vertical strip of the
image [27]. A similar situation arises in the task of tracking a mobile target across a series
of frames [31, 46]. In this latter case, the scene consists typically of static objects and one
is interested only in evaluating movement-related parameters of the target. In order to
perform this task in real-time it is crucial to use the existing platform to process (sort, in
parameter values at pixels in a narrow rectangular subimage only. The width
of the subimage of interest depends, of course, on the speed with which the target moves
across the domain. Therefore, in order to be able to meet read-time requirements one has
to use adaptive sorting algorithms that are as fast as possible.
The main contribution of this paper is to propose the first know adaptive time- and
VLSI-optimal sorting algorithm on the MMB. Specifically, we show that once we fix a
pretiled in the leftmost m
columns of an MMB of size
time. We show that this is time-optimal
for this architecture. It is also easy to see that this achieves the AT 2 lower bound in the
word model. At the heart of our algorithms lies a novel deterministic sampling scheme
reminiscent of the one developed recently by Olariu and Schwing [36]. The main feature of
our sampling scheme is that, when used for bucket sorting, the resulting buckets are well
balanced, making costly rebalancing unnecessary.
The remainder of this paper is organized as follows: Section 2 discusses the model of
computation used throughout the paper; Section 3 presents our time- and VLSI lower bound
arguments; Section 4 reviews a number of basic data movement results; Section 5 presents
the details of our optimal sorting algorithm. Finally, Section 6 summarizes the results and
poses a number of open problems.
2 The Model of Computation
An MMB of size M \Theta N , hereafter referred to as a mesh when no confusion is possible,
consists of MN identical processors positioned on a rectangular array overlaid with a high-speed
bus system. In every row of the mesh the processors are connected to a horizontal
bus; similarly, in every column the processors are connected to a vertical bus as illustrated
in
Figure
1. We assume that the processors in the leftmost column serve as I/O ports,
as illustrated, and that this is the only way the MMB can communicate with the outside
world.
Figure
1: An MMB of size 4 \Theta 4
Processor P (i; j) is located in row i and column j, (1 -
in the northwest corner of the mesh. Each processor P (i; j) has local links to its neighbors
exist. Throughout this paper,
we assume that the MMB operates in SIMD mode: in each time unit, the same instruction
is broadcast to all processors, which execute it and wait for the next instruction. Each
processor is assumed to know its own coordinates within the mesh and to have a constant
number of registers of size O(log MN ); in unit time, the processors perform some arithmetic
or boolean operation, communicate with one of their neighbors using a local link, broadcast
a value on a bus, or read a value from a specified bus. Each of these operations involves
handling at most O(log MN) bits of information.
Due to physical constraints, only one processor is allowed to broadcast on a given bus
at any one time. By contrast, all the processors on the bus can simultaneously read the
value being broadcast. In accord with other researchers [3, 16, 17, 20, 21, 23, 38, 39], we
assume constant broadcast delay. Although inexact, experiments with the DAP, the PPA,
and the YUPPIE multiprocessor array systems seem to indicate that this is a reasonable
working hypothesis [23, 26, 29, 38, 39].
3 The Lower Bounds
The purpose of this section is to show that every algorithm that sorts m; m - n, elements
pretiled in the leftmost d m
e columns of an MMB of size
must take
time.
Our argument is of information transfer type. Consider the submesh M consisting of
processors
n . The input will be constructed in
such a way that every element initially input into M will find its final position in the sorted
order outside of M.
To see that this is possible, note that m - n guarantees that the number of elements in M
satisfies:
mp
- mSince at most O( m
) elements can leave or enter M in O(1) time, it follows that any
algorithm that correctly sorts the input data must take at
time. Thus, we have
the following result.
Theorem 3.1. Every algorithm that sorts m, (m - n), elements stored in the leftmost m
columns of a mesh with multiple broadcasting of size
must take
In addition to time-lower bounds for algorithms solving a given problem, one is often
interested in designing algorithms that feature a good VLSI performance. One of the most
m//n
m/2/n
Figure
2: Illustrating the lower bound argument
commonly used metrics for assessing the goodness of an algorithm implemented in VLSI is
the AT 2 complexity [44], where A is the chip area and T is the computation time. A time
lower bound based on this metric is strong because it is not based on memory requirements
or input/output rate, but on the requirements for information flow within the chip. It is
well-known [44] that in the word model the AT 2 lower bound for sorting m elements on a
VLSI chip is m 2 . In our case, the size m of the input varies, while the area, n, of the mesh is
a constant. Hence, for any algorithm to be AT 2 -optimal, we must have
T is the running time. Thus, in this case, the time lower bound of \Omega\Gamma m
into VLSI-optimality.
In the remaining part of the paper, we show that the lower bounds derived above are tight
by providing an algorithm with a matching running time. For sake of better understanding,
we first present a preliminary discussion on some data movement techniques used throughout
the paper.
4 Data Movement
Data movement operations constitute the basic building blocks that lay the foundations of
many efficient algorithms for parallel machines constructed as an interconnection network of
processors. The purpose of this section is to review a number of data movement techniques
for the MMB that will be instrumental in the design of our sorting algorithm.
Merging two sorted sequences is one of the fundamental operations in computer science.
Olariu et al. [35] have proposed a constant time algorithm to merge two sorted sequences
of total length
stored in one row of an MMB of size
precisely, the
following result was established in [35].
Proposition 4.1. Let S
p n, be
sorted sequences stored in the first row of an MMB of size
holding
a s). The two sequences can be merged
into a sorted sequence in O(1) time.
Since merging is an important ingredient in our algorithm, we now give the details of the
merging algorithm [35]. To begin, using vertical buses, the first row is replicated in all rows
of the mesh. Next, in every row i, (1 - i - r), processor P (i; i) broadcasts a i horizontally
on the corresponding row bus. It is easy to see that for every i, a unique processor P (i; j),
. Clearly, this unique processor can now
use the horizontal bus to broadcast j back to P (i; i). In turn, this processor has enough
information to compute the position of a i in the sorted sequence. In exactly the same way,
the position of every b j in the sorted sequence can be computed in O(1) time. Knowing
their positions in the sorted sequence, the elements can be moved to their final positions in
time.
Next, we consider the problem of merging multiple sorted sequences with a common
length. Let a sequence of
n be stored, one per processor, in
the first row of an MMB of size
n. Suppose that the sequence consists of k sorted
subsequences and each subsequence consists of
consecutive elements of the original
sequence. The goal is to sort the entire sequence.
For definiteness, we assume that subsequence j, (1 - j - k), contains the elements
a (j \Gamma1)
. Our sorting algorithm proceeds by merging the subsequences two
at a time into longer and longer subsequences. The details are as follows. We set aside
submeshes of size 2
\Theta 2
k and every pair of consecutive subsequences is merged in each
one of these submeshes. Specifically, the first pair of subsequences is allocated the submesh
with its north-west corner; the next pair of subsequences is allocated the submesh
with processor
its north-west corner, and so on. Note that moving
the subsequences to the corresponding submeshes amounts to a simple broadcast operation
on vertical buses. Now in each submesh, the corresponding subsequences are merged using
the algorithm described above. By Proposition 4.1, this operation takes constant time. By
repeating the merging operation dlog ke times, the entire sequence is sorted. Consequently,
we have the following result.
Lemma 4.2. A sequence consisting of k equal-sized sorted subsequences stored in the first
row of a mesh with multiple broadcasting of size
can be sorted in O(log
Finally, we look at a data movement technique on an MMB of size √n × √n involving the
reorganization of the elements stored in the leftmost x columns of the mesh from sorted row-major
order into sorted column-major order (see Figures 3(a) to 3(d)). This can be accomplished by a series
of simple data movement operations whose details follow. To simplify the notation, we shall
assume that √n/x is an integer. The leftmost x columns of the mesh are moved, one at a
time, as follows. For each column s being moved, every processor P(r, s)
broadcasts the element it holds along its row bus to the processor that records it, as
illustrated in Figure 3(b).
We now view the mesh as consisting of horizontal submeshes R_1, R_2, ..., R_x, each of
size √n/x × √n. In a submesh R_p, (1 ≤ p ≤ x), the recording processor in column j
broadcasts its value along column bus j, and processor P(j, ·) records it, as shown in Figure 3(c).
Again, in constant time, each processor P(j, ·) broadcasts its value along row bus j to
processor P(p, j). The above is repeated for each submesh R_p with 1 ≤ p ≤ x, thus
accomplishing the required data movement in O(x) time. To summarize our findings we
state the following result.
Lemma 4.3. Given a mesh with multiple broadcasting of size √n × √n with data
stored in the leftmost x columns in sorted row-major order, the data can be moved into
sorted column-major order in the leftmost x columns in O(x) time.
Figure 3: Illustrating the data movement of Lemma 4.3.
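Viewed purely as a permutation of the data, the effect of Lemma 4.3 can be restated by the following sketch (ours, sequential); on the MMB the same permutation is realized with the O(x) bus operations described above.

    # Illustrative restatement of the permutation realized by Lemma 4.3: the data read
    # in row-major order from the leftmost x columns is rewritten there in column-major
    # order.  (The MMB realizes this with O(x) bus operations.)
    def row_major_to_column_major(grid, x):
        rows = len(grid)
        data = [grid[r][c] for r in range(rows) for c in range(x)]   # row-major read
        out = [row[:] for row in grid]
        for idx, val in enumerate(data):                             # column-major write
            out[idx % rows][idx // rows] = val
        return out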
5 The Algorithm
We are now in a position to present our time- and VLSI-optimal sorting algorithm for the
MMB. Essentially, our algorithm implements the well-known bucket sort strategy. The
novelty of our approach resides in the way we define the buckets, ensuring that no bucket
is overly full. Throughout, we assume an MMB R of size √n × √n.
Fix an arbitrary constant ε, 0 < ε ≤ 1. The input is assumed to be a set S of m
elements from a totally ordered universe (we assume O(1)-time comparisons among the elements of the universe), where

    n^((1+ε)/2) ≤ m ≤ n,        (1)

stored in the leftmost ⌈m/√n⌉ columns of R. To
avoid tedious, but otherwise inconsequential, details we assume that m/√n is an integer. The
goal is to sort these elements in column-major order, so that they can be output from the
mesh in O(m/√n) time. We propose to show that, with the above assumptions, the entire task
of sorting can be performed in O(m/√n) time. Thus, from our discussion in Section 3, we can
conclude that our algorithm is both time- and VLSI-optimal. It is worth mentioning yet
another interesting feature of our algorithm, namely, that the time to input the data, the
time to sort, and the time to output the data are essentially the same.
To make the presentation more transparent and easier to follow, we refer to the submesh
consisting of the leftmost m/√n columns of R as M. In other words, M is the submesh that
initially contains the input. Further, a slice of size k of the input consists of the elements
stored in k consecutive rows of M.
We will first present an outline of our algorithm and then proceed with the details. We
start by partitioning M into slices of size m/√n and sort the elements in each such slice in
row-major order in O(m/√n) time using an optimal sorting algorithm for meshes [32, 45]. Next, we
use bucket sort to merge groups of m/√n consecutive slices into slices of size (m/√n)^2, sorted in row-major
order. Using the same strategy, these slices are again merged into larger slices sorted in
row-major order. We continue the merging process until we are left with one slice of size
√n, sorted in row-major order. Finally, employing the data movement discussed in Lemma
4.3, the data is converted into column-major order.
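The outline can be summarized by the following schematic driver (ours and purely sequential); the two helpers stand in for the optimal mesh sort of [32, 45] and for the bucket-based merging step detailed in the remainder of this section, and the group size per round is the one analyzed below.

    import heapq

    # Schematic driver for the sorting outline (names and helpers are placeholders for
    # the mesh operations described in the text; everything here runs sequentially).
    def sort_on_mmb(slices, merge_degree):
        # Step 1: sort each initial slice of size m/sqrt(n) with an optimal mesh sort.
        slices = [sorted(s) for s in slices]
        # Step 2: repeatedly merge merge_degree consecutive slices at a time
        # (each round corresponds to one application of Lemma 5.3).
        while len(slices) > 1:
            slices = [merge_sorted_group(slices[t:t + merge_degree])
                      for t in range(0, len(slices), merge_degree)]
        return slices[0]     # Step 3 (column-major layout) is the move of Lemma 4.3.

    def merge_sorted_group(group):
        # Stand-in for the bucket-based merge of this section; a simple k-way merge
        # suffices to illustrate the data flow.
        return list(heapq.merge(*group))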
We proceed to show that the task of merging m/√n consecutive sorted slices of size (m/√n)^i
into a sorted slice of size (m/√n)^{i+1} can be performed in O(m/√n) time. For this purpose, it is convenient to
view the original mesh R as consisting of submeshes R_{j,k} of size (m/√n)^{i+1} × (m/√n)^{i+1},
where R_{j,k} involves the processors P(r, s) such that (j−1)(m/√n)^{i+1} < r ≤ j(m/√n)^{i+1} and (k−1)(m/√n)^{i+1} < s ≤ k(m/√n)^{i+1}.
We refer to the submeshes R_{k,k} as diagonal - see Figure 4 for an illustration. Notice that
the diagonal submeshes can be viewed as independent MMBs, since the same task can be
performed, in parallel, in all of them without broadcasting conflicts. The algorithm begins by
moving the elements in every R_{k,1} to the diagonal submesh R_{k,k}. This can be accomplished,
column by column, in O(m/√n) time. We now present the details of the processing that takes
place in parallel in every diagonal submesh R_{k,k}.
The rightmost element in every row of R_{k,k} will be referred to as the leader of that row,
as shown in Figure 4. To begin, the sequence of leaders is sorted
in increasing order. Note that by virtue of our grouping, the sequence of leaders consists
of m/√n sorted subsequences, and so, by Lemma 4.2, the sequence of leaders can be sorted
in O(log (m/√n)) time. Let this sorted sequence be a_1, a_2, ... . For convenience, we
assign a_0 = −∞.
Figure 4: Illustrating diagonal submeshes and leaders.
Next, in preparation for bucket sort, we define a set of (m/√n)^i buckets
such that for every j, (1 ≤ j ≤ (m/√n)^i),

    B_j = { v | a_{(j−1)·m/√n} < v ≤ a_{j·m/√n} }.        (2)

By definition, the leaders a_{(j−1)·m/√n + 1} through a_{j·m/√n} belong to bucket B_j. This observation
motivates us to call a row in R_{k,k} regular with respect to bucket B_j if its leader belongs
to B_j. Similarly, a row of R_{k,k} is said to be special with respect to bucket B_j if its leader
belongs to a bucket B_t with t > j, while the leader of the previous row belongs to a bucket
B_{t'} with t' ≤ j. To handle the boundary case, we also say that a row is special with respect
to B_j if it is the first row in a slice and its leader belongs to B_t with t > j. Note that all
elements of bucket B_j must be in either regular rows or special rows with respect to B_j.
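The classification of rows can be restated as follows (a sketch of ours; leaders, bucket_of and slice_start are illustrative names, with slice_start marking the first row of each slice).

    # Illustrative classification of rows as regular or special with respect to bucket
    # B_j, following the definitions above.
    def classify_rows(leaders, bucket_of, slice_start, j):
        regular, special = [], []
        for r, leader in enumerate(leaders):
            b = bucket_of(leader)
            if b == j:
                regular.append(r)
            elif b > j:
                if slice_start[r]:
                    special.append(r)               # boundary case: first row of a slice
                elif bucket_of(leaders[r - 1]) <= j:
                    special.append(r)               # previous row's leader is in B_t', t' <= j
        return regular, special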
At this point, we make a key observation.
Observation 5.1. With respect to every bucket B_j, there exist m/√n regular rows and at
most m/√n special rows in R_{k,k}.
Proof. The number of regular rows follows directly from the definition of bucket B_j in (2).
The claim concerning the number of special rows follows from the assumed sortedness of
the m/√n slices of size (m/√n)^i being merged, which
implies that each such slice may contain at most one special row with respect to any bucket.
In order to process each of the (m/√n)^i buckets individually, we partition the mesh R_{k,k}
into submeshes T_1, T_2, ..., T_{(m/√n)^i}, each of size (m/√n)^{i+1} × (m/√n). Specifically, T_1 contains the
leftmost m/√n columns of R_{k,k}, T_2 contains the next m/√n columns, and so on. Each submesh
T_j is dedicated to bucket B_j, in order to accumulate and process the elements belonging to
that bucket, as we describe next.
In O(m/√n) time we replicate the contents of T_1 in every submesh T_j.
Next, we broadcast in each submesh T_j the values a_{(j−1)·m/√n} and a_{j·m/√n} that are used in (2)
to define bucket B_j. As a result, all the elements that belong to B_j mark themselves. All
the unmarked elements change their value to +∞.
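The marking step performed in each submesh T_j amounts to the following (a sketch of ours; the bucket bounds are the two leader values broadcast into T_j).

    import math

    # Marking step in submesh T_j: keep only the elements of bucket
    # B_j = (lo_j, hi_j] and replace the rest by +infinity, so that later sorting
    # pushes the unmarked elements to the end.
    def mark_bucket(copy_of_data, lo_j, hi_j):
        return [v if lo_j < v <= hi_j else math.inf for v in copy_of_data]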
At this point, it is useful to view the mesh R_{k,k} as consisting of submeshes Q_{l,j},
each of size (m/√n) × (m/√n). It is easy to see that processor P(r, s) is in Q_{l,j} if
(l−1)·m/√n < r ≤ l·m/√n and (j−1)·m/√n < s ≤ j·m/√n.
The objective now becomes to move all the elements in T_j belonging to bucket B_j to
the submesh Q_{j,j}. To see how this is done, let q_k (= a_v) be the
leader of a regular row with respect to bucket B_j. The rank r of this row is taken to be
r = v − (j−1)·m/√n. Now, in the order of their ranks, the regular rows with respect to B_j
are moved to the row in Q_{j,j} corresponding to their rank. It is easy to confirm that all
the regular rows with respect to B_j can be moved into the submesh Q_{j,j} in O(m/√n) time.
Now consider a row u of T_j that is special with respect to bucket B_j. Row u is assigned
as its rank s the index of the slice that contains it. Note that no two special rows can have the same rank. In the order
of their ranks, special rows are moved to the rows of Q_{j,j} corresponding to their ranks. As
the number of special rows is at most m/√n, the time taken to move all the special rows to
Q_{j,j} is O(m/√n).
Notice that as a result of the previous data movement operations, each processor in Q_{j,j}
holds at most two elements: one from a regular row with respect to B_j and one from a
special row. Next, we sort the elements in each submesh Q_{j,j} in overlaid row-major order.
In case the number of elements in Q_{j,j} does not exceed m²/n, after sorting the elements can
be placed one per processor. If the number of elements exceeds m²/n, the first m²/n of them are
said to belong to generation-1 and the remaining elements are said to belong to generation-2.
The elements belonging to generation-1 are stored one per processor in row-major order,
overlaid with those from generation-2, also in row-major order.
This task is performed as follows. Using one of the optimal sorting algorithms for meshes
[32, 45], sort the elements in regular rows in Q_{j,j} in O(m/√n) time; then repeat the same
for the elements in special rows. Merging the two sorted sequences thus obtained can be
accomplished in another O(m/√n) time.
Now, in each submesh Q_{j,j}, all the elements know their ranks in bucket B_j. Our next
goal is to compute the final rank of each of the elements in R_{k,k}. Before we give the details
of this operation, we let S_1, S_2, ..., S_{m/√n} be the sorted slices of size (m/√n)^i being merged, and we let m_j
be the largest element in bucket B_j. In parallel, using simple data movement, each m_j is
broadcast to all the processors in T_j in O(m/√n) time. Next, we determine the rank of m_j in
each of the S_l's as follows: in every S_l we identify the smallest element (if any) strictly larger
than m_j. Clearly, this can be done in at most O(m/√n) time, since every processor only has
to compare m_j with the element it holds and with the element held by its predecessor. Now
the rank of m_j among the elements in R_{k,k} is obtained by simply adding up the ranks of m_j
in all the S_l's. Once these ranks are known, in at most O(m/√n) time they are broadcast to
the first row of Q_{j,j}, where their sum is computed in O(log (m/√n)) time. Once m_j
knows its rank in R_{k,k}, every element in bucket B_j finds, in O(1) time, its rank in R_{k,k} by
using its rank in its own bucket, the size of the bucket, and the rank of m_j. Consequently,
we have proved the following result.
Lemma 5.2. The rank in R_{k,k} of every element in every bucket can be determined in
O(m/√n) time.
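The constant-time rank computation mentioned in the last step can be written explicitly as follows (a sketch of ours, with 1-based local ranks).

    # How an element of bucket B_j recovers its global rank in R_{k,k} (cf. Lemma 5.2):
    # rank_mj is the rank of m_j (the largest element of B_j) among all elements of
    # R_{k,k}; an element whose rank inside the bucket is local_rank out of bucket_size
    # elements has global rank
    def global_rank(rank_mj, bucket_size, local_rank):
        return rank_mj - (bucket_size - local_rank)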
Finally, we need to move all the elements to the leftmost m/√n columns of R_{k,k} in row-major
order. In O(1) time, each element determines its final position from its rank r as follows.
The row number x is given by ⌈r/(m/√n)⌉ and the column number y by r − (x−1)·m/√n.
In every submesh T_j, each element belonging to generation-1 is moved to the row x to
which it belongs in sorted row-major order by broadcasting the m/√n rows of Q_{j,j}, one at
a time. This takes O(m/√n) time. Notice that at this point every row of R_{k,k} contains at
most m/√n elements. Knowing the columns to which they belong, in another O(m/√n) time all the
elements can be broadcast to their positions along the row buses. This is repeated for the
generation-2 elements. In parallel, every diagonal submesh R_{k,k} moves back its data into
the leftmost m/√n columns of submesh R_{k,1}. Thus, in O(m/√n) time, all the elements are moved
to the leftmost m/√n columns of R. Now R contains slices of size (m/√n)^{i+1}, each sorted in
row-major order.
To summarize our findings we state the following result.
Lemma 5.3. The task of merging m/√n consecutive sorted slices of size (m/√n)^i into a sorted
slice of size (m/√n)^{i+1} can be performed in O(m/√n) time.
Let T(i) be the worst-case complexity of the task of sorting a slice of size (m/√n)^i.
It is easy to confirm that the recurrence describing the behavior of T(i) is T(i+1) = T(i) + O(m/√n), with T(1) = O(m/√n).
The algorithm terminates at the end of t iterations; in particular, the algorithm has not terminated after t−1 iterations, that is,

    (m/√n)^{t−1} < √n.        (3)

Now, by dividing (1) throughout by √n we obtain

    n^{ε/2} ≤ m/√n ≤ √n,        (4)

and by raising the left inequality of (4) to the (t−1)-th power we obtain n^{ε(t−1)/2} ≤ (m/√n)^{t−1}.
By combining (3) and (4), we obtain

    n^{ε(t−1)/2} < n^{1/2}.        (5)

In turn, (5) implies that t < 1 + 1/ε.
Thus, the total running time of our algorithm is given by T(t) = O(t·m/√n) = O(m/(ε√n)),
which is obtained by solving the above recurrence. Since ε is a constant, we have proved
the following result.
Theorem 5.4. For every choice of a constant ε, 0 < ε ≤ 1, a set of m, n^((1+ε)/2) ≤ m ≤ n,
elements stored in the leftmost ⌈m/√n⌉ columns of a mesh with multiple broadcasting of size
√n × √n can be sorted in Θ(m/√n) time. This is both time- and VLSI-optimal.
6 Conclusions and Open Problems
The mesh-connected computer architecture has emerged as one of the most natural choices
for solving a large number of computational tasks in image processing, computational geom-
etry, and computer vision. Its regular structure and simple interconnection topology make
the mesh particularly well suited for VLSI implementation. However, due to its large communication
diameter, the mesh tends to be slow when it comes to handling data transfer
operations over long distances. In an attempt to overcome this problem, mesh-connected
computers have been augmented by the addition of various types of bus systems. Among
these, the mesh with multiple broadcasting (MMB) is of particular interest, being commercially
available as the underlying architecture of the DAP family of multiprocessors.
The main contribution of this paper is to present the first known adaptive time- and
VLSI-optimal sorting algorithm for the MMB. Specifically, we have shown that once we
fix a constant ε, 0 < ε ≤ 1, the task of sorting m elements, n^((1+ε)/2) ≤ m ≤ n, pretiled in the
leftmost m/√n columns of an MMB of size √n × √n can be performed in O(m/√n) time. This
is both time- and VLSI-optimal.
A number of problems remain open. First, it would be of interest to see whether the
bucketing technique used in this paper can be applied to the problem of selection. To this
day, no time-optimal selection algorithms for meshes with multiple broadcasting are known.
Also, it is not known whether the technique used in this paper can be extended to meshes
enhanced by the addition of k global buses [1, 12]. Further, we would like to completely
resolve the issues concerning optimal sorting over the entire range of m. Note
that the results of Lin and others [24] establish a time lower
bound for sorting in this architecture when m is near √n. Their results imply that a sorting algorithm cannot
be time- and VLSI-optimal for m near √n.
Quite recently, Lin et al. [24] proposed a novel VLSI architecture for digital geometry
- the Mesh with Hybrid Buses (MHB) obtained by enhancing the MMB with precharged
1-bit row and column buses. It would be interesting to see whether the techniques used in
this paper extend to the MHB. This promises to be an exciting area for future work.
Acknowledgement: The authors wish to thank the anonymous referees for their constructive
comments and suggestions that led to a more lucid presentation. We are also
indebted to Professor Ibarra for his timely and professional way of handling our submission.
--R
Optimal bounds for finding maximum on array of processors with k global buses
Parallel Computation: Models and Methods
Square meshes are not always optimal
Design of massively parallel processor
STARAN parallel processor system hardware
Square meshes are not optimal for convex hull computation
A fast selection algorithm on meshes with multiple broadcasting
Convexity problems on meshes with multiple broadcasting
The MasPar MP-1 architecture
Designing efficient parallel algorithms on mesh connected computers with multiple broadcasting
Efficient median finding and its application to two-variable linear programming on mesh-connected computers with multiple broadcasting
Prefix computations on a generalized mesh-connected computer with multiple buses
Array processor with multiple broadcast- ing
Image computations on meshes with multiple broadcast
A multiway merge sorting network
IEEE Transactions on Computers
Parallel Processing Letters
The mesh with hybrid buses: an efficient parallel architecture for digital geometry
IEEE Transactions on Parallel and Distributed Systems
Computer Vision
Optimal sorting algorithms on bus-connected processor arrays
Methods for realizing a priority bus system
A Guided Tour of Computer Vision
Bitonic sort on a mesh-connected parallel computer
Finding connected components and connected ones on a mesh-connected parallel computer
Data broadcasting in SIMD computers
Optimal convex hull algorithms on enhanced meshes
A new deterministic sampling scheme
The AMT DAP 500
The Massively Parallel Processor
Parallel Computing: Theory and Practice
Fractal graphics and image compression on a SIMD processor
Constant time BSR solutions to parenthesis matching
The VLSI complexity of sorting
Sorting on a mesh-connected parallel computer
Foundations of Vision
Algorithms for sorting arbitrary input using a fixed-size parallel sorting device
--TR | parallel algorithms;VLSI-optimality;lower bounds;sorting;meshes with multiple broadcasting;time-optimality |
291368 | Theoretical Analysis for Communication-Induced Checkpointing Protocols with Rollback-Dependency Trackability. | AbstractRollback-Dependency Trackability (RDT) is a property that states that all rollback dependencies between local checkpoints are on-line trackable by using a transitive dependency vector. In this paper, we address three fundamental issues in the design of communication-induced checkpointing protocols that ensure RDT. First, we prove that the following intuition commonly assumed in the literature is in fact false: If a protocol forces a checkpoint only at a stronger condition, then it must take, at most, as many forced checkpoints as a protocol based on a weaker condition. This result implies that the common approach of sharpening the checkpoint-inducing condition by piggybacking more control information on each message may not always yield a more efficient protocol. Next, we prove that there is no optimal on-line RDT protocol that takes fewer forced checkpoints than any other RDT protocol for all possible communication patterns. Finally, since comparing checkpoint-inducing conditions is not sufficient for comparing protocol performance, we present some formal techniques for comparing the performance of several existing RDT protocols. | Introduction
A distributed computation consists of a finite set of processes connected by a communication
network, that communicate and synchronize by exchanging messages through the network.
A local checkpoint is a snapshot of the local state of a process, saved on nonvolatile storage
to survive process failures. It can be reloaded into volatile memory in case of a failure
to reduce the amount of lost work. When a process has to record such a local state, we
say that this process takes a (local) checkpoint. With each distributed computation is
thus associated a checkpoint and communication pattern, defined from the set of messages
and local checkpoints. A global checkpoint [1] is a set of local checkpoints, one from each
process, and a global checkpoint M is consistent if no message is sent after a checkpoint
of M and received before another checkpoint of M [2]. The computation of consistent
global checkpoints is an important task when one is interested in designing or implementing
systems that have to ensure the dependability of the applications they run.
Many protocols have been proposed to select local checkpoints in order to form consistent
global checkpoints (see the survey [3]). Note that if local checkpoints are taken independently,
there is a risk that no consistent global checkpoint can ever be formed from them (this
is the well-known unbounded domino effect, which can occur during rollback recovery [4]). To
avoid the domino effect, a kind of coordination in the determination of local checkpoints is
required. In [2, 5], the coordination is achieved at the price of synchronization by means of
additional control messages. Another approach, namely, communication-induced checkpointing
[6], achieves coordination by piggybacking control information on application messages.
In that case, processes select local checkpoints independently (called basic checkpoints) and
the protocol requires them to take additional local checkpoints (called forced checkpoints)
in order to ensure the progression of a consistent recovery line. Forced checkpoints are taken
according to certain condition tested each time a message is received, on the basis of control
information piggybacked on messages.
Generally, the fact that two local checkpoints are not causally related is a necessary
but not sufficient condition for them to belong to the same consistent global checkpoint
[7]. They can have "hidden" dependencies that make it impossible for them to belong to the
same consistent global checkpoint. These dependencies are characterized by the fact that
they cannot be tracked with transitive dependency vectors. To solve this problem, Wang
has defined the Rollback-Dependency Trackability (RDT) property [8]. A checkpoint and
communication pattern satisfies this property if all dependencies between local checkpoints
are on-line trackable (i.e., trackable by a simple use of the transitive dependency vector).
RDT has two noteworthy properties: (1) It ensures that any set of local checkpoints that are
not pairwise causally related can be extended to form a consistent global checkpoint; (2) It
enjoys efficient calculations of the minimum and the maximum consistent global checkpoints
that contain a given set of local checkpoints. As a consequence, the RDT property has
applications in a large family of dependability problems such as: software error recovery [9],
deadlock recovery [10], mobile computing [11], distributed debugging [12], etc. Moreover,
when combined with an appropriate message logging protocol [13], RDT allows to solve some
dependability problems posed by nondeterministic computations as if these computations
were piecewise deterministic [8].
Since the RDT property has a wide range of applications, it becomes
an important pragmatic issue to design efficient communication-induced checkpointing
protocols satisfying the RDT property. The number of forced checkpoints and the size
of the piggybacked control information are the dominant factors in the price to be paid. Hence
the main question in this context is: how can we design an efficient RDT protocol with a small
number of forced checkpoints and a small amount of piggybacked control information? Is the
common intuition in the literature [8, 14], namely that "if a protocol forces a checkpoint at a weaker
condition, then it must force at least as many checkpoints as a protocol that does so at a
stronger condition" (note that the stronger condition is a subset of the
weaker condition), necessarily true? Is there actually a tradeoff between the number of forced checkpoints
and the size of the piggybacked control information [15]? In this paper, we give a theoretical
analysis of these problems. First, some counterexamples against the previous two statements
are presented. We then demonstrate that there is no optimal on-line protocol in terms of
the number of forced checkpoints. Since the common intuition is proved to be invalid, some
techniques for comparing useful protocols are also proposed. These techniques
can be used to compare many existing RDT protocols in the literature. Note that our
results are of interest not only from a theoretical point of view but also from a practical one, when
considering the task of designing efficient protocols with the RDT property.
This paper is structured in five main sections. Section 2 defines the computational model
and introduces definitions and elements of the Rollback-Dependency Trackability theory. In
Section 3, we discuss some "impossibility" problems. Then several techniques for comparing
useful protocols are addressed in the next section. Section 5 presents a hierarchy graph for
comparing a family of RDT protocols that summarizes the discussions in the paper. Finally, we
conclude the paper in Section 6.
2. Preliminaries
2.1 Checkpoint and Communication Patterns
A distributed computation consists of a finite set P of n processes {P_1, P_2, ..., P_n} that
communicate and synchronize only by exchanging messages. We assume that each ordered
pair of processes is connected by an asynchronous, reliable, directed logical channel whose
transmission delays are unpredictable but finite. Each process runs on a processor; processors
do not share a common memory; there is no bound for their relative speeds and they fail
according to the fail-stop model.
A process can execute internal, send and receive statements. An internal statement does
not involve communication. When P_i executes the statement "send(m) to P_j", it puts the
message m into the channel from P_i to P_j. When P_i executes the statement "receive(m)", it
is blocked until at least one message directed to P i has arrived; then a message is withdrawn
from one of its input channels and received by P i . Executions of internal, send and receive
statements are modeled by internal, sending and receiving events.
Processes of a distributed computation are sequential , in other words, each process P i
produces a sequence of events. This sequence can be finite or infinite. All the events produced
by a distributed computation can be modeled as a partially ordered set with the well-known
happened-before relation "→", defined as follows [16].
Definition 1: The relation "→" on the set of events satisfies the following three conditions:
(1) If a and b are events in the same process and a comes before b, then a → b. (2) If a is
the event send(m) and b is the event receive(m), then a → b. (3) If a → b and b → c, then
a → c.
With each distributed computation is associated a checkpoint and communication pattern,
which is composed of a distributed computation H and the set of local checkpoints
defined on H. Figure 1 shows an example checkpoint and communication pattern ccpat.
Figure 1: A checkpoint and communication pattern ccpat.
C_{i,x} represents the xth checkpoint of process P_i, and the sequence of events occurring at P_i
between C_{i,x−1} and C_{i,x} is called a checkpoint interval and is denoted by I_{i,x}, where
i is called the process id and x the index of this checkpoint or this checkpoint interval. We
assume that each process P_i starts its execution with an initial checkpoint C_{i,0}.
2.2 Rollback-Dependency Trackability
First we will briefly introduce the concepts of Z-path [7] and causal doubling of a Z-path [15],
and then the concept of Rollback-Dependency Trackability [8]. For more details on these
subjects, please consult those papers previously cited.
Definition 2: A Z-path is a sequence of messages [m_1, m_2, ..., m_q] such that, for each t
(1 ≤ t < q), message m_{t+1} is sent by the process that receives m_t, and send(m_{t+1}) occurs in
the same or a later checkpoint interval than receive(m_t).
To the best of our knowledge this notion has been introduced for the first time by Netzer
and Xu in [7] under the name zig-zag path. If a Z-path [m_1, m_2, ..., m_q] is such that send(m_1)
occurs in I_{i,x} and receive(m_q) occurs in I_{j,y}, we say that this Z-path is from I_{i,x} to I_{j,y}. For
example, the pattern ccpat depicted in Figure 1 contains Z-paths from I_{k,2} to I_{i,2}, as well as
message sequences that are not Z-paths. In the rest of this
paper, we use the following notation. In a Z-path μ, the first (last) message will be denoted
μ.first (μ.last). Let μ and ν be two Z-paths whose concatenation is also a Z-path. This
concatenation will be represented as μ · ν.
We now introduce the notion of a causal Z-path. A Z-path is causal if the receiving event
of each message except the last one precedes the sending event of the next message in the
sequence. A Z-path is non-causal if it is not causal. A Z-path with only one message is
trivially causal. For simplicity, a causal Z-path will also be called a causal path.
Definition 3: A Z-path from I_{i,x} to I_{j,y} is causally doubled if i = j and x < y, or if there exists
a causal path ν from I_{i,x'} to I_{j,y'} where x ≤ x' and y' ≤ y.
From the previous definition, every causal path is obviously causally doubled by itself.
As an example, the non-causal Z-path from I_{k,2} to I_{i,2} in pattern ccpat of Figure 1 is causally
doubled by a causal path of the pattern.
The following concept, Rollback-Dependency Trackability , was introduced by Wang in [8].
It is defined differently but equivalently as below [15].
Definition 4: A checkpoint and communication pattern ccpat satisfies the RDT property
if and only if all non-causal Z-paths are causally doubled in ccpat.
In other words, a checkpoint and communication pattern satisfies this property when
all dependencies between local checkpoints can be tracked on line, since
dependencies can only be propagated along causal paths.
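At the granularity of checkpoint intervals, Definitions 2 and 3 can be checked mechanically. The following sketch (ours) only restates the definitions and is not part of any protocol; a message is summarized by the process and interval of its send and receive events, and causal paths are supplied by their endpoint intervals.

    from collections import namedtuple

    # A message summarized by the process and checkpoint-interval indices of its send
    # and receive events (names are ours).
    Msg = namedtuple('Msg', 'send_proc send_int recv_proc recv_int')

    # Definition 2: message m_{t+1} must be sent by the process that received m_t,
    # in the same or a later interval.
    def is_zpath(path):
        return all(nxt.send_proc == cur.recv_proc and nxt.send_int >= cur.recv_int
                   for cur, nxt in zip(path, path[1:]))

    # A Z-path [m1, ..., mq] goes from I_{i,x} to I_{j,y} with i = m1.send_proc,
    # x = m1.send_int, j = mq.recv_proc, y = mq.recv_int.
    def endpoints(path):
        return (path[0].send_proc, path[0].send_int,
                path[-1].recv_proc, path[-1].recv_int)

    # Definition 3 at interval granularity: doubled if i == j and x < y, or some causal
    # path runs from I_{i,x'} to I_{j,y'} with x <= x' and y' <= y.  causal_paths is a
    # collection of endpoint tuples (i, x', j, y').
    def causally_doubled(path, causal_paths):
        i, x, j, y = endpoints(path)
        if i == j and x < y:
            return True
        return any(cp[0] == i and x <= cp[1] and cp[2] == j and cp[3] <= y
                   for cp in causal_paths)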
2.3 PCM-paths
For a given checkpoint and communication pattern ccpat, it is not necessary to check that
every non-causal Z-path is causally doubled to ensure that ccpat satisfies the RDT property.
Namely, we need only consider certain subsets of non-causal Z-paths [15]. An
important subset, the PCM-paths, is now introduced.
Definition 5: A causal path μ from I_{i,x} to I_{j,y} is prime if every causal path ν from I_{i,x'} to
I_{j,y'} with x ≤ x' and y' ≤ y satisfies that receive(ν.last) does not precede receive(μ.last) with respect to →.
Intuitively, a prime path from I_{i,x} to I_{j,y} is the first causal path that brings the existence of interval
I_{i,x} (i.e., a new dependency on which P_j's current state transitively depends) into
the causal past of P_j's current state. A PCM-path μ · m is a Z-path that is the concatenation
of a causal path μ and a single message m, where μ is prime and send(m) → receive(μ.last).
In pattern ccpat shown in Figure 1, the path [m_5] is prime but [m_3] is not prime, and the
pattern also contains a PCM-path formed in this way. The following theorem is a direct consequence of [15].
Theorem 1: A checkpoint and communication pattern ccpat satisfies the RDT property if
and only if all PCM-paths are causally doubled in ccpat.
The idea behind this theorem is that, by the definition of RDT, all
dependencies between local checkpoints must be on-line trackable, and every new dependency
first reaches a process along a prime causal path. Hence requiring that all PCM-paths be
causally doubled is necessary and sufficient to ensure the RDT property.
Note that we can exploit the following transitive dependency tracking mechanism proposed
in the literature [11, 17, 18] to detect the existence of a prime path: in a system with
n processes, each process P_i maintains a size-n transitive dependency vector (TDV) that
represents the current interval index (or, equivalently, the checkpoint index of the next
checkpoint) of P_i and records the highest index of the intervals of any other process P_j on which
P_i's current state transitively depends. The TDV is piggybacked on application messages sent,
and upon the receipt of messages, processes can decide by evaluating this vector if a prime
path is encountered [8].
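A minimal sketch of this bookkeeping is given below (ours; method names are illustrative). The test used in on_receive reflects the standard observation that a prime path ends at a receive when the piggybacked vector brings a strictly higher interval index for some process.

    # Minimal sketch of the transitive dependency vector (TDV) bookkeeping described
    # above, for process i in a system of n processes.
    class TDVProcess:
        def __init__(self, i, n):
            self.i = i
            self.tdv = [0] * n          # tdv[i] = current interval index of P_i

        def take_checkpoint(self):
            self.tdv[self.i] += 1       # a new checkpoint opens a new interval

        def on_send(self):
            return list(self.tdv)       # the TDV is piggybacked on every message

        def on_receive(self, piggybacked):
            # A new dependency (hence a prime path ending at this receive) is detected
            # when the piggybacked vector exceeds the local one in some component.
            prime = any(p > q for p, q in zip(piggybacked, self.tdv))
            self.tdv = [max(p, q) for p, q in zip(piggybacked, self.tdv)]
            return prime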
According to Theorem 1, in order to ensure the RDT property, any PCM-path which
is not causally doubled needs to be broken by a forced checkpoint. For a PCM-path μ · m
whose breakpoint is I_{i,x}, on the receipt of μ.last, process P_i has to decide whether μ · m is
causally doubled, using only the information carried on the message. So this PCM-path
has to be not only causally doubled but also visibly doubled, defined as follows [15], in
order not to be broken.
Definition 6: A PCM-path μ · m is visibly doubled if and only if it is causally doubled by
a causal path μ' with receive(μ'.last) → send(μ.last).
Figure 2 shows an example of visibility of doubling. Intuitively, a causal doubling of a PCM-path
is visible at a process on the receipt of message μ.last if the path μ' that causally
doubles it belongs to the causal past of μ.last. Note that from the definition a causally
doubled PCM-path is not necessarily visibly doubled, but a non-causally doubled one must
be non-visibly doubled. Based on the foregoing discussion and Theorem 1, we can deduce
a characterization of the RDT with respect to protocols based on the entire-causal history
[15].
Corollary 1: A checkpoint and communication pattern produced by a protocol based on
the entire-causal history satisfies the RDT property if and only if all PCM-paths are visibly
doubled.
Figure 2: Visibility of doubling.
If a PCM-path is from an interval I_{i,x} to another interval I_{i,x'} of the same process P_i, we
call this PCM-path a PCM-cycle. If x' ≤ x, the PCM-cycle cannot be causally doubled,
and is called a non-doubled PCM-cycle [15]. For example, pattern ccpat in Figure 1 contains
such a non-doubled PCM-cycle. In the remainder of the paper, for the sake of clarity, we
refer to a PCM-path from a process to a different process simply as a PCM-path and, in
contrast, to a PCM-path from a process to itself as a PCM-cycle. PCM-paths
and PCM-cycles are all called PCM-conditions.
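Combining the two previous observations, the receive-time rule of a protocol that simply breaks every PCM-condition (this is the behaviour of the FDAS protocol discussed in Section 4) can be sketched as follows; the state kept per process is its TDV together with a flag recording whether a message has been sent in the current interval, and all names below are ours.

    # Receive-time rule that breaks every PCM-condition: if the incoming message ends
    # a prime causal path and a message has already been sent in the current interval,
    # a checkpoint is forced before the receive is processed.
    def on_receive_break_all_pcm(local_tdv, own_index, piggybacked, sent_since_ckpt):
        new_dependency = any(p > q for p, q in zip(piggybacked, local_tdv))
        if new_dependency and sent_since_ckpt:
            local_tdv[own_index] += 1      # forced checkpoint: open a new interval
            sent_since_ckpt = False        # the PCM-condition is broken
        local_tdv[:] = [max(p, q) for p, q in zip(piggybacked, local_tdv)]
        return local_tdv, sent_since_ckpt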
3. Impossibility Problems
In this section, we discuss some "impossibility" problems. First we disprove the following
common intuition in the literature [8, 14]: if a protocol forces a checkpoint at
a weaker condition, then it must force at least as many checkpoints as a protocol that does so at a
stronger condition. In other words, even though the conditions involved in two different protocols
have an inclusive, subordinate relationship, it may be impossible to compare these two
protocols in terms of the number of forced checkpoints. The reason is that
any forced checkpoint changes the given checkpoint and communication pattern
and consequently affects later condition tests; as soon as two protocols differ in their
decision to force a checkpoint, the two resulting checkpoint and communication patterns
are no longer the same and may strongly "diverge" in the future, which can make the two
protocols impossible to compare. We also refute the idea that there is necessarily
a tradeoff between the number of forced checkpoints and the size of the piggybacked control
information for RDT protocols. In the last subsection, we prove another impossibility result,
namely that there is no optimal on-line protocol that ensures the
RDT property. This situation is quite common in the area of on-line algorithms, which have no
knowledge of future information.
3.1 Common Intuitions Not Necessarily True
Two counterexamples are presented against the statements mentioned previously.
They show that a formal proof is definitely needed to compare
two different protocols. Therefore we propose some techniques for comparing useful
protocols in the next section.
Counterexample 1: CPn is a protocol that breaks all PCM-paths and every PCM-cycle
μ · m such that receive(m) → send(μ.first), and CPm is a protocol that breaks all PCM-paths
and every CM-cycle μ · m such that receive(m) → send(μ.first) (a CM-condition is the
concatenation of a causal path μ, which is not necessarily prime, and a single message m
where send(m) → receive(μ.last)). It can be easily verified that both CPn and CPm break all
non-causally doubled PCM-paths and non-doubled PCM-cycles. Hence these two protocols
are RDT protocols by Theorem 1. Moreover, they only need to piggyback the TDV on application
messages (note that we apply the result of [15] to evaluate the size of the piggybacked
control information in this paper; please refer to that paper for more details). Obviously,
CPm forces a checkpoint at a weaker condition than CPn's. As shown in Figure
3(a), CPn must take two forced checkpoints (the diamond boxes) to make the considered
checkpoint and communication pattern satisfy the RDT property. However, CPm needs
only one forced checkpoint to make the same pattern ensure RDT, as depicted in Figure 3
(b). This counterexample shows that CPm forces fewer checkpoints than CPn in the given
checkpoint and communication pattern, and thus disproves the common intuition.
Besides refuting the common intuition, the following counterexample also demonstrates
that there is not necessarily a tradeoff between the number of forced checkpoints and
the size of piggybacked control information.
Figure 3: The scenario of Counterexample 1: (a) CPn; (b) CPm.
Counterexample 2: Let No-Non-Visibly-Doubled-PCM (NNVD-PCM) be a protocol
that breaks all non-visibly doubled PCM-paths and non-doubled PCM-cycles, and let No-PCM-Path
be a protocol that breaks all PCM-paths and non-doubled PCM-cycles. Similarly,
since both protocols break all non-visibly doubled PCM-paths and non-doubled PCM-cycles,
according to Corollary 1, they are also RDT protocols. Moreover, since NNVD-PCM has
to decide whether a PCM-path is visibly doubled, it needs more piggybacked control information
than No-PCM-Path's. NNVD-PCM takes one more forced checkpoint (see
Figure
4 (a)) than No-PCM-Path does (see Figure 4 (b)). Hence this counterexample also
shows that the protocol piggybacking less control information (No-PCM-Path) outperforms
the one piggybacking more control information (NNVD-PCM) in some checkpoint
and communication patterns in terms of the number of forced checkpoints.
The idea behind those counterexamples is that the forced checkpoint taken by the protocol
based on the stronger condition will make the CM-path at the rightmost part of the considered
checkpoint and communication pattern become a non-causally doubled PCM-path, and
thus another checkpoint must be forced to break this PCM-path. However, the
forced checkpoint taken by the protocol based on the weaker condition does not give rise to such a
scenario.
Figure 4: The scenario of Counterexample 2: (a) NNVD-PCM; (b) No-PCM-Path.
3.2 No Optimal On-line Protocol
We consider two categories of on-line protocols for this problem: on-line protocols based
on the entire causal history, and on-line protocols based on transitive dependency tracking
(i.e., piggybacking only the TDV on each message as control information). Both categories
are shown below to contain no optimal protocol.
Given the checkpoint and communication pattern of Counterexample 1, redrawn in Figure
5 and denoted ccpat_a, we directly have the following lemma, since the PCM-cycle in the
pattern is non-doubled.
Lemma 1: For every entire-causal or TDV on-line RDT protocol, process P_3 needs to force
at least one checkpoint between points a and b to make ccpat_a satisfy RDT.
Lemma 2: For every entire-causal or TDV on-line RDT protocol, if a forced checkpoint is
taken between points c and b in process P_3, then process P_2 must take another forced
checkpoint to satisfy RDT.
Figure 5: The checkpoint and communication patterns ccpat_a and ccpat_c.
Proof: If a forced checkpoint is taken between points c and b in process P_3, the corresponding
Z-path becomes a non-causally doubled PCM-path, since m_3 turns out to be prime. Hence P_2
has to force another checkpoint to break this PCM-path in order to satisfy RDT, for all entire-causal
and TDV on-line RDT protocols, as in the scenario shown in Figure 3(a). Q.E.D.
We now consider the following theorem.
Theorem 2: There is no optimal on-line protocol based on the entire causal history in terms
of the number of forced checkpoints.
Proof: A protocol is optimal if and only if, given any checkpoint and communication pattern,
no other protocol forces fewer checkpoints than it does. Consider the checkpoint
and communication pattern ccpat_a depicted in Figure 5. Since the protocol CPm of Counterexample
1 needs only one forced checkpoint to make ccpat_a satisfy RDT, protocols that
force any checkpoint between points c and b (by Lemma 2 these protocols must force two
checkpoints), and protocols that take more than one checkpoint between points a and c, cannot
be optimal. So an optimal protocol, if any, must take exactly one forced checkpoint
between points a and c, according to Lemma 1. However, such an on-line protocol cannot be
optimal because it also has to force one checkpoint in the cut pattern ccpat_c, shown as the
region to the left of the dotted line in Figure 5, since the causality there is the same as in ccpat_a at the
same position. But the protocol CPn of Counterexample 1 takes zero forced checkpoints in
ccpat_c. Therefore, there is no optimal on-line protocol based on the entire causal history in
terms of the number of forced checkpoints. Q.E.D.
As mentioned earlier in Counterexample 1, both the protocols CPm and CPn only
piggyback the transitive dependency vector. Hence we obtain the corollary below, with a
proof similar to that of the previous theorem.
Corollary 2: There is no optimal on-line protocol based on the transitive dependency
tracking (TDV) in terms of the number of forced checkpoints.
4. Techniques of Comparison
4.1 FDAS vs Other Protocols
Wang proposed the Fixed-Dependency-After-Send (FDAS) checkpointing protocol in [8],
which breaks all PCM-conditions. In this subsection, we demonstrate two assertions, both in
terms of the number of forced checkpoints: FDAS outperforms protocols that force a
checkpoint at weaker conditions than FDAS's, and protocols that force a checkpoint at
stronger conditions outperform FDAS. First we prove that the former assertion
is true. Let C_f denote the condition on which FDAS is based (i.e., breaking all PCM-conditions),
and let CPw denote a protocol that takes a forced checkpoint at a weaker
condition than FDAS's, its condition being denoted C_w. Obviously C_f is a subset of C_w,
and we represent this relation as C_f ⇒ C_w. Let ccpat_f and ccpat_w represent the checkpoint
and communication patterns produced by the protocols FDAS and CPw, respectively. Since
adding forced checkpoints cannot make any PC-path (i.e., prime causal path) in the original
checkpoint and communication pattern become a non-PC-path, we directly have the following
lemma.
Lemma 3: For any PC-path in the original checkpoint and communication pattern, it is
still a PC-path in the checkpoint and communication pattern produced by any protocol.
Now we define an extra PC-path as a path that is not a PC-path originally but becomes one
due to a forced checkpoint taken by a protocol. We then have the following two lemmas.
Lemma 4: For any extra PC-path in ccpat f , it is also an extra PC-path in ccpat w .
Proof: Assume there exists an extra PC-path μ in ccpat_f which is not an extra PC-path in
ccpat_w. An extra PC-path is produced only when a forced checkpoint is taken before it. Thus
there exists a PC-path condition μ_1 before send(μ.first) in ccpat_f such that FDAS forced
the process to take a checkpoint to break the C_f condition formed by μ_1, and this checkpoint
made μ become an extra PC-path. We then say that μ is produced by μ_1. Obviously, μ_1 is in
the causal past of μ. Similarly, μ_1 is a PC-path either because it is originally a PC-path
or because it is an extra PC-path produced by another PC-path μ_2, which is in the causal
past of μ_1 and thus also in the causal past of μ. By repeating the foregoing observation,
and since the set of messages in the causal past of μ is finite, we eventually obtain an original
PC-path μ_n. This progression is shown in Figure 6. By Lemma 3, μ_n is also a PC-path
in ccpat_w. Hence CPw must force the process to take a checkpoint (not necessarily the
same checkpoint as the one in ccpat_f) between receive(μ_n.last) and its nearest previous
message-sending event, to prevent μ_n from forming a C_f condition, which CPw also needs to break
since C_f ⇒ C_w. The forced checkpoint makes μ_{n−1} in ccpat_w become an extra PC-path, since
μ_{n−1} in ccpat_f is also an extra PC-path and there is no other message-sending event between
receive(μ_n.last) and its nearest previous message-sending event (namely, there cannot exist
a causal path that prevents μ_{n−1} from becoming prime). For the same reason, μ_{n−2} also
becomes a PC-path in ccpat_w, and finally μ also becomes an extra PC-path in ccpat_w. This
leads to a contradiction. Q.E.D.
Figure 6: The observation process in Lemma 4.
Lemma 5: FDAS can never force two consecutive checkpoints between two consecutive
checkpoints forced by CPw.
Figure 7: The scenario of Lemma 5.
Proof: We prove this lemma by showing that CPw must force at least one checkpoint between
any two consecutive forced checkpoints taken by FDAS. In the scenario shown in
Figure 7, there are two consecutive checkpoints forced by FDAS, so there exist two consecutive
C_f conditions. The PC-path of the latter C_f condition in ccpat_f is, by Lemma 3
and Lemma 4, also a PC-path in ccpat_w. Therefore CPw has to force one checkpoint
between this PC-path and its nearest previous message-sending event to prevent a C_f condition
from being formed, shown as the hollow diamond in Figure 7. This checkpoint is
obviously between the two consecutive checkpoints forced by FDAS. Q.E.D.
As a consequence, we can derive the following "monotonicity" property.
Theorem 3: CPw takes the n-th forced checkpoint no later than FDAS does, for all n.
Proof: By induction. Because the given checkpoint and communication pattern is exactly
the same for CPw and FDAS before any forced checkpoint is taken, it is clear that CPw
must force the first checkpoint no later than FDAS does, since C_f ⇒ C_w. Now
suppose CPw takes the k-th checkpoint no later than FDAS does; then, according to Lemma 5,
CPw will take the (k+1)-th checkpoint no later than FDAS does. Q.E.D.
Let #fckpt(CP) denote the number of forced checkpoints taken by the protocol CP.
Applying the previous theorem, it is obvious that #fckpt(CPw) ≥ #fckpt(FDAS).
Russell's algorithm [19] and the protocol presented in [20], named No-Receive-After-Send
(NRAS) and Fixed-Dependency-Interval (FDI) by Wang in [8], respectively,
are both RDT protocols. By their definitions, NRAS breaks all CM-paths and FDI
forces a checkpoint whenever a PC-path is encountered. Hence they belong to the family of
CPw. By Theorem 3, we know that FDAS is better than these two protocols in terms of
the number of forced checkpoints.
We now demonstrate that the latter assertion stated at the beginning of this subsection is
also valid. Let CPs denote a protocol that takes a forced checkpoint at a stronger
condition than FDAS's, its condition being denoted C_s, where obviously C_s ⇒ C_f. Let
ccpat_s represent the checkpoint and communication pattern produced by the protocol CPs.
Similarly, we consider the following lemma.
Lemma 6: For any extra PC-path in ccpat_s, it is also an extra PC-path in ccpat_f.
Proof: Assume there exists an extra PC-path μ in ccpat_s which is not an extra PC-path in
ccpat_f. Since CPs also breaks only certain PCM-conditions, by the same observation as in
Lemma 4 we can obtain an original PC-path μ_n that causes μ to become an extra
PC-path. By Lemma 3, μ_n is also a PC-path in ccpat_f. Hence FDAS must force the process
to take a checkpoint (not necessarily the same checkpoint as the one in ccpat_s) between
receive(μ_n.last) and its nearest previous message-sending event to prevent μ_n from forming a C_f
condition, which FDAS needs to break. The forced checkpoint makes μ_{n−1} in ccpat_f become
an extra PC-path, since μ_{n−1} in ccpat_s is also an extra PC-path and there is no other message-sending
event between receive(μ_n.last) and its nearest previous message-sending event. For
the same reason, μ_{n−2} also becomes a PC-path in ccpat_f, and finally μ also becomes an extra
PC-path in ccpat_f. This leads to a contradiction. Q.E.D.
By an argument similar to that of Lemma 5, the following lemma can be proved.
Lemma 7: CPs can never force two consecutive checkpoints between two consecutive
checkpoints forced by FDAS.
Therefore we can obtain the corollary below in a straightforward way.
Corollary 3: FDAS takes the n-th checkpoint no later than CPs does, for all n.
And so, the assertion #fckpt(FDAS) ≥ #fckpt(CPs) holds. The RDT protocol
BHMR proposed in [14] breaks non-visibly doubled PCM-paths and some PCM-cycles
(including all non-doubled PCM-cycles), and thus it belongs to the family of CPs. As a side
effect of the foregoing corollary, we obtain a formal proof that BHMR outperforms
FDAS in terms of the number of forced checkpoints, complementing the simulation results in [14].
4.2 No-PCM-Cycle vs FDAS
Another interesting result is the technique for comparing No-PCM-Cycle and FDAS.
Applying Corollary 3, the protocol No-PCM-Cycle that breaks non-visibly doubled PCM-
paths and all PCM-cycles outperforms FDAS, because the condition on which it is based is
stronger than FDAS's. However, we find that No-PCM-Cycle is actually equivalent to the
protocol FDAS, as shown by the following theorem.
Theorem 4: If all PCM-cycles and non-visibly doubled PCM-paths are broken in the checkpoint
and communication pattern, then any visibly doubled PCM-path is also broken.
Proof: Consider a visibly doubled PCM-path μ · m as shown in Figure 8(a). Since there must exist a
prime path that visibly doubles μ · m, without loss of generality, μ_1 can be assumed to be prime.
From Figure 8(a), we know that μ_1 is in the causal past of μ.last. Now we show by a case
analysis that μ · m will be broken.
(a) If μ_1 · (μ.last) is prime, the PCM-cycle μ_1 · (μ.last) · m is broken, and thus the PCM-path
μ · m is also broken.
(b) If μ_1 · (μ.last) is not prime, there exists a causal path μ'_1 to process P_i, which is necessarily
before receive(μ_1.last) (otherwise the path in question would turn out to be non-prime), as shown in Figure 8(a).
Without loss of generality, we assume μ'_1 to be the nearest such causal path to P_i, so that
μ_2 · (μ'_1.first) is a PCM-condition. If this PCM-condition is broken
by the theorem assumption, the forced checkpoint will make μ_1 · (μ.last) become prime,
since μ'_1 is the first causal path to P_i after receive(μ_1.last), and consequently the
PCM-path μ · m will be broken. If not, the PCM-path μ_2 · (μ'_1.first) is a visibly doubled
PCM-path, shown in Figure 8(b). Clearly, μ_2 is in the causal past of μ_1.last and thus in
the causal past of μ_1 (and also in the causal past of μ.last). By repeatedly applying the foregoing
observation, and since the set of messages in the causal past of μ.last is finite, we eventually
obtain a PCM-condition μ_n · (μ'_n.first), which is either a non-visibly doubled PCM-path
or a PCM-cycle, and therefore needs to be broken in either case. The forced checkpoint will make
μ_{n−1} · (μ'_{n−1}.first) become prime, and the PCM-path μ_{n−1} · (μ'_{n−1}.first) will be broken. For
the same reason, μ_{n−2} · (μ'_{n−2}.first) will also be broken, and finally the PCM-path μ · m will
also be broken. Q.E.D.
According to the previous theorem, we have that the protocol No-PCM-Cycle in fact
breaks all PCM-conditions, so it is equivalent to FDAS. This result shows that we can
Figure 8: The scenario of Theorem 4.
reduce the redundant size of piggybacked control information by adopting FDAS instead of
No-PCM-Cycle because No-PCM-Cycle needs extra information to distinguish a visibly
doubled PCM-path.
4.3 PCM vs PESCM
In [15], Baldoni et al. proposed a more constrained characterization of the RDT property,
PESCM, for designing protocols. A PESCM-condition is composed of a PCM-condition μ · m
such that μ is elementary and simple besides being prime. The interest of this subsection
lies in the fact that the existence of a PCM-condition implies the existence of a PESCM-condition
at the same position. Consequently, it becomes unnecessary for some protocols to
consider such a stronger condition, which requires more piggybacked control information.
First, we introduce the definitions of the terms "elementary" and "simple" [15].
Definition 7: A Z-path μ is elementary if its traversal sequence, that is, the sequence of
processes traversed by μ, has no repetition.
Definition 8: A causal path [m_1, m_2, ..., m_q] is simple if the two events receive(m_i)
and send(m_{i+1}) occur in the same interval, for all i (1 ≤ i < q).
In other words, an elementary Z-path traverses each process only once, and a simple causal path
does not include local checkpoints. For instance, in the checkpoint and communication
pattern shown in Figure 1, one of the depicted paths is neither elementary nor simple because it
traverses process P_j twice and includes the local checkpoint C_{k,1}, whereas another path
is both elementary and simple.
By Definition 7, every causal path contains an elementary causal path. The
elementary causal path contained in a prime causal path μ has the same starting interval
and ending point as μ, and thus it is also prime. A PECM-condition is defined as a PCM-condition
with the property that μ is elementary in addition to being prime. Then we
directly have the following theorem and corollary.
Theorem 5: The existence of a PCM-condition implies the existence of a PECM-condition
at the same position.
Corollary 4: The existence of a non-doubled PCM-condition implies the existence of a
non-doubled PECM-condition at the same position.
Next we demonstrate the lemma below.
Lemma 8: A PEC-path (prime and elementary causal path) contains a PESC-path (prime,
elementary and simple causal path).
Proof: Note that a non-simple causal path μ can be written as μ = μ_1 · μ_2 · ... · μ_l, where each
component μ_i is simple. We prove this lemma by showing that the last simple component
of a PEC-path is prime (it is, of course, elementary). As depicted in Figure 9, suppose that the
last simple component μ_l of a PEC-path μ = μ_1 · μ_2 · ... · μ_l is not prime; then there exists
a prime path ν from point x' to point y', where point y' precedes point y. Since μ_1 · ... · μ_{l−1}
is a causal path whose last receive precedes the sending event of μ_l.first, the path μ itself then turns out
to be non-prime. This leads to a contradiction. Q.E.D.
According to Theorem 5, Corollary 4 and the previous lemma, it can be easily verified
that the following theorem and corollary are true.
Theorem 6: The existence of a PCM-condition implies the existence of a PESCM-condition
at the same position.
Corollary 5: The existence of a non-doubled PCM-condition implies the existence of a
non-doubled PESCM-condition at the same position.
The idea underlying the previous results is that the protocol No-PESCM, which breaks
all PESCM-conditions, and the protocol No-Non-Visibly-Doubled-PESCM, which breaks
non-visibly doubled PESCM-paths and non-doubled PESCM-cycles, both presented in [15], are exactly
the same as FDAS and NNVD-PCM, respectively, but with more piggybacked
control information, owing to the need to distinguish simple paths. Intuitively, we have
to break all non-doubled PCM-conditions in order to satisfy RDT, by Theorem 1. Why can
we achieve this goal by breaking only non-doubled PESCM-conditions, given that
there is not necessarily a PESCM-condition before a PCM-condition such that breaking the
former eliminates the latter? The reason is that the previous results hold.
Figure 9: The PEC-path.
5. A Family of RDT Protocols
Figure 10 depicts a hierarchy graph comparing a family of communication-induced checkpointing
protocols satisfying the RDT property. A plain arrow from a protocol CP1 to
another protocol CP2 indicates that #fckpt(CP1) ≤ #fckpt(CP2), and a dotted arrow
indicates that the piggybacked control information of CP1 is less than that of CP2.
A line with two arrowheads means "equivalent" and a line with a mark "X" on it means
"incomparable". For the protocol CBR (Checkpoint-Before-Receive) [8] at the bottom
of Figure 10, a checkpoint is placed before every message-receiving event. It can be easily
verified that both FDI and NRAS force fewer checkpoints than CBR. Figure 10
summarizes the discussions in the previous sections. Note that this family includes many existing
RDT protocols in the literature. Therefore the result is helpful for a wide range of
practical applications.
Figure 10: Comparing a family of RDT protocols (including No-PCM-Path, No-PCM-Cycle, FDAS, NRAS, FDI, and CBR).
6. Conclusions
This paper has provided a theoretical analysis of RDT protocols. Several
"impossibility" problems were addressed. First, we have shown that the common intuitions
in the literature are not reliable, and that a formal proof is definitely needed to demonstrate
that one protocol is better than another. Through rigorous demonstrations, we found
that it is not necessarily worthwhile to adopt protocols based on stronger conditions that require
more piggybacked control information. We also proved that there is no optimal on-line
protocol that ensures the RDT property; this situation is quite common in the area of on-line
algorithms, which have no knowledge of future information. Moreover, some techniques for
comparing useful protocols have been proposed, and we showed that these techniques can be
exploited to compare many existing protocols in the literature. Hence our results provide
guidelines for designing and evaluating efficient communication-induced checkpointing
protocols with the RDT property.
Acknowledgements
The authors wish to express their sincere thanks to Jean-Michel Helary (IRISA) and Michel
Raynal (IRISA), whose comments helped improve the presentation of the paper. We would
also like to thank Jeff Westbrook for his valuable discussions about on-line algorithms. Tsai's
and Kuo's work is supported by the National Science Council, Taiwan, ROC, under Grant
NSC 87-2213-E-259-007.
--R
"Consistent global checkpoints based on direct dependency tracking,"
"Distributed snapshots: determining global states of distributed systems,"
"A survey of rollback-recovery protocols in message-passing systems,"
"System structure for software fault-tolerant,"
"Checkpointing and rollback-recovery for distributed systems,"
"Experimental evaluation of multiprocessor cache-based error recovery,"
"Necessary and sufficient conditions for consistent global snapshots,"
"Consistent global checkpoints that contain a given set of local check- points,"
"The maximum and minimum consistent global checkpoints and their applications,"
"Guaranteed deadlock recovery: deadlock resolution with rollback propagation,"
"Checkpointing distributed applications on mobile computers,"
"Causal distributed breakpoints,"
"When piecewise determinism is almost true,"
"A communication-induced checkpointing protocol that ensures rollback-dependency trackability,"
"Rollback-dependency trackability: an optimal characterization and its protocol,"
"Time, clocks and the ordering of events in a distributed system,"
"Efficient distributed recovery using message logging,"
"Optimistic recovery in distributed systems,"
"State restoration in systems of communicating processes,"
"Optimal checkpointing and local recording for domino-free rollback recovery,"
--TR
--CTR
Roberto Baldoni, Jean-Michel Hélary, Michel Raynal, Rollback-dependency trackability: visible characterizations, Proceedings of the eighteenth annual ACM symposium on Principles of distributed computing, p.33-42, May 04-06, 1999, Atlanta, Georgia, United States
Jichiang Tsai, On Properties of RDT Communication-Induced Checkpointing Protocols, IEEE Transactions on Parallel and Distributed Systems, v.14 n.8, p.755-764, August
B. Gupta , Z. Liu , Z. Liang, On designing direct dependency: based fast recovery algorithms for distributed systems, ACM SIGOPS Operating Systems Review, v.38 n.1, p.58-73, January 2004
B. Gupta , S. K. Banerjee, A Roll-Forward Recovery Scheme for Solving the Problem of Coasting Forward for Distributed Systems, ACM SIGOPS Operating Systems Review, v.35 n.3, p.55-66, July 1 2001
D. Manivannan , Mukesh Singhal, Quasi-Synchronous Checkpointing: Models, Characterization, and Classification, IEEE Transactions on Parallel and Distributed Systems, v.10 n.7, p.703-713, July 1999
J.-M. Hélary, A. Mostefaoui, R. H. B. Netzer, M. Raynal, Communication-based prevention of useless checkpoints in distributed computations, Distributed Computing, v.13 n.1, p.29-43, January 2000
Jun-Lin Lin , Margaret H. Dunham, A Low-Cost Checkpointing Technique for Distributed Databases, Distributed and Parallel Databases, v.10 n.3, p.241-268, December 2001 | communication-induced protocols;checkpointing;rollback recovery;rollback-dependency trackability;on-line algorithms |
291369 | Diskless Checkpointing. | AbstractDiskless Checkpointing is a technique for checkpointing the state of a long-running computation on a distributed system without relying on stable storage. As such, it eliminates the performance bottleneck of traditional checkpointing on distributed systems. In this paper, we motivate diskless checkpointing and present the basic diskless checkpointing scheme along with several variants for improved performance. The performance of the basic scheme and its variants is evaluated on a high-performance network of workstations and compared to traditional disk-based checkpointing. We conclude that diskless checkpointing is a desirable alternative to disk-based checkpointing that can improve the performance of distributed applications in the face of failures. | Introduction
Checkpointing is an important topic in fault-tolerant computing as the basis for rollback recovery. Suppose a
user is executing a long-running computation and for some reason (hardware or software), the machine running the
computation fails. In the absence of checkpointing, when the machine becomes functional, the user must start the
program over, thus wasting all previous computation. Had the user stored periodic checkpoints of the program's
state to stable storage, then he or she could instead restart the program from the most recent checkpoint. This
is called rolling back to a stored checkpoint. For long-running computations, checkpointing allows users to limit
the amount of lost computation in the event of a failure (or failures).
There have been many programming environments intended for users with long-running computations that
rely on checkpointing for fault-tolerance. For example, Condor [34], libckpt [25] and others [16, 30, 37] provide
transparent checkpointing for uniprocessor programs, and checkpointers such as MIST [4], CoCheck [33] and
others [2, 10, 18, 28, 32] provide checkpointing in parallel computing environments.
All of the above systems store their checkpoints on stable storage (i.e. disk), since stable storage typically
survives processor failures. However, since checkpoints can be large (up to hundreds of megabytes per processor),
the act of storing them to disk becomes the main component that contributes to the overhead, or performance
degradation, due to checkpointing. This is more marked in parallel and distributed systems where the number of
processors often vastly outnumbers the number of disks.
Several techniques have been devised and implemented to minimize this source of overhead, including incremental
checkpointing [11, 38], checkpoint buffering with copy-on-write [9, 21], compression [20, 28] and memory
exclusion [25]. However with all of these techniques, the performance of the stable storage medium is still the
underlying cause of overhead.
In this paper, we present diskless checkpointing. The goal of diskless checkpointing is to remove stable storage
from checkpointing in parallel and distributed systems, and replace it with memory and processor redundancy.
By eliminating stable storage, diskless checkpointing removes the main source of overhead in checkpointing.
However, this does not come for free. The failure coverage of diskless checkpointing is less than checkpointing to
stable storage, since none of the components in a diskless checkpointing system can survive a wholesale failure.
Moreover, there is memory, processor and network overhead introduced by diskless checkpointing that is absent
in standard disk-based schemes.
The purpose of this paper is twofold. We first present basic schemes for diskless checkpointing and then
performance optimizations to the basic schemes. Second, we assess the performance of diskless checkpointing on
a network of Sparc-5 workstations as compared to standard disk-based checkpointing. As anticipated, diskless
checkpointing induces less overhead on applications than disk-based checkpointing, enabling the user to checkpoint
more frequently without a performance penalty. This lowers the application's expected running time in the
presence of failures.
Diskless checkpointing tolerates single processor failures, and in some cases multiple processor failures. However, it does not tolerate wholesale failures (such as a power outage that knocks out all machines). Thus, an
optimized fault-tolerant scheme would be a two-level scheme, as advocated by Vaidya [35], where diskless checkpoints
are taken frequently, and standard, disk-based checkpoints are taken at a much larger interval. In this
way, the more frequent case of one or two processors failing is handled swiftly, with low overhead, while the rarer
case of a wholesale failure is handled as well, albeit with higher overhead and a longer rollback penalty.
2 Overview of Diskless Checkpointing
Diskless checkpointing is based on coordinated checkpointing. With coordinated checkpointing, a collection of
processors with disjoint memories coordinates to take a checkpoint of the global system state. This is called a
"coordinated checkpoint". A coordinated checkpoint consists of checkpoints of each processor in the system plus
a log of messages in transit at the time of checkpointing. Coordinated checkpointing is a well-studied topic in
fault-tolerance. For a thorough discussion of coordinated checkpointing, the reader is directed to the survey paper
by Elnozahy, Johnson and Wang [8].
With diskless checkpointing we assume that there is no message log to be stored (for example, the "Sync-and-stop" algorithm for coordinated checkpointing ensures that there is no message log [28]), or that the message log
is contained within the checkpoints of individual processors. This reduces the problem of taking a coordinated
checkpoint to saving the individual checkpoints of each processor in the system.
Diskless checkpointing is composed of two parts - (1) checkpointing the state of each application processor in
memory, and (2) encoding these in-memory checkpoints and storing the encodings in checkpointing processors.
When a failure occurs, the system is recovered in the following manner. First, the non-failed application processors
roll themselves back to their stored checkpoints in memory. Next, replacement processors are chosen to take the
place of the failed processors. Finally, the replacement processors use the checkpointed states of the non-failed
application processors plus the encodings in the checkpointing processors to calculate the checkpoints of the failed
processors. Once these checkpoints are calculated, the replacement processors roll back, and the application
continues from the checkpoint. Note that either spare processors or some of the checkpointing processors may be
used as replacement processors. If checkpointing processors are used, then the system will continue with fewer (or
no) checkpointing processors, thus reducing the fault-tolerance. However, when more processors become available,
they may be employed as additional checkpointing processors.
2.1 Exact Problem Specification
The user is executing a long-running application on a parallel or distributed computing environment composed
of processors with disjoint memories that communicate by message-passing. The application executes on exactly n
processors. With diskless checkpointing, an extra m processors are added to the system, and the n+m processors
cooperate to take diskless checkpoints. As long as the number of processors in the system is at least n, and as
long as failures occur within certain constraints, the application may proceed efficiently.
As stated above, diskless checkpointing may be broken into two parts: application processors checkpointing
themselves, and checkpoint processors encoding the application processors' checkpoints. Each is explained below,
followed by issues involved in gluing the two parts together.
3 Application Processors Checkpointing Themselves
Here the goal is for an application processor to checkpoint its state in such a way that if a rollback is called
for, due to the failure of another processor, the processor can roll back to its most recent checkpoint. In standard
disk-based systems, a processor checkpoints itself by saving the contents of its address space to disk. Typically
this involves saving all values in the stack, heap, global variables and registers as in Figure 1(a). If the processor
must roll back, it overwrites the current contents of its address space with the stored checkpoint. As a last step,
it restores the registers, which restarts the computation from the checkpoint, thereby completing the rollback.
For more detail on general process checkpointing and recovery, see the papers on Condor [34] and libckpt [25].
Figure 1: (a) Checkpointing to disk, (b) simple diskless checkpointing, (c) incremental diskless checkpointing,
(d) forked diskless checkpointing
With diskless checkpointing, the processor saves its state in memory, rather than on disk. In its simplest form,
diskless checkpointing requires an in-memory copy of the address space and registers, as in Figure 1(b). If a
rollback is required, the contents of the address space and registers are restored from the in-memory checkpoint.
Note that this checkpoint will not tolerate the failure of the application processor itself; it simply enables the
processor to roll back to the most recent checkpoint if another processor fails.
One drawback of simple diskless checkpointing is memory usage. A complete copy of the application must
be retained in the memory of each application processor. A solution to this problem is to use incremental
checkpointing [11, 38], as in Figure 1(c). To take a checkpoint, an application processor sets the virtual memory
protection bits of all pages in its address space to be read-only [1]. When the application attempts to write a
page, an access violation (page fault) occurs. The checkpointing system then makes a copy of the faulting page,
and resets the page's protection to read-write. Thus, a processor's checkpoint consists of the read-only pages in
its address space plus the stored copies of all the read-write pages. To roll back to a checkpoint, the processor
simply copies (or maps) the checkpointed copies of all its read-write pages back to the application's address space.
As long as the application does not overwrite all of its pages between checkpoints, incremental checkpointing
improves both the performance and memory utilization of checkpointing.
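To make the page-fault mechanics concrete, the following C sketch shows one way the scheme just described could be implemented on a POSIX system with mprotect() and a SIGSEGV handler. The function names, the fixed page bound, and the assumption that the checkpointed data lives in a single page-aligned region are simplifications of ours; they are not details of the checkpointer evaluated in this paper, which must also coordinate with the checkpoint processors before discarding old page copies.

```c
/* Sketch of page-fault-driven incremental checkpointing (POSIX).
 * Illustrative only: names and structure are ours, not the paper's. */
#include <signal.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_PAGES 65536

static char  *region;                 /* start of the checkpointed region             */
static size_t region_len;             /* its length, a multiple of the page size      */
static size_t page_size;
static char  *page_copy[MAX_PAGES];   /* saved pre-images of pages dirtied since the
                                         last checkpoint (the incremental checkpoint) */

/* First write to a protected page: save its pre-image, then allow writes again.
 * (Calling malloc() inside a signal handler is not strictly safe; this is a sketch.) */
static void fault_handler(int sig, siginfo_t *si, void *ctx)
{
    (void)sig; (void)ctx;
    char  *page = (char *)((uintptr_t)si->si_addr & ~(uintptr_t)(page_size - 1));
    size_t idx  = (size_t)(page - region) / page_size;

    page_copy[idx] = malloc(page_size);
    memcpy(page_copy[idx], page, page_size);           /* pre-image = checkpointed copy */
    mprotect(page, page_size, PROT_READ | PROT_WRITE);
}

/* Start a new incremental checkpoint: every page becomes read-only again. */
void checkpoint_begin(void)
{
    for (size_t i = 0; i < region_len / page_size; i++) {
        free(page_copy[i]);            /* in the full protocol, kept until the new      */
        page_copy[i] = NULL;           /* encoding is complete (see Section 5.1)        */
    }
    mprotect(region, region_len, PROT_READ);
}

/* Roll back: copy every saved pre-image over the corresponding live page. */
void rollback_to_checkpoint(void)
{
    mprotect(region, region_len, PROT_READ | PROT_WRITE);
    for (size_t i = 0; i < region_len / page_size; i++)
        if (page_copy[i])
            memcpy(region + i * page_size, page_copy[i], page_size);
}

void install_incremental_checkpointing(char *base, size_t len)
{
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_sigaction = fault_handler;
    sa.sa_flags = SA_SIGINFO;
    sigaction(SIGSEGV, &sa, NULL);

    region = base;
    region_len = len;
    page_size = (size_t)sysconf(_SC_PAGESIZE);
}
```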
The last useful checkpointing method is forked (or copy-on-write) checkpointing [9, 21, 25]. To checkpoint,
the application clones itself (with, for example, the fork() system call in Unix) as depicted in Figure 1(d).
This clone is the diskless checkpoint. To roll back, the application overwrites its state with the clone's, or if
possible, the clone merely assumes the role of the application. Forked checkpointing is very similar to incremental
checkpointing because most operating systems implement process cloning with copy-on-write. This means that
the process and its clone will share pages until one of the processes alters the page. Thus, it works in the same
manner as incremental checkpointing, except the identification of modified pages and the page copying are all
performed in the operating system. This results in less CPU activity switching back and forth from system to user
mode. Moreover, forked checkpointing does not require that the user have access to virtual memory protection
facilities, which are not available in all operating systems.
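A minimal sketch of forked checkpointing, assuming a Unix-like system, is shown below: the fork()ed child is the checkpoint, sharing pages copy-on-write with the running application. Real systems, including the one measured in this paper, must also decide when the previous checkpoint may safely be discarded and how the child hands its image back (or takes over) on rollback; those details are omitted here.

```c
/* Minimal sketch of forked (copy-on-write) checkpointing.  The child process
 * is the checkpoint: it shares pages with the parent until either side writes
 * them, so taking the checkpoint is cheap.  Illustrative only. */
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

static pid_t checkpoint_pid = -1;    /* process holding the most recent checkpoint */

void take_forked_checkpoint(void)
{
    if (checkpoint_pid > 0) {                /* discard the previous checkpoint;      */
        kill(checkpoint_pid, SIGKILL);       /* the full protocol delays this until   */
        waitpid(checkpoint_pid, NULL, 0);    /* the new encoding is complete          */
    }

    pid_t pid = fork();
    if (pid == 0) {                          /* child = frozen checkpoint image       */
        pause();                             /* sleep; on rollback it would hand its  */
        _exit(0);                            /* image back or assume the app's role   */
    }
    checkpoint_pid = pid;                    /* parent resumes the computation        */
}
```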
4 Encoding the checkpoints
The goal of this part is for extra checkpoint processors to store enough information that the checkpoints
of failed processors may be reconstructed. Specifically, there are m checkpoint processors. These processors
encode the checkpoints of the application processors in such a way that when application processors fail, their
checkpoints may be recalculated from the checkpoints of the non-failed processors plus the encodings in the
checkpoint processors.
4.1 Parity (Raid Level 5)
The simplest checkpoint encoding is parity (Figure 2(a)). Here there is one checkpoint processor (i.e., m = 1)
that encodes the bitwise parity of each of the application processors' checkpoints. In other words, let byte b_j^i
represent the j-th byte of application processor i. Then the j-th byte of the checkpoint processor will be:
b_j^ckp = b_j^1 ⊕ b_j^2 ⊕ ... ⊕ b_j^n.
If any application processor fails, the state of the system may be recovered as follows. First, a replacement
processor is selected to take the place of the failed application processor. This could be the checkpoint processor,
a spare processor that had previously been unused, or the failed processor itself if the failure was transient. The
replacement processor calculates the checkpoint of the failed processor by taking the parity of the checkpoints of
the non-failed processors and the encoding in the checkpoint processor. In other words, suppose processor i is
the failed processor. Then its checkpoint may be reconstructed as:
b_j^i = b_j^ckp ⊕ b_j^1 ⊕ ... ⊕ b_j^{i-1} ⊕ b_j^{i+1} ⊕ ... ⊕ b_j^n   (for every byte j).
Note that this is the same recovery scheme as Raid Level 5 in disk array technology [5]. When the replacement
processor has calculated the checkpoint of the failed processor, then all application processors roll back to the
previous checkpoint, and the computation proceeds from that point.
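The two formulas above amount to nothing more than byte-wise exclusive-or. The following C sketch (with names of our own choosing, not the paper's) computes the encoding in the style of the direct method and rebuilds a single failed checkpoint from the survivors and the encoding.

```c
/* Byte-wise parity encoding and single-failure recovery (Raid Level 5 style).
 * ckpt[i] is the in-memory checkpoint of application processor i (all of
 * equal length len); enc is the checkpoint processor's encoding. */
#include <stddef.h>
#include <string.h>

void encode_parity(unsigned char **ckpt, int n, size_t len, unsigned char *enc)
{
    memset(enc, 0, len);
    for (int i = 0; i < n; i++)
        for (size_t j = 0; j < len; j++)
            enc[j] ^= ckpt[i][j];               /* enc = ckp_1 xor ... xor ckp_n */
}

/* Rebuild the checkpoint of processor `failed` into `out`. */
void recover_parity(unsigned char **ckpt, int n, size_t len,
                    const unsigned char *enc, int failed, unsigned char *out)
{
    memcpy(out, enc, len);
    for (int i = 0; i < n; i++)
        if (i != failed)
            for (size_t j = 0; j < len; j++)
                out[j] ^= ckpt[i][j];           /* xor out the surviving checkpoints */
}
```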
Besides parity, there are several other schemes that can be used to encode the checkpoints. They vary in the
number of checkpoint processors, the efficiency of encoding, and the amount of failure coverage. They are detailed
below.
Figure 2: Encoding the checkpoints: (a) Raid Level 5, (b) Mirroring, (c) One-dimensional parity
4.2 Mirroring
Checkpoint mirroring (Figure 2(b)) is another simple encoding scheme. With mirroring, there are m = n
checkpoint processors, and the i-th checkpoint processor simply stores the checkpoint of the i-th application
processor. Thus, up to n processor failures may be tolerated, although the failure of both an application processor
and its checkpoint processor cannot be tolerated. Checkpoint mirroring should have a very low checkpointing
overhead because no encoding calculations (such as parity) need to be made.
4.3 1-dimensional parity
With one-dimensional parity (Figure 2(c)) there are m checkpoint processors. The application
processors are partitioned into m groups of roughly equal size. Checkpoint processor i then calculates
the parity of the checkpoints in group i. This increases the failure coverage, because now one processor failure per
group may be tolerated. Moreover, the calculation of the checkpoint encoding should be more efficient because
there is no longer a single bottleneck (the checkpoint processor). Note that 1-dimensional parity reduces to Raid
Level 5 when m = 1 and to mirroring when m = n.
4.4 2-dimensional parity
Two-dimensional parity (Figure 3(d)) is an extension of one-dimensional parity. With two-dimensional parity,
the application processors are arranged logically in a two-dimensional grid, and there is a checkpoint processor for
each row and column of the grid. Each checkpoint processor calculates the parity of the application processors in
its row or column. Two-dimensional parity requires m = 2√n
checkpoint processors, and can tolerate the failure
of any one processor in each row and column. This means that any two-processor failures may be tolerated.
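As an illustration, the following C fragment shows one possible (hypothetical) numbering: application processor p in an s × s grid (n = s²) sends its checkpoint to one of s row checkpoint processors and one of s column checkpoint processors.

```c
/* Hypothetical assignment of row/column checkpoint processors for
 * two-dimensional parity.  Application processor p (0 <= p < n, n = s*s)
 * sits at grid position (p / s, p % s). */
static int grid_side;                                   /* s = sqrt(n), set at startup */

void set_grid_side(int s)      { grid_side = s; }
int  row_ckpt_processor(int p) { return p / grid_side; }                 /* 0 .. s-1   */
int  col_ckpt_processor(int p) { return grid_side + p % grid_side; }     /* s .. 2s-1  */
```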
Figure 3: Encoding the checkpoints: (d) Two-dimensional parity, (e) Hamming coding, (f) EvenOdd coding, (g) Reed-Solomon coding
4.5 Other parity-based codes
The well-known Hamming codes (Figure 3(e)) may be used to tolerate any two-processor failures with the
addition of roughly log n processors [13]. Each checkpoint processor calculates the parity of a subset of the
application processors. EvenOdd coding (Figure 3(f)) is a technique where m = 2 checkpoint processors are
employed and all two-processor failures may be tolerated [3]. The encoding is based on parity calculations, but
is a little more complex than the above schemes.
4.6 Reed-Solomon coding
The most general purpose encoding technique is Reed-Solomon coding [24] (Figure 3(g)). Here m checkpointing
processors use Galois Field arithmetic to encode the checkpoints in such a way that any m failures may be
tolerated. Since the encoding is more complex than parity, the CPU overhead of Reed-Solomon coding is greater
than the other methods, but it achieves maximal failure coverage per checkpoint processor.
5 Gluing the two parts together
Sections 3 and 4 have discussed how application processors store checkpoints internally, and how the checkpoint
processors encode information. The final component of diskless checkpointing is coordinating the application
and checkpointing processors in an efficient and correct way. This section discusses the relevant details in the
coordination of the two sets of processors. We focus primarily on Raid Level 5 encodings, and then discuss the
differences that the other encodings entail.
5.1 Tolerating failures when checkpointing
As with all checkpointing systems, diskless checkpointing systems must take care to remain fault-tolerant even
if there is a failure while checkpointing or recovery is underway. This is done by making sure that each coordinated
checkpoint remains valid until the next coordinated checkpoint has been completed. The checkpointing processors
control this process. When all the checkpointing processors have completed calculating their encodings for the
current checkpoint, then they may discard their previous encodings, and then notify the application processors
that they may discard their previous checkpoints.
Upon recovery, if the checkpointing processors all have valid encodings for the most recent checkpoint, then
these are used for recovery, along with the most recent checkpoints in the non-failed application processors. If
any checkpointing processor does not have a valid encoding for the most recent checkpoint, then the previous
encoding must be used along with the previous checkpoints in the non-failed application processors.
This protocol ensures that there is always a valid coordinated checkpoint of the system in memory. If all
checkpoint processors have their encodings for coordinated checkpoint i, then all application processors will
have their checkpoints for coordinated checkpoint i. If any checkpoint processor has an incomplete encoding for
checkpoint i, then all checkpoint processors will still contain their encodings for coordinated checkpoint i − 1.
Moreover, all application processors will have their checkpoints for coordinated checkpoint i − 1. Thus, the whole
system may recover to coordinated checkpoint i − 1.
If a failure is detected during recovery, then the remaining processors simply initiate the recovery procedure
anew.
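The commit rule of this section can be summarized by the following C-style sketch of one checkpoint processor's round. The message and encoding primitives are placeholders of ours (stubbed out so the fragment compiles); they do not correspond to PVM calls or to the paper's implementation.

```c
/* Sketch of the commit rule from Section 5.1, as seen by one checkpoint processor. */

/* Placeholder primitives -- stand-ins for the real encoding and messaging layers. */
static void compute_encoding(int epoch)                  { (void)epoch; }
static void discard_encoding(int epoch)                  { (void)epoch; }
static void barrier_with_checkpoint_processors(int epoch){ (void)epoch; }
static void tell_app_processors_to_discard(int epoch)    { (void)epoch; }

void checkpoint_processor_round(int epoch)
{
    /* Encoding epoch-1 (and the application processors' checkpoints for
     * epoch-1) remain valid throughout this call, so a failure at any
     * point still leaves one complete coordinated checkpoint. */
    compute_encoding(epoch);

    /* Only when every checkpoint processor has finished encoding `epoch`
     * does coordinated checkpoint `epoch` become the recovery point. */
    barrier_with_checkpoint_processors(epoch);

    discard_encoding(epoch - 1);
    tell_app_processors_to_discard(epoch - 1);   /* app procs drop checkpoint epoch-1 */
}
```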
5.2 Space demands
A ramification of the preceding protocol is that at the moment when the checkpoint processors finish storing
their encodings, all processors contain two checkpoints in memory: the current checkpoint and the previous
checkpoint. Thus, the memory usage of diskless checkpointing is a serious issue.
Suppose the size of an application processor's address space is M bytes. Then simple diskless checkpointing
consumes an extra M bytes of memory to hold a checkpoint. To ensure that only M bytes of extra memory
are consumed at all times, the application must be frozen during checkpointing. Then the application's address
Figure 4: Calculating the encoding: (a) direct, (b) fan-in
space may be used (without being copied) to calculate the checkpoint encodings. When the encodings have been
calculated, the application's address space may be copied over its previous checkpoint, which is now expendable.
Then the application is unfrozen.
With incremental checkpointing, checkpointed copies of pages are made when page faults are caught. At
checkpoint time, the processors calculate the encodings, then discard the checkpointed copies of pages and set the
protection of all application pages to read-only. Thus, if the incremental checkpoint size is I, then only I extra
bytes of memory are necessary. In the worst case, all pages are modified between checkpoints, and I equals M .
With forked checkpointing, each checkpoint is a separate process. When the checkpoint processors complete
their encodings, there are three processes contained by each application processor: the application itself, its
most recent checkpoint, and the previous checkpoint. Since process cloning uses the copy-on-write optimization,
each checkpoint process only consumes an extra I bytes of memory. Therefore, forked checkpointing requires an
extra 2I bytes of memory during checkpointing, and I bytes at all other times. In the worst case, this is 2M
during checkpointing, and M at other times.
Finally, disk-based checkpointing using the fork optimization requires I′ bytes of memory, where I′ consists of
all pages that are modified while checkpointing is taking place. I′ should be less than I, though if the latency of
checkpointing is large compared to the checkpointing interval, I′ may be close to I.
5.3 Sending and calculating the encoding
With Raid Level 5 encoding, there is one checkpoint processor, C_1, and n application processors P_1, ..., P_n. C_1
stores the bitwise parity of the checkpoints of each application processor. The simplest way to calculate the
parity is to employ the direct method: each application processor simply sends its checkpoint to C_1. Initially,
C_1 clears a portion of its memory, which we call e_1, to store the checkpoint encoding. Upon receiving ckp_i from
P_i, C_1 XORs it into the encoding (e_1 becomes e_1 ⊕ ckp_i). This is shown in Figure 4(a). In Figure 4, the ⊕ signs are shown directly above the
processors that perform the bitwise exclusive or. Arrows from one processor to another represent one processor
sending its checkpoint to another.
There are two problems with the direct method. First, C 1 can become a message-receiving bottleneck, since
it is the destination of all checkpoint messages. Second, C 1 does all of the parity calculations. Both problems
may be alleviated with the fan-in algorithm. Here, the application processors perform the parity calculation in
log n steps and send the final result to C 1 , which stores the result in its memory. This is shown in Figure 4(b).
For other encodings besides Raid Level 5, these two methods may be extended. In the direct method, each
processor sends its checkpoint in a multicast message to the proper checkpointing processors. If necessary (e.g.,
for Reed-Solomon coding), the checkpointing processors modify the checkpoints, and then exclusive-or them
into their checkpoints. In the fan-in method, there is one fan-in performed for each checkpointing processor.
This may entail the cooperation of all application processors (e.g., in Reed-Solomon coding), or a subset of the
application processors (e.g., in one-dimensional parity). If a checkpoint must be modified for the encoding, it is
done at application processor P i before the fan-in starts.
For most networks, the fan-in algorithm will be preferable to the direct because it eliminates bottlenecks and
distributes the parity calculations. However, if the network supports multicast, the encodings involving multiple
checkpointing processors may profit from the direct method.
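The fan-in is an ordinary binary-tree reduction under exclusive-or. The following C sketch simulates it locally over an array of buffers; in the real system each buffer belongs to a different processor and the inner exclusive-or is preceded by a message receive. The pairing schedule shown here is one reasonable choice, not necessarily the one used in the measured implementation.

```c
/* Local simulation of the fan-in parity reduction.  buf[p] holds the (chunk
 * of the) checkpoint of application processor p; after the loops, buf[0]
 * holds the parity of all n checkpoints and would be sent to the checkpoint
 * processor.  In round `step`, processor p receives from p + step. */
#include <stddef.h>

void fanin_parity(unsigned char **buf, int n, size_t len)
{
    for (int step = 1; step < n; step *= 2)            /* ceil(log2 n) rounds */
        for (int p = 0; p + step < n; p += 2 * step)
            for (size_t j = 0; j < len; j++)
                buf[p][j] ^= buf[p + step][j];
}
```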
5.4 Breaking the checkpoint into chunks
The preceding description implies that whole checkpoints are sent from processor to processor. Since checkpoints
may be large, it often makes more efficient use of memory to break the checkpoint into chunks of a fixed
size. For example, in the fan-in algorithm, only two extra chunks of memory are needed to receive an incoming
chunk from another processor, make the parity calculation, and then send off the result. The chunks should be
small enough that they do not consume too much memory, but large enough that the overhead in sending chunks
is not dominated by message-sending start-up.
5.5 Sending diffs
If the application processors use incremental checkpointing, then they can avoid overhead by sending only
pages that have been modified since the previous checkpoint. However, this can cause problems in creating the
checkpoint encoding. Specifically, if the encoding is to be created anew at each checkpoint, it needs to have all
checkpointing data from all processors. The solution to this is to use diffs.
Assume that the direct encoding method is being employed. The checkpoint processor first copies its previous
checkpoint to its current checkpoint. Then each application processor does the following. For each modified page
page k in its address space, it calculates diff k , which is the bitwise exclusive-or of the current copy of the page
and the copy of the page in the previous checkpoint (which of course is available to the application processor). It
then sends diff k to the checkpoint processor, which XOR's it into its checkpoint. This has the effect of subtracting
out the old copy of the page and adding in the new copy. In this way, unmodified pages need not be sent to the
checkpointing processor.
One may use diffs with the fan-in algorithm as well, stipulating that if a processor does not modify a page
during the checkpoint interval, then it does not need to send that page or XOR it with other pages when performing
the fan-in.
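The diff arithmetic is again plain exclusive-or, as the following C sketch (with illustrative names) shows: because the encoding already contains the old copy of the page, folding in cur ⊕ old cancels the old contribution and leaves only the new one.

```c
/* Sketch of diff-based updates. */
#include <stddef.h>

/* On an application processor: diff of a modified page against its
 * previously checkpointed copy. */
void make_diff(const unsigned char *cur_page, const unsigned char *old_ckpt_page,
               unsigned char *diff, size_t page_size)
{
    for (size_t j = 0; j < page_size; j++)
        diff[j] = cur_page[j] ^ old_ckpt_page[j];
}

/* On the checkpoint processor: fold the diff into the running encoding. */
void apply_diff(unsigned char *encoding_page, const unsigned char *diff, size_t page_size)
{
    for (size_t j = 0; j < page_size; j++)
        encoding_page[j] ^= diff[j];   /* old xor old = 0, so only the new value remains */
}
```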
5.6 Compressing Diffs
By sending diffs rather than actual bytes of the checkpoint, an interesting opportunity for compression arises.
Suppose that an application modifies just a few bytes on a page. Then the diff of that page and its previously
checkpointed copy will be composed of mostly zeros, which can be easily compressed using either run-length
encoding or an algorithm that sends tagged bytes rather than whole pages. Such compression trades off use of
more CPU for a reduced load on the network.
Compression combines naturally with incremental checkpointing, where modified pages are compressed before
being sent. It may also be used with simple and forked checkpointing by converting the entire checkpoint into
a diff and compressing it before sending it along. This has the effect of emulating incremental checkpointing,
because regions of memory that have not been modified get compressed to nothing.
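For illustration only, here is a C sketch of a very simple zero-run encoder for a diff page; the paper's actual compressor is the bitmap-based algorithm of [29]. A decoder would reconstruct bytes until it has produced the known uncompressed length.

```c
/* Illustrative zero-run compression for a diff page.  Output format:
 * repeated (count-of-zero-bytes, literal-byte) pairs.  Sparse diffs (mostly
 * zeros) shrink well; dense diffs can expand up to 2x, so the output buffer
 * must hold at least 2*len bytes. */
#include <stddef.h>

size_t compress_diff(const unsigned char *diff, size_t len, unsigned char *out)
{
    size_t o = 0;
    for (size_t i = 0; i < len; ) {
        size_t zeros = 0;
        while (i < len && diff[i] == 0 && zeros < 255) { zeros++; i++; }
        out[o++] = (unsigned char)zeros;            /* zero bytes to emit           */
        out[o++] = (i < len) ? diff[i++] : 0;       /* next literal (or padding)    */
    }
    return o;                                       /* compressed length in bytes   */
}
```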
6 Implementation and Experiment
In order to assess the performance of diskless checkpointing as compared to standard disk-based checkpointing
on networks of workstations, we implemented a small transparent checkpointing system on a network of 24 Sun
Sparc5 workstations at the University of Tennessee. Each workstation has 96 Mbytes of physical memory and
runs SunOS version 4.1.3. The workstations are connected to each other by a fast, switched Ethernet which can
be isolated for performance testing. The measured peak bandwidth between any two processors is roughly 5
megabytes per second. The workstations have very little accessible local disk storage: 38 megabytes per machine.
However, the machines are connected via regular Ethernet to the department's file servers using Sun NFS. These
disks have a bandwidth of 1.7 megabytes per second, but the performance of NFS on the Ethernet is far worse.
With NFS, remote file writes achieve a bandwidth of 0.13 megabytes per second. The page size of each machine
is 4096 bytes, and access to the page tables is controlled by the mprotect() system call.
Our checkpointer runs on top of PVM [12] and works like many PVM checkpointers [4, 33]. Applications do
not need to be recompiled, but their object modules must be relinked with our checkpointing/modified PVM
library. When the applications are started, the checkpointing code gets control and reads startup information
from a control file. This information includes the checkpoint interval, which checkpointing optimizations to use,
plus where checkpoints should be stored (to disk or to checkpointing processors).
The application then starts, and one of the application processors is interrupted when the checkpointing
interval has expired. This processor coordinates with the other application processors using the "Sync-and-stop"
synchronization algorithm, and once consistency has been determined, the processors checkpoint.
Abbreviation   Description
DISK           Checkpointing to disk
DISK-FORK      Checkpointing to disk using fork()
SIMP           Simple diskless checkpointing
INC            Incremental diskless checkpointing
FORK           Forked diskless checkpointing
INC-FORK       Incremental, forked diskless checkpointing
C-SIMP         Simple diskless checkpointing with compression
C-INC          Incremental diskless checkpointing with compression
C-FORK         Forked diskless checkpointing with compression
C-INC-FORK     Incremental, forked diskless checkpointing with compression

Table 1: Checkpointing variants implemented in our experiments
PVM includes some basic forms of failure detection. Specifically, if a processor in the current PVM session fails,
the rest of the processors eventually notice the failure and remove the failed processor from the PVM session.
PVM allows the user to be notified of such events. Our checkpointer uses this facility to recognize processor
failures. When such a failure occurs, then if there is a spare processor in the PVM session, it is selected to
replace the failed processor. If there is no spare processor, and diskless checkpointing is being employed, then a
checkpoint processor is chosen to be the replacement processor. Recovery proceeds automatically, either from the
disk-based or diskless checkpoint.
It is important to note that our checkpointer does not require the programmer to modify his or her code to
enable checkpointing. A simple relinking is all that is necessary.
The gamut of checkpointing variants is enumerated in Table 1. This includes standard disk-based checkpointing
using the fork() optimization. We do not test incremental, disk-based checkpointing because, although it is often
a useful optimization, it does not improve the performance of checkpointing in any of our tests.
For diskless checkpointing, we implement Raid Level 5 encoding using the fan-in algorithm. Checkpoint
encodings are created in chunks of 4096 bytes (conveniently, also the page size). The choice of algorithm has some
ramifications on how certain optimizations work. For example, when performing incremental checkpointing, the
encoding is created chunk-by-chunk, but if a processor has not modified the corresponding page, then an empty
message is sent as part of the fan-in instead of the page.
When using diff-based compression, pages are compressed using a bitmap-based compression algorithm [29].
Compression is performed by the sending processor before sending, and then uncompressed by the receiving
Application   Running Time (sec)   Running Time (h:mm:ss)   Checkpoint Size per node (Mbytes)
NBODY         5722                 1:35:22                  3.7
CELL          6351                 1:45:51                  41.4
PCG           5873                 1:37:53                  66.6

Table 2: Basic parameters of the testing applications
processor, which merges the page with its own, and compresses the result before sending it along. When the
final compressed chunk reaches the checkpointing processor, it uncompresses the chunk and merges it with the
previous checkpoint encoding, which is then stored as the next encoding.
7 Applications
We used five applications to test the performance of checkpointing. These applications are all CPU-intensive,
parallel programs of the sort that often require hours, or sometimes days of execution. We executed instances of
these programs that took between 1.5 and 2 hours to run on sixteen processors in the absence of checkpointing. In
all cases, it is clear how the programs scale in size, and how this scaling will affect the performance of checkpointing.
The basic parameters of each application are presented in Table 2. We briefly describe each application, ordered
by checkpoint size, below.
7.1 NBODY
NBODY computes N-body interactions among particles in a system. The program is written in C, and uses
the parallel multipole tree algorithm [19]. The instance used in our tests was 15,000 particles and ten iterations.
The basic structure of the program is as follows. Each particle is represented by a data structure with several
fields. The particles are partitioned among "slave" processors (sixteen in our tests) in such a way that processors
that are "close to each other" (by some metric) reside in the same slave, to limit interslave communication. For
this reason, slave processors can differ in the number of particles they hold and therefore in their sizes. For
example, in our tests, the slave processors averaged 3.7 megabytes in size, but the largest was six megabytes.
At each iteration, the "location" field (among others) of each particle is updated to reflect the n-body inter-
action. Since the size of a particle's data structure is less than the machine's page size, this means that almost
all pages of the slave processors are modified during each iteration, leading to poor incremental checkpointing
behavior when the checkpointing interval spans multiple iterations. However, since much of each particle's data
is left unmodified from iteration to iteration, only a few bytes per page are changed, resulting in good diff-based
compression.
There are two parameters that affect the running time and memory usage of NBODY. These are the number
of particles, which affects both time and space, and the number of iterations, which only affects the running time.
NBODY is the only application where the checkpoints are small enough to allow the same number of checkpoints
in both diskless and disk-based checkpointing.
7.2 MAT
MAT is a C program that computes the floating point matrix product of two square matrices using Cannon's
algorithm [17]. The matrix size in our tests was 4,608 × 4,608, leading to 15.1 megabyte checkpoints per processor.
On a uniprocessor, matrix multiplication typically shows excellent incremental checkpointing behavior, since
the two input matrices are read-only, and the product matrix is calculated sequentially, filling up whole pages at
a time in such a way that once a product element is calculated, it is never subsequently modified [25]. However,
most high-performance parallel algorithms, such as Cannon's algorithm, differ in this respect.
In Cannon's algorithm, all three matrices are partitioned in square blocks among the n processors (and it
is assumed n is a perfect square). The algorithm proceeds in √n steps. In each step, each processor adds the
product of its two input submatrices to its product submatrix. Then the processors send their input submatrices to
neighboring processors, receiving new ones in their place, and repeat until the product submatrices are calculated.
The ramification of this data movement is that during the course of an iteration, all matrices are modified.
Therefore, if checkpoints span iterations (as is the case in disk-based checkpointing), incremental checkpointing
will have no beneficial effect. If multiple checkpoints are taken in the same iteration (as is the case in diskless
checkpointing), then incremental checkpointing will be successful as in the uniprocessor case.
When pages are updated in MAT, they are updated in their entirety, leading to very poor diff-based compression
MAT's time and space demands are determined by the size of the matrix. For an N × N matrix, the memory
usage is proportional to N^2, and the running time is proportional to N^3. The communication patterns of MAT
depend on the number of processors, and are the same for all matrix sizes.
MAT and NBODY are the only applications where it is possible to take more than one disk-based checkpoint
during the program's execution. Three disk-based checkpoints (as opposed to seven diskless checkpoints) are taken
in MAT.
7.3 PSTSWM
PSTSWM is a fortran program that solves the nonlinear shallow water equations on a rotating sphere using
the spectral transform method [14]. The instance used here simulates the state of a 3-D system for a duration of
102 hours.
Like NBODY, PSTSWM modifies the majority of its pages during each iteration, but it only modifies
a few bytes per page. Therefore, incremental checkpointing should show limited improvement, but diff-based
compression should work well. PSTSWM's checkpoints are large - approximately 25 megabytes per processor.
However, since each machine has 96 megabytes of physical memory, two checkpoints may be stored in their
entirety, stressing the limits of physical memory.
PSTSWM can scale in size by simulating a denser particle grid. Once the size is set, each iteration performs
roughly the same actions. Therefore, simulating longer time frames increases the running time in a linear fashion
without altering the general behavior (e.g. memory access pattern) significantly.
7.4 CELL
CELL is a parallel cellular automaton simulation program. Written in C, this program distributes two grids
of cellular automata evenly across all the application processors. One grid is denoted current, and one is denoted
next. The values of the current grid are used to calculate the values in the next grid, and then the two grids'
identities are swapped. The instance used in our tests simulates an 18,512 by 18,512 cellular automaton grid for
a fixed number of generations.
During each iteration, CELL updates every automaton in the next grid. Therefore, if checkpoints span two or
more iterations, all memory locations will be updated, rendering incremental checkpointing useless. Compressibility
depends on the data itself. "Sparse" grids (where many automata take on zero values) may see little change
in the automata's values over time, which can lead to good compression. Denser grids lead to less compression.
In our tests, we used very sparse grids.
The program size is directly proportional to the grid size. The running time is proportional to the grid size
times the number of iterations. Each pair of iterations performs the same operations, and thus has the same
memory access and communication patterns.
7.5 PCG
PCG is a fortran program that solves Ax = b for x, where A is a large, sparse matrix, using the "Preconditioned
Conjugate Gradient" iterative method. The matrix A is converted to a small, dense format, and then approximations
to x are calculated and refined iteratively until they reach a user-specified tolerance from the correct values. In
our tests, A is a 1,638,400 by 1,638,400 element sparse matrix, and the program takes 3,750 iterations.
The exact mechanics and memory usage of PCG are detailed in [26]. The salient points are as follows. The main
data structures in the program may be viewed as many vectors of length N (in our instances, N = 1,638,400).
These vectors are distributed among all the application processors. Roughly three quarters of these vectors are
never modified once the program starts calculating. The rest are updated in their entirety at each iteration.
Therefore, incremental checkpoints should be one quarter the size of non-incremental checkpoints. The data that
gets updated at every iteration is stored densely on contiguous pages, offering little opportunity for diff-based
compression.
The program size is directly proportional to N , and like CELL and PSTSWM, the running time is proportional
to the size times the number of iterations.
Each application processor holds 66.6 megabytes worth of data in PCG. Therefore, one simple diskless checkpoint
will not fit into memory. However, when incremental and copy-on-write checkpointing are employed, the
application and one or two checkpoints consume just a few megabytes more memory than is available. The size
of the checkpoints combined with the speed of Sun NFS results in the inability to take disk-based checkpoints of
PCG. This is because the time to store one checkpoint is longer than the running time of the application.
It should be reiterated that the instances for these tests were chosen to run for a period of time that was
long enough to measure the impact of checkpointing and recovery. In all applications, there are natural input
parameters which result in longer execution times and larger checkpoints. Our goal in these tests is to assess the
performance of checkpointing so that users of longer-running applications may be able to project the expected
running time of their applications in the presence of failures while employing the various checkpointing variants.
8 Results
The raw data for the experiments is in the Appendix of this paper. All graphs in this section are derived
directly from the raw data. In most cases, the tests were executed in triplicate. The number of times each test
was executed plus the standard deviations in execution times is displayed in the tables in the Appendix. The
tables and graphs display average data.
We concentrate on two performance measures: latency and overhead. Latency is the time between when a
checkpoint is initiated, and when it may be used for recovery. Overhead has been defined previously. Overhead
is a direct measure of the performance penalty induced on an application due to checkpointing. The impact of
latency is more subtle, and will be discussed in detail in Section 9.
8.1 Checkpointing to disk
Figure 5 plots checkpoint latency and overhead of checkpointing to disk (the DISK-FORK tests). These
are plotted as a function of the applications' per-processor checkpoint sizes. As displayed in the leftmost graph,
the latency in the DISK-FORK tests is directly proportional to the checkpoint size, achieving a bandwidth of
roughly 0.13 Mbytes/sec. Here bandwidth is calculated as per-processor checkpoint size times the number of processors,
divided by the checkpoint latency. Using that information, the checkpoint latency of the PCG test is projected
to be roughly 8,663 seconds.
The rightmost graph displays overhead as a function of checkpoint size. While the graph appears roughly
linear, it should be noted that the overhead of checkpointing is not a simple function of checkpoint size. The bulk
Figure 5: Checkpoint latency and overhead of checkpointing to disk (DISK-FORK)
Figure 6: Checkpoint latency and overhead of SIMP and FORK
of work performed in checkpointing involves DMA from each processor's memory to its network interface card.
The CPU is only affected significantly when one of the following occurs:
ffl A DMA transaction needs to be initiated or repeated.
ffl A copy-on-write page fault occurs in the application.
ffl There is contention for the memory bus.
There are also effects on the cache as a result of checkpointing. Therefore, although checkpoint size is a rough
measuring stick for computing the overhead of DISK-FORK checkpointing, it is not the whole story. As has been
shown in other research, the copy-on-write optimization does an excellent job of reducing overhead [9, 21, 25]. In
this test, the overhead is between 0.7 and 5.5 percent of the checkpoint latency.
8.2 Diskless checkpointing: SIMP and FORK
Figure 6 plots checkpoint latency and overhead of the SIMP and FORK tests, again plotted as a function of
checkpoint size. As in the DISK-FORK case, both the SIMP and FORK latencies are directly proportional to
the checkpoint size, with the exception of the SIMP test in the PCG application. Here, the combined size of
the application and its checkpoint exceeds the size of physical memory, resulting in pages being swapped to the
backing store. This degrades the performance of checkpointing. In the FORK test, the checkpoint only requires
an additional 16.6 Mbytes of memory, since the unmodified pages of memory are shared between the application
and its checkpoint. Therefore, the checkpoint latency follows the same linear pattern as in the other applications.
With the exception of the SIMP test in the PCG application, the bandwidth of checkpointing in SIMP and
FORK is roughly 4.4 Mbytes/sec. This is a factor of 34 faster than the DISK-FORK bandwidth.
The overhead of the SIMP tests is identical to the latency, since the application is halted during checkpointing.
In the FORK tests, the overhead is reduced by 29.4 (in MAT) to 53.7 (in PCG) percent. Although this is an
improvement, it is not the same degree of improvement as in the DISK-FORK tests. The reason for this is that
the CPU is more involved in diskless checkpointing than in disk-based checkpointing. In diskless checkpointing,
the parity of each processor's checkpoint must be calculated, and this takes the CPU (plus some memory) away
from the application. The only time when disk-based checkpointing makes more use of the CPU than diskless
checkpointing is when the longer latency of checkpointing causes more copy-on-write page faults to occur.
8.3 The rest of the tests
All of the diskless checkpointing results are displayed in Figure 7. The top row of graphs shows the checkpoint
latency for each test in each application. The middle row shows checkpoint overhead, and the bottom row shows
the average checkpoint size. This is a bit of a misnomer, because in all cases, the in-memory and parity processor
checkpoints are the same size. However, with incremental checkpointing and compression, fewer bytes are sent
per processor. The "checkpoint size" graphs (and the "checkpoint size" columns in the Appendix) display the
average number of bytes that each processor sends during checkpointing.
Some salient features from Figure 7 are as follows. First, incremental checkpointing significantly reduces the
average checkpoint sizes in the MAT and PCG applications. In the other three applications, the checkpoint size
of SIMP and INC are roughly the same. In the MAT and PCG applications, significant reductions in checkpoint
latency and overhead result from incremental checkpointing. In both cases, the mixture of incremental and forked
checkpointing results in the lowest overhead of all the diskless checkpointing tests.
When incremental checkpointing fails to decrease the size of checkpoints, as in the NBODY and CELL
applications, the overhead of checkpointing is greater than with simple checkpointing. In both of these applications,
the INC-FORK tests yielded the highest checkpoint latencies.
The results of diff-based compression are interesting. In three applications (NBODY, PSTSWM and CELL),
Figure 7: Diskless checkpoint latency, overhead, and size per application
incremental checkpointing fails because most of the programs' pages are updated at every iteration. However, diff-
based compression succeeds in reducing checkpoint size because the pages are either sparsely modified (NBODY
and PSTSWM) or updated with the same values (CELL). In these three applications, the C-FORK tests yielded
the lowest checkpoint overhead. Note that since compression adds extra demands on the CPU, the reduction
in overhead is not as drastic as with incremental checkpointing. It is also interesting to note that the lowest
overhead is achieved with C-FORK rather than C-INC or C-INC-FORK. This is because in these tests, almost
all pages are modified between checkpoints, and therefore incremental checkpointing merely adds the overhead of
processing page faults.
In the other two tests (MAT and PCG), diff-based compression brings the checkpoint sizes of the FORK and
SIMP tests to roughly the same size as incremental checkpointing. However, it does not improve upon incremental
Application   Recovery Time (sec)
PSTSWM        66.3
CELL          138.3
PCG           375.3

Table 3: Recovery times for the SIMP tests
checkpointing in terms of size or overhead. This is because the modified pages showed little compressibility.
8.4 Recovery time
Table 3 shows the time that it takes the system to recover from a single failure and continue execution from
the most recent checkpoint during the SIMP tests. Here, a processor failure is simulated by terminating one of
the application processors. PVM has been written so that the other processors recognize this failure, and our
modifications take advantage of this to automate the process of recovery. In our tests, the checkpointing processor
takes the place of the failed application processor.
The recovery times are roughly equal to the checkpoint latencies of the SIMP tests. It should be noted
that in all but the DISK-FORK tests, the recovery times are essentially the same, since the entire diskless checkpoint
of the failed processor must be calculated regardless of which diskless variant is used. In the DISK-FORK tests,
the recovery times are equal to the DISK-FORK checkpoint latencies; thus, like those latencies, they are extremely large.
9 Discussion
9.1 Diskless vs. disk-based checkpointing
There are two basic results that we may draw from our tests concerning diskless vs. disk-based checkpointing:
ffl The checkpoint latency and recovery time of diskless checkpointing is vastly lower than disk-based
checkpointing. As stated in section 8.2, the latency (and recovery time) of disk-based checkpointing
is a factor of 34 slower than diskless checkpointing. This is a result of the poor performance of Sun NFS
combined with the fact that all processors use the same disk.
ffl The overhead of diskless checkpointing is comparable to disk-based checkpointing. Figure 8 plots
the overhead of disk-based checkpointing and the overhead of the best diskless variant for each application.
Figure 8: Checkpoint overhead of disk-based checkpointing as compared to the best diskless variant.
In some cases (NBODY and PSTSWM), diskless checkpointing outperforms disk-based, and in others
disk-based outperforms diskless. The question mark is plotted in PCG because we were unable to
complete a disk-based checkpoint during the lifetime of the application.
There are two reasons why diskless checkpointing may be viewed as preferable to disk-based checkpointing.
First, it lowers the expected running time of the application in the presence of failures. Second, it has less effect
on the computing environment, which is of special concern if the environment is shared. We consider each of
these in turn.
9.1.1 Expected running time
Supposing that failure rate is governed by a Poisson process, Vaidya has derived equations for assessing the
performance of an application in the presence of checkpointing and rollback recovery [36]. These equations take
as input the average overhead, latency, and recovery time per checkpoint, plus the rate of failures (Eqs. 1-5; see [36] for their derivation and exact form), where:

λ         The rate of failures (1/MTBF).
T_opt     The optimal checkpoint interval.
O         The average overhead per checkpoint.
L         The average latency per checkpoint.
R         The recovery time from a checkpoint.
T_base    The running time of the application in the absence of checkpointing, recovery, and failures (i.e. the BASE test).
r         The "overhead ratio," which is a measure of the performance penalty due to checkpointing, recovery and failures [36].
Γ         The expected running time of the optimal checkpoint interval in the presence of failures, checkpointing and recovery.
T_ckp     The optimal expected running time of the application in the presence of failures, checkpointing and recovery.
T_nockp   The expected running time of the application in the presence of failures, but no checkpointing and recovery (i.e. the application is restarted from scratch following a failure).
In all these equations, the repair time is assumed to be zero. This approximates the case when a spare processor
is ready to continue computation immediately following a failure. If repair time is significant, then Eqs. 2 and 5
must be adjusted to account for it [36].
These equations may be used to compare checkpointing algorithms as follows. First, for each algorithm T_opt
may be calculated from λ and O using Eq. 1. Next, Γ and r may be determined by Eqs. 2 and 3. If so desired,
the expected running time of an application (T_ckp) for each algorithm may then be determined by Eq. 4. The
checkpointing algorithm with the lowest value of r will be the one with the smallest expected running time.
Thus, r suffices as a metric by which to compare checkpointing algorithms.
If T ckp is greater than T nockp , then the application cannot benefit from checkpointing. This occurs when the
application's running time (T base ) is not significantly greater than T opt . However, as T base grows, T nockp increases
more rapidly than T ckp to the point that checkpointing improves the program's expected running time in the
presence of failures.
In Table 4, we use the data from Section 8 to derive values for T_opt, Γ, r, T_ckp and T_nockp for each of the tests
presented in Figure 8. We calculated λ in the following manner. In their study of host reliability on the Internet,
Long et al. [22] determined an average MTBF of 29.29 days. Assuming independent processor failures, this means
that the MTBF of a collection of 16 processors is 29.29/16 ≈ 1.83 days, and the MTBF of a collection of 17
processors is 29.29/17 ≈ 1.72 days. This gives λ a value of 6.301 × 10^-6 failures per second for 16 processors
and 6.694 × 10^-6 failures per second for 17 processors. We use the former value as the failure rate for disk-based
checkpointing and for no checkpointing, and the latter value for diskless checkpointing.
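The following C sketch reproduces this failure-rate calculation and, purely as an assumption on our part (the paper relies on Eqs. 1-5 of [36], which are not reproduced above), uses Young's first-order approximation T_opt ≈ sqrt(2O/λ) to illustrate how an optimal checkpoint interval could be estimated from the measured overheads. The overhead value used below is an arbitrary example, not a number taken from the paper.

```c
/* Failure-rate calculation from the text, plus -- as an assumption of ours --
 * Young's approximation T_opt ~= sqrt(2*O/lambda) for the checkpoint interval. */
#include <math.h>
#include <stdio.h>

int main(void)
{
    double mtbf_host = 29.29 * 24 * 3600;   /* per-host MTBF of 29.29 days, in seconds  */
    int    nprocs    = 17;                  /* 16 application + 1 checkpoint processor  */
    double lambda    = nprocs / mtbf_host;  /* independent failures: rates add up       */

    double O = 16.2;                        /* example overhead per checkpoint (sec);   */
                                            /* NOT a value measured in the paper        */
    double t_opt = sqrt(2.0 * O / lambda);  /* Young's approximation (an assumption)    */

    printf("lambda = %.3e failures/sec\n", lambda);   /* roughly 6.7e-6 for 17 procs */
    printf("T_opt  ~= %.0f sec\n", t_opt);
    return 0;
}
```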
Table 4 shows that in all applications, diskless checkpointing performs better than disk-based checkpointing.
This can be seen in the lower expected running times (T_ckp) and the lower overhead ratios (r). Therefore, even
though the two have similar checkpoint overheads, the extremely large latency and recovery time of disk-based
checkpointing makes it unattractive in comparison to diskless checkpointing.
Another significant result of Table 4 is that in two applications, NBODY and MAT, the expected running time
in the presence of failures is minimized by diskless checkpointing. In the other three applications, no checkpointing
Table 4: Calculated values of T_opt, Γ, r, T_ckp and T_nockp for each application and test (columns: Application, Test, T_base (sec), T_opt (sec), Γ (sec), r, T_ckp (sec), T_nockp (sec)).
gives the smallest expected running time. That any checkpointing improves performance is somewhat surprising,
given the relatively small execution times of the experiments with respect to the MTBF. There are no cases where
disk-based checkpointing gives a smaller expected running time.
As the execution time of an application grows, checkpointing becomes much more attractive. For example,
suppose the user desires to simulate 5000 hours in PSTSWM instead of 102. Then the program will take roughly
275,000 seconds, or 3.18 days. Such an execution would not alter the size of the checkpoints, and therefore we may
use the same overhead, latency and recovery times as presented in Section 8. This leads to expected execution
times of 3.256 days for diskless checkpointing, 3.390 days for disk-based checkpointing and 8.553 days for no
checkpointing.
9.1.2 The effect on shared resources
Large checkpoint latencies can be detrimental in other ways. For example, in disk-based checkpointing, the entire
latency period is spent writing checkpoint data to stable storage. If other programs or users share the stable
storage, large checkpoint latencies are undesirable, because the performance of stable storage as seen by others is
degraded for a long period of time.
In [23], the effect of DISK-FORK checkpointing on the performance of stable storage was assessed. While a
checkpoint was being stored to the central disk, a processor not involved in the application timed
the bandwidth of disk writes. In that test, the performance of stable storage was degraded by 87 percent. This is
significant, for it means that extremely long checkpoint latencies, such as those measured in our tests, have the
potential to degrade the performance of the system in a severe manner for a long time. Diskless checkpointing,
on the other hand, exhibits much smaller checkpoint latencies, and because the calculation of the checkpoint
encoding involves both the network and the CPU, the impact on shared resources (in this case, the network) is
far less [23].
9.2 Recommendations
Given the results of these experiments, we can make the following recommendations. Of the checkpointing
variants tested in this paper, three stand out as the most useful: DISK-FORK, C-FORK and INC-FORK. On a
system with similar performance to ours, each is the most useful in certain cases:
ffl If checkpoints are small or the likelihood of wholesale system failures is high, then DISK-FORK checkpointing
should be employed.
ffl If the program modifies a few bytes per page between checkpoints, or if the machine does not provide access
to virtual memory facilities, then C-FORK diskless checkpointing should be employed.
ffl If the program does not modify a significant number of pages between checkpoints, then INC-FORK diskless
checkpointing should be employed.
Although we did not test such applications, there may be times when FORK and SIMP are the most useful
checkpointing methods. This is when all pages are modified in a dense manner between checkpoints. Then
FORK will have the lowest overhead when there is enough memory to store two checkpoints, and SIMP will have
the lower overhead otherwise.
None of our applications would have benefited from incremental checkpointing to disk. However, if multiple
checkpoints are taken and the program modifies only a fraction of its pages between checkpoints, incremental
forked checkpoints will outperform DISK-FORK.
Finally, in interpreting the results, it is important to note that the speed of stable storage in these experiments
is quite slow. A faster network, a faster file system, or a file system with multiple disks will improve the
performance of disk-based checkpointing relative to diskless checkpointing. On the other hand, a system with
more processors will degrade the performance of disk-based checkpointing relative to diskless checkpointing. It
should be possible using the equations in Section 9.1.1 to extrapolate the results of our experiments to systems
with different performance parameters.
Related Work
There has been much research performed on checkpointing and rollback recovery. The important algorithms
and performance optimizations for disk-based checkpointing in parallel and distributed systems are presented
in [8]. Research more directly related to diskless checkpointing is cited below.
The first paper on diskless checkpointing was presented by Plank and Li [27]. This paper may be viewed as a
completion of that original paper.
Silva et al [32] implemented checkpoint mirroring on a transputer network, and performed experiments to
determine that it outperformed disk-based checkpointing. Chiueh and Deng [6] implemented checkpoint mirroring
and Raid Level 5 checkpointing on a massively parallel (4096 processors) SIMD machine. They found that
mirroring improved performance by a factor of 10. Both implementations involved modifying the application to
perform checkpointing, rather than simply relinking with a checkpointing library.
Scales and Lam [31] implemented a distributed programming system built on special primitives with shared-memory
semantics. They use redundancy built into the system, plus checkpoint mirroring when necessary to
tolerate single processor failures with low overhead. In a similar manner, Costa et al [7] took advantage of
the natural redundancy in a distributed shared memory system to make it resilient to single processor failures.
Both of these systems export a shared-memory interface to the programmer and embed fault-tolerance into the
implementation with no reliance on stable storage.
Plank et al [26] embedded diskless checkpointing (with Raid Level 5 encoding) into several matrix operations
in the ScaLAPACK distributed linear algebra package, thus making them resilient to single processor failures
with low overhead. Kim et al [15] extended this work to employ one-dimensional parity encoding, which both
lowers the overhead and increases the failure coverage.
In [23], diskless checkpointing ideas are extended to a disk-based checkpointing system where there is disparity
between the performance of local and remote disk storage. In such environments, diskless checkpointing may be
extended so that in-memory checkpoints are stored on local disks (which are fast, but do not survive processor
failures), and checkpoint encodings are stored on remote disks (which are slow, but are available following a
failure). The performance of mirroring, Raid Level 5, and Reed-Solomon coding is assessed and compares
favorably to standard checkpointing to remote disk. The impact of checkpointing on the remote disk and the
network is also assessed.
Finally in [35], Vaidya makes the case for two-level recovery schemes, where a fast checkpointing method
tolerating single processor failures is combined with a slower method that tolerates wholesale system failures. In
his examples, checkpoint mirroring is employed for the fast method, and DISK-FORK checkpointing is employed
for the slow method. His analysis applies to the methods presented in this paper as well.
Conclusion

Diskless checkpointing is a technique where processor redundancy, memory redundancy and failure coverage
are traded off so that a checkpointing system can operate in the absence of stable storage. In the process, the
performance of checkpointing, as well as its impact on shared resources is improved.
In this paper, we have described basic diskless checkpointing plus several performance optimizations. These
have all been implemented and tested on five long-running application programs on a network of workstations and
compared to standard disk-based checkpointing. In this implementation, the diskless checkpointing algorithms
show a 34-fold improvement in checkpointing latency combined with comparable checkpoint overhead. The result
is a lower expected running time in the presence of single processor failures.
Several checkpointing systems [6, 23, 26, 32] have included variants of diskless checkpointing to improve the
performance of checkpointing. Designers of checkpointing systems should consider the variants of diskless check-pointing
presented in the paper to optimize performance and minimize the impact of checkpointing on shared
resources.
--R
Virtual memory primitives for user programs.
Application level fault tolerance in heterogeneous networks of workstations.
EVENODD: An optimal scheme for tolerating double disk failures in RAID architectures.
MIST: PVM with transparent migration and checkpointing.
Efficient checkpoint mechanisms for massively parallel machines.
Lightweight logging for lazy release consistent distributed shared memory.
A survey of rollback-recovery protocols in message-passing systems
The performance of consistent checkpointing.
Manetho: Transparent rollback-recovery with low overhead
A system for program debugging via reversible execution.
Redundant Disk Arrays: Reliable
Solutions to the shallow water test set using the spectral transform method.
Fault tolerant matrix operations for networks of workstations using multiple checkpointing.
Job and process recovery in a UNIX-based operating system
Introduction to Parallel Computing.
The checkpoint mechanism in KeyKOS.
Low-latency, concurrent checkpointing for parallel programs.
A longitudinal survey of internet host reliability.
Improving the performance of coordinated checkpointers on networks of workstations using RAID techniques.
A tutorial on Reed-Solomon coding for fault-tolerance in RAID-like systems
Libckpt: Transparent checkpointing under unix.
Fault tolerant matrix operations for networks of workstations using diskless checkpointing.
Faster checkpointing with N
Compressed differences: An algorithm for fast incremental check- pointing
Transparent fault tolerance for parallel applications on networks of workstations.
Checkpointing SPMD applications on transputer networks.
Consistent checkpoints of PVM applications.
The Condor distributed processing system.
A case for two-level distributed recovery schemes
Impact of checkpoint latency on overhead ratio of a checkpointing scheme.
In 25th International Symposium on Fault-Tolerant Computing
Demonic memory for process histories.
--TR
--CTR
Sangho Yi , Junyoung Heo , Yookun Cho , Jiman Hong, Adaptive page-level incremental checkpointing based on expected recovery time, Proceedings of the 2006 ACM symposium on Applied computing, April 23-27, 2006, Dijon, France
Kai Hwang , Hai Jin , Edward Chow , Cho-Li Wang , Zhiwei Xu, Designing SSI Clusters with Hierarchical Checkpointing and Single I/O Space, IEEE Concurrency, v.7 n.1, p.60-69, January 1999
Junyoung Heo , Sangho Yi , Yookun Cho , Jiman Hong , Sung Y. Shin, Space-efficient page-level incremental checkpointing, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Xiaojuan Ren , Rudolf Eigenmann , Saurabh Bagchi, Failure-aware checkpointing in fine-grained cycle sharing systems, Proceedings of the 16th international symposium on High performance distributed computing, June 25-29, 2007, Monterey, California, USA
Saurabh Agarwal , Rahul Garg , Meeta S. Gupta , Jose E. Moreira, Adaptive incremental checkpointing for massively parallel systems, Proceedings of the 18th annual international conference on Supercomputing, June 26-July 01, 2004, Malo, France
Raphael Y. de Camargo , Renato Cerqueira , Fabio Kon, Strategies for storage of checkpointing data using non-dedicated repositories on Grid systems, Proceedings of the 3rd international workshop on Middleware for grid computing, p.1-6, November 28-December 02, 2005, Grenoble, France
Ling , Jie Mi , Xiaola Lin, A Variational Calculus Approach to Optimal Checkpoint Placement, IEEE Transactions on Computers, v.50 n.7, p.699-708, July 2001
Adnan Agbaria , Hagit Attiya , Roy Friedman , Roman Vitenberg, Quantifying rollback propagation in distributed checkpointing, Journal of Parallel and Distributed Computing, v.64 n.3, p.370-384, March 2004
Daniel A. Reed , Charng-da Lu , Celso L. Mendes, Reliability challenges in large systems, Future Generation Computer Systems, v.22 n.3, p.293-302, February 2006
Zizhong Chen , Graham E. Fagg , Edgar Gabriel , Julien Langou , Thara Angskun , George Bosilca , Jack Dongarra, Fault tolerant high performance computing by a coding approach, Proceedings of the tenth ACM SIGPLAN symposium on Principles and practice of parallel programming, June 15-17, 2005, Chicago, IL, USA
Milos Prvulovic , Zheng Zhang , Josep Torrellas, ReVive: cost-effective architectural support for rollback recovery in shared-memory multiprocessors, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Daniel J. Sorin , Milo M. K. Martin , Mark D. Hill , David A. Wood, SafetyNet: improving the availability of shared memory multiprocessors with global checkpoint/recovery, ACM SIGARCH Computer Architecture News, v.30 n.2, May 2002
Feng Qin , Joseph Tucek , Jagadeesan Sundaresan , Yuanyuan Zhou, Rx: treating bugs as allergies---a safe method to survive software failures, ACM SIGOPS Operating Systems Review, v.39 n.5, December 2005
Sudarshan M. Srinivasan , Srikanth Kandula , Christopher R. Andrews , Yuanyuan Zhou, Flashback: a lightweight extension for rollback and deterministic replay for software debugging, Proceedings of the USENIX Annual Technical Conference 2004 on USENIX Annual Technical Conference, p.3-3, June 27-July 02, 2004, Boston, MA | memory redundancy;checkpointing;RAID systems;error-correcting codes;rollback recovery;copy-on-write;fault tolerance |
291520 | Computing the Local Consensus of Trees. | The inference of consensus from a set of evolutionary trees is a fundamental problem in a number of fields such as biology and historical linguistics, and many models for inferring this consensus have been proposed. In this paper we present a model for deriving what we call a local consensus tree T from a set of trees ${\cal T}$. The model we propose presumes a function f, called a total local consensus function, which determines for every triple A of species, the form that the local consensus tree should take on A. We show that all local consensus trees, when they exist, can be constructed in polynomial time and that many fundamental problems can be solved in linear time. We also consider partial local consensus functions and study optimization problems under this model. We present linear time algorithms for several variations. Finally we point out that the local consensus approach ties together many previous approaches to constructing consensus trees. | Introduction
An evolutionary tree (also called a phylogeny or phylogenetic tree) for a species set S is a rooted
tree with leaves labeled by distinct elements in S. Because evolutionary history is difficult
to determine (it is both computationally difficult as most optimization problems in this area are
NP-hard, and scientifically difficult as well since a range of approaches appropriate to different
types of data exist), a common approach to solving this problem is to apply many different
algorithms to a given data set, or to different data sets representing the same species set, and
then look for common elements from the set of trees which are returned.
Several methods are described in the literature for deriving one tree from a set of trees.
In this paper, we propose a new model, called the local consensus. This model is based upon
functions, called local consensus rules, for inferring the rooted topology of the homeomorphic
subtree induced by triples of species. We will show that any local consensus function can be
computed in polynomial time, and that many of the natural forms of the local consensus can
be computed in linear time. We also analyze optimization problems based upon partial local
consensus rules and show that many of these can also be solved in polynomial time.
Preliminaries
2.1 Definitions Let S = {s_1, s_2, ..., s_n} be a set of species. An evolutionary tree for S (also known as a phylogenetic tree or, more simply, a phylogeny) is a rooted tree T with n leaves each labeled by a distinct element from S. The internal nodes denote ancestors of the species in S. For an arbitrary subset S′ ⊆ S we denote by T|S′ the homeomorphic subtree of T induced by the leaves in S′. In particular, for a specified triple {a,b,c} ⊆ S we denote by T|{a,b,c} the homeomorphic subtree of T induced by the leaves labeled by a, b and c. This topology is completely determined by specifying the pair of species among a, b and c whose least common ancestor (lca) lies furthest away from the root. If (a,b) is this pair then we denote this by ((a,b),c), and T is said to be resolved on the triple a, b, c. If T is not binary it may happen that all three pairs of species have the same least common ancestor. In this case we will say that the triple is unresolved in T and denote this topology by (a,b,c).
Given a tree T containing nodes u, v, w, we let lca_T(u,v,w) denote the least common ancestor of u, v and w in T. Also, we let u ≤_T v denote that v is on the path from u to the root of T.
The set of input trees {T_1, ..., T_k} to a consensus problem is sometimes referred to as the profile.
Let T(a,b,c) represent the set of rooted subtrees on the leaf set {a,b,c}. A local consensus rule is a function f which, for each triple {a,b,c}, maps the topologies T_1|{a,b,c}, ..., T_k|{a,b,c} of the profile to a topology in T(a,b,c). Given a local consensus rule f and a profile R = {T_1, ..., T_k} of evolutionary trees for S, the f-local consensus (if it exists) is a tree R_f such that for all triples A ⊆ S, R_f|A = f(T_1|A, ..., T_k|A).
When f is defined on every input, f is said to be a total local consensus, and otherwise f is said to be a partial local consensus. The problem of determining if the f-local consensus exists and constructing it if it does is called the f-local consensus problem.
We will also consider optimization versions of the local consensus problem which will be
discussed in subsequent sections. Having set up this general machinery, we will look at the
special case where we need to build a consensus of two trees and describe specific local consensus
functions f for which we produce efficient algorithms.
2.2 Particular Local Consensus Rules We define the Binary Local Consensus, Optimistic
Local Consensus and Pessimistic Local Consensus problems below. The Binary Local Consensus
problem takes as input two binary trees, whereas the Optimistic Local Consensus and Pessimistic
Local Consensus problems take as input two trees which are not necessarily binary. All of these
are examples of total local consensus rules.
Definition 2.1. A local consensus rule f is conservative if, for every triple {a,b,c} on which the f-local consensus tree T is required to be resolved for a particular profile, no tree in the profile resolves {a,b,c} differently from T.
When the trees are not necessarily binary, the local consensus rule has to interpret an unresolved triple in one of two distinct ways: supposing that any resolution of the three-way split is possible, or supposing that the unresolved node represents a three-way speciation event. Depending upon the interpretation, therefore, the local consensus rule may decide that if T_1 is resolved and T_2 is unresolved on some triple, the output should be resolved identically to T_1, or that it should be left unresolved. We call the first type of local consensus rule optimistic and the second type pessimistic.
We now define these three consensus rules.
Definition 2.2. Let T_1 and T_2 be two rooted binary trees on the same leaf set S. A rooted tree T (which is not necessarily binary) is called the binary local consensus of T_1 and T_2 iff for all triples a, b, c, T|{a,b,c} = T_1|{a,b,c} whenever T_1|{a,b,c} = T_2|{a,b,c}, and T|{a,b,c} = (a,b,c) otherwise.
Definition 2.3. Let T_1 and T_2 be two rooted trees on the same leaf set S. A rooted tree T is called the optimistic local consensus of T_1 and T_2 iff for each triple a, b, c, T|{a,b,c} = ((a,b),c) exactly when T_i|{a,b,c} = ((a,b),c) and T_j|{a,b,c} = ((a,b),c) or (a,b,c), for {i,j} = {1,2}.
Definition 2.4. Let T_1 and T_2 be two rooted trees on the same leaf set S. A rooted tree T is called the pessimistic local consensus of T_1 and T_2 iff for each triple a, b, c, T|{a,b,c} = ((a,b),c) exactly when T_1|{a,b,c} = T_2|{a,b,c} = ((a,b),c).
These differences are each appropriate for particular types of data.
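To make these rules concrete, here is a small sketch in Python (our own illustration, not code from the paper); trees are represented as nested tuples of leaf names, and the function and encoding choices are ours.

    def triple_topology(tree, a, b, c):
        # Induced topology of {a, b, c}: ('resolved', frozenset({x, y}), z) stands
        # for ((x, y), z); ('unresolved',) means all three pairwise lcas coincide.
        lca_depth = {}

        def walk(node, depth):
            if isinstance(node, str):                 # leaf
                return {node} & {a, b, c}
            seen = set()
            for child in node:
                seen |= walk(child, depth + 1)
            for pair in ({a, b}, {a, c}, {b, c}):
                key = frozenset(pair)
                if pair <= seen and key not in lca_depth:
                    lca_depth[key] = depth            # deepest node containing both
            return seen

        walk(tree, 0)
        dab = lca_depth[frozenset({a, b})]
        dac = lca_depth[frozenset({a, c})]
        dbc = lca_depth[frozenset({b, c})]
        if dab == dac == dbc:
            return ('unresolved',)
        if dab > max(dac, dbc):
            return ('resolved', frozenset({a, b}), c)
        if dac > max(dab, dbc):
            return ('resolved', frozenset({a, c}), b)
        return ('resolved', frozenset({b, c}), a)

    def pessimistic_rule(t1, t2):
        # Resolve only when both input topologies are resolved and identical.
        return t1 if t1 == t2 and t1[0] == 'resolved' else ('unresolved',)

    def optimistic_rule(t1, t2):
        # Follow the resolved tree when the other agrees or is unresolved.
        if t1 == ('unresolved',):
            return t2
        if t2 == ('unresolved',) or t1 == t2:
            return t1
        return ('unresolved',)                        # conflicting resolutions

Evaluating such a rule on every triple explicitly would take Θ(n³) time; the algorithms in the following sections avoid this, so the sketch is only meant to pin down the definitions.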
Given the above definitions of the three models, the local consensus tree may not exist. In
Sections 3, 4, and 5, we will give linear time algorithms that either construct the tree we are
looking for if it exists or conclude that no such tree exists. However, practicing biologists and
linguists need to build some kind of consensus tree, and we therefore have considered variants
of the local consensus tree problem which always have solutions. To this end, we will define the
notion of relaxed-accord local consensus and relaxed-discord local consensus as follows.
Definition 2.5. Let T_1 and T_2 be two rooted binary trees on the same leaf set S. A rooted tree T (which is not necessarily binary) is called a relaxed-accord local consensus of T_1 and T_2 if, whenever a triple a, b, c has differing topologies in T_1 and T_2, that triple is unresolved in T, and T preserves the topology of a maximal set of triples on which T_1 and T_2 agree.
To prove the existence of a relaxed-accord local consensus tree it is sufficient to show that
there exists a tree where every triple on which T 1 and T 2 disagree is unresolved. The set of trees
with this property can be partially ordered based on the set of triples (on which T 1 and T 2 agree)
whose topology they preserve. Once this partial order is known to be non-empty, we have proved
the existence of a relaxed-accord local consensus since any maximal element in this partial order
is such a consensus tree.
We note that if T has the star topology it leaves unresolved all triples on which T 1 and T 2
disagree. Hence the partial order is non-empty and the relaxed-accord local consensus tree always
exists. In Section 6 we show that this tree is unique.
Definition 2.6. Let T 1 and T 2 be two rooted trees (not necessarily binary) on the same leaf
set S. A rooted tree T is called a relaxed-discord local consensus of T 1 and T 2 if T preserves
the topology of all triples on which T 1 and T 2 agree. In addition, T should leave unresolved a
maximal set of triples on which T 1 and T 2 disagree.
Using an argument similar to the one used to prove the existence of a relaxed-accord local consensus, and noting that T_1 (or T_2) preserves the topology of all triples on which T_1 and T_2 agree, we conclude that the relaxed-discord local consensus always exists. In Section 6 we show that the relaxed-discord local consensus is also unique.
Before we look further into the problems, we give some standard definitions available in the
literature.
Definition 2.7. Let T be a rooted tree with leaf set S. Given a node v ∈ V(T), we denote by L(T_v) the set of leaves in the subtree T_v of T rooted at v. This is also called the cluster at v, and is represented by α_v. The set C(T) = {α_v : v ∈ V(T)} is called the cluster encoding of T. Every rooted tree in which the leaves are labeled by S contains all singletons and the entire set S in C(T); these clusters are called the trivial clusters. We define a maximal cluster to be a cluster defined by a child of the root. (Here we allow for a maximal cluster to be defined by a leaf also.)
We also define the notion of compatibility of a set of clusters.
Definition 2.8. A set A of clusters is said to be compatible iff there exists a tree T such that A ⊆ C(T).
The following proposition can be found in [12].
Proposition 2.1. A set A of clusters is compatible iff ∀α, β ∈ A, α ∩ β ∈ {α, β, ∅}.
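A direct quadratic-time transcription of this pairwise test (our own sketch; clusters are given as collections of species names):

    def compatible(clusters):
        # Proposition 2.1: every two clusters must be nested or disjoint.
        cl = [frozenset(c) for c in clusters]
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                inter = cl[i] & cl[j]
                if inter and inter != cl[i] and inter != cl[j]:
                    return False
        return True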
We now state a theorem which will be used in the later sections.
Theorem 2.1. Let T_1 and T_2 be two rooted trees on the same leaf set S and let f be a conservative local consensus rule. If the f-local consensus tree T exists, then C(T) ∪ C(T_1) and C(T) ∪ C(T_2) are compatible sets.
Proof. Suppose not, and suppose without loss of generality that C(T) ∪ C(T_1) is not a compatible set. Then by Proposition 2.1, ∃α ∈ C(T) and β ∈ C(T_1) such that α ∩ β ∉ {α, β, ∅}. Pick a ∈ α ∩ β, b ∈ α − β and c ∈ β − α. The topology of the triple a, b, c in T_1 is ((a,c),b) while in T it is ((a,b),c). Since f is a conservative local consensus rule, this is impossible. ∎
3 Binary Local Consensus
In this section, we will look at the Binary Local Consensus problem. We start by restating the definition of the binary local consensus tree: Let T_1 and T_2 be two rooted binary trees on the same leaf set S. A rooted tree T (which is not necessarily binary) is called the binary local consensus of T_1 and T_2 iff for all triples a, b, c, T|{a,b,c} = T_1|{a,b,c} whenever T_1|{a,b,c} = T_2|{a,b,c}, and T|{a,b,c} = (a,b,c) otherwise.
3.1 Characterization and construction We will show that although the binary local consensus of two trees may not exist, when it does exist it has a nice characterization.
Proposition 3.1. Given a binary tree T and a cluster α, α is compatible with C(T) iff α ∈ C(T).
Proof. If α ∉ C(T) but α is compatible with C(T), then there exists a proper refinement T′ of T such that C(T′) = C(T) ∪ {α}; but no binary tree has a proper refinement. ∎
Lemma 3.1. Let T_1 and T_2 be rooted binary trees on the same leaf set S. If f is a conservative local consensus function such that the f-local consensus tree T exists, then C(T) ⊆ C(T_1) ∩ C(T_2).
Proof. By the previous lemmas, C(T) ∪ C(T_i) is compatible since f is conservative, and hence by Proposition 3.1 every cluster of T lies in C(T_i), for i = 1, 2. ∎
Corollary 3.1. If the binary local consensus tree T of T_1 and T_2 exists (T_1 and T_2 are both binary), then C(T) = C(T_1) ∩ C(T_2).
Proof. All we need to show is that α ∈ C(T) for all α ∈ C(T_1) ∩ C(T_2). For any such α, pick a, b ∈ α and c ∉ α; then T_1 and T_2 resolve {a,b,c} identically, so that T also resolves it as ((a,b),c), and hence α ∈ C(T). ∎
The tree T such that C(T) = C(T_1) ∩ C(T_2) is called the strict consensus tree. This particular consensus tree always exists and can be constructed in O(n) time [6]. The construction part of the algorithm for binary local consensus trees is therefore simple, and what remains is the verification that the strict consensus tree is also the binary local consensus tree (i.e. that the tree we have constructed using the algorithm in [6] satisfies the constraints imposed upon it by the binary local consensus rule).
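For reference, a straightforward (not linear-time) sketch of the cluster encoding and of the set C(T_1) ∩ C(T_2); this is our own illustration, whereas Day's algorithm [6] computes the strict consensus in O(n) time.

    def clusters(tree):
        # Cluster encoding C(T) of a nested-tuple tree whose leaves are strings.
        encoding = set()

        def walk(node):
            if isinstance(node, str):
                leaves = frozenset({node})
            else:
                leaves = frozenset().union(*(walk(child) for child in node))
            encoding.add(leaves)
            return leaves

        walk(tree)
        return encoding

    def strict_consensus_clusters(t1, t2):
        # By Corollary 3.1 this set is C(T) whenever the binary local consensus T exists.
        return clusters(t1) & clusters(t2)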
3.2 Verifying that a Consensus Tree is a Binary Local Consensus Tree We now prove
some structural lemmas to help determine whether the consensus tree is in fact the binary local
consensus.
Lemma 3.2. Let T_1 and T_2 be rooted binary trees on the same leaf set and let α be a cluster in their intersection. Let T be the strict consensus tree of T_1 and T_2. Let e_1, e_2 and e be the edges in T_1, T_2 and T respectively that are above the respective internal nodes which define the cluster α. Let a be a species in α. Then T is a binary local consensus for T_1 and T_2 if and only if
1. the subtree below e is a binary local consensus for the subtrees below e_1 and e_2, and
2. upon replacing the subtrees below e, e_1 and e_2 by the single leaf a in T, T_1 and T_2 respectively, T is a binary local consensus for T_1 and T_2.
Proof. Clearly, if T is the binary local consensus tree for T_1 and T_2 then conditions (1) and (2) will hold. Conversely, if (1) and (2) hold, but T is not the binary local consensus tree for T_1 and T_2, then there is some triple a, b, c such that T incorrectly handles this triple. If all of a, b, c are below e then by condition (1), T handles a, b, c correctly. Similarly if at least two are above e, then by condition (2), T handles this triple correctly. It remains to show that T handles all triples where exactly two of a, b, c are below and one is above the edge e. But then, since the cluster α appears in each of T, T_1 and T_2, each of the three trees resolves such a triple as ((x,y),z) with x, y below e and z above, so T handles this triple properly. Thus T is a binary local consensus for T_1 and T_2. ∎
This lemma yields an obvious divide-and-conquer strategy to determine whether a binary
local consensus exists.
Next we explore for what pairs of binary trees it is possible for the binary local consensus to
be a star, i.e., a tree with none but the trivial clusters.
Definition 3.1. A caterpillar is a rooted binary tree with only one pair of sibling leaves.
Given a leaf-labeled caterpillar T with root r and height h, there is a natural ordering induced by T on its leaves. Let g : S → {1, ..., h} be the function where g(s) is the distance of s from r. Then the species in S can be ordered in increasing order as a_1, a_2, ..., a_n such that g(a_1) ≤ g(a_2) ≤ ⋯ ≤ g(a_n). (Note that the pair of sibling leaves has been arbitrarily ordered.)
Definition 3.2. Two caterpillars X and Y on the same leaf set are said to be oppositely
oriented iff for all k, the k smallest elements of X are contained among the k+1 largest elements
of Y and vice versa.
Proposition 3.2. Let T 1 and T 2 be two rooted binary trees on the same leaf set whose binary
local consensus is a star. If a; b is a sibling pair of leaves in T 1 , then the lca of a and b in T 2
must be the root of T 2 .
Proof. Suppose not. Then there is a species c such that the least common ancestor of (a,c) is above the least common ancestor of (a,b) in T_2. Then T_1|{a,b,c} = T_2|{a,b,c} = ((a,b),c); hence the binary local consensus of T_1 and T_2 cannot be a star. ∎
Lemma 3.3. Suppose T 1 and T 2 are binary trees on the same leaf set and suppose that they
each have at least 5 leaves. If their binary local consensus tree is a star, then T 1 and T 2 must be
caterpillars.
Proof. Suppose for contradiction that T 1 is not a caterpillar. Then it has two pairs of sibling
leaves, (a; b) and (c; d). By the previous proposition each of these pairs must have the root as
their least common ancestor in T 2 . Thus without loss of generality, a and c lie in the left subtree
of the root of T 2 and b and d lie in the right subtree of the root of T 2 . Thus it follows that T 2
itself must have two sibling pairs (p, q) and (r, s), one in each subtree of the root. Note that in T_1 the least common ancestor of p and q and the least common ancestor of r and s is the root
of T 1 . Again without loss of generality let p and r lie in the left subtree of the root of T 1 and q
and s lie in the right subtree of the root.
Figure 1: Topologies of T_1 and T_2 with respect to p, q, r and s (x can be in either of the subtrees of T_1).
Let x be any other species besides p, q, r and s (see Figure 1). Suppose without loss of generality that x lies in the left subtree of the root of T_2. We will consider the following two triples: {x, p, s} and {x, q, r}. In T_2 the topology of these triples will be ((x,p),s) and ((x,q),r) respectively.
We will show that T 1 agrees on at least one of these triples. There are two cases. If x lies in
the left subtree of the root of T 1 , then the topology of the triple x; p; s in T 1 is clearly ((x; p); s)
and if x lies in the right subtree of the root of T 1 , then the topology of the triple x; q; r in T 1
is ((x; q); r). Thus in either case there is a triple in T 1 which agrees with a triple in T 2 and the
binary local consensus cannot be a star. 2
Lemma 3.4. Let T_1 and T_2 be two caterpillars on the same leaf set. Then the binary local consensus of T_1 and T_2 is a star if and only if T_1 and T_2 are oppositely oriented caterpillars.
Proof. Suppose the two caterpillars are oppositely oriented, i.e., they satisfy the two intersection conditions. Let x, y, z be any three leaves and let their indices in the ordering of the leaves of T_1 be i < j < k respectively. Then the topology of x, y and z in T_1 is (x,(y,z)). Looking at the n − j smallest elements of T_2, this set is contained among the n − j + 1 largest elements of T_1, so it must contain y or z but cannot contain x. Consequently, the topology of the triple in T_2 is not (x,(y,z)) and the star is a valid binary local consensus.
Conversely, suppose that the two caterpillars do not satisfy the intersection conditions. Without loss of generality, suppose that there exists at least one k such that the k smallest elements of T_2 are not contained within the k + 1 largest elements of T_1. Pick the smallest such k. Say x is the leaf in T_2 with rank k such that x does not belong to the set of k + 1 largest elements of T_1. From the pigeonhole principle, there will exist at least two leaves of T_2 which have ranks greater than k but which are contained in the set of k + 1 largest elements of T_1. Suppose the two leaves are y and z. Then T_1|{x,y,z} = T_2|{x,y,z} = (x,(y,z)). This implies that the binary local consensus cannot be a star. ∎
Figure 2: Example of oppositely oriented caterpillars.
Corollary 3.2. The binary local consensus for two trees can be verified to be a star in
linear time.
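As a sketch of this test (ours, not the paper's code), assume each caterpillar is given as a dictionary mapping leaves to their distance from the root; Definition 3.2 can then be checked with one pass per direction.

    def caterpillar_order(depths):
        # Leaves ordered by increasing distance from the root; the deepest
        # sibling pair ties and is broken arbitrarily, as allowed above.
        return sorted(depths, key=depths.get)

    def oppositely_oriented(depths1, depths2):
        # Definition 3.2: for every k, the k smallest leaves of each caterpillar
        # must lie among the k+1 largest leaves of the other.
        order1, order2 = caterpillar_order(depths1), caterpillar_order(depths2)
        n = len(order1)

        def one_direction(x, y):
            rank_in_y = {leaf: i for i, leaf in enumerate(y)}   # 0-based ranks
            lowest = n
            for k, leaf in enumerate(x, start=1):
                lowest = min(lowest, rank_in_y[leaf])
                if lowest < n - (k + 1):   # outside the k+1 largest of y
                    return False
            return True

        return one_direction(order1, order2) and one_direction(order2, order1)

Each direction is a single scan, so the test is linear once the two leaf orderings are available.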
3.3 Binary Local Consensus Tree Algorithm
1. Use Day's algorithm to produce the strict consensus tree T and for each non-trivial cluster
in T , maintain a pointer to the edges in T 1 and T 2 that give rise to this cluster.
2. Traverse T in postorder. For each non-trivial cluster found, check that the subtrees below
its edge in T 1 and T 2 are caterpillars that satisfy the conditions of the above lemma. If so
replace the entire subtree by a single node belonging to the subtree in each of T 1 and T 2 .
If not, declare that T is not the binary local consensus tree.
Theorem 3.1. Construction and verification for the binary local consensus can be done in
linear time.
Proof. Day's algorithm [6] runs in linear time. Also, step 2 of the above algorithm takes linear time since at most a linear number of species are reintroduced by the replacement above, and the checking of the caterpillars can be done in time linear in the number of leaves in the caterpillar. ∎

4 Optimistic Local Consensus
In this section we look at the problem of finding the Optimistic Local Consensus (OLC) tree of
two trees defined in the previous section. Note that the Optimistic Local Consensus of two trees
may not exist.
Recall the definition of the OLC tree: Let T_1 and T_2 be two rooted trees on the same leaf set S. A rooted tree T is called the optimistic local consensus of T_1 and T_2 iff for each triple a, b, c, T|{a,b,c} = ((a,b),c) exactly when T_i|{a,b,c} = ((a,b),c) and T_j|{a,b,c} = ((a,b),c) or (a,b,c), for {i,j} = {1,2}.
4.1 Characterization of the OLC tree The following lemma characterizes the optimistic
local consensus tree when it exists:
Theorem 4.1. Let T_1 and T_2 be two rooted trees on the same species set S. If the optimistic local consensus tree T_olc exists, then C(T_olc) = A, where A = {α : α ∈ C(T_1) ∪ C(T_2), and α is compatible with both C(T_1) and C(T_2)}.
Proof. Pick any cluster α ∈ A. If we look at any triple x, y, z such that x, y ∈ α and z ∉ α, then this triple will be resolved as ((x,y),z) in one tree and will be either resolved the same or unresolved in the other tree. In either case, α ∈ C(T_olc).
Conversely, pick any cluster α ∉ A. There are two cases here, namely, the case when α is not compatible with at least one of C(T_1) and C(T_2) and the case when α is compatible with both but belongs to neither. When α is not compatible with at least one of C(T_1) and C(T_2), using Theorem 2.1, we observe that α ∉ C(T_olc).
For the second case, pick those smallest clusters α_1 ∈ C(T_1) and α_2 ∈ C(T_2) containing α. (Note that the nodes v and u defining the clusters α_1 and α_2 respectively, are the lcas in T_1 and T_2 respectively, of the species in α.) Then ∃β ≠ ∅ such that β ⊆ (α_1 ∩ α_2) − α: since α_1 and α_2 are the smallest clusters in T_1 and T_2 respectively containing α and since α is compatible with both C(T_1) and C(T_2), this implies that α is the union of clusters of at least two children of v and also the union of clusters of at least two children of u. Moreover, ∃a, b ∈ α such that lca_{T_1}(a,b) = v and lca_{T_2}(a,b) = u. Thus we can pick a c ∈ β and we have that neither T_1|{a,b,c} nor T_2|{a,b,c} equals ((a,b),c). But the topology given by T_olc would have to be ((a,b),c) if α were in C(T_olc). Thus α ∉ C(T_olc). ∎
4.2 Construction phase Since the optimistic local consensus rule is conservative, if the tree T_olc exists then C(T_olc) ∪ C(T_1) is a compatible set of clusters, and hence there exists a tree T satisfying C(T) = C(T_olc) ∪ C(T_1). If we can construct T by refining T_1, we can then reduce T by contracting all the unnecessary edges, and thus obtain T_olc. This is the approach we will take.
Note that this approach breaks the construction into two stages: refinement and contraction.
Refining
The main objective is to refine T 1 so as to include all the clusters from T olc . Before we explain
how we do this precisely, we will introduce some notation and lemmas from previous works which
enable us to do this efficiently.
Definition 4.1. Let v be an arbitrary node in a tree T with children u_1, ..., u_k. A representative set of v is any set {x_1, ..., x_k} such that x_i ∈ L(T_{u_i}) for each i. We denote by rep(v) one such representative set.
Lemma 4.1. If the optimistic local consensus tree T_olc of trees T_1 and T_2 exists and v ∈ V(T_1), then T_olc|rep(v) is isomorphic to T_2|rep(v).
Proof. Follows from the fact that T_1|rep(v) is a star. ∎
Definition 4.2. Let v be a node in a tree T with children u_1, ..., u_k. N(v) is the subtree induced by {v, u_1, ..., u_k}.
We will do the refinement as follows. We will modify the tree T′_1 (T′_1 is initialised to T_1). In a postorder fashion, for every node v ∈ V(T_1), let v* denote the corresponding node of T′_1. It can be seen that v* also has the same number of children as v (since the processing is done in a postorder fashion). Say these are u*_1, ..., u*_k. Replace the subtree N(v*), rooted at v*, in the following manner: We replace N(v*) by an isomorphic copy of T_2|rep(v), where rep(v) = {x_1, ..., x_k}. Next, we replace x_i by the subtree of T′_1 rooted at u*_i.
Let T be the tree that is produced after considering all the nodes in T 1 .
Theorem 4.2. Let T_1 and T_2 be given and suppose T_olc exists. Then the tree T that is produced from the algorithm described in the previous paragraph satisfies C(T) = C(T_olc) ∪ C(T_1).
Proof. Since C(T_olc) ∪ C(T_1) is compatible and the refinement around each node v inserts exactly the clusters of T_2|rep(v), all we need to show is that T_olc|rep(v) cannot be a proper refinement of T_2|rep(v). If it were, then for some {a,b,c} ⊆ rep(v), T_olc|{a,b,c} would be resolved while T_2|{a,b,c} is unresolved. Since {a,b,c} ⊆ rep(v), T_1|{a,b,c} is also unresolved, forcing T_olc to be unresolved on it as well, a contradiction. ∎
Note that we have reduced the problem of constructing T to the problem of discovering T_2|rep(v) for the nodes v of T_1. To have a linear time algorithm, however, we need to be able to compute T_2|rep(v) quickly.
We cite the following result from [13] which will be useful to us in this case.
Lemma 4.2. [13] Given a left-to-right ordering of the leaves of a tree and the ability to
determine the topology of any triple of leaves a; b; c in constant time, we can construct the tree
in linear time.
To use this lemma we need two things:
1. that we be able to determine the topology of any triple in T 2 in O(1) time, and
2. that we have for each node in T 1 , an ordered representative set, where the ordering is
consistent with the left-to-right ordering of the leaves in T 2 .
To accomplish (1), we first preprocess T 2 for lca queries. Then, to determine the topology for
the triple a; b; c, we simply compare the lca's of (a; b), (b; c) and (a; c). The second requirement
is more challenging, but can also be handled, as we now show.
Computing all ordered representative sets in O(n) time:
• Initially all nodes in T_1 have empty labelings.
• For each s ∈ S, taken in the left-to-right ordering of the leaves in T_2, do
    1. Trace a path in T_1 from the leaf for s towards the root, until encountering either the root or a node which has already been labeled.
    2. Append s to the ordered set for each such node in the path traced (including the first node encountered which has already been labeled).
Figure 3 shows an example of the computation just described.

Figure 3: Example showing the computation of the representative sets of nodes in T_1 (with leaves a, b, c, d, e) based on the left-to-right ordering of species in T_2 (a, c, d, b, e); for example, c is added to the rep sets of w and v, and a is added to the rep sets of u, v and r.
Note that this computation takes O(n) time since each node v is visited O(deg(v)) times, and
that the order produced is exactly as required. Thus, for each node v 2 V (T 1 ), we have defined
a set of leaves such that each leaf is in a different subtree of v, every subtree of v is represented,
and the order in which these leaves appear is the same as the left-to-right ordering in T 2 .
We have thus proved:
Lemma 4.3. We can compute T_2|rep(u) in O(|rep(u)|) time.
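A sketch of the representative-set computation just described (our own code; T_1 is given by a parent map, and the left-to-right leaf order of T_2 as a list):

    def ordered_representative_sets(parent, root, leaves_in_t2_order):
        # For each internal node v of T_1, build rep(v) ordered consistently with
        # the left-to-right leaf ordering of T_2: each leaf walks towards the root,
        # appending itself, and stops at the root or at the first labeled node.
        rep = {}
        labeled = set()
        for s in leaves_in_t2_order:
            v = parent.get(s)
            while v is not None:
                rep.setdefault(v, []).append(s)
                if v in labeled or v == root:
                    break
                labeled.add(v)
                v = parent.get(v)
        return rep

Each node v is appended to once per child subtree, i.e. O(deg(v)) times, so the whole pass is linear, matching the bound claimed above.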
We therefore have the following:
Theorem 4.3. Given T_1 and T_2, if T_olc exists then we can construct a tree T such that C(T) = C(T_olc) ∪ C(T_1) in O(n) time.
The rest of the task of constructing T olc is in the contraction of unneeded edges.
Contracting edges: We simply go through each edge in T and check if it needs to be kept or must be deleted. Note that edges that were added during the refinement phase are required and do not need to be checked. Therefore we need only check the original tree edges. Let (u, v) be such an edge with v the parent of u. From our representative sets for u and v we can easily choose three species a, b, c such that lca(a, b) = u and lca(b, c) = v. If the topology of this triple in T_2 is resolved differently from ((a,b),c) then we know that edge (u, v) will have to be contracted; if on the other hand T_2|{a,b,c} is either (a,b,c) or ((a,b),c) then (u, v) will have to be retained in any optimistic local consensus tree.
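As a sketch (ours, not the paper's implementation), the retain-or-contract decision can be phrased directly against a triple oracle for T_2; topology2 is assumed to return the induced topology in the encoding used in the earlier sketch.

    def keep_edge(a, b, c, topology2):
        # Edge (u, v) with lca(a, b) = u and lca(b, c) = v is retained unless
        # T_2 resolves {a, b, c} differently from ((a, b), c).
        t2 = topology2(a, b, c)
        return t2 == ('unresolved',) or t2 == ('resolved', frozenset({a, b}), c)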
OLC Construction Algorithm
Phase 0: Preprocessing:
Make copies T′_1 and T′_2 of T_1 and T_2 respectively. For each node v in each tree T′_i, compute ordered representative sets ordered by the left-to-right ordering in the other tree. Preprocess each tree T′_i to answer lca queries for leaves as well as internal nodes.
Phase I: Refine T′_1:
Refine T′_1 in a postorder fashion so that at the end, C(T′_1) = C(T_olc) ∪ C(T_1) if T_olc exists.
Phase II: Contract T′_1:
Contract each edge e ∈ E(T′_1) such that c_e, the cluster below e, lies in C(T_1) but fails the triple test above (and hence does not lie in C(T_olc)).
We have thus shown the following theorem
Theorem 4.4. The algorithm stated above constructs the OLC of two trees T 1 and T 2 if the
OLC exists.
Analysis of Running Time
Phase 0: Preprocessing:
In [18], Harel and Tarjan give an O(n) time algorithm for preprocessing trees to answer lca
queries in constant time. We have already shown that computing the ordered representative sets
takes O(n) time. Thus the preprocessing stage takes O(n) time.
Phase I: Refining T′_1. This stage involves local refinements of T′_1, and we have shown that the cost of refining around node v is O(deg(v)). Summing over all nodes v we obtain O(n) time.
Phase II: Contracting edges
This stage clearly takes only O(n) time.
Theorem 4.5. Construction of the optimistic local consensus tree can be done in linear time.
4.3 Verification phase
Lemma 4.4. Let T be a tree on a leaf set S, and let T* be obtained from T through a sequence of refinements followed by a sequence of edge contractions. Then there exists a function f : V(T*) → V(T) such that for every v ∈ V(T*) there is a subset S_v of the children of f(v) in T with α_v = ∪_{v′ ∈ S_v} α_{v′}.
Proof. We define f(v) to be the node of T whose cluster is the smallest cluster of T containing α_v. Each refinement step introduces only clusters that are unions of clusters of siblings in the current tree, and contractions introduce no new clusters; therefore there is a subset S_v of the children of f(v) such that ∪_{v′ ∈ S_v} α_{v′} = α_v. ∎
Lemma 4.5. Suppose T is the OLC of T 1 and T 2 (on a leaf set S containing at least 5 species).
Then T is a star iff either one of the following holds
1. both T 1 and T 2 are oppositely oriented caterpillars, or
2. both T 1 and T 2 are stars
Proof. The "if" direction is easy to see. We now assume that the OLC, T, is a star. If T_1 contains a triple a, b, c that is unresolved, T_2 must also be unresolved on a, b, c. Conversely, whenever T_1 is resolved on a, b, c, T_2 must be (differently) resolved on a, b, c. Thus either both trees are binary or both are not.
In the case that both T 1 and T 2 are binary, the definition of the OLC coincides with the
definition of binary local consensus and we appeal to the proofs of Lemma 3.3 and Lemma 3.4
to argue that T 1 and T 2 must be oppositely oriented caterpillars.
If T_1 and T_2 are not binary, we will show that for any node v in T_1 with children {u_1, ..., u_k} there is a node v′ in T_2 with children {u′_1, ..., u′_k} such that α_{u_i} = α_{u′_i} for each i. Pick any three species a, b, c such that a, b, c is unresolved in T_1 and let v = lca_{T_1}(a,b,c). Then a, b, c must be unresolved in T_2. Let v′ = lca_{T_2}(a,b,c). We claim that α_v = α_{v′}. To see why, suppose there is a species x ∈ α_v − α_{v′}, with x (without loss of generality) being in the same subtree under v as a. Then T_1|{x,b,c} = (x,b,c) while T_2|{x,b,c} = ((b,c),x). This contradicts the assumption that T is a star. Thus α_v ⊆ α_{v′}, and by a symmetric argument α_v = α_{v′}.
Next, note that if x and y are under the same child of v in T_1 but under different children of v′ in T_2, then there exists a z such that x, y, z is resolved in T_1 but unresolved in T_2. This would contradict the fact that T is a star. This establishes the claim.
This implies that if there is a non-binary node v that is not the root of T_1, we can find two species a, b (a ≤ v, b ≤ v) and a species c, c ≰ v, such that T_1|{a,b,c} = T_2|{a,b,c}. Thus the root must have three or more children in this case. But this means that if any cluster defined by a child of the root contains two or more species, then there is a triple on which T_1 and T_2 agree. Thus T_1 and T_2 must be stars. ∎
The verification proceeds as follows :
Phase 0
Suppose the tree constructed by refining T 1 and then contracting the edges in the resulting
tree is T . We will do the same modification on T 2 , i.e. refine T 2 using the information from T 1
and then contract the edges in the resulting tree as before. Call this tree T 0
. Clearly, if T is not
isomorphic to T 0
, we can terminate and output that the OLC does not exist. This is because
we know that a compatible set of clusters defines a unique tree and we know that the OLC, if it
exists, is uniquely characterized.
Phase 1
If Phase 0 is successful, we then verify further. We compute an ordered representative set for
every node w in V (T ). For each node w in T , do
1. Check if the homeomorphic subtrees of T 1 and T 2 induced by rep(w) are both stars or they
are both oppositely oriented caterpillars. If they are neither of these, then terminate and
output that the OLC does not exist.
2. Identify the parent of w, say w*. Look at rep(w*) excluding the representative element which is below w. Call this set A. Identify the lca's of rep(w) in T_1 and T_2. Check if there is a species that belongs to A which lies below the lca of rep(w) in both T_1 and T_2. If so, terminate and output that the OLC does not exist.
Implementation of step 1 of Phase 1:
Using the left-to-right ordering of the species in T_1, compute the ordered representative set rep at each node in T as shown in the previous section. For any u ∈ V(T), to be able to quickly compute the homeomorphic subtree of T_2 induced by the species in rep(u), we need to know the ordering of these species as they appear in the left-to-right ordering of T_2. We associate with each u a new rep set, rep*(u), which is the rearranged version of the species in rep(u) according to their ordering in T_2. We define a function limit which specifies, for each s ∈ S, the node v ∈ V(T) closest to the root of T such that s ∈ rep(v). The function limit, together with the left-to-right ordering of the species in T_2, helps in filling the rep* sets, since s will belong to the rep* sets of all nodes in the path from s to limit(s). We first show how to compute limit using algorithm LIMIT and then we show how the rep* sets are filled.
Algorithm LIMIT:
Initialisation: for each s ∈ S, limit(s) is undefined.
For each node v visited in a top-down traversal of T,
do {
    For each s ∈ rep(v) such that limit(s) is undefined,
        set limit(s) = v;
}
Once limit(s) has been identified for all s ∈ S, we proceed to compute rep*(u), u ∈ V(T), as follows. Look at the left-to-right ordering of the species in T_2. Now, for each species s in the left-to-right order, we trace a path in T from the leaf for s towards the root of T and add s to the rep* set of each node encountered in this path. We terminate when we reach limit(s). Note that this process of identifying rep and rep* has to be done only once.
Analysis of running time :
The isomorphism test in Phase 0 can be performed in O(n) using a simple modification of the
tree-isomorphism testing algorithm in [1].
There is an O(n) cost for preprocessing of T 1 and T 2 to answer lca queries in Phase 1.
Our implementation of step 1 of Phase 1 involves a one-time O(n) cost in preprocessing to identify rep and rep* for each node in T. Then each time step 1 is called on a node w, an additional time of O(|rep(w)|) is taken.
Exploiting the fact that T_1 and T_2 have been preprocessed to answer lca queries, it can be seen that each call to step 2 of Phase 1 takes O(deg(w) + deg(w*)) time.
Thus the total time taken in the verification phase is O(n).
Correctness of our verification procedure :
Theorem 4.6. If T passes the above tests, then T is the OLC of T 1 and T 2 .
Proof. We need only show that T handles every triple properly. Each of the following cases
is handled assuming T has passed the isomorphism test.
Case 1
If T passes the isomorphism test with T′, then any triple a, b, c such that the two trees resolve
differently, will be unresolved in T . This follows since T is created by refining and then
contracting both T 1 and T 2 , and these actions can not take a resolved triple into a different
resolution.
Case 2
This involves a triple a; b; c having the same topology ((a; b); c) in both T 1 and T 2 . We claim
that the first step of Phase 1 will pass only if the topology of this triple is ((a; b); c). To see why,
suppose a; b; c is unresolved in T . ( a; b; c cannot be resolved as (a; (b; c)) or ((a; c); b) in T .) Look
at the nodes u and v, which are the lca's of a, b in T_1 and T_2, respectively. The node w in T, which is the lca(a,b,c), is also lca(a,b) (since a,b,c is unresolved). We infer that u ≤_{T_1} f(w), where f is the function as defined in Lemma 4.4. This is because any node above w will contain the species c and any node below w will not contain both a and b. By a similar argument, v ≤_{T_2} f(w) for the analogous function into V(T_2).
Now, when we look at rep(w) and compute the homeomorphic subtrees of T_1 and T_2 induced by
rep(w), in both of these induced trees there will exist three species x, y, z such that x, y are both below u (and v) in T_1 (and T_2) and z is not in the cluster defined by u (and v). Thus in both the induced trees, the triple x, y, z will have the same topology ((x,y),z). That is, these induced trees will neither be both stars nor both oppositely oriented caterpillars. Thus the verification process will terminate and output that the OLC does not exist.
Case 3
This involves a triple a; b; c which is resolved as ((a; b); c) in one tree and unresolved in the
other. The proof of this case essentially follows the lines of the proof of case 2.
Case 4
This involves a triple a; b; c which is unresolved in both the trees. We claim that the second
step of Phase 1 will pass only if this triple is unresolved in T . To see why, suppose a; b; c is
resolved as ((a,b),c) in T. Let lca_T(a,b,c) = x and let lca_T(a,b) = y, and also suppose without loss of generality that x is the parent of y. Let y_1 be the child of y such that a ∈ α_{y_1} and let y_2 be the child of y such that b ∈ α_{y_2}. Let z ≠ y be the child of x such that c ∈ α_z. Let u = lca_{T_1}(a,b,c) and v = lca_{T_2}(a,b,c).
We will look at the functions f_1 and f_2 defined by Lemma 4.4 from V(T) to V(T_1) and V(T_2) respectively. Then f_1(y) = u and f_2(y) = v. Note that the cluster defined by any child of u can have a non-empty intersection with at most one of α_{y_1} and α_{y_2}. Similarly for v. Thus any representatives chosen from α_{y_1} and α_{y_2} respectively have their least common ancestor at u in T_1 and at v in T_2. However, f_1(z) ≤_{T_1} u and f_2(z) ≤_{T_2} v. Thus any representative chosen from α_z will lie below u and v in T_1 and T_2 respectively, causing us to conclude that the OLC does not exist. ∎
5 Pessimistic Local Consensus
Recall the definition of the Pessimistic Local Consensus: Let T_1 and T_2 be two rooted trees on the same leaf set S. A rooted tree T is called the pessimistic local consensus of T_1 and T_2 iff for each triple a, b, c, T|{a,b,c} = ((a,b),c) exactly when T_1|{a,b,c} = T_2|{a,b,c} = ((a,b),c).
5.1 Characterization The following theorem characterizes the PLC tree of two trees T_1 and T_2.
Theorem 5.1. Let T_1 and T_2 be two trees on the same leaf set S. If the pessimistic local consensus tree T_plc of T_1 and T_2 exists, then it is identically equal to T, where C(T) = C(T_1) ∩ C(T_2).
Proof. Pick any cluster α ∈ C(T). Since α belongs to both the trees, if we look at any triple x, y, z with x, y ∈ α and z ∉ α, then this triple will have to be resolved as ((x,y),z). Thus α ∈ C(T_plc).
Conversely, pick any cluster α ∉ C(T). We have two subcases here:
1. α is not compatible with at least one of C(T_1) or C(T_2). In this case, from Theorem 2.1, α ∉ C(T_plc).
2. α is compatible with both C(T_1) and C(T_2). In this case, pick those nodes from T_1 and T_2 which define the smallest clusters containing α. We can pick a triple a, b, c such that a, b ∈ α, c ∉ α, and this triple is unresolved in either T_1 or T_2. Thus α ∉ C(T_plc). ∎
5.2 Construction Phase By Theorem 5.1, the pessimistic local consensus tree, if it exists, is identically the strict consensus tree. Thus to construct the pessimistic local consensus tree, it suffices to use the O(n) algorithm in [6] for the strict consensus tree.
5.3 Verification Phase Let T 1 and T 2 be the input trees, and let T be the strict consensus
tree constructed using the algorithm in [6]. We want to be able to verify whether T is actually
the pessimistic local consensus in the case that T is a star. If T 1 or T 2 is already a star then
there is nothing to verify since T is the true pessimistic local consensus. So assume that this is
not the case.
There are two cases which we will consider. The first is when either of T 1 or T 2 (say T 1 ) has
at least two children of the root which are not leaves. The second case is when both T 1 and T 2
have exactly one child of the root which is not a leaf. Having made observations about these
cases, we can apply the divide and conquer strategy we adopted for the binary local consensus
problem.
Lemma 5.1. Suppose T_1 and T_2 are two trees on the same leaf set S, with T_1 having at least two children of the root which are not leaves. Let α_1, ..., α_l be the maximal clusters of T_1 and β_1, ..., β_m be the maximal clusters of T_2. Then T, their pessimistic local consensus, is a star iff |α_i ∩ β_j| ≤ 1 for all i and j.
Proof. Suppose |α_i ∩ β_j| ≤ 1 for all i and j. This means that for all x, y, if lca(x,y) in T_1 is below the root, then in T_2, lca(x,y) is the root. Thus for any triple x, y, z the topologies in T_1 and T_2 do not agree. Thus T is a star.
Suppose |α_k ∩ β_j| ≥ 2 for some k and j, with β_j defined by a node which is not a leaf. Look at an α_k such that the node in T_1 defining α_k is not a leaf node. There are two cases to handle here. Either at least one species in α_k is not in β_j, or all species in α_k are in β_j (i.e., α_k ⊂ β_j).
In the former case, pick that species z which is in α_k but not in β_j. Also pick those two species x, y which are in α_k ∩ β_j. Then T_1 and T_2 agree on the triple x, y, z; namely, this triple has topology ((x,y),z) in both the trees. Thus T cannot be a star.
In the latter case, since we know that β_j ≠ S, we can pick two species x, y from α_k and another species z from S − β_j. In both T_1 and T_2, the topology of this triple is ((x,y),z). Thus T cannot be a star. ∎
Since each species belongs to at most one of these maximal clusters in each tree, this test can
be done in linear time.
The following lemma handles the case when both T 1 and T 2 have exactly one child of the root
which is not a leaf.
Lemma 5.2. Suppose T_1 and T_2 are two trees on the same leaf set S and T, their pessimistic local consensus, is a star. Suppose both T_1 and T_2 have exactly one child of the root each which is not a leaf. Let s_1, ..., s_p be the leaves in T_1 which are children of the root. Let v be the lca in T_2 of s_1, ..., s_p. Then every child of v contains at most one species x ∈ S − {s_1, ..., s_p}. Moreover, for any pair of species x, y ∈ S − {s_1, ..., s_p}, the least common ancestor of x and y in T_2 lies on the path from v to the root.
Proof. Suppose there exists a child of v which contains at least two species from S − {s_1, ..., s_p}. Then by picking x, y such that they both lie under this child of v in T_2, and picking an s_i out of s_1, ..., s_p that lies under a different child of v, we find that both trees have the same topology ((x,y),s_i) for the triple x, y, s_i; thus T cannot be a star. Furthermore, if ∃x, y such that lca(x,y) in T_2 does not lie on the path from v to the root, then the triple x, y, s_i (for a suitable s_i) would have identical topologies in both trees and T wouldn't be a star. ∎
Definition 5.1. A rooted tree T is a millipede if the set of internal nodes of T defines a
single path from the root to a leaf.
Figure 4: An example of a millipede.
Let S_1 = S − {s_1, ..., s_p}. We have that T_2|S_1 is a millipede (say, T′_2). Let s′_1, ..., s′_l be the children of the root in T′_2 which are leaves. Look at T_1|S_1 (say, T′_1). Either T′_1 has one non-leaf child or it has at least two non-leaf children. In the former case, we can apply the previous lemma and infer that T′_1 will be reduced to a millipede as well. In the latter case, we can apply Lemma 5.1 to check if the pessimistic local consensus is a star.
In the following subsection, we will show how to verify if T is a star when both the input
trees are millipedes.
5.3.1 Verification when both the input trees are millipedes
The proof of the following lemma is straightforward.
Lemma 5.3. Suppose T 1 and T 2 are two millipedes on the same leaf set S. Then their
pessimistic local consensus, T , is a star iff there exists no triple such that both trees have the
same topologies on the triple.
We now describe a linear time algorithm for verifying that T 1 and T 2 have no triple on which
they have the same topology.
We define an ordering on the species in T_1 using the function f : S → {1, ..., h}, where f(s) is the distance of s from the root of T_1 and h is the height of T_1.
In T_2, we can write S as the union of all the sets in the sequence S_1, ..., S_q, where q is the height of T_2 and each S_i contains exactly those species which are at a distance i from the root of T_2. Now, in each S_i, replace each species s in this set with f(s). Call this multiset of integers M_i. We thus get a sequence M of multisets.
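A small sketch (ours) of how the sequence M would be assembled, given dictionaries depth1 and depth2 mapping each species to its distance from the root of T_1 and T_2 respectively:

    def build_multisets(depth1, depth2):
        # M_i collects f(s) = depth1[s] over all species s at distance i from the
        # root of T_2, for i = 1, ..., height(T_2).
        height2 = max(depth2.values())
        M = [[] for _ in range(height2 + 1)]   # index 0 stays unused
        for s, d in depth2.items():
            M[d].append(depth1[s])
        return M[1:]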
Definition 5.2. We will say a triple of integers p, q, r is special if p < q, p < r, and p, q, r can be drawn from multisets M_j, M_l, M_i respectively with j < l ≤ i (with q and r taken as distinct occurrences when l = i).
We observe that the pessimistic local consensus of T 1 and T 2 is a star iff no special triple p; q
and r exists.
The following algorithm CHECK_PLC takes as input the sequence M = M_1, ..., M_q; it returns FAIL if there exists a special triple of integers, and otherwise it returns PASS. CHECK_PLC works by scanning the multiset M_i in the i-th iteration. It makes use of three variables global_min, local_min and temp. At the start of the i-th iteration, global_min stores the smallest integer seen in the first i − 1 multisets. The variable local_min is used to store the smallest integer a such that ∃b for which a < b, a ∈ M_j and b ∈ M_l with j < l ≤ i − 1 (local_min is initialised to +∞). The variable temp is initialised to 0. As long as temp remains 0, no such pair has been encountered; once one has, local_min stores a and temp stores some b for which the previously mentioned relationship between a and b holds. At the i-th iteration, CHECK_PLC either returns FAIL (if a special triple exists) or, if necessary, it modifies the variables global_min, local_min and temp to hold their intended values for the first i multisets of the sequence.
The reasoning for storing these values at the start of the i-th iteration is as follows. If ∃p in some M_j (j < i) such that p, q, r is a special triple with q, r ∈ M_i, then global_min together with q, r ∈ M_i are also a special triple since global_min ≤ p. Similarly, if ∃p in some M_j, q ∈ M_l (j < l < i), such that p, q, r is a special triple with r ∈ M_i, then local_min, temp and r ∈ M_i are also a special triple.
We now describe CHECK PLC.
Initialisation:
global
local
The procedure outputs FAIL (and terminates) if the pessimistic local consensus is not a star;
it outputs PASS otherwise.
For
do f
2.
do f
Scan through M i ;
If jAj - 2, then output FAIL;
If y, where y 2 A
local
global
If
do f
Scan through M i ;
If either jAj - 2 or jBj - 1, then output FAIL;
Else
If
If global min ! Min(M i ), then set local min = global min
If global min ? Min(M i ), then set local min = global min
y, where y 2 A
global
If
Output PASS
Analysis of running time : CHECK PLC runs in linear time since each M i is scanned only a
constant number of times.
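To make the criterion of Lemma 5.3 concrete, the following sketch checks it directly by brute force rather than with the linear-time scan of CHECK PLC (whose pseudocode above is only partially legible in this copy). It is a hypothetical Python illustration: each millipede is assumed to be given as a dictionary mapping each species to the depth of its attachment node on the internal path, and a triple is taken to be resolved as ((y, z), x) exactly when x is attached strictly closer to the root than both y and z.

from itertools import combinations

def resolved_topology(depth, triple):
    """Return the species that splits off first in a millipede, or None if
    the triple is unresolved (two species share the minimal attachment depth)."""
    d = sorted(triple, key=lambda s: depth[s])
    if depth[d[0]] < depth[d[1]]:
        return d[0]            # topology ((d[1], d[2]), d[0])
    return None                # star / unresolved

def plc_is_star(depth1, depth2):
    """Pessimistic local consensus of two millipedes is a star iff no triple
    is resolved identically in both trees (Lemma 5.3).  O(n^3) brute force."""
    species = list(depth1)
    for triple in combinations(species, 3):
        t1 = resolved_topology(depth1, triple)
        t2 = resolved_topology(depth2, triple)
        if t1 is not None and t1 == t2:
            return False       # a commonly resolved triple exists
    return True

# toy usage: attachment depths in two millipedes on the same leaf set
T1 = {"a": 1, "b": 1, "c": 2, "d": 3}
T2 = {"a": 3, "b": 2, "c": 1, "d": 1}
print(plc_is_star(T1, T2))     # True for this pair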
Theorem 5.2. Algorithm CHECK PLC is correct.
Proof. By induction, observe that Step 1 is executed at the i th iteration if 8j; l; x, where
It then follows that if Step 1 is executed at
the i th iteration, then at the start of that iteration
local Thus, in this case global min stores the smallest integer seen in the first
multisets. Now, in the first i multisets, if any special triple p; q; r exists such that
and q; r 2 M i , then CHECK PLC correctly outputs FAIL since global min - p. Otherwise we
have two cases, depending upon the value of A. If then the variables global min, temp
and local min are updated so that global min holds the smallest value in the first i multisets.
Also, local min, now correctly holds the smallest value a for which there exists a b (stored in
temp) for which a ! b and a 2 M In the other case 0, in which
case global min is updated to hold Min(M i ) (which is the smallest value in the first i multisets).
Observe that once temp is updated to store a nonzero value, it never stores a 0 again. Thus,
once temp is set to a nonzero value in iteration i 0 , then from iteration iteration k, Step
2 is executed.
Assume that Step 2 is executed in some iteration i 0 and assume, inductively, that at the start
of iteration i 0 , global min stores the smallest value in the first i multisets and local min stores
the smallest value a for which there exists a b (stored in temp) such that a ! b and a 2 M
. Then in iteration i 0 , it can be easily seen that CHECK PLC correctly outputs
FAIL if there exists a special triple p, q, r such that
both the cases when
ensures that after iteration i 0 , global min stores the smallest value in the first i 0 multisets and
local min stores the smallest value a for which there exists a b (stored in temp) such that a ! b
and a 2 M
Using the above arguments, it can be seen that CHECK PLC gives the correct output on any
sequence of multisets. 2
Thus we also have the following theorem
Theorem 5.3. Given two millipedes T 1 and T 2 , we can check if their pessimistic local
consensus is a star in linear time.
6 Relaxed Versions
The local consensus rules we have seen so far are such that the output tree satisfying a particular
rule need not exist. This motivates the need to look at the relaxed versions of local consensus,
where solutions always exist. Recall the definitions of relaxed-accord local consensus and relaxed-discord local consensus. The existence of solutions to these problems was shown in section 2.2.
6.1 Relaxed-Accord Local Consensus In this subsection we will show that the relaxed-
accord local consensus of two binary rooted trees T 1 and T 2 is actually the strict consensus of
these two trees.
Theorem 6.1. If T 1 and T 2 are two rooted binary trees then their relaxed-accord local
consensus T always exists, and is identically the strict consensus of T 1 and T 2 .
Proof. The existence of the relaxed-accord local consensus tree T was shown in section 2.
Now we show that this tree is the strict consensus tree. Suppose there exists a triple a, b, c resolved differently in T_1 and T_2, say as ((a, b), c) and (a, (b, c)) respectively. Let u be the lca of a and b in T_1 and let v be the lca of b and c in T_2; then neither α_u nor α_v is in the strict consensus tree. Thus the strict consensus tree leaves unresolved any triple which has different topologies in T_1 and T_2.
Let T' be a tree in which, for every triple a, b, c on which T_1 and T_2 differ, T' has an unresolved topology on this triple. Now suppose it is possible that T' contains a cluster that is not in the intersection of the sets of clusters of T_1 and T_2. Let α be this cluster and suppose without loss of generality that α is not a cluster of T_1. In T', for any pair of species x, y ∈ α and species z ∉ α the topology has to be ((x, y), z). However, if this is also the case in T_1, then T_1 must also possess the cluster α, contradicting our assumption. Thus there must exist a pair of species x, y ∈ α and a species z ∉ α such that in T_1 their topology is not ((x, y), z). But this implies that T' cannot be a relaxed-accord local consensus. Hence any candidate T' for a relaxed-accord local consensus can only contain the clusters in the intersection of the cluster sets of T_1 and T_2.
If T' contains a proper subset of the clusters in the intersection of the sets of clusters of T_1 and T_2, then there exists a triple a, b, c on which T' has an unresolved topology while the strict consensus tree has a resolved topology that agrees with the topologies of T_1 and T_2. Hence the strict consensus of T_1 and T_2 is the relaxed-accord local consensus of T_1 and T_2. 2
As a consequence, the relaxed-accord local consensus can be constructed in O(n) time using
the algorithm in [6], and there is no need to verify that the tree constructed is correct.
6.2 Relaxed-Discord local consensus In the relaxed-discord local consensus (RDLC)
problem we require that any triple on which the trees T 1 and T 2 agree must have its topology
preserved in the consensus tree T . Further T should leave unresolved a maximal set of triples on
which T 1 and T 2 disagree.
Previously we showed that the RDLC exists. Now we will show that it is unique. The
construction of the RDLC can be accomplished by defining the set
b)c)g. This set of rooted triples can then be passed to the algorithm of Aho
et al. [2], which computes a tree (if it exists) having the required form on every triple in the
set, and resolving a minimum number of additional triples outside that set. The algorithm in [2] takes O(pn) time; since in our case p ∈ O(n^3), the use of the algorithm of [2] would result in a running time of O(n^4). We will obtain a speed-up to an O(n^2) algorithm
(which includes the verification) for the construction of the relaxed-discord tree, by using the
fact that the tree necessarily exists. Our algorithm however takes advantage of the ideas in [2],
and so we begin by briefly describing how that algorithm works.
6.2.1 The ASSU Algorithm
In [2], Aho et al. describe algorithms which determine if a family of constraints on least common
ancestor relations can be satisfied within a single rooted tree. We describe here the simple
algorithm they give for the case where the constraints are given as rooted resolved triples ((x, y), z). For such input the algorithm works top-down, figuring out the clusters at the children
of the root before recursing. To do this the algorithm maintains disjoint sets. Initially all leaves
are in singleton sets. For each rooted triple ((x; y); z) the algorithm unions the sets containing x
and y to indicate that x and y must lie below the same child of the root. This algorithm never
unions sets unless this is forced. Recursive calls include constraints that are on species entirely
contained in the same component discovered in the previous call. If all the species are seen to
be in the same component (either initially or during a recursive call), the algorithm determines
that the constraints cannot be simultaneously satisfied. This simple algorithm has a worst case
behavior of O(pn), where there are p lca constraints and the underlying set S has n elements
which will be leaves in the final tree.
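The following Python sketch illustrates the top-down procedure just described. It is not the O(pn)-tuned version from [2]; it simply uses union–find to group leaves that are forced below a common child of the root and recurses, returning None when the constraints cannot be satisfied. Representing trees as nested lists of leaf labels is an assumption made for this illustration only.

def assu(leaves, triples):
    """Build a rooted tree (nested lists) satisfying constraints ((x, y), z),
    or return None if impossible.  triples: iterable of ((x, y), z)."""
    leaves = list(leaves)
    if len(leaves) == 1:
        return leaves[0]
    parent = {s: s for s in leaves}

    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]
            s = parent[s]
        return s

    # union x and y for every constraint ((x, y), z): they must share a child
    for (x, y), z in triples:
        parent[find(x)] = find(y)

    groups = {}
    for s in leaves:
        groups.setdefault(find(s), []).append(s)
    if len(groups) == 1:
        return None   # all species forced into one component: unsatisfiable

    children = []
    for comp in groups.values():
        comp_set = set(comp)
        # recursive calls keep only constraints entirely inside the component
        sub = [((x, y), z) for (x, y), z in triples if {x, y, z} <= comp_set]
        subtree = assu(comp, sub)
        if subtree is None:
            return None
        children.append(subtree)
    return children

# example: the triples ((a,b),c) and ((c,d),b) are jointly satisfiable
print(assu("abcd", [(("a", "b"), "c"), (("c", "d"), "b")]))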
6.2.2 An improved algorithm for RDLC
We will now describe an algorithm to solve the RDLC in O(n 2 ) time. Since T 1 is itself consistent
with all triples on which they agree, it is clear that T , the tree produced by the ASSU algorithm,
is a refinement of this tree in the following sense. Each child of the root of T_1 (as well as of T_2) represents a cluster which is the union of some of the clusters represented by children of the root of T. Let α be the cluster of a child of the root of T_1 and let β be a cluster of a child of the root of T_2. α and β are unions of the clusters of some of the children of the root of T. In fact, if α ∩ β is non-empty, then α ∩ β is also the union of some of the clusters of the children of the root of T. We will show that, except in one special case, α ∩ β is in fact the cluster of exactly one child of the root of T.
Suppose α ∪ β ≠ S and pick a species x outside α ∪ β. For any (y, z) in α ∩ β, ((y, z), x) is the form of the triple within each of T_1 and T_2, and hence in T, y and z would lie under the same child of the root. Thus in this case α ∩ β is a cluster of a child of the root of T.
The case where α ∪ β = S can occur for at most one child of the root of T_1 and one child of the root of T_2, as the following lemma shows.
Lemma 6.1. Let T_1 and T_2 be two trees on the same leaf set S. Let α_1, …, α_k be the clusters defined by the children of the root of T_1 and β_1, …, β_l be the clusters defined by the children of the root of T_2. Then the case where α_i ∪ β_j = S can occur for at most one i and one j.
Proof. Suppose not. Let α_i ∪ β_j = S and α_{i'} ∪ β_{j'} = S with (i, j) ≠ (i', j'); then we have that α_i ⊆ β_{j'}. But since α_i ∪ β_j = S, this implies that β_j and β_{j'} cannot be disjoint. This is a contradiction since β_j and β_{j'} are clusters defined by the children of the root and hence should be disjoint. 2
The case α ∪ β = S can be handled as follows. Identify the lca, say u, of the species in S − α in T_2 and, similarly, the lca, say v, of the species in S − β in T_1. Clearly, in T_2, u will be a descendant of the node defining β and, in T_1, v will be a descendant of the node defining α. Identify all the nodes in the path (in T_2) starting from the node defining β and ending at the node u. Let γ_1, …, γ_p be the clusters defined by the children (not on the path) of the nodes in this path. Similarly, identify all the clusters δ_1, …, δ_q defined by the children of the nodes in the path (in T_1), starting from the node defining α, to the node v. Unioning pairs (x, y) whenever x and y lie in α ∩ γ_i for some i, or whenever they lie in β ∩ δ_j for some j, we get a partition of α ∩ β into components, and these turn out to be exactly the clusters present at the children of the root of T.
With the above characterization a high-level description of the algorithm to construct T can
be given as follows:
RDLC Construction Algorithm:
1. For each pair of maximal clusters α of T_1 and β of T_2 with α ∩ β ≠ ∅ and α ∪ β ≠ S, recursively compute the tree on α ∩ β and make its root a child of the root of T.
2. If there are clusters α and β such that α ∪ β = S, compute the partition of α ∩ β described above, recursively compute the tree for each component of the partition, and make the roots of these trees children of the root of T.
Running Time Analysis:
Note that this algorithm does not require an explicit verification of the constructed tree, since
in fact we know that the tree exists and we are simply computing it by mimicking efficiently what
the algorithm in [2] would create.
There are at most n recursive stages. We will show that each stage can be implemented in
proving the O(n 2 ) bound.
Case 2 can be handled in O(n) time as follows. Build a graph with vertices labeled by species in α ∩ β. Now for each i connect the vertices in α ∩ γ_i by a path, and do the same for each j and the vertices in β ∩ δ_j. Find connected components of this graph in O(n) time. For each connected
component Comp we will have to find homeomorphic subtrees of T 1 and T 2 whose leaf set is
Comp and recurse on these subtrees. This task is common to both cases and is described after
the discussion on Case 1.
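The grouping step of Case 2 can be sketched as follows; the input format and names are assumptions made for illustration. Given the species of α ∩ β and the groups to be linked (the sets α ∩ γ_i from the T_2 path and the analogous sets from the T_1 path), connecting consecutive members of each group by an edge and taking connected components yields the desired partition in linear time.

def partition_case2(vertices, groups):
    """vertices: the species of alpha ∩ beta; groups: the sets to be linked.
    Returns the connected components obtained by joining consecutive members
    of every group by a path edge (union-find)."""
    parent = {v: v for v in vertices}

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    for g in groups:
        g = [v for v in g if v in parent]
        for u, w in zip(g, g[1:]):        # a path through the group
            parent[find(u)] = find(w)

    comps = {}
    for v in vertices:
        comps.setdefault(find(v), []).append(v)
    return list(comps.values())

print(partition_case2("uvwxyz", ["uv", "vw", "xy"]))
# -> components {u, v, w}, {x, y}, {z}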
To handle Case 1 it is important not to waste time on empty intersections. So we consider
each species in turn and label the intersection that this species lies in. Thus we will identify
at most n non-empty intersections. Let ff " fi be one such intersection. We need to find a
homeomorphic subtree of T 1 that has ff " fi as the leaf set. We will show how to do this in time
proportional to the number of leaves in ff " fi.
Assume that T 1 and T 2 have been preprocessed for least common ancestor queries. Also note
that we know the left-to-right ordering of all leaves of T 1 as well as of T 2 . Given the leaves in
left-to-right ordering is also known and is the one induced by the overall left-to-right
ordering. By Lemma 4.2 we can reconstruct the topology of the tree in linear time. This is
exactly what we need to show that one stage of the recursion can be accomplished in O(n) time
and that the overall time for the algorithm is O(n 2 ). Clearly this case can be handled in linear
time and can occur for at most one pair of children.
7 Polynomial Time Algorithms for Arbitrary Local Consensus Rules
We show in this section some polynomial time algorithms for constructing local consensus trees.
We begin by discussing the case where f is a partial local consensus function.
Lemma 7.1. (Aho et al. [2]) Let A be a multi-set of k rooted triples on a leaf set S, with |S| = n. We can determine in O(kn log n) time if a tree T exists such that T|t is homeomorphic to t for all t ∈ A.
In [15], an algorithm is given for the problem addressed in [2] for the case where all the triples
are resolved. In this case a faster algorithm can be obtained.
Lemma 7.2. (Henzinger, King, Warnow [15]) Let A be a multi-set of k resolved rooted
triples on a leaf set S, with n. We can determine in minfO(k
whether a tree T exists such that T jfa; b; cg is homeomorphic to the rooted triple(s) in A on
(if such a triple exists in A).
Theorem 7.1. Let f be an arbitrary partial local consensus function and T a set of k evolutionary trees on S, with |S| = n. Then we can determine if the local consensus tree exists, and construct it if it does, in O(kn^3) time.
Proof. Given f , T , and a triple A, we can determine the form of T f jA (for those triples A for
which T f jA has a restricted form) in O(kn 3 ) time. By the previous lemma, we can determine if
partial local consensus tree exists, and construct it if it does, in O(n 2:5 time. The total time
is therefore bounded by the cost of computing the triples. 2
While partial local consensus trees can be constructed in O(kn 3 ), total local consensus trees
can be computed even faster.
Lemma 7.3. (Kannan, Lawler, Warnow [13]) Given an oracle O which can answer queries of the form "What is the form of T|{a, b, c} for a species set {a, b, c}?", we can construct in O(n^2) time a tree T consistent with all the oracle queries (if it exists), and in O(rn log n) time if the tree T has degree bounded by r.
Theorem 7.2. Let f be a total local consensus function. Then given a set of k rooted trees
on n species, we can construct in O(kn 2 ) time the f-local consensus tree T f if it exists. If f
always returns resolved subtrees, then we can compute T f in O(kn log n) time.
Proof. We can implement the oracle determining the form of the homeomorphic subtree of T f
on a triple a; b; c by first preprocessing the trees to answer least common ancestor (lca) queries
in constant time, using [18]. Then, answering a query needs only O(k) time. By [13], we need
only O(n 2 ) queries and O(n 2 ) additional work, for a total cost of O(kn 2 ) in the general case.
When T f has degree bounded by r, we have total cost O(krn log n). If f always returns resolved
subtrees, then T f will be binary, so that the total cost is O(kn log n). 2
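The oracle used in this proof can be sketched as follows; for brevity the constant-time lca structure of [18] is replaced by a naive parent-pointer walk, which is enough to show how a triple query is answered, and the parent/depth dictionaries are a representation assumed only for this illustration.

def lca(parent, depth, u, v):
    """Naive lca by walking parent pointers (stand-in for an O(1) oracle)."""
    while depth[u] > depth[v]:
        u = parent[u]
    while depth[v] > depth[u]:
        v = parent[v]
    while u != v:
        u, v = parent[u], parent[v]
    return u

def triple_form(parent, depth, a, b, c):
    """Return the pair grouped together in T|{a, b, c}, or None if unresolved."""
    for x, y, z in ((a, b, c), (a, c, b), (b, c, a)):
        if depth[lca(parent, depth, x, y)] > depth[lca(parent, depth, x, z)]:
            return (x, y)
    return None

# a small rooted tree: root r, internal nodes p, q; leaves a, b under q, c under p
parent = {"a": "q", "b": "q", "c": "p", "q": "p", "p": "r", "r": "r"}
depth  = {"r": 0, "p": 1, "q": 2, "a": 3, "b": 3, "c": 2}
print(triple_form(parent, depth, "a", "b", "c"))   # ('a', 'b')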
8 Discussion and Conclusions
Several approaches have been taken to handle the problem of resolving multiple solutions. One
approach has been to find a maximum subset S 0 ' S inducing homeomorphic subtrees; this
subtree is then called a Maximum Agreement Subtree[14, 10, 17]. The primary disadvantage of
this approach is that it does not return an evolutionary tree on the entire species set.
There is however a connection between this problem and one of the local consensus methods.
The tree produced by the relaxed discord local consensus method contains the maximum
agreement subtree as a homeomorphic subtree. This is not too hard to see.
The other approach which we take here, requires that the resolution of the inconsistencies
be represented in a single evolutionary tree for the entire species set. A classical problem in
this area is the Tree Compatibility Problem (also called the Cladistic Character Compatibility
Problem) [7, 8, 9]. The Tree Compatibility Problem says that the set T of trees is compatible if a tree T exists such that for every triple A ⊆ S, T resolves A if and only if there is a tree in T which resolves A. This problem can be solved in linear time [12, 19]. The weakness of
this approach is that in practice, many data sets are incompatible, and it is therefore necessary
to be able to handle the case where some pairs of trees resolve triples differently.
Some other approaches of this type are the strict consensus and the median tree problems.
These models are stated in terms of unrooted trees, so that instead of clusters, characters (i.e.
bipartitions) on the species set are used to represent the trees. Using the character encoding of
the consensus tree as a measure of fitness to the input, the strict consensus seeks a tree with only
those characters that appear in every tree in the input. The median tree, on the other hand, is defined by a metric d on rooted trees, where d(T_1, T_2) is defined to be the cardinality of the symmetric difference of the character sets of T_1 and T_2. Given input trees T_1, …, T_k, T is the median tree if it minimizes Σ_i d(T, T_i). The median tree can be computed in polynomial time
and has a nice characterization in terms of the character encoding [4, 16, 6]. Both the above
notions are related to versions of the local consensus problem, and the relevant local consensus
trees always contain at least as much 'information' as these trees.
The work represented in this paper can be extended in several directions. As we have noted,
for all local consensus functions the local consensus tree of a set of k trees can be computed in time polynomial in k and n. Many of these local consensus trees can be constructed in O(kn) time.
--R
The design and analysis of computer algorithms
A formal theory of consensus
The median procedure for n-Trees
Mitochondrial DNA sequences of primates: tempo and mode of evolution
Optimal algorithms for comparing trees with labeled leaves
Optimal evolutionary tree comparison by sparse dynamic programming
Numerical methods for inferring evolutionary trees
Efficient algorithms for inferring evolutionary trees
Determining the evolutionary tree
Maximum agreement subtree in a set of evolutionary trees - metrics and efficient algorithms
A fast algorithm for constructing rooted trees from constraints
The Complexity of the Median Procedure for Binary Trees
computing the maximum agreement subtree
Fast Algorithm for Finding Nearest Common Ancestors
--TR | graphs;algorithms;evolutionary trees |
291523 | First-Order Queries on Finite Structures Over the Reals. | We investigate properties of finite relational structures over the reals expressed by first-order sentences whose predicates are the relations of the structure plus arbitrary polynomial inequalities, and whose quantifiers can range over the whole set of reals. In constraint programming terminology, this corresponds to Boolean real polynomial constraint queries on finite structures. The fact that quantifiers range over all reals seems crucial; however, we observe that each sentence in the first-order theory of the reals can be evaluated by letting each quantifier range over only a finite set of real numbers without changing its truth value. Inspired by this observation, we then show that when all polynomials used are linear, each query can be expressed uniformly on all finite structures by a sentence of which the quantifiers range only over the finite domain of the structure. In other words, linear constraint programming on finite structures can be reduced to ordinary query evaluation as usual in finite model theory and databases. Moreover, if only "generic" queries are taken into consideration, we show that this can be reduced even further by proving that such queries can be expressed by sentences using as polynomial inequalities only those of the simple form x < y. | Introduction
In this paper we are motivated by two fields of
computer science which heavily rely on logic: relational
databases and constraint programming.
We will look at the latter from the perspective of
the former.
In classical relational database theory [1], a database
is modeled as a relational structure. The
domain of this structure is some fixed universe U
of possible data elements (such as all strings, or
all natural numbers), and is typically infinite. The
relations of the structure, in contrast, are always
finite as they model finite tables holding data. As
a consequence, the active domain of the database,
consisting of all data elements actually occurring
in one or more of the relations, is finite as well.
A (Boolean) query is a mapping from databases
(over some fixed relational signature) to true
or false. A basic way of expressing a query is
by a first-order sentence over the relational sig-
nature. For example, on a database containing
information on children and hobbies, the
query "does each parent have at least all hobbies
his children have?" is expressed by the sentence
(∀p)(∀c)(∀h)(Child(p, c) ∧ Hobby(c, h) → Hobby(p, h)).
Since the domain of each database is U, the
quantifiers in a sentence expressing a query will
naturally range over the whole infinite U. It is
thus not entirely obvious that under this natural
interpretation the query will always be effectively
computable. That first-order queries are indeed
computable follows immediately from a result by
Aylamazyan, Gilula, Stolboushkin, and Schwartz
[4] (for simplicity hereafter referred to as "the four
Russians"). They showed that in order to obtain
the result of the query it suffices to let the quantifiers
range over the active domain augmented with
a finite set of q additional data elements, where
q is the number of quantified variables in the formula
expressing the query. The intuition behind
this result is that all data elements outside the
active domain of a given database are alike with
respect to that database.
Alternatively, we can choose to let the quantifiers
range over the active domain only, thus obtaining
a semantics which is quite different from
the natural interpretation. For example, consider
databases over the single unary relation symbol P. Then the sentence (∀x)P(x) will always be
false under the natural interpretation, while under
the active-domain interpretation it will always be
true. In fact, it is not obvious that each query expressible
under the natural interpretation is also
expressible under the active-domain interpreta-
tion. Hull and Su [11] established that the implication
indeed holds. (The converse implication
holds as well, since the active-domain interpretation
can easily be simulated under the natural
interpretation using bounded quantification.)
In recent years, much attention has been paid
to "constraint programming languages" (e.g., [5]).
In particular, in 1990, Kanellakis, Kuper and
Revesz demonstrated that the idea of constraint
programming also applies to database query languages
by introducing the framework of "con-
straint query languages" [12]. An important instance
of this framework is that of real polynomial
constraints. Here, the universe U of data
elements is the field R of real numbers. Data-bases
then are relational structures over R, but
the database relations need no longer be finite; it
suffices that they are definable as finite Boolean
combinations of polynomial inequalities. In other
words, each k-ary relation of the structure must
be a semi-algebraic subset of R k [6].
A basic way of querying real polynomial constraint
databases is again by first-order sentences,
which can now contain polynomial inequalities in
addition to the predicate symbols of the relational
signature. For example, if the database holds a
set S of points in R 2 , the query "do all points
in S lie on a common circle?" is expressed by (∃x_0)(∃y_0)(∃r)(∀x)(∀y)(S(x, y) → (x − x_0)^2 + (y − y_0)^2 = r^2). Note that quantifiers are naturally interpreted
as ranging over the whole of R. In order
to evaluate such a sentence on a database, we replace
each predicate symbol in the formula by the
polynomial definition of the corresponding data-base
relation, and obtain a sentence in the pure
first-order theory of the reals. As is well-known,
this theory is decidable [15]; the truth value of the
obtained sentence yields the result of the query.
So, real polynomial constraint queries are effectively
computable.
Finite relations are semi-algebraic, so that finite
relational databases over the reals form an important
special case of real polynomial constraint
databases. For example, if we want to model a database
holding a finite number of rectangles, we
can either choose to store the full extents of the
rectangles, resulting in the infinite set of all points
on the rectangles (represented in terms of linear
inequalities in the obvious way), or we can choose
to store only the corner points of each rectangle,
resulting in a finite relation.
In the present paper, we investigate whether
the results by the four Russians and by Hull and
Su, mentioned in the beginning of this Introduc-
tion, carry over from classical first-order queries
on relational databases to polynomial constraint
queries on finite databases over the reals. In-
deed, as in the classical case, one can give an
alternative active-domain semantics to constraint
sentences and again ask whether this is without
loss of expressive power. Note, however, that
active-domain quantification defies the very nature
of constraint programming as a means to
reason about intentionally defined, potentially in-
finite, ranges of values. Hence, it is not obvious
that the results just mentioned might carry over
at all.
Nonetheless, we have found a natural analog of
the four Russians theorem, and we have been able
to establish the verbatim analog of the Hull-Su
theorem in the case when only linear polynomials
are used. This is often the case in practice.
Our result might be paraphrased by saying that
on finite structures, first-order linear constraint
programming can be reduced to ordinary query
evaluation as usual in finite model theory and databases
Our development is based upon the following
observation. Consider a prenex normal form sentence (Q_1 x_1)⋯(Q_n x_n) M in the first-order theory of the reals. For any finite set D_0 of real numbers, there exists a sequence D_0 ⊆ D_1 ⊆ ⋯ ⊆ D_n of finite sets of reals such that the sentence can be evaluated by letting each quantifier Q_i range over D_i only (rather than over the whole of R) without changing the sentence's truth value.
By taking D 0 to be the active domain of a given
finite database over the reals, we get the analog
in the real case of the four Russians theorem.
The reader familiar with Collins's method for
quantifier elimination in real-closed fields through
cylindrical algebraic decomposition (cad) [3, 8]
will not be surprised by the above observation. In-
deed, it follows more or less directly from an obvious
adaptation of the cad construction. However,
we give an alternative self-contained proof from
first principles which abstracts away the purely
algorithmical aspects of the cad construction and
focuses on the logic behind it. Importantly, this
proof provides us with a basis to show how in the
case of linear polynomials, the construction of the
sequence departing from the active
domain D 0 can be simulated using a linear
constraint formula. As a result, we obtain the
analog in the real case of the Hull-Su theorem.
In a final section of this paper, we look at
queries that are "generic," i.e., that do not distinguish
between isomorphic databases. Genericity
is a natural criterion in the context of classical
relational databases [2, 7]. Perhaps this is a little
less so for databases over the reals; in other
work [14] we have proposed alternative, "spatial"
genericity criterions based on geometrical intu-
itions. Nevertheless, it remains interesting to investigate
which classically generic queries can be
expressed using linear constraint sentences.
Sentences that do not contain any polynomial
inequalities always express generic queries, but
from the moment a sentence even contains only
simple inequalities of the form x ! y it can
already be non-generic. Furthermore, there is
an example due to Gurevich [1, Exercise 17.27],
showing a generic query expressible with such
simple inequalities but not without. In other
words, simple inequalities, though inherently non-generic
when viewed in isolation, help to express
more generic queries. The natural question now is
to ask whether general linear polynomial inequalities
help even more. We will answer this question
negatively, thus providing a partial rectification
of Kuper's original intuitions [13] (which are incorrect
as stated, by the Gurevich example just
mentioned).
Real formulas
Let R be the field of real numbers. A real formula
is a first-order formula built from atomic formulas
of the form p ? 0, with p a multivariate polynomial
with real coefficients, using logical connectives
and quantifiers in the obvious manner. If
are among x
is a tuple of real numbers, then the satisfaction
of \Phi on R with a i substituted for x i , denoted
R defined in the standard way. As
usual, a formula without free variables is called a
sentence.
Example 2.1 The formula Φ(a, b, c) ≡ (∃x)(ax^2 + bx + c = 0) has the free variables a, b and c. [A condition like p = 0 is easily expressed in terms of conditions of the form p > 0 by ¬(p > 0) ∧ ¬(−p > 0).] We have R ⊨ Φ[…] but R ⊭ Φ[5, 3, 1].
Let Φ(x_1, …, x_k) ≡ (Q_{k+1} x_{k+1})⋯(Q_n x_n) M(x_1, …, x_n) be a real formula in prenex normal form, with each Q_i either ∃ or ∀ and M quantifier-free. If D_{k+1}, …, D_n are subsets of R, then we say that Φ[ā] is satisfied on (D_{k+1}, …, D_n), denoted (D_{k+1}, …, D_n) ⊨ Φ[ā], if Φ[ā] evaluates to true when we let each quantifier Q_i range over D_i only rather than over the whole of R.
Example 2.2 Let \Phi be the sentence
Our main result of this section can now be
stated as follows:
Theorem 2.3 Let Φ(x̄) be a prenex normal form real formula as above and let D_k be a finite subset of R. Then there exists a sequence D_k ⊆ D_{k+1} ⊆ ⋯ ⊆ D_n of finite subsets of R such that for all tuples ā on D_k, R ⊨ Φ[ā] if and only if (D_{k+1}, …, D_n) ⊨ Φ[ā].
Example 2.4 As a trivial illustration let
be the formula (9x 2 )x
We have R
we have (D 2 )
In the remainder of this section we give a simple
proof of Theorem 2.3. We will introduce various
auxiliary notions on which we will rely heavily in
later sections.
We first define the following natural equivalence
relation on R
Definition 2.5 Two points ā and b̄ in R^n are called equivalent (with respect to Φ), denoted ā ≡ b̄, if each polynomial occurring in Φ has the same sign (positive, zero, or negative) on ā and b̄.
We now extend this equivalence relation inductively
to lower dimensions such that the equivalence
classes at each dimension are "cylindrical"
over the equivalence classes at the next lower dimension
Definition 2.6 Let i < n and assume ≡ is already defined on R^{i+1}. Then for ā, b̄ ∈ R^i we have ā ≡ b̄ if for each α there is a β such that (ā, α) ≡ (b̄, β) and, conversely, for each β there is an α such that (b̄, β) ≡ (ā, α) (with α and β real numbers).
We note:
Lemma 2.7 For each i, ≡ is of finite index on R^i.
Proof. By downward induction on i. The case i = n is clear. For i < n, denote by μ(ā) the set of equivalence classes in R^{i+1} lying "above" ā, i.e., intersecting the line {(ā, α) | α ∈ R}. Clearly, ā ≡ b̄ implies μ(ā) = μ(b̄), so that we have an injection mapping each equivalence class c on R^i to the set {μ(ā) | ā ∈ c}. Since ≡ is of finite index on R^{i+1}, μ can have only a finite number of possible values and hence ≡ is of finite index on R^i as well.
The relevance of the equivalence relations just
defined is demonstrated by the following lemma.
We use the following
stands for the formula (Q i+1 x
M .
Lemma 2.8 Let k - i - n, and let - a and - b be
equivalent points in R i . Then R
only if R
This lemma can be proven by a straightforward
induction (omitted).
The notion of domain sequence, defined next, is
crucial. The technical lemma following the definition
will directly imply Theorem 2.3.
Definition 2.9 Let D_k be a finite subset of R. A sequence (D_k, D_{k+1}, …, D_n) of finite subsets of R is called a domain sequence with respect to Φ if for each k ≤ i < n, each tuple ā = (a_1, …, a_i) with a_1, …, a_k ∈ D_k and a_j ∈ D_j for k < j ≤ i, and each α ∈ R, there exists an α' ∈ D_{i+1} such that (ā, α) ≡ (ā, α').
Since j is of finite index, we know that a domain
sequence always exists.
Lemma 2.10 Let (D k ; D
sequence w.r.t. \Phi, and let a
only if
(D
Proof. By downward induction on i. The case
n. Note that
We concentrate on the
case the case Q
lar. Denote (a a. For the implication
from left to right, assume R
Then there exists a i+1 2 R such that R
According to Definition 2.9, there
exists a 0
By Lemma 2.8, since R
also have R
(D
We can thus
conclude that (D [-a]. The implication
from right to left is straightforward.
Theorem 2.3 immediately follows from the case
of the above lemma. The reader may have
noticed that we have never relied on the fact that
in the polynomial inequalities p ? 0 occurring
in a real formula, p is really a polynomial. So,
the theorem holds for any first-order language of
real functions. This observation substantiates our
claim made in the Introduction that our proof
"abstracts away the purely algorithmical aspects
of Collins's cad construction and focuses on the
logic behind it". Of course, by departing from
the cad construction one gets an effectively computable
version of Theorem 2.3. We will give an
alternative construction for the linear case in Section
4.
3 Queries on real databases
Fix a relational signature oe consisting of a finite
number of relation symbols S with associated arity
ff(S). A real database B is a structure of type
oe with R as domain, assigning to each relation
in oe a finite relation S B of rank ff(S) on
R. The active domain of B, denoted by adom(B),
is the (finite) set of all real numbers appearing in
one or more relations in B.
A query is a mapping from databases of type oe
to true or false. A basic way of expressing queries
is by first-order formulas which look like real for-
mulas, with the important additional feature that
they can use predicates of the form S(p
where S is a relation symbol in oe of arity a, and
each p i is a polynomial. If \Phi(-x) is a query formula
and B is a database, then the satisfaction
defined in the standard way. In par-
ticular, if \Phi is a sentence, it expresses the query
yielding true on an input database B iff B
Example 3.1 Assume α(S) = 2. The query "do all points in S lie on a common circle?" can be expressed as (∃x_0)(∃y_0)(∃r)(∀x)(∀y)(S(x, y) → (x − x_0)^2 + (y − y_0)^2 = r^2). The query "is there a point in S whose coordinates are greater than or equal to 1?" can be expressed as (∃x)(∃y)(S(x, y) ∧ x ≥ 1 ∧ y ≥ 1). Note that the quantifiers in query formulas are naturally interpreted as ranging over the whole of R.
If Φ is a query sentence and B is a database, then we can produce a real sentence Φ^B in a very natural way as follows. Let S(p_1, …, p_a) be an atomic subformula of Φ, with S a relation symbol in σ. We know that S^B is a finite relation consisting of, say, the m tuples {(a_{11}, …, a_{1a}), …, (a_{m1}, …, a_{ma})}. Then replace S(p_1, …, p_a) in Φ by the disjunction, over j = 1, …, m, of the conjunctions p_1 = a_{j1} ∧ ⋯ ∧ p_a = a_{ja}. It is obvious that B ⊨ Φ if and only if R ⊨ Φ^B.
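The replacement of S(p_1, …, p_a) by a finite disjunction can be carried out mechanically. The following Python sketch performs it for a single atom on a textual encoding of the arguments; the string representation and helper name are assumptions chosen for this illustration.

def atom_to_real_formula(args, relation):
    """args:     the polynomial arguments p_1, ..., p_a as strings;
    relation: the finite relation S^B as a list of numeric tuples.
    Returns the real formula that replaces S(p_1, ..., p_a) in Phi^B."""
    if not relation:
        return "false"          # empty relation: the atom never holds
    disjuncts = []
    for row in relation:
        conj = " and ".join(f"({p}) = {a}" for p, a in zip(args, row))
        disjuncts.append(f"({conj})")
    return " or ".join(disjuncts)

# S holds the tuples (1, 2) and (3, 4); the atom is S(x + y, y)
print(atom_to_real_formula(["x + y", "y"], [(1, 2), (3, 4)]))
# ((x + y) = 1 and (y) = 2) or ((x + y) = 3 and (y) = 4)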
Now assume the query sentence Φ is in prenex normal form (Q_1 x_1)⋯(Q_n x_n) M(x_1, …, x_n) (†). If B is a database and D_1, …, D_n are subsets of R, then we say that Φ is satisfied on (B, D_1, …, D_n), denoted (B, D_1, …, D_n) ⊨ Φ, if Φ evaluates to true on B when we let each quantifier Q_i range over D_i only, rather than over the whole of R.
Given the preceding discussion, the following
theorem follows readily from the material in the
previous section:
Theorem 3.2 Let Φ be a query sentence as in (†) and let B be a database. If adom(B) ⊆ D_1 ⊆ ⋯ ⊆ D_n is a domain sequence with respect to Φ^B, then B ⊨ Φ if and only if (B, D_1, …, D_n) ⊨ Φ.
This theorem is the analog in the real case of the
four Russians theorem [4] mentioned in the Introduction
4 The linear case
In this section, we focus on linear queries, expressed
by query sentences in which all occurring
polynomials are linear. We prove that each linear
query is expressible by a linear query sentence of
which the quantifiers range over the active domain
of the input database only. Thereto, we introduce
a particular way to construct domain sequences
on the active domain of a database, based
on Gaussian elimination. We then show that this
construction can be simulated in a uniform (i.e.,
database-independent) way by a linear query formula
Before embarking, we point out that the notion
of equivalence of points with respect to some given
real formula \Phi (Definitions 2.5 and 2.6) depends
only on the set of polynomials occurring in \Phi. So
we can also talk of equivalence with respect to
some given set of polynomials.
Now let \Pi be a set of linear polynomials on the
Such a polynomial p is of
the form c p
We define a sequence
linear polynomials inductively as
follows:
c q
In words, each \Pi i is a set of linear polynomials
obtained from \Pi i+1 by Gaussian
elimination.
Equivalence of points in R i with respect to \Pi
can be characterized in terms of the polynomials
in \Pi i as follows:
Proposition 4.1 Let 1 ≤ i ≤ n and let ā, b̄ ∈ R^i. Then ā ≡ b̄ with respect to Π if and only if each polynomial in Π_i has the same sign (positive, zero, or negative) on ā and b̄.
Proof. (Sketch) By downward induction on i.
The case According
to Definition 2.6, - a j - b if for each ff there
is a fi such that (-a; ff) j ( - b; fi) (and conversely; for
simplicity we will ignore this part in the present
sketch). Equivalently, by induction, for each ff
there is a fi such that each polynomial in \Pi i+1
has the same sign on (-a; ff) and ( - b; fi). For sim-
plicity, we ignore in this sketch the polynomials
equivalently,
for each ff there is a fi such that for each p 2 \Pi i+1 ,
or ?. Now it can be seen that this is equivalent
to p(-a)=c p
for all By definition of \Pi i this is the
same as saying that each p 2 \Pi i has the same sign
on - a and - b.
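The displayed definition of the sets Π_i is garbled in this copy, so the following sketch should be read only as one plausible reconstruction consistent with the proof above: for every pair p, q in Π_{i+1} whose coefficients of the eliminated variable are nonzero, the cross-multiplied difference c_q·p − c_p·q (which no longer contains that variable) is added, and polynomials not containing the variable are kept as they are. Linear polynomials are represented here as coefficient dictionaries, an assumption of this illustration.

def eliminate(polys, var):
    """polys: linear polynomials, each a dict {variable: coeff} with the
    constant term stored under the key 1.
    Returns a candidate Pi_i obtained from Pi_{i+1} by eliminating `var`."""
    def cross(p, q, cp, cq):
        keys = set(p) | set(q)
        r = {k: cq * p.get(k, 0) - cp * q.get(k, 0) for k in keys}
        return {k: v for k, v in r.items() if v != 0 and k != var}

    with_var = [p for p in polys if p.get(var, 0) != 0]
    without  = [p for p in polys if p.get(var, 0) == 0]
    out = list(without)
    for i, p in enumerate(with_var):
        for q in with_var[i + 1:]:
            out.append(cross(p, q, p[var], q[var]))
    return out

# Pi_2 = {x1 + x2 - 1, 2*x2 - x1}; eliminating x2 yields a multiple of 3*x1 - 2
print(eliminate([{"x1": 1, "x2": 1, 1: -1}, {"x1": -1, "x2": 2}], "x2"))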
Now let \Phi be a linear query sentence (Q 1
prenex normal form, and let B
be a database. Recall the definition of the real
described in the previous section; note
that since \Phi is linear, \Phi B is linear as well. Fix
\Pi to be the set of polynomials occurring in \Phi B ,
and consider the sequence \Pi defined just
above. We observe:
Lemma 4.2 Let 1 - i - n. Then \Pi i is a finite
union of sets of the form
Both the number of these sets and the coefficients
c i and d i for each set do not depend on the particular
database B.
Proof. Consider the case
consists of the polynomials occurring in \Phi B . The
elements of \Pi can be classified into two different
kinds: those that already occur in \Phi, and those
that are of the form e, with p a polynomial
occurring in \Phi and e 2 adom(B). In the latter
case, p\Gammae may be assumed to occur for all possible
we omit the argument that this
assumption is without loss of generality. Hence,
the lemma holds for n. The case i ! n now
follows easily by induction.
We are now in a position to define a particular
domain sequence with respect to \Phi B , based on the
sequence
Definition 4.3 The linear sequence on B with
respect to \Phi is the sequence D 0 '
inductively defined as follows: D 0 equals
adom(B), and for 1
where D 0
i is D
Proposition 4.4 The linear sequence on B with
respect to \Phi is a domain sequence with respect to
Proof. According to Definition 2.9, we must
show for each
Consider the definition of D i in terms of D 0
4.3 above. We distinguish
the following possibilities for ff:
1.
2. ff ?
3.
is the maximal element
in E i such that e 1 ! ff, and e 2 is the minimal
element such that ff ! e 2 .
It is obvious that ff 0 2 moreover, from the way
defined, it is clear that all polynomials in \Pi i
have the same sign on (-a; ff) and (-a; ff 0 ). Hence,
by Proposition 4.1, the proposition follows.
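The construction behind Definition 4.3 and the proof above can be illustrated as follows; since the text of the definition is partly garbled, the variable names and details here are assumptions. Given a finite set of critical values (the E_i of the proof), the augmented set adds the midpoint of every pair of consecutive values and one value below the minimum and one above the maximum; any real α is then mapped to a representative α' in the augmented set lying in the same position relative to the critical values.

def augment(critical):
    """Return a finite set containing the critical values, the midpoints of
    consecutive ones, and one point below/above the whole range."""
    e = sorted(set(critical))
    out = set(e)
    out.update((a + b) / 2 for a, b in zip(e, e[1:]))
    out.update({e[0] - 1, e[-1] + 1})
    return out

def representative(alpha, critical):
    """Pick alpha' in augment(critical) with the same position relative to
    the critical values as alpha (equal to, between, below or above them)."""
    e = sorted(set(critical))
    if alpha in e:
        return alpha
    if alpha < e[0]:
        return e[0] - 1
    if alpha > e[-1]:
        return e[-1] + 1
    e1 = max(x for x in e if x < alpha)
    e2 = min(x for x in e if x > alpha)
    return (e1 + e2) / 2

E = [0.0, 2.0, 5.0]
print(sorted(augment(E)), representative(3.7, E))   # ... 3.5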
After one final lemma we will be able to state
and prove the main result of this section:
Lemma 4.5 For each 0 - i - n there exists
a finite set P of linear polynomials such that
for each database B, the i-th member D i of the
linear sequence on B with respect to \Phi equals
with z independent of B.
Proof. By induction on i. The case
is trivial since D
The definition of D i in
terms of D 0
i in Definition 4.3 is clearly of the
consists of the four polynomials
We have
clearly of the form
some P 00 , and by induction, D i\Gamma1 is of the form
for some P 000 . By combining these expressions
using a tedious but straightforward substitution
process, we obtain the desired form for D i .
Theorem 4.6 For each linear query sentence Φ there is a linear query sentence Ψ such that for each database B, B ⊨ Φ if and only if B ⊨_adom Ψ, where adom denotes that the quantifiers in Ψ range over the active domain of the database only.
Proof. Let be the
linear sequence on B with respect to \Phi. By Theorem
3.2 and Proposition 4.4, we know that B
We can write the latter
explicitly as B
case
8 is similar). From Lemma 4.5 we know that
D 1 can be written as fp(y
Pg. So, equivalently, we have
where each (9y i ) ranges only over adom(B). By
replacing in a similar manner, we
obtain the desired sentence \Psi.
5 Generic queries
Two databases B and B 0 over the same relational
signature oe are called isomorphic if there is a
for each relation symbol S in oe. A
query which yields the same result on isomorphic
databases is called generic.
For example, assume that oe consists of a single
binary relation symbol S. Databases of type
oe can be viewed as finite directed graphs whose
nodes are real numbers. Of course, any query expressed
in the language L of pure first-order sentences
over oe (i.e., not containing any polynomial
inequalities) is generic. Other examples of generic
queries are "is the graph connected?" or "is the
number of edges even?".
In the language L_< consisting of those query sentences where all inequalities are of the simple form x < y, non-generic queries can be easily expressed, such as (∃x)(∃y)(S(x, y) ∧ x < y). As pointed out in the
Introduction, however, there are generic queries
expressible in L ! but not in L. We have been
able to prove that there is no similar gain in expressiveness
when moving from L ! to full linear
query sentences:
Theorem 5.1 For each linear query sentence Φ expressing a generic query there is a query sentence Ψ in L_< such that for each database B, B ⊨_adom Φ if and only if B ⊨_adom Ψ.
As in Theorem 4.6, adom denotes that quantifiers range over the active domain only; we know by Theorem 4.6 that this active-domain interpretation is without loss of generality.
Proof. (Sketch) We first observe:
Lemma 5.2 Let p be a linear polynomial on the variables y_1, …, y_k. There exists a real formula ψ_p(y_1, …, y_k) involving only simple inequalities of the form x < y, disjunction, conjunction, and negation, such that for each natural number s ≥ 1, each sufficiently large real number a > 0, and each tuple ȳ over the set {a, a^2, …, a^s}, p(ȳ) > 0 if and only if ψ_p(ȳ) is true. (As an aside, we note that it is possible to specialize Theorem 4.6 to query sentences in L_< using a different construction of domain sequence.)
This lemma is proven by noting that the application
of the (multivariate) polynomial p to
can be viewed as the
application of another, univariate polynomial to
a. In particular, for a sufficiently large, the sign of
the latter application is determined by the sign of
the leading coefficient. The difficulty to be overcome
is that this univariate polynomial depends
on the particular y However, it can be
seen that it depends essentially only on the way
how the y are ordered. We omit the details
Using the genericity of \Phi, we can now exploit
the above lemma to prove the theorem as follows.
Define \Psi to be the query sentence obtained by
replacing each polynomial inequality p ? 0 occurring
in \Phi by the corresponding formula / p as provided
by the lemma. Now let B a database, and
let s be the size of adom(B). Let a ? 0 be sufficiently
large and let ae be an order-preserving bijection
from adom(B) to fa; a g. Then we
have
\Psi. The first equivalence holds because
\Phi is generic, the second equivalence is obvious
from the lemma and the definition of \Psi, and
the third equivalence holds because ae is order-preserving
and \Psi 2 L ! (query sentences in L !
cannot distinguish between databases that are isomorphic
via an order-preserving bijection).
We can conclude that all generic queries that
are not expressible in L ! , like even cardinality
of a relation or connectivity of a graph, are
not expressible as a linear query either. Non-
expressibility in L ! has been addressed at length
by Grumbach and Su [9]. Grumbach, Su, and
Tollu [10] have also obtained inexpressibility results
for linear queries, using complexity argu-
ments. In particular, they showed that in the
context of the rationals Q rather than the reals
R, linear queries are in the complexity class AC^0,
while even cardinality and connectivity are not.
Acknowledgment
We are grateful to Bart
Kuijpers for his careful reading of earlier drafts
of the material presented in this paper.
--R
Foundations of Databases.
Universality of data retrieval languages.
Geometric reasoning with logic and algebra.
Constraint Logic Programming: Selected Re- search
G'eometrie Alg'ebrique R'eelle.
Computable queries for relational data bases.
Quantifier elimination for real closed fields by cylindrical algebraic decom- position
Finitely representable databases.
Linear constraint databases.
Domain independence and the relational calculus.
Constraint query languages.
On the expressive power of the relational calculus with arithmetic con- straints
--TR
--CTR
Leonid Libkin, A collapse result for constraint queries over structures of small degree, Information Processing Letters, v.86 n.5, p.277-281, 15 June
Gabriel M. Kuper , Jianwen Su, A representation independent language for planar spatial databases with Euclidean distance, Journal of Computer and System Sciences, v.73 n.6, p.845-874, September, 2007
Michael Benedikt , Martin Grohe , Leonid Libkin , Luc Segoufin, Reachability and connectivity queries in constraint databases, Proceedings of the nineteenth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.104-115, May 15-18, 2000, Dallas, Texas, United States
Michael Benedikt , Martin Grohe , Leonid Libkin , Luc Segoufin, Reachability and connectivity queries in constraint databases, Journal of Computer and System Sciences, v.66 n.1, p.169-206, 01 February
Michael Benedikt , Leonid Libkin, Relational queries over interpreted structures, Journal of the ACM (JACM), v.47 n.4, p.644-680, July 2000
Evgeny Dantsin , Thomas Eiter , Georg Gottlob , Andrei Voronkov, Complexity and expressive power of logic programming, ACM Computing Surveys (CSUR), v.33 n.3, p.374-425, September 2001 | linear arithmetic;first-order logic;relational databases;constraint programming |
291534 | An Asymptotic-Induced Scheme for Nonstationary Transport Equations in the Diffusive Limit. | An asymptotic-induced scheme for nonstationary transport equations with the diffusion scaling is developed. The scheme works uniformly for all ranges of mean-free paths. It is based on the asymptotic analysis of the diffusion limit of the transport equation.A theoretical investigation of the behavior of the scheme in the diffusion limit is given and an approximation property is proven. Moreover, numerical results for different physical situations are shown and the uniform convergence of the scheme is established numerically. | Introduction
. Transport equations are used to describe many physical phe-
nomena. Some of the best known examples are neutron transport, radiative transfer
equations, semiconductors or gas kinetics. The situation for small mean free paths
is mathematically described by an asymptotic analysis. Depending on the transport
equation and on the kind of scaling, different limit equations are obtained. For example
the gas kinetic equations may lead to Euler or (in)compressible Navier Stokes
equations. The limit equation for small mean free paths of radiative transfer, neutron
transport, or semiconductor equations is the diffusion and the drift-diffusion equation,
respectively. We refer to [3, 4, 12, 18, 20, 28] and [2, 6, 8, 10].
The main problem for numerical work on transport equations in these regimes
is the stiffness of the equations for small mean free paths. For standard numerical
schemes one has to use a very fine and expensive discretization with a discretization
size depending on the mean free path. Moreover, in general a full resolution of the
relaxation process is not necessary. The general aim is to develop numerical schemes
working uniformly for different regimes. In particular, the discretization size should
be independent of the mean free path. In recent years there has been a lot of work
on numerical methods for kinetic equations in stiff regimes. For example, stationary
transport equations in the diffusion limit have been considered, e.g., in [14, 15, 22, 21].
Nonstationary kinetic equations with a scaling leading to first order hydrodynamic
equations like the Euler equation are treated in [7, 9]. Usually for the latter case a
fractional step method with a semi-implicit treatment of the equations is used. For
general work on implicit methods for transport equations we refer to [27] and references
therein. We mention here also work on implicit methods for the full Boltzmann
equation, see [5]. Moreover, the relaxation limit of transport equations may be used
to develop schemes for the hydrodynamic equations themselves. These schemes have
been developed by many authors. For a recent general approach to these so called
relaxed or kinetic schemes we refer to [16].
The present work considers a scheme for nonstationary transport equations with
a scaling leading to the diffusion equation as the limit equation. The different space
time scalings involved in the problem are treated in a proper way. We use the standard
perturbation procedure leading from the transport to the diffusion equation.
Essentially the problem is transformed into a system of equations of relaxation form
and then a fractional step method is used. The analysis of the resulting problem is
based on ideas developed in [7]. Including the results of a boundary layer analysis in
the scheme, kinetic boundary layers are also treated in a correct way. Sections 2 and
3 contain a description of the results of the standard asymptotic procedure and the
presentation of the time discretization in our scheme. In Section 4 the diffusion limit
of the scheme is considered. In Section 5 the fully discretized equations are presented.
An approximation property for different ranges of the mean free path is proven in 6.
Section 7 contains numerical results for several examples and a numerical comparison
with other schemes.
Finishing the introduction we mention that the ideas developed in this paper can
be transfered to the gas kinetic and the semiconductor case, where the above scaling
leads in the limit to the incompressible Navier-Stokes equation and the drift-diffusion
equation respectively. In particular, in the gas dynamic case a more careful use has
to be made of the perturbation procedure leading from the Boltzmann equation to
the incompressible Navier-Stokes equation. This problem will be treated in a separate
paper.
2. The Equations. We consider transport equations of the following form
where S is assumed to be the unit sphere around 0 in R 3 . The collision operator Q is
defined by
with are some constants and the scattering cross
section oe is independent of v. K is an integral operator
Z
s symmetric in v and v 0 , rotationally invariant, are
some constants, and
R
1. K is compact. The collision operator
has as collision invariants only constants and is negative in a suitable function space.
The source term G(x) - 0 is assumed to be independent of v. Initial and boundary
conditions are given by
and
where @D is the boundary
of\Omega and the outer normal of
@\Omega at the point
x. See [3] for a thorough theoretical investigation of this equation. Extensions of the
following to other cases like, e.g., v- dependence of oe and G are possible.
Introducing the usual diffusion space-time scaling x ! x
the mean free path and scaling G(x) one obtains the scaled equations
With the standard perturbation procedure, see e.g. [3, 4, 12, 20] the limit equation
for (2.2) as ffl tends to 0 can be derived by writing f as
One obtains
and up to a constant
be the solution of
R
3. Since by assumption s is rotationally invariant, it follows [3] that 8i; j 2
R
is the solution of the diffusion equation
oe
Doing a boundary layer analysis, one observes that the correct zeroth order boundary
conditions for the diffusion equation are given by a kinetic half space problem: Let
be the bounded solution of the following halfspace problem at x
Then
Here - x (\Gamma1; t) is independent of v.
Remark: In the absorbing case the scaled equation (2.2) is changed into
where oe A is the absorption cross section. The diffusion equation turns into
oe
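Since the displayed equations of this section are garbled in the present copy, it may help to recall the standard forms they correspond to; the following is a reconstruction of the usual diffusion scaling and its limit, not a verbatim restatement of the original equations, and D denotes the constant symmetric diffusion matrix determined by the collision operator:

\partial_t f + \frac{1}{\varepsilon}\, v\cdot\nabla_x f
  = \frac{1}{\varepsilon^2}\, Q(f) + G(x),
\qquad
\partial_t u_0 = \nabla_x\cdot\Bigl(\frac{1}{\sigma}\, D\, \nabla_x u_0\Bigr) + G(x),

and, in the absorbing case,

\partial_t u_0 = \nabla_x\cdot\Bigl(\frac{1}{\sigma}\, D\, \nabla_x u_0\Bigr) - \sigma_A\, u_0 + G(x).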
3. The Numerical Scheme. For a numerical scheme for the transport equation
in the small mean free path limit it is desirable that varying mean free paths ffl can be
treated with a fixed discretization such that it is not necessary to adapt the time step
once the mean free paths tend to 0. Moreover, it is also desirable that the scheme is
in the limit ε → 0 a good discretization of the diffusion equation.
These points are obviously not fulfilled for a simple explicit time discretization of (2.2), since, as ε tends to 0, the time step must be shrunk due to stability considerations in order to treat the advection term (the CFL condition has to be fulfilled) and the
collision term properly. Therefore, large computation times are needed for small mean free paths for such a scheme. In contrast, for a fully implicit discretization
there is no restriction on the time step due to stability considerations. However, one
has to solve a stationary equation in every time step, which is again time consuming.
We mention that, due to the development of fast multigrid algorithms [19, 24, 25, 26],
for the stationary equation, computation times for a fully implicit scheme are strongly
reduced. A numerical comparison of these types of algorithms with the one developed
here is presented in Section 7.
The aim in this work is to develop a semi-implicit scheme treating only such terms
in an implicit way for which it is necessary to do so in order to obtain a scheme working
uniformly in ε. In particular, due to the different advection (1/ε) and collision (1/ε^2) scales, it is not clear in the original formulation (2.2) whether the advection has to be treated implicitly or not. One may nevertheless discretize the original equations in a straightforward way by treating the advection explicitly and the scattering term in
an implicit way:
This simple type of discretization has several drawbacks compared to the scheme
developed below; we discuss them at the end of Section 4.
We suggest to use the standard perturbation procedure to transform equation
(2.2) into two equations. A fractional step scheme with a semi-implicit procedure is
then used for the resulting equations. The idea is to follow the expansion procedure,
collect suitable terms together, such that only terms on
the scale 1
are involved.
be the solution of the set of equations
We take the initial and boundary values
and
One observes that f 0 fulfills the original equation (2.2) and the initial and
boundary conditions. It is therefore the desired solution of the original problem.
The results of the boundary layer analysis, see, e.g. [3], are included in the scheme
by choosing h in the following way:
be the solution of the halfspace problem (2.3). Since the outgoing
function at the boundary for the kinetic problem (2.2) is the same as the outgoing
solution of the half space problem for ffl tending to 0, we define
In the limit ffl tending to 0 we obtain in this way the correct boundary value. For
independent of v we get q(x; v; It is obviously not
reasonable to determine the outgoing function by solving the halfspace problem. This
would need too much computing time. Here a fast approximate scheme as in [11] or
[17] is needed to determine the outgoing function. For example a first approximation
is given by choosing simply an approximation ~
of the asymptotic value
of the halfspace problem as the outgoing function:
The simplest approximation of - x (\Gamma1; t)is given by equalizing the half range fluxes
of the halfspace problem at 0 and 1:
~
R
R
A more sophisticated approximation for q, see [17], is given by
~
R
R
(3.9)D4-
Z
(v
R
R
dv
and
Z
(w
We remark that a correct treatment of the boundary conditions is important, in
particular, if zeroth order kinetic boundary layers are present and one is using a
coarse spatial grid not resolving the layer. See Section 7 for some examples. Using
the approximations above one obtains a good approximation of the solution with a first
order boundary layer even if only a very coarse grid is used. The first approximation
yields in general already very good results as can be seen in the numerical experiments
in Section 7. However, in certain situations the use of the second approximation might
be necessary to obtain an improved accuracy, compare Figure 7.6 in Section 7.
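The explicit formulas (3.7)-(3.10) for the approximate outgoing function are not legible in this extraction. As an illustration only, one common flux-equalization approximation of the asymptotic value of a half-space problem, evaluated on a velocity quadrature, can be sketched as follows; whether this coincides exactly with (3.7) is an assumption, and the function name is ours.

    import numpy as np

    def asymptotic_value_flux_balance(v, w, k_in):
        """Match the incoming half-range flux at the boundary with the half-range flux
        of an isotropic state: q = int_{v>0} v k_in dv / int_{v>0} v dv.
        v, w are quadrature nodes/weights on [-1, 1]; k_in is the incoming data."""
        pos = v > 0
        return np.sum(w[pos] * v[pos] * k_in[pos]) / np.sum(w[pos] * v[pos])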
The system of equations (3.4,3.5) will be solved with a fractional step scheme:
Step 2:
For Step 1 an explicit discretization will be used, while Step 2 is discretized implicitly to
treat the stiffness of the equations in a correct way.
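The displayed update rules of the two steps are not recoverable from this extraction, so the following is only a structural sketch of the fractional-step loop; the concrete explicit and implicit updates of (3.4),(3.5) are left as user-supplied callbacks, and all names are ours.

    def fractional_step(f0, f1, dt, explicit_step, implicit_step, n_steps):
        """Generic fractional-step loop: Step 1 advances the non-stiff (advection-type)
        terms explicitly, Step 2 advances the stiff collision terms implicitly."""
        for _ in range(n_steps):
            f0, f1 = explicit_step(f0, f1, dt)   # Step 1: explicit part
            f0, f1 = implicit_step(f0, f1, dt)   # Step 2: implicit, stiff part
        return f0, f1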
Let Δt denote the time step and f_0^k, f_1^k the time iterates
approximating f_0(x, v, kΔt), f_1(x, v, kΔt). The initial and boundary values are given
as above. Introducing the notation
Z
the time discretization is then given by the following:
Step 2:
\Deltat
Rewriting (3.13,3.14) we obtain
\Deltat
\Deltat
and
\Deltat
\Deltat
This leads to
Step 2:
oe
where the operator A is defined by
\Deltatoe
\Deltatoe I
and
\Deltat
\Deltatoe I
Here I denotes the identity. The operator
\Deltatoe I
is positive and invertible with a bounded inverse in a suitable function space for all
since K is compact and I \Gamma K is positive having only the constants as
collision invariants.
For example in the case of one-group transport with K =!? we obtain
and
In this case the semi-implicit scheme reduces to a fully explicit one. In general, in
each time step we have to solve in Step 2 two linear Fredholm integral equations of the
second kind.
This may be achieved by standard methods [1, 13].
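A standard method in the sense of [1] is the Nyström approach: discretize the integral with the velocity quadrature and solve the resulting linear system. The sketch below treats a generic second-kind equation phi + c * K phi = g; the specific operator arising in Step 2 is one instance of this form, and the interface is an assumption of ours.

    import numpy as np

    def solve_fredholm_second_kind(kernel, g, v, w, c=1.0):
        """Nystrom discretization of phi(v) + c * int kernel(v, v') phi(v') dv' = g(v)
        on quadrature nodes v with weights w; kernel must accept broadcast arrays."""
        K = kernel(v[:, None], v[None, :]) * w[None, :]
        A = np.eye(len(v)) + c * K
        return np.linalg.solve(A, g)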
Remark: In the absorbing case one proceeds as described above treating the
absorption in an explicit way. Then one obtains
Step 2 is unchanged.
4. The Diffusion Limit. We start with the investigation of the behaviour of
the time discretized scheme as ffl tends to 0 for fixed \Deltat.
As the operators A and B have the following behaviour: Introducing
suitable spaces, e.g. L 2 (S), we have for \Deltat fixed and ffl small, since
positive the following:
\Deltatoe
and
\Deltatoe
Using these estimates we get that the scheme reduces in the diffusion limit, ffl
tending to 0, to the following
Step 2:
oe
Moreover, we have
and
where h was defined in Section 2. This yields
Step 2:
oe
Considering Step 2 and Step 1 together we obtain for
oe
oe
or
\Theta
oe
This is the simplest explicit time discretization for the diffusion equation. The
boundary conditions for the diffusion equation that are given in the limit by the
solution of the halfspace problem (2.3) fit to the boundary conditions for the kinetic
scheme as defined in the last section.
We finish this section by comparing the above scheme with the scheme (3.3) in
Section 3. Doing the standard asymptotic analysis [21] we get for (3.3) as ffl ! 0
oe
This means we obtain an explicit discretization of the diffusion equation as for the
above scheme, but due to the ! f
it is not the usual one. This type of
discretization of the diffusion equation is worse in terms of accuracy and stability than
(4.4). For example, doing a stability analysis one observes that only time steps are
allowed which are half the size of those that can be used in (4.4). This is essentially
due to the fully explicit treatment of the advection term in (3.3). Moreover, the
scheme developed in Section 3 gives the possibility to treat for example the collision
terms in a semi-implicit way as given in (3.13,3.14). This is at least for one-group
transport with K =!? a decisive advantage, since the semi-implicit scheme presented
here reduces in this case to a fully explicit one. If one tried to do the same
based on the scheme (3.3), it would turn out that the limit equation would no longer be
the diffusion equation.
5. The Fully Discretized Equations. We restrict ourselves from now on, for notational
simplicity, to the case where f_0 and f_1 depend only on the first space coordinate: the
domain under consideration is [0, L]. Moreover, we consider only the space discretization.
The velocity space can be treated by using standard discretizations, see, e.g.,
[23].
We define a staggered grid x
\Deltax
, and x i\Gamma 1=
We use the notation
and
Moreover the operators D + and D \Gamma are defined as
The discretization of the initial values is straightforward. The boundary conditions
are discretized by
Discretizing the f 0 derivative in (3.16) with D \Gamma and the f 1 derivative in (3.11) with
D+ yields the following scheme
Step 2:
oe
In the limit for small ffl we obtain the space discretized diffusion equation
oe
\Deltat
oe
or
\Theta
\Deltat
oe
This is a standard explicit discretization of the diffusion equation. In particular,
independent of the size of the discretization Δx, we obtain a good discretization of
the limit equation for all ranges of the mean free path. The discretization possesses
all diffusion limits, the so-called thin, intermediate and thick diffusion limits, see [22].
We observe that in the limit we need a relation like Δt ≲ (Δx)² σ (5.6),
as for the diffusion equation, to obtain positivity and stability of our scheme. This
condition may be relaxed for large ε.
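For illustration, one explicit update of the limiting diffusion equation together with the parabolic stability check might look as follows; the diffusion coefficient 1/(3σ) and the constant in the check are the standard one-group values and should be read as assumptions here, not as values taken from the garbled formulas.

    import numpy as np

    def explicit_diffusion_step(rho, dt, dx, sigma):
        """One explicit step for rho_t = (1/(3*sigma)) * rho_xx with fixed Dirichlet
        boundary values; dt must satisfy the diffusive CFL-type restriction."""
        D = 1.0 / (3.0 * sigma)
        assert dt <= 0.5 * dx**2 / D, "time step violates dt <= (dx)^2 * 3*sigma/2"
        new = rho.copy()
        new[1:-1] = rho[1:-1] + dt * D * (rho[2:] - 2.0 * rho[1:-1] + rho[:-2]) / dx**2
        return new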
6. A Uniform Approximation Property. In this section we prove a uniform
approximation property of our scheme. We give an estimate for the consistency error,
considering the integral form of equations (3.4,3.5) assuming that the true solution is
smooth.
Written in integral form the equations for f 0 (t) and f 1 (t) are for the Cauchy
problem and one space dimension
]ds:
Approximating the integral by an integral over step functions defined in each interval
of length \Deltat and approximating the derivative with respect to x as before, we get for
oe
oe
where we defined f (k)
Reconsidering equations (5.1,5.2) and (5.3,5.4) for Step 1 and Step 2 of the numerical
scheme and putting the steps together, we get for one time step
or
A
or
A
oe
oe
+\Deltat
A
and a similar formula for f n
1 .
In the following we want to estimate the consistency error, i.e. the difference
between the true solution (f of the integral equation (6.1,6.2) and the value
1 ) that is obtained by introducing the true solution (f
instead of (f k
into the above formula (6.3).
This means we estimate the difference between (f 0 (t); f 1 (t)) and ( ~
~
A
oe
oe
+\Deltat
A
and a similar formula for ~
1 . Restricting in the proof for simplicity to the case
I , we concentrate in the following on proving a
pointwise estimate for
The proof is based on four lemmas.
Lemma 6.1.
where C is a constant independent of ffl.
Proof.
Z (k+1)\Deltat
k\Deltat
oe
oe
oe
]ds
oe
oe
Z (k+1)\Deltat
k\Deltat
oe
\Deltax)ds
oe
oe
k\Deltat
oe
ds
where
oe
oe
oe
\Deltax
oe
used. Since the second term is 0, this is
smaller than
Lemma 6.2.
A
where C is a constant independent of ffl.
Proof.
A
A
with
1. This is equal to
C \Deltat
since A. The estimates to prove the next Lemma can be found in [7]
Lemma 6.3.
with In particular for one has
Proof.
We have
This gives
\Deltat
Moreover, since
\Deltatoe
\Deltatoe
\Deltatoe
we get
\Deltatoe
\Deltatoe
\Deltatoe
Estimates (6.4) and (6.5) give the first assertion. To prove the second assertion we
use the first one with
However, this is smaller than C \Deltat, since, if \Deltatoe
2 we have that it is smaller than
if \Deltatoe
that it is smaller than
(noe \Deltat
(noe \Deltat
Lemma 6.4.
\Deltat
where C is a constant independent of ffl.
Proof.
A
oe
oe
The first two terms are estimated by Lemma 2 and Lemma 3. They are smaller than
C \Deltat:
The third term is for ffl - ffl 0 smaller than
Using I the second term on the right hand side in
(6.6) is smaller than
\Deltatoe
\Deltatoe
\Deltatoe
\Deltatoe
\Deltat
\Deltat
Again due to Lemma 3 with k substituted by the first term in (6.6) is smaller
than
\Deltatoe
\Deltatoe
\Deltatoe
\Deltatoe
\Deltatoe
since x). For \Deltatoe
smaller than
For \Deltatoe
smaller than
\Deltatoe
\Deltatoe
\Deltatoe
\Deltatoe
\Deltatoe
due to
1. This means that the first term in (6.6) is smaller
than
\Deltat
\Deltat
with C independent of ε. Collecting all the terms, the lemma is proven. Altogether,
using Lemmas 1 and 4 we have proven, for t ∈ [0, T],
\Deltat
This means that for small Δt, Δx and fixed ε, the estimate tends to 0 with Δt and Δx.
However, also for a mesh size that is large compared to ε the estimate shows that
we get convergence to 0. For example, for ε ≲ C Δt we obtain convergence to 0 in Δt and Δx.
We mention that Δt has to be chosen in relation to Δx; e.g., in the diffusion
limit we need Δt to be of the order of (Δx)², as we have seen in the last section.
7. Numerical Results and Examples. In this section a numerical study of
the scheme is presented and the scheme is compared with fully explicit and fully
implicit schemes.
We restrict ourselves to the one-group transport equation in slab geometry, i.e. x ∈ [0, L]
and K = ⟨·⟩, the mean value over the velocity space; the diffusion coefficient in the
limit equation is then 1/(3σ). The velocity discretization is done in all
situations using a 16-point quadrature set.
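A 16-point Gauss-Legendre rule is one standard realization of such a quadrature set; whether the paper uses exactly this set or a double-Gauss variant cannot be seen from the extraction, so the snippet below is illustrative.

    import numpy as np

    # discrete-ordinates velocity grid on [-1, 1]
    v_nodes, v_weights = np.polynomial.legendre.leggauss(16)
    mean_value = lambda f: 0.5 * np.sum(v_weights * f)   # K f = <f>, the average over v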
We compute the solution with the semi-implicit scheme derived above for different
space discretizations. To obtain positivity and stability of the semi-implicit scheme
in the limit ε tending to 0 one has to take - for a fixed space discretization Δx - a
time step Δt of the size given by (5.6). As mentioned above, this can be relaxed for
large ε. In particular, this means that the size of Δt can be chosen independent of ε.
Comparison with the explicit scheme (3.1):
In contrast to the above, the explicit discretization (3.1) of equation (2.2)
requires a time step of the order min(Δx ε, ε²/σ) to obtain positivity and stability. In
particular, for small ε the step size Δt has to be chosen in this case of the order ε²,
in contrast to the semi-implicit scheme. A comparison of the CPU time necessary for
one time step yields that the semi-implicit scheme needs about 2 times the CPU time
of the explicit scheme. This yields a big gain in computing time for small ε for the
semi-implicit scheme compared to an explicit one. In particular, it is reasonable to use
the semi-implicit scheme, if 2 \Delta
oe
2 oe and if the desired
accuracy does not require a smaller time step, than the one that can be taken for the
semi-implicit scheme. To obtain a certain required accuracy of the solution one has to
use time steps as shown in Table 1 for some examples. Looking at Table 1
one observes that using an explicit scheme is not reasonable for small ε.
Either the semi-implicit or the implicit scheme is faster. However, this changes for
large ε, where the explicit scheme may be better due to the small computation time
per time step.
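The trade-off described above can be turned into a rough cost model; all constants and the function interface below are illustrative assumptions, not measured values from the paper.

    def relative_cost_estimate(eps, dx, sigma=1.0, T=1.0, cost_ratio=2.0):
        """Crude comparison: the explicit scheme (3.1) needs dt ~ min(dx*eps, eps**2/sigma),
        the semi-implicit scheme dt ~ sigma*dx**2, and one semi-implicit step costs about
        cost_ratio explicit steps.  A return value < 1 favours the semi-implicit scheme."""
        steps_explicit = T / min(dx * eps, eps**2 / sigma)
        steps_semi = T / (sigma * dx**2)
        return (cost_ratio * steps_semi) / steps_explicit

    # e.g. relative_cost_estimate(1e-4, 0.1) is about 2e-6: for small eps the
    # semi-implicit scheme wins by many orders of magnitude.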
Comparison with the fully implicit scheme (3.2):
A fully implicit discretization of the equation obviously allows bigger time steps, since
there is no stability restriction on the time step in this case. Nevertheless, for the
accurate simulation of the time development small time steps may be necessary. To
get an accurate resolution of the behaviour of the solution up to an error of a certain
order the size of the time step for the implicit scheme has to be chosen according to
Table 1.
An implementation of a fully implicit scheme shows that in order to obtain a
sufficient accuracy the stationary equation has to be evaluated to a very high accuracy
approximately up to an error of the order 10^-8. To achieve this a standard iteration
scheme using for example a diamond difference discretization needs a large number of
iteration steps (sweeps over the computational domain). A comparison of the CPU
time for one iteration step shows that one time step of the semi-implicit iteration
needs less than 2 times the CPU-time of an iteration of the stationary scheme. Table
2 shows that the semi-implicit scheme has a big advantage compared to a standard
implicit iteration in many situations.
However, of course, computation times for an implicit scheme are strongly reduced
if a multigrid algorithm as described, e.g., in [24] is used. Using the convergence
estimates in [24] one observes that essentially two cycles give an accuracy of the
order of the one needed for the solution of the stationary equation. One V(1,1) cycle
costs about the same CPU time as 4 sweeps over the computational domain. That is, the
estimated cost for one time step of a fully implicit scheme with a multigrid algorithm
is about 4 times as large as the one for the semi-implicit scheme. The complexity of
the implementation of a multigrid scheme especially in higher dimensions has to be
taken into consideration as well.
We consider a situation with boundary conditions equal to 0 at x = 0 and equal to 1 at
x = 1. The space discretizations are Δx = 0.1 and Δx = 0.01. The time steps required for
the semi-implicit scheme by stability considerations are in this case Δt = 0.015 and
Δt = 0.00015, respectively. We consider final times up to t = 0.5; for the largest time a
stationary state is nearly reached. The error was calculated by taking the L¹-norm of the
difference with the 'true' solution computed with a very fine discretization. The table
shows the time steps necessary to obtain a certain accuracy e with the implicit scheme
using a diamond difference discretization.
Table 1: Time steps required to obtain a certain accuracy e.
These accuracy requirements together with the above estimated CPU time give
the following relation between the CPU time for the explicit (E), the semi-implicit (S)
Fig. 7.1. Error for different values of ε.
and the implicit scheme with multigrid (IM) and with a standard iteration procedure:
Table 2: Relative CPU times.
This shows that for coarse grids the semi-implicit scheme is to be preferred. For
finer grids and nearly stationary situations the advantage of a fully implicit scheme
with multigrid is clearly seen. Implicit schemes with a standard iteration procedure
are in all considered situations slower than the semi-implicit scheme.
Further investigation of the semi-implicit scheme:
To show numerically the uniform convergence in ε of the semi-implicit scheme, we
compute the error for a wide range of values of ε. As before, we use boundary conditions
equal to 0 and 1 at x = 0 and x = 1, respectively, and several values of the space
discretization Δx with the corresponding Δt values due to the stability condition (5.6).
Hence, each cell contains between 0.125 and 10^5 mean free paths. The error was
calculated by taking the L¹-norm of the difference of the solutions with discretization
sizes 0.0125 and 0.025 (error 3), 0.025 and 0.05 (error 2), and 0.05 and 0.1 (error 1),
respectively. This results in three curves, which are plotted in Figure 7.1. Looking at the
figure, one observes that the error behaves perfectly uniformly as ε → 0.
In the following, the solution of the kinetic equation is computed with the new scheme
derived in this work for different physical situations. The physical
examples under consideration are:
Example 1: the boundary conditions
Example 2: the boundary conditions
are equal to 0 and
Fig. 7.2. Example 1: 'semi-implicit25' and reference ('true') solutions, plotted over x.
Example 3: conditions equal to 0,
Example 4: boundary conditions
0:4. The solution of this problem
has a kinetic boundary layer at
Example 5: As Example 4, but with
Example 6: a two-material problem. In [0, 0.1] we consider a
purely absorbing material with 0:1. I.e. the region
has the size of one mean free path. In [0:1; 1:1] we take a purely scattering material
region). The
boundary conditions are f(0; The solution of this
problem has an interface layer at
The initial condition is always 0.
The solutions for the physical situations described above are plotted in the following
figures. In Figures 7.2 to 7.4 the situations from Examples 1 to 3 are shown.
The solutions are plotted using different space discretizations for the
semi-implicit scheme. We use the label 'semi-implicit10' to denote the solution obtained
with the semi-implicit scheme with 10 spatial cells. The time discretization is chosen
according to the stability condition (5.6) for Examples 2 and 3. For Example 1 the
restriction on the time step is relaxed to a CFL-type condition. The reference solution
is the solution with a very fine discretization. For this case the solutions of the
semi-implicit scheme and of the other schemes coincide. The solution of the diffusion
equation is computed by the usual triangular explicit scheme, which is the limiting
scheme of our semi-implicit scheme as ε tends to 0, compare (4.3). The examples show
that for isotropic boundary conditions the solution is approximated with good accuracy
for different ranges of ε.
In Figure 7.5 Example 4 is considered. We plot the reference solution
and the solution of the diffusion equation with boundary coefficients derived from the
halfspace problem. The solutions of the semi-implicit scheme are computed with 10
spatial cells, so that a discretization cell contains several mean free paths, and with the
corresponding size of the time discretization. The boundary values are found by
determining approximately the outgoing distribution of the halfspace problem (2.3) as
described in Section 3.
Fig. 7.3. Example 2: reference ('true') and diffusion solutions, plotted over x.
Fig. 7.4. Example 3: 'semi-implicit25' and reference ('true') solutions, plotted over x.
This is done using first the approximation of the asymptotic value of the halfspace
problem by (3.7), as the outgoing function (the solution in the plot is labeled 'semi-
implicit10-1') and second an outgoing function determined by formula (3.10) labeled
'semi-implicit10-2'. In this first case the two approaches give coincident results. One
observes that even for a coarse diffusive discretization the behaviour of the solution at
the boundary is found with very good accuracy. We mention that other approaches to
obtain the correct discrete boundary conditions for the stationary equation are shown
in [14, 21].
Figure
7.6 shows Example 5. The same as in Figure 7.5 is shown. However, in
this case one cell contains now 1000 mean free paths. The advantage of using here
a more exact approximation of the outgoing function of the half space problem is clearly
seen.
Finally Figure 7.7 shows Example 6. The space discretization is here
in the absorbing region and \Deltax = 0:1 in the scattering region. In particular, one cell
in the scattering region contains 100 mean free paths. The situation at the interface
Fig. 7.5. Example 4: reference ('true'), semi-implicit (10 cells), and diffusion solutions, plotted over x.
Fig. 7.6. Example 5: reference ('true') and 'semi-implicit10-2' solutions, plotted over x.
is treated in the same way as the one at the boundaries before. One observes again
a good agreement of the solution in the diffusive region with the true solution. We
mention here the work of [14, 18, 21] who treated similar problems for the stationary
equation.
Fig. 7.7. Example 6 (two-material problem): reference ('true') solution, plotted over x.
8. Conclusions. From the analytical and numerical results one can conclude:
- The semi-implicit scheme works uniformly for all ranges of the mean free path. This is
shown by numerical experiments and a consistency proof.
- The limiting scheme for small mean free paths is a standard explicit discretization of
the diffusion equation.
- By including a boundary layer analysis one obtains a suitable treatment of the boundary
conditions for coarse (diffusive) discretizations.
- A comparison of the scheme with fully explicit and fully implicit schemes shows
advantages and disadvantages. In particular, the semi-implicit scheme is faster than the
fully implicit scheme if the detailed time development is computed with a coarse
discretization or with higher accuracy requirements. However, for nearly stationary
situations with a fine grid the fully implicit scheme, if combined with a fast multigrid
method as in [24], is faster.
- The numerical results have been generated for the one-group transport case. A further
numerical treatment should include the implementation of the scheme with other
scattering ratios. Using methods as in [1, 13] this should be possible without too much
difficulty.
--R
A Survey of Numerical Methods for the solution of Fredholm Integral Equations of the Second Kind
Fluid dynamic limits of kinetic equations: Formal derivations
Diffusion approximation and computation of the critical size
Boundary layers and homogenization of transport processes
Implicit and iterative methods for the Boltzmann equation
The fluid dynamical limit of the nonlinear Boltzmann equation
Uniformly accurate schemes for hyperbolic systems with relaxation
The Boltzmann Equation and its Applications
Numerical passage from kinetic to fluid equations
Incompressible Navier Stokes and Euler limits of the Boltzmann equation
A numerical method for computing asymptotic states and outgoing distributions for kinetic linear half space problems
Uniform asymptotic expansions in transport theory with small mean free paths
Theorie und Numerik
The relaxation schemes for systems of conservation laws in arbitrary space dimensions
Diffusion theory as an asymptotic limit of transport theory for nearly critical systems with small mean free paths
Asymptotic solution of neutron transport problems for small mean free path
Asymptotic solution of numerical transport problems in optically thick
Asymptotic solution of numerical transport problems in optically thick
Computational Methods of Neutron Transport
A fast multigrid algorithm for isotropic transport problems I: Pure scattering
A fast multigrid algorithm for isotropic transport problems II: With absorption
Multilevel methods for transport equations in diffusive regimes
Methods of Numerical Mathematics
Diffusion approximation of the linear semiconductor equation
--TR | asymptotic analysis;diffusion limit;transport equations;numerical methods for stiff equations |
291683 | On Residue Symbols and the Mullineux Conjecture. | This paper is concerned with properties of the Mullineux map, which plays a role in p-modular representation theory of symmetric groups. We introduce the residue symbol for p-regular partitions, a variation of the Mullineux symbol, which makes the detection and removal of good nodes (as introduced by Kleshchev) in the partition easy to describe. Applications of this idea include a short proof of the combinatorial conjecture to which the Mullineux conjecture had been reduced by Kleshchev. | Introduction
It is a well known fact that for a given prime p the p-modular irreducible
representations D - of the symmetric group S n of degree n are labelled in a
canonical way by the p-regular partitions - of n. When the modular irreducible
representation D - of S n is tensored by the sign representation we get a new
modular irreducible representation D - P . The question about the connection
between the p-regular partitions - and - P was answered in 1995 by the proof
of the so-called "Mullineux Conjecture".
The importance of this result lies in the fact that it provides information about
the decomposition numbers of symmetric groups of a completely different kind
than was previously available. Also it is a starting point for investigations
on the modular irreducible representations of the alternating groups. From
a combinatorial point of view the Mullineux map gives a p-analogue of the
conjugation map on partitions. The analysis of its fixed points has led to some
interesting general partition identities [AO], [B].
The origin of this conjecture was a paper by G. Mullineux [M1] from 1979,
when he defined a bijective involutory map - M on the set of p-regular
partitions and conjectured that this map coincides with the map - P . The
statement is the Mullineux conjecture. To each p-regular partition
Mullineux associated a double array of integers, known now as the Mullineux
symbol and the Mullineux map is defined as an operation on these symbols.
The Mullineux symbol may be seen as a p-analogue of the Frobenius symbol
for partitions.
Before the proof of the Mullineux conjecture many pieces of evidence for it had
been found, both of a combinatorial as well as of representation theoretical na-
ture. The breakthrough was a series of papers by A. Kleshchev [K1], [K2],
[K3] on "modular branching", i.e. on the restrictions of modular irreducible
representations from S n to S n\Gamma1 . Using these results Kleshchev [K3] reduced
the Mullineux conjecture to a purely combinatorial statement about the compatibility
of the Mullineux map with the removal of "good nodes" (see below).
A long and complicated proof of this combinatorial statement was then given
in a paper by Ford and Kleshchev [FK].
In his work on modular branching Kleshchev introduced two important notions,
normal and good nodes in p-regular partitions. Their importance has been
stressed even further in recent work of Kleshchev [K4] on modular restriction.
Also these notions occur in the work of Lascoux, Leclerc and Thibon on Hecke
Algebras at roots of unity and crystal bases of quantum affine algebras [LLT]; it
was discovered that Kleshchev's p-good branching graph on p-regular partitions
is exactly the crystal graph of the basic module of the quantized affine Lie
algebra U_q(ŝl_p) studied by Misra and Miwa [MM].
From the above it is clear that a better understanding of the Mullineux symbols
is desirable including their relation to the existence of good and normal nodes in
the corresponding partition. In the present paper this relation will be explained
explicitly. We introduce a variation of the Mullineux symbol called the residue
symbol for p-regular partitions. In terms of these the detection of good nodes
is easy and the removal of good nodes has a very simple effect on the residue
symbol. In particular this implies a shorter and much more transparent proof
of the combinatorial part of the Mullineux conjecture with additional insights
(Section 4). We also note that the good behaviour of the residue symbols with
respect to removal of good nodes allows us to give an alternative description of the
p-good branching graph, and thus of the crystal graph mentioned above. Some
further illustrations of the usefulness of residue symbols are given in Section 3.
This includes combinatorial results on the fixed points of the Mullineux map.
Basic definitions and preliminaries
Let p be a natural number.
Let - be a p-regular partition of n. The p-rim of - is a part of the rim of -
([JK], p. 56), which is composed of p-segments. Each p-segment except possibly
the last contains p points. The first p-segment consists of the first p points of
the rim of -, starting with the longest row. (If the rim contains at most p
points it is the entire rim.) The next segment is obtained by starting in the
row next below the previous p-segment. This process is continued until the
final row is reached. We let a 1 be the number of nodes in the p-rim of - (1)
and let r 1 be the number of rows in -. Removing the p-rim of - (1) we get a
new p-regular partition - (2) of n \Gamma a 1 . We let a be the length of the p-rim
and the number of parts of - (2) respectively. Continuing this way we get a
sequence of partitions -
and a corresponding Mullineux symbol of -
a 1 a 2 \Delta \Delta \Delta am
The integer m is called the length of the symbol. For p ? n, the well-known
Frobenius symbol F (-) of - is obtained from G p (-) as above by
As usual, here the top and bottom line give the arm and leg lengths of the
principal hooks.
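The construction of the p-rim and of the Mullineux symbol described above can be sketched as follows; partitions are represented as weakly decreasing lists of positive parts, and the function names are ours rather than notation from the paper.

    def p_rim(la, p):
        """Nodes (row, col), 1-based, of the p-rim of the p-regular partition la."""
        rows = len(la)
        rim_rows = []
        for i in range(rows):
            nxt = la[i + 1] if i + 1 < rows else 0
            rim_rows.append([(i + 1, j) for j in range(la[i], max(nxt, 1) - 1, -1)])
        rim, start = [], 0
        while start < rows:
            segment, r = [], start
            while r < rows and len(segment) < p:
                for node in rim_rows[r]:
                    segment.append(node)
                    if len(segment) == p:
                        break
                if len(segment) < p:
                    r += 1
            rim.extend(segment)
            start = r + 1            # next p-segment starts in the row below
        return rim

    def mullineux_symbol(la, p):
        """Top row a_i (p-rim lengths) and bottom row r_i (numbers of parts) of G_p(la)."""
        a, r, la = [], [], list(la)
        while la:
            rim = p_rim(la, p)
            a.append(len(rim))
            r.append(len(la))
            removed = {}
            for (i, j) in rim:
                removed[i] = removed.get(i, 0) + 1
            la = [x for x in (la[i] - removed.get(i + 1, 0) for i in range(len(la))) if x > 0]
        return a, r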
It is easy to recover a p-regular partition - from its Mullineux symbol G p (-).
Start with the hook - (m) , given by am backwards. In placing
each p-rim it is convenient to start from below, at row r i . Moreover, by a slight
reformulation of a result in [M1], the entries of G p (-) satisfy (see [AO])
we call the corresponding
column
\Delta of the Mullineux symbol a singular column, otherwise the column
is called regular.
If G p (-) is as above then the Mullineux conjugate - M of - is by definition the
p-regular partition satisfying
a 1 a 2 \Delta \Delta \Delta am
In particular, for p ? n, this is just the ordinary conjugation of partitions.
Example. Let
(In both cases the nodes of the successive 5-rims are numbered 1; 2; 3; 4).
Thus
Now let p be a prime number and consider the modular representations of S n
in characteristic p; note that for all purely combinatorial results the condition
of primeness is not needed.
The modular irreducible representations D - of S n may be labelled by p-regular
partitions - of n, a partition being p-regular if no part is repeated p (or more)
times ([JK], 6.1); this is the labelling we will consider in the sequel.
Tensoring the modular representation D^λ of S_n with the sign representation of S_n,
we get another modular irreducible representation, labelled by a p-regular
partition λ^P. Mullineux then conjectured [M1]:
Conjecture. For any p-regular partition λ of n we have λ^P = λ^M .
If - is a p-regular partition we let as before
a 1 a 2 \Delta \Delta \Delta am
denote its Mullineux symbol. We then define the Residue symbol R p (-) of -
as
oe
where x j is the residue of am+1\Gammaj \Gamma r m+1\Gammaj modulo p and y j is the residue
of p. Note that the Mullineux symbol G p (-) can be recovered
from the Residue symbol R p (-) because of the strong restrictions on
the entries in the Mullineux symbol. Also, it is very useful to keep in mind
that for a residue symbol there are no restrictions except that
(which would correspond to starting with the p-singular partition (1 p )). We
also note that a column
\Delta in R p (-) is a singular column in G p (-) if and only
and R 5
oe
Also for the residue symbol of a p-regular partition we have a good description
of the residue symbol of its Mullineux conjugate; this is just obtained by
translating the definition of the Mullineux map on the Mullineux symbol to
the residue symbol notation.
Lemma 2.1 Let the residue symbol of the p-regular partition - be
oe
Then the residue symbol of - M is
ae
oe
where
Notation. We now fix a p-regular partition -. Then e
- denotes the partition
obtained from - by removing all those parts which are equal to 1. We will
assume that - has d such parts, 0 - d - - be the
partition obtained from - by subtracting 1 from all its parts. We say that - is
obtained by removing the first column from -. Unless otherwise specified we
assume that the residue symbol R p (-) for - is as above.
For later induction arguments we formulate the connection between the residue
symbols of - and -. First we consider the process of first column removal; this
is an easy consequence of Proposition 1.3 in [BO] and the definition of the
residue symbol.
Lemma 2.2 Suppose that
oe
Then
ae x 0
oe
Here y 0 is defined to be 1 and the - j 's are defined by
Moreover, if x then the first column in R p (-) (consisting of x 0
1 and y 0
omitted.
Remark 2.3 In the notation of Lemma 2.2 the number d of parts equal to 1
in - is determined by the congruence
Moreover since r 1 is the number of parts of - and y is clear that
y m is the p-residue of the lowest node in the first column of -.
Next we consider the relationship between - and - from the point of adding a
column to -; this follows from Proposition 1.6 in [BO].
Lemma 2.4 Suppose that
ae x 0
oe
Then
oe
Here x
and the - 0
j 's are defined by
Moreover, if y 0
then the first column in R p (-) (consisting of
Remark 2.5 In the notation of Lemma 2.2 and Lemma 2.4 we have
Indeed,
(by definition of - j )
(by definition of - 0
3 Mullineux fixed points in a p-block
The p-core - (p) of a partition - is obtained by removing p-hooks as long as
possible; while the removal process is not unique the resulting p-regular partition
is unique as can most easily be seen in the abacus framework introduced
by James. The reader is referred to [JK] or [O1] for a more detailed introduction
into this notion and its properties. We define the weight w of λ by |λ| = |λ_(p)| + wp.
The representation theoretic significance of the p-core is the fact that it determines
the p-block to which an ordinary or modular irreducible character
labelled by - belongs. The weight of a p-block is the common weight of the
partitions labelling the characters in the block.
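A sketch of computing the p-core and the weight via first-column hook lengths (beta-numbers), i.e. the abacus framework mentioned above: beads on runner r sit at positions r, r + p, r + 2p, ..., and removing p-hooks amounts to sliding every bead to the lowest free positions on its runner. The function name is ours.

    def p_core_and_weight(la, p):
        """p-core and p-weight of a partition la (weakly decreasing list of parts)."""
        n = len(la)
        beta = [la[i] + (n - 1 - i) for i in range(n)]          # distinct beta-numbers
        new_beta = []
        for r in range(p):
            k = sum(1 for b in beta if b % p == r)               # beads on runner r
            new_beta.extend(r + j * p for j in range(k))         # slide them down
        new_beta.sort(reverse=True)
        core = [b - (n - 1 - i) for i, b in enumerate(new_beta)]
        core = [x for x in core if x > 0]
        return core, (sum(la) - sum(core)) // p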
be a partition of n. Then
is the Young diagram of -, and (i; called a node of -. If
is a node of - and Y (-) n f(i; j)g is again a Young diagram of a partition, then
A is called a removable node and - n A denotes the corresponding partition of
Similarly, if IN is such that Y (-) [ f(i; j)g is the Young
diagram of a partition of n is called an indent node of - and the
corresponding partition is denoted - [ A.
The p-residue of a node A = (i; j) is defined to be the residue modulo p of
denoted res p). The p-residue diagram of - is obtained by writing
the p-residue of each node of the Young diagram of - in the corresponding
place.
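The residue of the node (i, j) is (j - i) mod p (the definition above is garbled in this extraction, but this is the standard convention and is consistent with the later remark that north and east steps increase the residue by 1). A small sketch computing the residue diagram and the p-content defined in the next paragraph:

    def p_residue_diagram(la, p):
        """Residue diagram of la, row by row, and the p-content (c_0, ..., c_{p-1})."""
        diagram = [[(j - i) % p for j in range(1, la[i - 1] + 1)]
                   for i in range(1, len(la) + 1)]
        content = [0] * p
        for row in diagram:
            for r in row:
                content[r] += 1
        return diagram, content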
The p-content of a partition - is defined by counting the
number of nodes of a given residue in the p-residue diagram of -, i.e. c i is the
number of nodes of - of p-residue i. In the example above, the p-content of -
is
It is important to note that the p-content determines the p-core of a partition.
This can be explained as follows. First, for given
the associated ~n-vector by
for any vector
there is a unique p-core - with this ~n-vector ~n associated to its p-content c(-)
(for short, we also say that ~n is associated to -.) We refer the reader to [GKS]
for the description of the explicit bijection giving this relation. From [GKS]
we also have the following
Proposition 3.1 Let - be a p-core with associated ~n-vector ~n. Then
in i
with
How do we obtain the ~n-vector associated to - from its Mullineux or residue
symbol? This is answered by the following
Proposition 3.2 Let - be a p-regular partition with Mullineux symbol resp.
residue symbol
a 1 a 2 \Delta \Delta \Delta am
and R p
oe
Then the associated ~n-vector
Proof. In the residue symbol, singular columns do not contribute to the n-vector
as they contain the same number of nodes for each residue. So let us
consider a regular column
y
a
r
\Delta in the Mullineux symbol and the corresponding
p-rim in the p-residue diagram. In this case, the contribution only
comes from the last section of the p-rim. The final node is in row r and column
1 so its p-residue is What is the p-residue of the top node
of this rim section? The length of this section is j a (mod p), hence we have
to go j a \Gamma 1 steps from the final node of residue y to the top node of the
section, which hence has p-residue j y
(going a north or east step always increases the p-residue by 1!). Thus going
along the residues in the last section we have a strip
the contribution of the intermediate residues to the ~n-vector cancel out, and
we only have a contribution 1 for n x and \Gamma1 for n y\Gamma1 , which proves the claim. 2
First we use the preceding proposition to give a short proof of a relation already
noticed by Mullineux [M2]:
Corollary 3.3 Let - be a p-regular partition. Then
Proof. Let the residue symbol of - be R p
oe
So by Lemma 2.1 we have R p (- M
ae
oe
with
Now we consider the contributions of the entries in the residue symbol to the
~n-vectors. If x then we get a contribution 1 to
and \Gamma1 to n k (-) on the one hand, and a contribution 1 to n \Gamma(k+1) (- M )
and \Gamma1 to n \Gamma(j+1) (- M ) on the other hand. If x then from column
i in the residue symbol we get neither a contribution to ~n(-) nor to ~n(- M ).
Hence \Gamman \Gamma(j+1) (-) for all j, i.e. if
Now let be the p-content of -, then c(- 0
and hence
(\Gamman
Now we turn to Mullineux fixed points.
Proposition 3.4 Let p be an odd prime and suppose that - is a p-regular
partition with - M . Then the representation D - belongs to a p-block of
even weight w.
Proof. If - M , then its Mullineux symbol is of the form
a 1 a 2 \Delta \Delta \Delta am
where as before " and where a i is even if and
only if pja i .
Now by Proposition 3.1 we have
is the ~n-vector associated to - and ~
1). By Proposition 3.2 we have
For a we do not get a contribution to the ~n-vector. For a i 6j 0
(mod p) with a i \Gamma1j j (mod p) we get a contribution 1 to n j and \Gamma1 to n \Gamma(j+1) .
Note that we can not get any contribution to n p\Gamma1. Thus we have
Now we obtain for the weight modulo 2:
Hence the weight is even, as claimed. 2
For the following theorem we recall the definition of the numbers k(r; s):
is a partition, and
r
In view of the now proved Mullineux conjecture, the following combinatorial
result implies a representation theoretical result in [O2].
Theorem 3.5 Let p be an odd prime. Let - be a symmetric p-core and n 2 IN
with
even. Then
Proof. We set
For - 2 F(-) we consider its Mullineux symbol; as - is a Mullineux fixed point
this has the form
a 1 a 2 \Delta \Delta \Delta am
a 1 +" 1a 2 +"
being even if and only if pja i .
In this special situation the general restrictions on the entries in Mullineux
symbols stated at the beginning of x2 are now given by:
(ii) If a
(iii) If a
(iv) a i is even if and only if pja
(v)
We have already explained before how to read off the p-core of a partition
from its Mullineux symbol by calculating the ~n-vector. In the proof of the
previous proposition we have already noticed that entries a i j 0 (mod p) do
not contribute to the ~n-vector.
Now the partitions (a properties (i) to (iv) above are just
the partitions satisfying the special congruence and difference conditions for
and the congruence set
considered in [B], [AO]. The bijection described there transforms the set of
partitions above into the set
where modN b denotes the smallest positive number congruent to b mod N .
Computing the ~n-vector from the b i 's instead of the a i 's with the formula
given in the previous proof then gives the same answer since the congruence
sequence of the b i 's is the same as the congruence sequence of the regular a i 's.
For a bar partition b 2 D as above we then compute its so called N -bar quo-
tient; since b has no parts congruent to 0 or p modulo N , the bar quotient is a
p\Gamma1-tuple of partitions. For the properties of these objects we refer the reader
to [MY], [O1]. It remains to check that the N-weight of b equals w, i.e. that
the N -bar core
N) of b satisfies
We recall from above that we have for the ~n-vector of -:
and
Hence by Proposition 3.1 we obtain
As remarked before the bijection transforming a
leaves the congruence sequence modulo of the regular elements in a
invariant. Now for determining the N-bar core of b we have to pair off b i 's
congruent to 2j
each only have to know for each such j the number
But this is equal to
Now the contribution to the 2p-bar core from the conjugate runners 2j +1 and
2 is for any value of n j easily checked to be
Thus the total contribution to the 2p-bar core is exactly the same as the one
calculated above, i.e. we have as was to be proved. 2
4 The combinatorial part of the Mullineux conjec-
ture
We are now going to introduce the main combinatorial concepts for our inves-
tigations. The concept of the node signature sequence and the definition of
its good nodes have their origin in Kleshchev's definition of good nodes of a
partition. First we recapitulate his original definition [K2].
We write the given partition in the form
For we then define
a t and fl(i;
Furthermore, for
We then call i normal if and only if for all there exists
and such that
We call i good if it is normal and if fl(i;
Let us translate this into properties of the nodes of - in the Young diagram that
can most easily be read off the p-residue diagram of -. One sees immediately
that fi(i; j) is just the length of the path from the node at the beginning
of the i-th block of - to the node at the end of the jth block of -. The
condition is then equivalent to the equality of the p-residue
of the indent node in the outer corner of the ith block and the p-residue of the
removable node at the inner corner of the jth block.
Similarly, is equivalent to the equality of the p-residues of
the removable nodes at the end of the ith and jth block.
We will say that a node A = (i; j) is above the node below
and write this relation as B % A. Then a removable node A of - is
normal if for any B 2 indent node of - above A with res
res Ag we can choose a removable node CB of - with A % CB % B and res
res A, such that jfC j. A node A is good if it is the
lowest normal node of its p-residue.
Consider the example 5. In the p-residue diagram
below we have included the indent node at the beginning of the second block,
marked 3, and we have also marked the removable node of residue 3 at the end
of the fourth block in boldface. The equality of these residues corresponds to
We also see immediately from the diagram below that
The set M_i corresponds in this picture to taking the removable node, say A, at
the end of the ith block and then collecting into M i (resp. MA ) all the indent
nodes above this block of the same p-residue as A. For i resp. A being normal,
we then have to check whether for any such indent node, B say, at the end of
the jth block we can find a removable node C = CB between A and B of the
same p-residue, and such that the collection of all these removable nodes has
the same size as M i (resp. MA ). The node A (resp. i) is then good if A is the
lowest normal node of its p-residue.
The critical condition for the normality of i resp. A above is just a lattice con-
dition: it says that in any section above A there are at least as many removable
nodes of the p-residue of A as there are indent nodes of the same residue.
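The lattice condition just quoted can be implemented directly. The sketch below follows the conventions of the text (residue of (i, j) is (j - i) mod p, and the good node is the lowest normal node of its residue); the function names are ours.

    def addable_removable(la):
        """Addable (indent) and removable nodes of a partition, as 1-based (row, col)."""
        rows = len(la)
        addable, removable = [], []
        for i in range(rows):
            if i == 0 or la[i - 1] > la[i]:
                addable.append((i + 1, la[i] + 1))
            if i == rows - 1 or la[i] > la[i + 1]:
                removable.append((i + 1, la[i]))
        addable.append((rows + 1, 1))        # a new row can always be started
        return addable, removable

    def normal_and_good_nodes(la, p):
        """Normal and good removable nodes in the sense of Kleshchev."""
        res = lambda node: (node[1] - node[0]) % p
        addable, removable = addable_removable(la)
        normal = []
        for A in removable:
            boundary = [(B, B in removable) for B in addable + removable
                        if res(B) == res(A) and B[0] < A[0]]
            boundary.sort(key=lambda item: -item[0][0])      # from A upwards
            surplus, ok = 0, True
            for _, is_removable in boundary:
                surplus += 1 if is_removable else -1
                if surplus < 0:                              # an unmatched indent node
                    ok = False
                    break
            if ok:
                normal.append(A)
        good = {}
        for A in normal:                                     # lowest normal node per residue
            r = res(A)
            if r not in good or A[0] > good[r][0]:
                good[r] = A
        return normal, sorted(good.values())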
With these notions the Mullineux conjecture was reduced by Kleshchev to the
combinatorial
Conjecture. Let λ be a p-regular partition and A a good node of λ. Then there
exists a good node B of the Mullineux image λ^M such that (λ \ A)^M = λ^M \ B.
Now we define signature sequences.
A (p)-signature is a pair c" where c 2 f0;
is a sign. Thus 2+ and 3\Gamma are examples of 5-signatures.
A (p)-signature sequence X is a sequence
where each c i " i is a signature.
Given such a signature sequence X we define for
We make the conventions that an empty sum is 0 and that + is counted as +1
in the sum.
The i'th peak value - i (X) for X is defined as
and the i'th end value defined as
We call i a good residue for X if - In that case let
and we then say that the residue c k at step k is i-good for X, for short: c k is
i-good for X. Let us note that if c k is i-good for X then c
Indeed, if is clear since otherwise -
contrary to the definition of c k .
contrary to the
definition of - i (X).
The residue c l is called i-normal if c l is i-good for the truncated sequence
The following is quite obvious from the definitions.
Lemma 4.1 Let X be a signature sequence and let
X be obtained from X by adding a signature c t " t at the end. For
the following statements are equivalent
c t is i-good for X.
We are going to define two signature sequences based on -, the node sequence
N(-) and the Mullineux sequence M(-). Although they are defined in very
different ways we will show that they have the same peak and end value for
each i.
The node sequence N(-) consists of the residues of the indent and removable
nodes of -, read from right to left, top to bottom in -. For each indent residue
the sign is + and for each removable residue the sign is \Gamma.
Let us note that according to Remark 2.3 the final signature in N(-) is
Example. Let we have only indicated
the removable and indent nodes in the 5-residue diagram of -.4
Residue
Peak value
Good
(The good signatures (peaks) are underlined and the normal signatures marked
with a prime.)
In other words, in the node sequence N(-) defined before, if c corresponds
to the removable node A, then c and A is normal if and only
if the sequence of signs to the left of A belonging to c j 's with c res A is
latticed read from right to left. Again, the node A resp. c m is good if it is the
last normal node of its residue resp. of its value. The peak value of the node
sequence N(-) is the number of normal nodes of -.
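The node sequence N(λ) can be generated directly from the indent and removable nodes; the reading order "right to left, top to bottom" is implemented below as row ascending, column descending. This is only an illustrative sketch with names of our choosing.

    def node_sequence(la, p):
        """Signatures of N(la): (residue, sign), '+' for indent, '-' for removable nodes."""
        rows = len(la)
        entries = []
        for i in range(rows):
            if i == 0 or la[i - 1] > la[i]:              # indent node (i+1, la[i]+1)
                entries.append((i + 1, la[i] + 1, '+'))
            if i == rows - 1 or la[i] > la[i + 1]:       # removable node (i+1, la[i])
                entries.append((i + 1, la[i], '-'))
        entries.append((rows + 1, 1, '+'))               # the final indent node
        entries.sort(key=lambda e: (e[0], -e[1]))        # top to bottom, right to left
        return [((c - r) % p, s) for r, c, s in entries]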
Remark 4.2 Let as before denote e - the partition obtained from - by removing
all those parts which are equal to 1, and let - be the partition obtained from -
by subtracting 1 from all its parts. From the definitions it is obvious that for
all i
Proposition 4.3 Let - and - be as above, and let d be the number of parts 1
in -.
and if
(3) We have
unless the following conditions are all fulfilled
In that case y m is i-good for N(-) and
Proof. Assume that N(-) consists of m 0 signatures (m 0 is odd). Then
N(-) consists of
Suppose that
If
then
where in both sequences c m \Gamma. From this and the definition
of end values (1) and (2) follows easily. Also since the final sign is \Gamma we have
proving (3) in this case.
Suppose d 6= 0.
If again
Again (1) and (2) follows easily. To prove (3) we consider the sequence
Obviously
for all i. The final signature of N(-) has no influence on - i (N(-)), since the
sign is \Gamma. Therefore in order for - i (N (-)) to be different from - i (N(-)) we
need 4.1. Thus condition (i) of
(3) is fulfilled and condition (iii) follows from ( ). Since by assumption d 6= 0
(ii) is also fulfilled. Thus (3) is proved in this case also. 2
We proceed to prove an analogue of Proposition 4.3 for the Mullineux (signa-
ture) sequence M(-), which is defined as follows:
Let the residue symbol of - be
oe
Then
Starting with the signature 0\Gamma corresponds to starting with an empty partition
at the beginning which just has the indent node (1; 1) of residue 0.
oe
and
(The good signatures in M(-) are again underlined and the normal signatures
marked with a prime.)
The table above is identical with the one in the previous example.
Lemma 4.4 Let - and - be as above. Let M (-) be the signature sequence
obtained from M(-) by removing the two final signatures y m+ and (y
Then for all i we have
Proof. We use the notation of Lemma 2.2 for R p (-) and R p (-) and proceed
by induction on m. First we study the beginnings of M (-) and M(-). We
compare
(from M (-))
with
We have put brackets [ ] around a part of (2), because these signatures do
not occur when x
The former gives a contribution \Gamma1 to residue 1 and contributions 0 to all
others, the latter a contribution \Gamma1 to residue 0 and 0 to all others.
The signatures 0\Gamma 0+ in the latter sequence have no influence on the end
values and peak values of M(-), (even when x may be ignored.
Then again we see that (1) gives the same contribution to residue i as (2) 0 to
residue our result is true.
We assume that the result is true for partitions whose Mullineux symbols have
length and have to compare
(from M (-))
with
(from M(-))
By Lemma 2.2, (4) may be written as
We see that up to rearrangement the difference between the residues occurring
in (3) and (4) 0 is just ffi m . Whereas the rearrangement is irrelevant for the end
values it could influence the peak value if signatures with same residue but
different signs are interchanged. The possible coincidences of residues with
different signs are
(first and fourth residue in (3))
or
(second and third residue in (3))
But the equations (ff) and (fi) are equivalent and by Lemma 2.2 they are
fulfilled if and only if
becomes
and
In this case the difference between the occurring residues in 1 (without rear-
rangement) and our statement is true.
If y then the difference between the occurring
residues is again 1(= since there is no coincidence for residues with
different signs we may apply Lemma 4.1 and the induction hypotheses to prove
the statement in this case too. 2
Lemma 4.5 Suppose that in the notation as above we have for
Then d 6= 0.
Proof. Suppose
ends on (i \Gamma 1)\Gamma. But then clearly !
Lemma 4.6 Let the notation be as in Lemma 4.4.
(1) For
y j is i-good for M (-)
ae
j+1 is (i \Gamma 1)-good for M(-)
or
j+1 is (i \Gamma 1)-good for M(-)
(2) For
x j is i-good for M (-)
j is (i \Gamma 1)-good for M(-)
Proof. This follows immediately from the proof of Lemma 4.4. It should be
noted that x 1 cannot be 0-good for M (-) since M (-) starts by 0\Gamma. More-
over, the proof of Lemma 4.4 shows that if x j is i-good for M (-), then we
cannot have
Proposition 4.7 Let - and - be as above.
and if
(3) We have
In that case y m is i-good
for M(-) and
Note. There is a strict analogy between the Propositions 4.3 and 4.7. In part
(3) the assumption d 6= 0 is not necessary in Proposition 4.7 due to Lemma 4.5.
Proof. By Lemma 4.4
If we add y m+ and (y m \Gamma 1)\Gamma to M (-) we get M(-). Therefore an argument
completely analogous to the one used in the case d 6= 0 in the proof of Proposition
4.3 may be applied. 2
Theorem 4.8 Let - be a p-regular partition.
Then for all
Proof. We use induction on the number ' of columns in -. For
d
and R p
ae 0
oe
. Thus
and the result is clear. Assume the result has been proved for partitions with
2. Let - be obtained by removing the first column from -.
By the induction hypothesis we have
for all i. Using Proposition 4.3 and Proposition 4.7 (see also the note to Proposition
4.7) we get the result. 2
Theorem 4.9 The following statements are equivalent for a p-regular partition
(1) There is a good node of residue i in -.
(2) M(-) has i as a good residue.
(3) N(-) has i as a good residue.
Proof. (1) , (3): See the beginning of this section.
(2) , (3): Theorem 4.8. 2
Finally we describe the effect of the removal of a good node on the residue
(or equivalently on the Mullineux symbol). First we prove a lemma:
Lemma 4.10 Suppose that there is a good node of residue i in -. Then the
following statements are equivalent:
(1) The good node of residue i occurs in the first column of -.
Proof. The statement (1) clearly is equivalent to
(where as before e - is obtained from - by removing all parts equal to 1.) We
now have
(by Theorem 4.8)
is i-good for M(-) (by Proposition 4.7)Theorem 4.11 Suppose that the p-regular partition - has a good node A of
residue i. Let
oe
Then for some j, 1 - j - m, one of the following occurs:
(1) x j is i-good for M(-) and
oe
(2) y j is i-good for M(-) and
oe
resp.
oe
Proof. The proof is by induction on j-j. Suppose first that A occurs in the
first column of -. Then the first column in G p (-nA) is obtained from the
first column in G p (-) by subtracting 1 in each entry and all other entries are
unchanged; note that in the case where G p
, we have a degenerate
case and G p (-nA) is the empty symbol. By definition of the residue symbol
this means that y m in R p (-) is replaced by y of course, in
the degenerate case also R p (-nA) is the empty residue symbol. On the other
hand y m is i-good for M(-) by Lemma 4.10, and in the degenerate case y 1 is
0-good for M(-), and so we are done in this case.
Now we assume that A does not occur in the first column of -. Let B be the
node of - corresponding to A. Clearly B is a good node of residue
We may apply the induction hypothesis to - and B. Suppose that
ae x 0
oe
By the induction hypothesis we know that one of the following cases occurs:
Case I. x 0
j in R p (-) is replaced by x 0
j is (i \Gamma 1)-good
for M(-).
Case II. y 0
j in R p (-) is replaced by y 0
j is (i \Gamma 1)-good
for M(-), resp. in the degenerate case y 0
1 is 0-good for M(-) and then the first
column x 0y 0in R p (-) is omitted in R p (-nB).
We treat Case I in detail. Case II is treated in a similar way.
Case I: By Lemma 4.6 we have one of the following cases:
We add a first column to -nB to get -nA. Then R p (-nA) is obtained from
using Lemma 2.4. We fix the notation
ae x 00
oe
and
oe
Case Ia. We know x 0
since we are in Case I and y since we are
in Case Ia. Moreover since
Also x 0
2.2. By Lemma 2.4
where
ae 0 if x 00
But x 00
by the above.
Thus
1. It is readily seen that all other entries
in R p (-nA) coincide with those of R p (-). Thus possibility (2) occurs in the
theorem.
Case Ib. We know x 0
since we are in Case I and x since we are
in Case Ib. Also since
i.e. x 00
2. Let ffi 00
again be defined by
ae 0 if x 00
Then by Lemma 2.4 x
. We claim that ffi 00
j and we also know that x 0
by definition of M(-) x 0
j is not a peak, contrary to our assumption that we are
in Case I. Thus ffi 00
Again it is easily seen that
all other entries of R p (-nA) coincide with those of R p (-). Thus possibility (1)
occurs in the theorem. 2
We illustrate the theorem above by giving Kleshchev's p-good branching graph
for p-regular partitions for in both the usual and the
residue symbol notation; we recall that the p-good branching graph for p-
regular partitions is also the crystal graph for the basic representation of the
quantum affine algebra (see [LLT] for these connections).
Below, an edge from a partition - of m to a partition - of labelled
by the residue r if - is obtained from - by removing a node of residue
those edges are drawn that correspond to the removal of good nodes.
[The p-good branching graph (crystal graph) figure is not reproduced here.]
@ @
We can now easily deduce the combinatorial conjecture to which the Mullineux
conjecture had been reduced by Kleshchev:
Corollary 4.12 Suppose that the p-regular partition λ has a good node A of
residue i. Then its Mullineux conjugate λ^M has a good node B of residue −i
satisfying (λ \ A)^M = λ^M \ B.
Proof. Considering the residue symbol of - it is easily seen that the Mullineux
sequence of - and its conjugate - M are very closely related. Indeed, the peak
and end values for each residue i in M(-) equal the corresponding values for
the residue \Gammai in M(- M ), and if there is an i-good node at column k in the
residue symbol of -, then there is an \Gammai-good node at column k in the residue
symbol of - M . More precisely, in the regular case these good nodes are one
at the top and one at the bottom of the column, whereas in the singular case
both are at the top. Comparing this with Theorem 4.11 implies the result. 2
Acknowledgements
. The authors gratefully acknowledge the support of the
Danish Natural Science Foundation and of the EC via the European Network
'Algebraic Combinatorics' (grant ERBCHRXCT930400).
--R
Partition identities with an application to group representation theory
A combinatorial proof of a refinement of the Andrews- Olsson partition identity
Theory (A) 68
A proof of the Mullineux conjecture
Cranks and t-cores
The representation theory of the symmetric group
Branching rules for modular representations of symmetric groups I
Branching rules for modular representations of symmetric groups II
Branching rules for modular representations of symmetric groups III
On decomposition numbers and branching coefficients for symmetric and special linear groups
Hecke algebras at roots of unity and crystal basis of quantum affine algebras
Crystal base of the basic representation of U q
Some combinatorial results involving Young diagrams
Bijections of p-regular partitions and p-modular irreducibles of symmetric groups
On the p-cores of p-regular diagrams
Combinatorics and representations of finite groups
The number of modular characters in certain blocks
--TR
--CTR
Jonathan Brundan , Jonathan Kujawa, A New Proof of the Mullineux Conjecture, Journal of Algebraic Combinatorics: An International Journal, v.18 n.1, p.13-39, July | signature sequence;good nodes in residue diagram;modular representation;mullineux conjecture;symmetric group |
291689 | The Enumeration of Fully Commutative Elements of Coxeter Groups. | A Coxeter group element w is fully commutative if any reduced expression for w can be obtained from any other via the interchange of commuting generators. For example, in the symmetric group of degree n, the number of fully commutative elements is the nth Catalan number. The Coxeter groups with finitely many fully commutative elements can be arranged into seven infinite families A_n, B_n, D_n, E_n, F_n, H_n and I_2(m). For each family, we provide explicit generating functions for the number of fully commutative elements and the number of fully commutative involutions; in each case, the generating function is algebraic. | Introduction
A Coxeter group element w is said to be fully commutative if any reduced word for w
can be obtained from any other via the interchange of commuting generators. (More
explicit definitions will be given in Section 1 below.)
For example, in the symmetric group of degree n, the fully commutative elements are
the permutations with no decreasing subsequence of length 3, and they index a basis for
the Temperley-Lieb algebra. The number of such permutations is the nth Catalan number.
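The claim that the fully commutative elements of the symmetric group are exactly the 321-avoiding permutations, counted by Catalan numbers, can be checked by brute force for small n; the snippet below is such a check, not part of the paper.

    from itertools import permutations, combinations
    from math import comb

    def is_321_avoiding(w):
        return not any(w[i] > w[j] > w[k] for i, j, k in combinations(range(len(w)), 3))

    def catalan(n):
        return comb(2 * n, n) // (n + 1)

    for n in range(1, 7):
        count = sum(is_321_avoiding(w) for w in permutations(range(1, n + 1)))
        assert count == catalan(n)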
In [St1], we classified the irreducible Coxeter groups with finitely many fully commutative
elements. The result is seven infinite families of such groups; namely, An , Bn ,
Dn , En , Fn , Hn and I 2 (m). An equivalent classification was obtained independently by
Graham [G], and in the simply-laced case by Fan [F1]. In this paper, we consider the
problem of enumerating the fully commutative elements of these groups. The main result
(Theorem 2.6) is that for six of the seven infinite families (we omit the trivial dihedral
family I 2 (m)), the generating function for the number of fully commutative elements can
be expressed in terms of three simpler generating functions for certain formal languages
over an alphabet with at most four letters. The languages in question vary from family to
family, but have a uniform description. The resulting generating function one obtains for
each family is algebraic, although in some cases quite complicated. (See (3.7) and (3.11).)
In a general Coxeter group, the fully commutative elements index a basis for a natural
quotient of the corresponding Iwahori-Hecke algebra [G]. (See also [F1] for the simply-
laced case.) For An , this quotient is the Temperley-Lieb algebra. Recently, Fan [F2] has
shown that for types A, B, D, E and (in a sketched proof) F , this quotient is generically
semisimple, and gives recurrences for the dimensions of the irreducible representations.
(For H_n, the question of semisimplicity remains open.) This provides another possible
approach to computing the number of fully commutative elements in these cases; namely,
as the sum of the squares of the dimensions of these representations. Interestingly, Fan also
shows that the sum of these dimensions is the number of fully commutative involutions.
With the above motivation in mind, in Section 4 we consider the problem of enumerating
fully commutative involutions. In this case, we show (Theorem 4.3) that for the six
nontrivial families, the generating function can be expressed in terms of the generating
functions for the palindromic members of the formal languages that occur in the unrestricted
case. Again, each generating function is algebraic, and in some cases, the explicit
form is quite complicated. (See (4.8) and (4.10).)
In Section 5, we provide asymptotic formulas for both the number of fully commutative
elements and the number of fully commutative involutions in each family. In an appendix,
we provide tables of these numbers up through rank 12.
1. Full Commutativity
Throughout this paper, W shall denote a Coxeter group with (finite) generating set S,
Coxeter graph \Gamma, and Coxeter matrix [m(s, t)]_{s,t \in S}; a standard reference is [H].
1.1 Words.
For any alphabet A, we use A^* to denote the free monoid consisting of all finite-length
words a = (a_1, ..., a_l) with a_i \in A. The multiplication in A^* is concatenation, and on
occasion will be denoted '\cdot'. Thus (a, b) \cdot (b, a) = (a, b, b, a). Any subsequence
of a obtained by selecting terms from a set of consecutive positions is said to be a subword
or factor of a.
For each w \in W, we define R(w) \subseteq S^* to be the set of reduced expressions for w; i.e.,
the set of minimum-length words s = (s_1, ..., s_l) such that w = s_1 s_2 \cdots s_l.
For each integer m \geq 0 and s, t \in S, we define \langle s, t \rangle_m := (s, t, s, t, ...) (m terms),
and let \approx denote the congruence on S^* generated by the so-called braid relations; namely,
\langle s, t \rangle_{m(s,t)} \approx \langle t, s \rangle_{m(s,t)}
for all s, t \in S such that m(s, t) < \infty.
It is well-known that for each w 2 W , R(w) consists of a single equivalence class relative
to -. That is, any reduced word for w can be obtained from any other by means of a
sequence of braid relations [B, \S IV.1.5].
1.2 Commutativity classes.
Let \sim denote the congruence on S^* generated by the interchange of commuting generators;
i.e., (s, t) \sim (t, s) for all s, t \in S such that m(s, t) = 2. The equivalence classes of
this congruence will be referred to as commutativity classes.
Given s = (s_1, ..., s_l) \in S^*, the heap of s is the partial order of \{1, ..., l\} obtained
from the transitive closure of the relations i \prec j for all i < j such that s_i = s_j or
m(s_i, s_j) \geq 3. It is easy to see that the isomorphism class of the heap is an invariant of
the commutativity class of s. In fact, although it is not needed here, it can be shown that
s \sim s' if and only if there is an isomorphism i \mapsto i' of the corresponding heap
orderings with s_i = s'_{i'}. (For example, see Proposition 1.2 of [St1].)
In [St1], we defined w 2 W to be fully commutative if R(w) consists of a single commutativity
class; i.e., any reduced word for w can be obtained from any other solely by use of
the braid relations that correspond to commuting generators.
Figure 1: The irreducible FC-finite Coxeter groups.
It is not hard to show that this is equivalent to the property that for all s, t \in S such
that m(s, t) \geq 3, no member of R(w) has \langle s, t \rangle_m as a subword, where m = m(s, t).
It will be convenient to let W FC denote the set of fully commutative members of W .
As mentioned in the introduction, the irreducible FC-finite Coxeter groups (i.e., Coxeter
groups with finitely many fully commutative elements) occur in seven infinite families
denoted An , Bn , Dn , En , Fn , Hn and I 2 (m). The Coxeter graphs of these groups are
displayed in Figure 1. It is interesting to note that there are no "exceptional" groups.
For the dihedral groups, the situation is quite simple. Only the longest element of I_2(m)
fails to be fully commutative, leaving a total of 2m - 1 such elements.
Henceforth, we will be concerned only with the groups in the remaining six families.
1.3 Restriction.
For any word s \in S^* and any J \subseteq S, let us define s|_J to be the restriction of s to J;
i.e., the subsequence formed by the terms of s that belong to J. Since the interchange of
adjacent commuting generators in s has either the same effect or no effect in s|_J, it follows
that for any commutativity class C, the restriction of C to J is well-defined.
Figure 2: A simple branch.
A family F of subsets of S is complete if for all s \in S there exists J \in F such that s \in J,
and for all s, t \in S such that m(s, t) \geq 3 there exists J \in F such that s, t \in J.
Proposition 1.1. If F is a complete family of subsets of S, then for all s, s' \in S^*, we
have s \sim s' if and only if s|_J \sim s'|_J for all J \in F.
Proof. The necessity of the stated conditions is clear. For sufficiency, suppose that s is
the first term of s. Since s 2 J for some J 2 F , there must also be at least one occurrence
of s in s 0 . We claim that any term t that precedes the first s in s 0 must commute with s.
If not, then we would have sj fs;tg 6- s 0 j fs;tg , contradicting the fact that sj J - s 0 j J for some
J containing fs; tg. Thus we can replace s 0 with some s 00 - s 0 whose first term is s. If
we delete the initial s from s and s 00 , we obtain words that satisfy the same restriction
conditions as s and s 0 . Hence s - s 00 follows by induction with respect to length.
2. The Generic Case
Choose a distinguished generator s_1 \in S, and let W = W_1, W_2, W_3, ... denote the
infinite sequence of Coxeter groups in which W_i is obtained from W_{i-1} by adding a new
generator s_i such that m(s_{i-1}, s_i) = 3 and s_i commutes with all other generators of W_{i-1}.
In the language of [St1], \{s_1, ..., s_n\} is said to form a simple branch in the graph of W_n.
For n \geq 1, let S_n = S_0 \cup \{s_1, ..., s_n\} denote the generating set for W_n, and let \Gamma_n denote
the corresponding Coxeter graph. (See Figure 2.) It will be convenient also to let S_0 and
\Gamma_0 denote the corresponding data for the Coxeter group W_0 obtained when s_1 is deleted
from S. Thus S_n - S_0 = \{s_1, ..., s_n\}.
2.1 Spines, branches, and centers.
For any w \in W^{FC}_n, we define the spine of w, denoted \sigma(w), to be the pair (l, A), where l
denotes the number of occurrences of s_1 in some (equivalently, every) reduced word for w,
and A is the subset of \{1, 2, ..., l-1\} defined by the property that k \in A iff there is no
occurrence of s_2 between the kth and (k+1)th occurrences of s_1 in some (equivalently,
every) reduced word for w. We refer to l as the length of the spine.
Continuing the hypothesis that w is fully commutative, for J ' Sn we let wj J denote
the commutativity class of sj J for any reduced word s 2 R(w). (It follows from the
Figure 3: An F_7-heap.   Figure 4: Center and branch.
discussion in Section 1.3 that this commutativity class is well-defined.) In particular, for
each w \in W^{FC}_n, we associate the pair (w|_{S_n - S_0}, w|_{S_1}).
We refer to w|_{S_n - S_0} and w|_{S_1} as the branch and central portions of w, respectively.
For example, consider the Coxeter group F_7. We label its generators \{u, t, s_1, ..., s_5\}
in the order they appear in Figure 1, so that \{s_1, ..., s_5\} is a simple branch. The heap of
a typical fully commutative member of F 7 is displayed in Figure 3. Its spine is (5; f1; 4g),
and the heaps of its central and branch portions are displayed in Figure 4.
"branch set") to be the set of all commutativity classes B over the
ng such that
a subword of some member of B, then
a subword of some member of B, then
Furthermore, given a spine \sigma = (l, A), we define B_n(\sigma) to be the set of commutativity
classes B \in B_n such that there are l occurrences of s_1 in every member of B, and
(B3) The kth and (k+1)th occurrences of s_1 occur consecutively in some member of B
if and only if k \in A.
We claim (see the lemma below) that Bn (oe) contains the branch portions of every fully
commutative w 2 Wn with spine oe. Note also that Bn depends only on n, not W .
Similarly, let us define C_W (the "central set") to be the set of commutativity classes
C over the alphabet S_1 = S_0 \cup \{s_1\} such that
(C1) For all s \in S_1, no member of C has (s, s) as a subword.
(C2) If \langle s, t \rangle_m is a subword of some member of C, where m = m(s, t) \geq 3, then s_1
occurs at least twice in this subword. (In particular, s = s_1 or t = s_1.)
In addition, we say that C is compatible with the spine \sigma = (l, A) if every member
of C has l occurrences of s_1, and
(C3) If \langle s, t \rangle_m is a subword of some member of C, where m = m(s, t) \geq 3, then this
subword includes the kth and (k+1)th occurrences of s_1 for some k \notin A.
Let C_W(\sigma) denote the set of \sigma-compatible members of C_W. We claim (again, see the
lemma below) that C_W(\sigma) contains the central portions of every w \in W^{FC}_n with spine \sigma.
Note also that C_W(\sigma) depends only on W_0 (more precisely, on the Coxeter graph \Gamma_0),
not the length of the branch attached to it.
Lemma 2.1. The mapping w \mapsto (w|_{S_n - S_0}, w|_{S_1}) defines a bijection
W^{FC}_n \to \bigcup_{\sigma} B_n(\sigma) \times C_W(\sigma).
Proof. For all non-commuting pairs s, t \in S_n, we have \{s, t\} \subseteq S_n - S_0 or \{s, t\} \subseteq S_1,
so by Proposition 1.1, the commutativity class of any w \in W^{FC}_n (and hence w itself) is
uniquely determined by w|_{S_n - S_0} and w|_{S_1}. Thus the map is injective.
Now choose an arbitrary fully commutative w 2 Wn with spine
. The defining properties of the spine immediately imply the
validity of (B3). Since consecutive occurrences of any s 2 Sn do not arise in any s 2 R(w),
it follows that for all k - 1, the kth and (k 1)th occurrences of s in s must be separated
by some t 2 Sn such that m(s; t) - 3. For the only possibilities for t
are in Sn holds. For s 2 S 0 , the only possibilities for t are in S 1 , so (C1)
could fail only if some k, the only elements separating the kth and (k +1)th
occurrences of s 1 in s that do not commute with s 1 are one or more occurrences of s 2 . In
that case, we could choose a reduced word for w so that the subword running from the kth
to the (k +1)th occurrences of s 1 forms a reduced word for a fully commutative element of
the parabolic subgroup isomorphic to An generated by fs g. However, it is easy to
show (e.g., Lemma 4.2 of [St1]) that every reduced word for a fully commutative member
of An has at most one occurrence of each "end node" generator. Thus (C1) holds.
Concerning (B2), (C2) and (C3), suppose that (s occurs as a subword of some
member of the commutativity class B. If i ? 1, then every s 2 Sn that does not commute
with s i belongs to Sn \Gamma S 0 . Hence, some reduced word for w must also contain the subword
contradicting the fact that w is fully commutative. Thus (B2) holds. Similarly,
if we suppose that hs; ti m occurs as a subword of some member of C, where
and s; t 2 S 1 , then again we contradict the hypothesis that w is fully commutative unless
is the only member of S 1 that may not commute with some
member of Sn In either case, since hs; ti m cannot be a subword of any s 2 R(w),
it follows that s 1 occurs at least twice in hs; ti m (proving (C2)), and between two such
occurrences of s 1 , say the kth and (k + 1)th, there must be an occurrence of s 2 in s. By
definition, this means k 62 A, so (C3) holds. Thus B 2 Bn (oe) and C 2 CW (oe).
Finally, it remains to be shown that the map is surjective. For this, let be a
spine, and choose commutativity classes B 2 Bn (oe) and C 2 CW (oe). Select representatives
is a singleton,
and this singleton appears the same number of times in s B and s C (namely, l times), it
follows that there is a word s 2 S
whose restrictions to Sn \Gamma S 0 and S 1 are s B and s C ,
respectively. We claim that s is a reduced word for some w 2 W FC
n , and hence w 7! (B; C).
To prove the claim, first consider the possibility that for some s 2 Sn , (s; s) occurs as
a subword of some member of the commutativity class of s. In that case, depending on
whether s 2 S 1 , the same would be true of either B or C, contradicting (B1) or (C1).
Next consider the possibility that hs; ti m occurs as a subword of some word s 0 in the
commutativity class of s, where 3. We must have either s; t 2 Sn \Gamma S 0 or
hence the same subword appears in some member of B or C, respectively.
In the former case, (B2) requires that 3. However the restriction of s 0
to S 1 would then have consecutive occurrences of s 1 , contradicting (C1). In the latter
case, (C2) and (C3) require that s t, and that the subword hs; ti m includes
the kth and (k 1)th occurrences of s 1 in s 0 for some k 62 A. It follows that s 2 does not
occur between these two instances of s 1 in s 0 , and thus they appear consecutively in the
restriction of s 0 to Sn \Gamma S 0 , contradicting (B3). Hence the claim follows.
The above lemma splits the enumeration of the fully commutative parts of the Coxeter
groups into two subproblems. The first subproblem, which is universal
for all Coxeter groups, is to determine the number of branch commutativity classes with
spine oe; i.e., the cardinality of Bn (oe) for all integers n - 0 and all oe. The second subprob-
lem, which needs only to be done once for each series Wn , is to determine the number of
central commutativity classes with spine oe; i.e., the cardinality of CW (oe).
Figure 5: The Coxeter graphs of the six nontrivial families.
2.2 Spinal analysis.
The possible spines that arise in the FC-finite Coxeter groups are severely limited. To
make this claim more precise, suppose that is one of the six nontrivial
families of FC-finite Coxeter groups (i.e., A, B, D, E, F , or H). The Coxeter graph of W
can then be chosen from one of the six in Figure 5. For convenience, we have used s as
the label for the distinguished generator previously denoted s 1 .
Lemma 2.2. If C \in C_W is compatible with the spine \sigma = (l, A), where W is one of the
Coxeter groups in Figure 5, then A \subseteq \{1, l-1\}.
Proof. Let s \in S^* be a representative of C, and towards a contradiction, let us suppose
that A includes some k such that 1 < k < l - 1. Note that it follows that the kth and
(k+1)th occurrences of s in s are neither the first nor the last such occurrences.
For the H-graph, property (C1) implies that the occurrences of s and t alternate in s.
Hence, the kth and (k 1)th occurrences of s appear in the middle of a subword of the
form (s; t; s; t; s; t; s). In particular, these two occurrences of s participate in a subword of
s of the form (t; s; t; s; contradicting (C3).
For the F -graph, property (C1) implies that any two occurrences of s must be separated
by at least one t. On the other hand, the subword between two occurrences of s must be
a reduced word for some fully commutative member of the subgroup generated by ft; ug
(property (C2)), so the occurrences of s and t must alternate, and in the restriction of
s to fs; tg, the kth and (k 1)th occurrences of s appear in the middle of a subword of
the form (s; t; s; t; s; t; s). By (C3), these two occurrences of s cannot participate in an
occurrence of (t; s; t; s) or (s; t; s; t) in s. Hence, the two occurrences of t surrounding the
occurrence of s must be separated by an occurrence of u.
However in that case, (u; t; u) is a subword of some member of the commutativity class
of s, contradicting (C2).
For the E-graph, at least one of t and t 0 must appear between any two occurrences
of s (otherwise (C1) is violated), and both t and t 0 must appear between the kth and
1)th occurrences of s, by (C2). On the other hand, property (C3) also implies that
the subword (strictly) between the 2)th occurrences of s in s must be
a reduced word for some fully commutative member of W , a Coxeter group isomorphic
to A 4 . In particular, this implies that t 0 can appear at most once, and t at most twice, in
this subword. Since we have already accounted for at least four occurrences of t and t 0 ,
we have a contradiction.
This completes the proof, since the remaining three graphs are subgraphs of the preceding
ones.
2.3 Branch enumeration.
The previous lemma shows that for the FC-finite Coxeter groups, we need to solve the
branch enumeration problem (i.e., determine the cardinality of B_n(\sigma)) only for the spines
\sigma = (l, A) with A \subseteq \{1, l-1\}. For this, we first introduce the notation
B_{n,l} := \binom{2n}{n+l} - \binom{2n}{n+l+1}
for the number of (n+l, n-l) ballot sequences. That is, B_{n,l} is the number of orderings of
votes for two candidates so that the winning candidate never trails the losing candidate,
with the final tally being n+l votes to n-l votes. (For example, see [C, \S 1.8].) This
quantity is also the number of standard Young tableaux of shape (n+l, n-l).
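Since the displayed definition of B_{n,l} just quoted had to be reconstructed from a damaged
line, the following brute-force sketch (Python; all function and variable names are ours, not
the paper's) checks that the number of (n+l, n-l) ballot sequences agrees with the standard
reflection-principle formula used above.

from itertools import combinations
from math import comb

def ballot(n, l):
    # brute-force count of (n+l, n-l) ballot sequences: 2n votes in total,
    # the winner receives n+l of them and never trails during the count
    count = 0
    for a_positions in combinations(range(2 * n), n + l):
        lead, ok = 0, True
        for i in range(2 * n):
            lead += 1 if i in a_positions else -1
            if lead < 0:
                ok = False
                break
        count += ok
    return count

for n in range(1, 6):
    for l in range(n + 1):
        assert ballot(n, l) == comb(2 * n, n + l) - comb(2 * n, n + l + 1)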
Lemma 2.3. For integers n; l - 0, we have
Proof. For
n;l denote the cardinality of Bn (oe) for
respectively. In the case oe = (l; ?), the defining properties (B1) and (B3)
for membership of B in Bn (oe) can be replaced with
member of B has (s
It follows that for 1 l, the kth and (k 1)th occurrence of s 1 in any member of B
must be separated by exactly one s 2 , and the total number of occurrences of s 2 must be
according to whether the first and last occurrences of s 1 are preceded (resp.,
followed) by an s 2 . Furthermore, the restriction of B to fs ng is a commutativity
class with no subwords of the form (s possibly
shifting indices (i i), we thus obtain any one of the members of
l 0 denotes the number of occurrences of s 2 . Accounting for the four possible ways that s 1
and s 2 can be interlaced (or two, if l = 0), we obtain the recurrence
n\Gamma1;l +B (0)
On the other hand, it is easy to show that B n;l satisfies the same recurrence and initial
. (In fact, one can obtain a bijection with ballot sequences by
noting that the terms of the recurrence correspond to specifying the last two votes.)
By word reversal, the cases corresponding to are clearly
equivalent, so we restrict our attention to the former. Properties (B1) and (B3) imply
that the restriction of any B in Bn (oe) to fs must then take the form
where each ' ' represents an optional occurrence of s 2 . We declare the left side of B to be
open if the above restriction has the form (s there is no s 3 separating
the first two occurrences of s 2 . Otherwise, the left side is closed.
Case I. The left side is open. In this case, if we restrict B to fs (and shift
indices), we obtain any one of the members of according
to whether there is an occurrence of s 2 following the last s 1 . (If l = 2, then there is no
choice: l is the only possibility.)
Case II. The left side is closed. In this case, if we delete the first occurrence of s 1
from B, we obtain any one of the commutativity classes in Bn (l \Gamma
The above analysis yields the recurrence
2:
It is easy to verify that the claimed formula for B (1)
n;l satisfies the same recurrence and the
proper initial conditions.
For the restriction of any B in Bn (oe) to fs takes the form
where again each ' ' represents an optional occurrence of s 2 . In the special case l = 3,
this becomes deleting one of the occurrences of s 1 , we obtain any one
of the commutativity classes in Bn (2; f1g).
Assuming l - 4, we now have not only the possibility that the left side of B is open (as
in the case but the right side may be open as well, mutatis mutandis.
Case I. The left and right sides of B are both open. In this case, if we restrict B to
ng (and shift indices), we obtain any one of the members of
Case II. Exactly one of the left or right sides of B is open. Assuming it is the left
side that is open, if we restrict B to fs ng (and shift indices), we obtain any one of
the members of according to whether there is an
occurrence of s 2 following the last s 1 .
Case III. The left and right sides of B are both closed. In this case, if we delete the
first and last s 1 from B, we obtain any one of the members of Bn (l \Gamma 2; ?).
The above analysis yields B (2)
n;2 and the recurrence
for Once again, it is routine to verify that the claimed formula for B (2)
n;l satisfies the
same recurrence and initial conditions.
Remark 2.4. The union of Bn (l; ?) for all l - 0 is the set of commutativity classes
corresponding to the fully commutative members of the Coxeter group Bn whose reduced
words do not contain the subword In the language of [St2], these are the "fully
commutative top elements" of Bn ; in the language of [F1], these are the "commutative
elements" of the Weyl group Cn .
Let R(x) denote the generating series for the Catalan numbers. That is,
R(x) = \sum_{n \geq 0} \frac{1}{n+1} \binom{2n}{n} x^n.
Note that xR(x)^2 = R(x) - 1. The following is a standard application of the Lagrange
inversion formula (see, e.g., [GJ]). We include below a combinatorial proof.
Lemma 2.5. We have \sum_{n \geq l} B_{n,l} \, x^{n-l} = R(x)^{2l+1}.
Proof. A ballot sequence in which A defeats B by 2l votes can be factored uniquely
by cutting the sequence after the last moment when candidate B trails by i votes, for
each 0 \leq i \leq 2l - 1. The first part consists of a ballot sequence for a tie vote,
and all remaining parts begin with a vote for A, followed by a ballot sequence for a tie.
After deleting the 2l votes for A at the beginnings of these parts, we obtain an ordered
(2l+1)-tuple of ballot sequences for ties, for which the generating series is R(x)^{2l+1}.
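The displayed statement of Lemma 2.5 did not survive extraction, and the form given above is
our reading of it, consistent with the proof just given. The sketch below (Python; names ours)
checks that reading coefficientwise for small l, using the reflection-principle formula for
B_{n,l} quoted in Section 2.3.

from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

def ballot_number(n, l):
    # B_{n,l}: ballot sequences with final tally (n+l, n-l)
    return comb(2 * n, n + l) - comb(2 * n, n + l + 1)

def truncated_power(coeffs, k, order):
    # coefficients of (sum_i coeffs[i] x^i)^k, truncated at degree `order`
    result = [1] + [0] * order
    for _ in range(k):
        product = [0] * (order + 1)
        for i, a in enumerate(result):
            for j, b in enumerate(coeffs[: order + 1 - i]):
                product[i + j] += a * b
        result = product
    return result

order = 8
R = [catalan(i) for i in range(order + 1)]
for l in range(4):
    lhs = [ballot_number(n + l, l) for n in range(order + 1)]
    assert lhs == truncated_power(R, 2 * l + 1, order)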
2.4 The generic generating function.
To enumerate the fully commutative elements of the family
remains is the "central" enumeration problem; i.e., determining the cardinalities of CW (oe)
for all spines oe of the form described in Lemma 2.2. Setting aside the details of this
problem until Section 3, let us define
and let C i denote the generating series defined by
l-0
l-3
l-3
Although these quantities depend on W , we prefer to leave this dependence implicit.
Theorem 2.6. If W is one of the six Coxeter groups displayed in Figure 5, we have
Proof. Successive applications of Lemmas 2.1, 2.2, and 2.3 yield
oe
l-0
l-3
l-3
n;l
l-0
l-3
c l;1
l-3
c l;2
Using Lemma 2.5 to simplify the corresponding generating function, 1 we obtain
l-0
l-3
c l;1
l-3
c l;2
It should be noted that when \Gamma1, the coefficient of c l;2
in (2.1) is zero. Thus the range of
summation for this portion of the generating function can be extended to n - \Gamma1.
Bearing in mind that R(x) = 1 + xR(x)^2, it is routine to verify that this agrees with
the claimed expression.
Remark 2.7. As we shall see in the next section, for each series W_n the generating
functions C_i(x) are rational, so the above result implies that the generating series for
|W_n^{FC}| belongs to the algebraic function field Q(x, R(x)).
3. Enumerating the Central Parts
In this section, we determine the cardinalities of the central sets for each
of the six Coxeter groups W displayed in Figure 5. (The reader may wish to review the
labeling of the generators in these cases, and recall that the distinguished generator s 1
has been given the alias s.) We subsequently apply Theorem 2.6, obtaining the generating
function for the number of fully commutative elements in Wn .
3.1 The A-series.
In this case, s is a singleton generator, so there is only one commutativity class of each
length. It follows easily from the defining properties that the only central commutativity
classes are those of (s) and ( ) (the empty word). These are compatible only with the
spines respectively. Thus we have
and Theorem 2.6 implies
Extracting the coefficient of x^n, we obtain
|A_n^{FC}| = C_{n+1} = \frac{1}{n+2} \binom{2n+2}{n+1},   (3.1)
a result first proved in [BJS, \S 2].
3.2 The B-series.
In this case, we have and the defining properties imply that the central
commutativity classes are singletons in which the occurrences of s and t alternate. It
follows that c l;0 is simply the number of alternating fs; tg-words in which s occurs l times;
namely, 4 (if l ? Also, the only alternating fs; tg-word that is compatible
with a spine (l; A) with A 6= ? is (s; t; s), which is compatible with (2; f1g). Thus we have
4x
After some simplifications, Theorem 2.6 yields
Extracting the coefficient of x^n, we obtain
|B_n^{FC}| = (n+2) C_n - 1,   (3.2)
a result first proved in [St2, \S 5].
3.3 The D-series.
In this case, a set of representatives for the central commutativity classes consist of the
subwords of (s; t; s; t
Of these, only (s; t; t compatible with a spine (l; A) with A 6= ?; the remainder are
compatible only with (l; ?) for some l. Among the subwords of (s; t; s; t
number with l occurrences of s is 8 (if l - 2), 7 (if l = 1), or 3 (if l = 0). Thus we have
and after some simplifications, Theorem 2.6 implies
Extracting the coefficient of x^{n-2}, we obtain
|D_n^{FC}| = \frac{n+3}{2} C_n - 1,   (3.3)
a result obtained previously in [F1] and [St2, \S 10].
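The closed forms (3.1)-(3.3) quoted above were reconstructed from damaged displays. As a
consistency check, the short sketch below (Python; names ours) evaluates them at n = 9 and
reproduces the corresponding entries of Table 1 in the appendix.

from math import comb

def catalan(n):
    return comb(2 * n, n) // (n + 1)

n = 9
print(catalan(n + 1))                  # (3.1): |A_9^FC|  -> 16796
print((n + 2) * catalan(n) - 1)        # (3.2): |B_9^FC|  -> 53481
print((n + 3) * catalan(n) // 2 - 1)   # (3.3): |D_9^FC|  -> 29171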
3.4 The H-series.
As in the B-series, the central commutativity classes are the singletons formed by each
of the alternating fs; tg-words. In particular, the value of C 0 (x) is identical to its B-series
version. The words that are compatible with spines of the form (l; f1g) are those that
begin with s (and have at least two occurrences of s), and (t; s; t; s); thus c
3. The words compatible with spines of the form f1; l \Gamma 1g are those
that both begin and end with s and have at least four occurrences of s; i.e., c
4. Thus we have
After some simplifications, Theorem 2.6 yields
Extracting the coefficient of x
3: (3.4)
3.5 The F -series.
In this case, we can select a canonical representative s 2 S from each central commutativity
class by insisting that whenever s and u are adjacent in s, u precedes s. Any such
word has a unique factorization l each being
words consisting of an initial s followed by a ft; ug-word. In fact, given our conventions,
we must have
with allowed only if We also cannot have (s; t; u) preceded by (u), (t; u),
or (s; t; u); otherwise, some member of the commutativity class of s contains the forbidden
subword (u; t; u). Conversely, any word meeting these specifications is the canonical
representative of some central commutativity class. The language formed by these words
therefore consists of
together with the exceptional cases f(u); (t; u); (u; s); (t; u; s)g. Hence
Turning now to C 1 (x), note that the central commutativity classes that are compatible
with a spine of the form (l; f1g) are those for which the first two occurrences of s do not
participate in an occurrence of the subwords (s; t; s; t), or (t; s; t; s). If s occurs three or
more times, this requires ( ) to be the first factor in (3.5), followed by an occurrence of
(s; t; u; s; t). Hence, the canonical representatives compatible with (l; f1g) consist of
and four additional cases with l = 2: f(s; t; s); (u; s; t; s); (s; t; s; u); (t; u; s; t; s)g. It follows
that c
l-3
c l;1 x
To determine C 2 (x), note first that (s; t; u; s; t; s) is the unique canonical representative
compatible with the spine (3; f1; 2g). For the spines (l; f1; l \Gamma 1g) with l - 4, compatibility
requires (s) to be the last factor in (3.6), and it must be preceded by (s; t; u; s; t). Hence
l-3
c l;2 x
After simplifying the generating function provided by Theorem 2.6, we obtain
While it is unlikely that there is a simple closed formula for
is interesting to note
that the Fibonacci numbers f n satisfy
f 2n x
f 3n x
so when the coefficient of x n\Gamma2 is extracted in (3.7), we obtain
3.6 The E-series.
We claim that there is a unique member of each central commutativity class (in fact,
any commutativity class in S ) with the property that (s; u), do not occur
as subwords. To see this, note first that the set of left members of these pairs is disjoint
from the set of right members. Secondly, these pairs are precisely the set of commuting
generators of W . Hence, for any pair of words that differ by the interchange of two
adjacent commuting generators, one member of the pair can be viewed as a "reduction"
of the other, in the sense that the set of positions where u and t occur are farther to
the left. Furthermore, since the set of instances of the forbidden pairs in any given word
are pairwise disjoint, it follows by induction that any sequence of reductions eventually
terminates with the same word, proving the claim.
Let L denote the formal language over the alphabet S formed by the canonical representatives
(in the sense defined above) of the central commutativity classes. Given any formal
language K over S, we will write K(x) for the generating function obtained by assigning
the weight x l to each s 2 K for which s occurs l times. Note that by this convention, we
have
Any word s 2 S has a unique factorization
l each being words consisting of an initial s followed by a ft; t 0 ; ug-word. For
membership in L, every subword of s not containing s must be a member of
the set of canonical representative for the fully commutative members of the subgroup
generated by ft; t 0 ; ug. When s is prepended to these words, only six remain canonical:
Thus we have
For each e 2 E, let L e
denote the set of s 2 L for which the initial factor s 0 is e. If
or deletion of the initial s in s yields a member
of L e
for some e 2 f(t); (t; conversely. In terms of generating
functions, we have
Similarly, deletion of s from the second position defines a bijection from L (u) \Gamma f(u); (u; s)g
to so we have
Combining these two decompositions, we obtain
Now consider the language and the refinements K i
consisting of those nonvoid members of K whose initial factor is a i . Since the result of
appending a to any s 2 L remains in L if and only if s does not already end in s,
it follows that L ( g. Similarly, we have
so (3.8) can be rewritten in the form
For the commutativity classes of
a 2 a 3 ; a 3 a 4 a 3 ; a i a
each have representatives in which one or more of the subwords (t; s;
and (t; u; t) appear, and hence cannot be central. Conversely, as a subset of fa
membership in K is characterized by avoidance of the subwords listed above. It follows
that K
Solving this recursive description of the languages K i (essentially a computation in the
ring of formal power series in noncommuting variables a
a 4 a 3 ; a 2 a 4 a 3 a 4 g \Delta f( ); a 2 ; a 2 a 4 a 3 a 5 g. Thus
and hence (3.9) implies
The central commutativity classes compatible with spines of the form (l; f1g) are those
for which the first two occurrences of s do not participate in an occurrence of the subwords
These correspond to the members of L for which the first occurrence
of one of the factors a i is either a 5 or a 6 , followed by at least one more occurrence of
. If a 5 is the first factor, the possibilities are limited to f( ); (u); (t; u)ga 5 a 1 ,
since a 5 can be followed only by a 1 . If the first factor is a 6 , then the choices consist of
the members of K 6 f( ); a 1 g other than a 6 , since no nonvoid member of E can precede a 6 .
Hence, the language of canonical representatives compatible with the spines (l; f1g) is
In particular, (s; t; t are the
members compatible with the spine (2; f1g), so c 5. Hence, using the decomposition
of K 6 determined above, we obtain
The canonical representatives of the central commutativity classes compatible with
spines of the form (l; must have a factorization in which there are at least three
occurrences of the words a i , the first and penultimate of these being a 5 or a 6 . Since a 6
cannot be preceded by any of the factors a i , a 5 must be the penultimate factor. Since
a 5 can be followed only by a 1 , the first factor must therefore be a 6 , there is no non-void
member of E preceding a 6 , and the last factor must be a 1 . From the above decompositions
of K 6 and K
that the language formed by the members of L that start with
a 6 and terminate with a 5 a 1 is
a 6 \Delta fa 2 a 4 ; a 2 a 4 a 3 ; a 2 a 4 a 3 a 4 g \Delta a 2 a 4 a 3 a 5 a
and therefore
Combining our expressions for C i the generating function provided by
Theorem 2.6 can be simplified to the form
4. Fully Commutative Involutions
We will say that a commutativity class C is palindromic if it includes the reverse of
some (equivalently, all) of its members. A fully commutative w 2 W is an involution if
and only if R(w) is palindromic.
In the following, we will adopt the convention that if X is a set of commutativity classes,
then -
X denotes the set of palindromic members of X. Similarly, -
W and -
W FC shall denote
the set of involutions in W and W FC , respectively.
4.1 The generic generating function.
Consider the enumeration of fully commutative involutions in a series of Coxeter groups
of the type considered in Section 2. It is clear that w 2 W FC
n is an involution
if and only if its branch and central portions are palindromic. Thus by Lemma 2.1,
determining the cardinality of W FC
n can be split into two subproblems: enumerating -
Bn (oe)
(the palindromic branch classes) and -
CW (oe) (the palindromic central classes).
For integers n; l - 0, we define -
e.
Lemma 4.1. We have
Bn (l; ?)
Bn (l; f1g)
Proof. Following the proof of Lemma 2.3, for
n;l denote the cardinality
of -
Bn (oe) for respectively. Recall that the occurrences
of s 1 and s 2 must be interlaced in any representative of B 2 Bn (l; ?), and that when we
restrict B to fs ng (and shift indices), we obtain a member of
denotes the number of occurrences of s 2 . To be palindromic, it is therefore necessary and
sufficient that the fs ng-restriction of B is palindromic, and that l
(or 0, if l = 0). This yields the recurrence
It is easy to verify that -
satisfies the same recurrence and initial conditions.
For spines of the form oe = (l; f1g), it is clear that there can be no palindromic classes
since for l ? 2, there must be an occurrence of s 2 between the last two
occurrences of s 1 , but not for the first two. Assuming l = 2, the bijection provided in the
proof of Lemma 2.3 preserves palindromicity, and thus proves the recurrence
It is routine to check that the claimed formula for -
n;2 satisfies the same recurrence and
initial conditions.
For the left and right sides of any palindromic B 2 Bn (oe) must be
both open or both closed, in the sense defined in the proof of Lemma 2.3. Furthermore, a
branch class with this property is palindromic if and only if its restriction to fs ng
is palindromic, so the bijection provided in Lemma 2.3 for this case yields
and -
n;2 . Once again, it is routine to check that the claimed formula for -
n;l
satisfies the same recurrence and initial conditions.
Lemma 4.2. We have
Proof. We have
We can interpret F l;j (x) as the generating function for sequences of votes in an election
in which A defeats B by l votes. Such sequences can be uniquely factored by cutting
the sequence after the last moment when B trails A by i votes, 1. The
first factor consists of an arbitrary sequence for a tie vote, which has generating function
1=
and the remaining l factors each consist of a vote for A, followed by a
"ballot sequence" for a tie vote (cf. Section 2.3), which has generating function xR(x 2 ).
Turning now to the palindromic central commutativity classes, let us define
and associated generating functions
l-0
l-3
Theorem 4.3. If W is one of the Coxeter groups displayed in Figure 5, then
Proof. As noted previously, w 2 W FC is an involution if and only if the central and
branch portions of w are palindromic. Successive applications of Lemmas 2.1, 2.2, and 4.1
therefore yield
oe
Bn (oe)
l-0
l-3
n;l
l-0
l-3
The corresponding generating function thus takes the form
l-0
l-3
-c l;2
using Lemma 4.2.
As we shall see below, both \bar{C}_0(x) and \bar{C}_2(x) are rational, so the generating series for
|\bar{W}_n^{FC}| belongs to the algebraic function field Q(x, R(x^2)).
4.2 The A-series.
In this case, we have \bar{C}_0(x) = 1 + x and \bar{C}_2(x) = 0, since there are only two central
commutativity classes (namely, those of ( ) and (s)), and both are palindromic. Hence
Theorem 4.3 yields the generating function (4.1).
Either by extracting the coefficient of x^n, or more directly from (4.1), we obtain
|\bar{A}_n^{FC}| = \binom{n+1}{\lfloor (n+1)/2 \rfloor}.   (4.2)
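Since A_n is the symmetric group of degree n+1, the closed form (4.2) (itself a reconstruction
of a damaged display) can be checked by brute force against the 321-avoiding involutions of
S_{n+1}. The sketch below (Python; names ours) does this for small n; for n = 9 the binomial
coefficient gives 252, matching the A_n entry of Table 2.

from itertools import permutations
from math import comb

def is_involution(p):
    return all(p[p[i] - 1] == i + 1 for i in range(len(p)))

def has_321(p):
    m = len(p)
    return any(p[i] > p[j] > p[k]
               for i in range(m) for j in range(i + 1, m) for k in range(j + 1, m))

for n in range(1, 7):
    m = n + 1
    count = sum(1 for p in permutations(range(1, m + 1))
                if is_involution(p) and not has_321(p))
    assert count == comb(m, m // 2)

print(comb(10, 5))   # n = 9: gives 252, the A_n entry in Table 2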
4.3 The B-series.
In this case, the central commutativity classes are singletons in which the occurrences
of s and t alternate. For each l - 0, there are two such words that are palindromic and
have l occurrences of s. Among these, (s; t; s) is the only one that is compatible with a
spine (l; A) with A 6= ?. Hence -
Theorem 4.3 implies
Extracting the coefficient of x
dn=2e
4.4 The D-series.
In this case, the palindromic central classes are represented by the odd-length subwords
of (s; t; s; t middle term is t or t 0 , together with ( ), (s), (t; t 0 ), and
In particular, leaving aside (s; t; t there are exactly four such words with l
occurrences of s for each even l - 0, so we have
Also -
is the only representative compatible with a spine of
the form (l; with A 6= ?. After simplifying the expression in Theorem 4.3, we obtain
Extracting the coefficient of x n\Gamma2 yields
(n+1)=2
4.5 The H-series.
The palindromic central classes in this case are the same as those for the B-series; the
only difference is that those corresponding to hs; ti 7 are now compatible with
spines of the form (l; f1; l \Gamma 1g) for l - 4. Thus we have
The generating function provided by Theorem 4.3 is therefore
and hence fi fi -
4.6 The F -series.
Recall that in Section 3.5, we selected a set of canonical representatives for the central
commutativity classes by forbidding the subword (s; u). If s is one such representative,
let s denote the canonical representative obtained by reversing s and then reversing each
offending (s; u)-subword.
If s is the canonical representative of a palindromic class (i.e.,
else s has a unique factorization fitting one of the forms
a
where a is itself a canonical representative for some central commutativity class. Con-
versely, any canonical representative ending in (s) can be uniquely factored into one of
the two forms a \Delta (s) or a \Delta (u; s), and the corresponding word obtained by appending a
remains central. Similarly, any canonical representative ending with (t) but not (u; t) or
factored into the form a \Delta (t), remains central when a is appended.
Now from (3.5), the language of canonical representatives ending in (s) consists of the
exceptional set f(u; s); (t; u; s)g, together with
and the language of representatives ending with (t) but not (u; t) or (u; s; t) is
Including the exceptional cases ( ) and (s), this yields
The unique palindromic classes compatible with the spines (2; f1g) and (3; f1; 2g) are
represented by (s; t; s) and (s; t; u; s; t; s). For the spines
recall from Section 3.5 that a canonical representative compatible with oe must begin with
(s; t; u; s; t) and end with (t; u; s; t; s). Selecting the portions of (4.6) and (4.7) that begin
with (s; t; u; s; t) yields the languages
so we have
Simplification of the generating series provided by Theorem 4.3 yields
F FC
The coefficients can be expressed in terms of the Fibonacci numbers as follows:
F FC
F FC
4.7 The E-series.
In Section 3.6, we selected a canonical representative s for each central commutativity
class. As in the previous section, we let s denote the canonical representative for the
commutativity class of the reverse of s.
If s 2 S is a representative of any palindromic commutativity class, then the set of
generators appearing an odd number of times in s must commute pairwise. Indeed, the
"middle" occurrence of one generator would otherwise precede the "middle" occurrence
of some other generator in every member of the commutativity class. Aside from the
exceptional cases ( ), (u), and (which cannot be followed and preceded by the same
member of S and remain central), it follows that every central palindromic class has a
unique representative fitting one of the forms
a (t) a; a
where a is the canonical representative of some central commutativity class. However,
we cannot assert that the above representatives are themselves canonical; for example,
if then a (u; t 0 ) a is a representative of a central palindromic class, but the
canonical representative of this class is (t; u; s; t
For the representatives whose middle factor is
s must be the first term of a, assuming that a is nonvoid. Furthermore, if we prepend
an initial s (or s; t, in the case of (u; t 0 )), the resulting words (s; t) a, (s;
and (s; t; u; t 0 ) a are (in the notation of Section 3.6) members of the formal languages
respectively. Conversely, any member
of these languages arises in this fashion.
For a representative whose middle factor is (u; s), if we prepend (s; t; t 0 ) to (u; s) a,
we obtain a member of a central commutativity class whose canonical representative is
hence a member of K 6 f(); (s)g. Conversely, any member of K 6 f(); (s)g
other than a way.
Collecting the contributions of the five types of palindromic central classes, along with
the exceptional cases f( ); (u); (s)g, we obtain
For the spine there is a unique oe-compatible central class that is palin-
dromic; namely, the class of (s; t; t 0 ; s). For the spines oe of the form (l; f1; l \Gamma 1g), recall
from (3.10) that the canonical representatives of the oe-compatible classes all begin with
a 6 a 2 a 4 and end with a 3 a 5 a 1 . It follows that for a palindromic central class represented
by a word of the form (4.9) to be compatible with oe, it is necessary and sufficient that a
end with a 3 a 5 a 1 . Using the decompositions obtained in Section 3.6, we find that
a 4 a 3 ; a 2 a 4 a 3 a 4 g \Delta a 2 a 4 a 3 a 5 ;
a 4 a 3 ; a 2 a 4 a 3 a 4 g \Delta a 2 a 4 a 3 a 5 ;
a 6 \Delta fa 2 a 4 ; a 2 a 4 a 3 ; a 2 a 4 a 3 a 4 g \Delta a 2 a 4 a 3 a 5
are the respective portions of K 2 , K 4 , and K 6 that end with a 3 a 5 ; there are no such words
in K 5 . It follows that
The generating function provided by Theorem 4.3 can be simplified to the form
5. Asymptotics
Given the lack of simple expressions for the number of fully commutative members of
En and Fn , it is natural to consider asymptotic formulas.
Theorem 5.1. We have
(a)
1:466 is the real root of x
(b)
1:618 is the largest root of x
Proof. Consider the generating function
Theorem 2.6.
In the case of Fn , we see from (3.7) that the singularities of G(x) consist of a branch cut
at together with simple poles at and the zeroes of
. The latter are (respectively) f1=fl; \Gammafl g,
5)=2 denotes the golden ratio. The smallest of these (in absolute value)
is
0:236, a zero of In particular, since 1=fl 3 ! 1=4, the asymptotic
behavior of
fi fi is governed by the local behavior of G(x) at specifically,
since there is a simple pole at
using (3.7), together with the relations
In the case of En , we see from (3.11) that the singularities of G(x) consist of a branch
cut at together with simple poles at and the zeroes of
These polynomials are related by the fact
that if ff is any zero of is the minimal polynomial of
is the minimal polynomial of ff=(1+ ff) 2 . (The fact that
such a simple relationship exists is not coincidental; see Remark 5.3 below.) The smallest
of the nine zeroes of these polynomials (in absolute value) is
0:682 is the real zero of Equivalently, we have
1=ff is the real root of x 1=4, the asymptotic behavior of
is once again governed by the local behavior of G(x) near a simple pole. In this case, we
obtain
using (3.11) and the fact that
Remark 5.2. For the sake of completeness, it is natural also to consider the asymptotic
number of fully commutative elements in An , Bn , Dn , and Hn . Given the explicit formulas
(3.1), (3.2), (3.3), and (3.4), it is easily established that
using Stirling's formula. In each of these cases, the dominant singularity in the corresponding
generating function is the branch cut at x = 1/4.
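The displayed estimates of Remark 5.2 were lost in extraction. As a small illustration of the
4^n growth coming from the branch point at x = 1/4, the sketch below (Python; names ours)
checks the standard asymptotic C_m ~ 4^m / (\sqrt{\pi}\, m^{3/2}), which by (3.1) governs
|A_n^{FC}| with m = n+1.

from math import comb, pi, sqrt

def catalan(n):
    return comb(2 * n, n) // (n + 1)

for n in (10, 50, 200):
    m = n + 1
    ratio = catalan(m) * sqrt(pi) * m ** 1.5 / 4 ** m
    print(n, ratio)   # the ratio approaches 1 as n grows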
Remark 5.3. If \alpha is a pole of f(x), then \alpha/(1+\alpha) is a pole of f(x/(1-x)) and
\alpha/(1+\alpha)^2 is a pole (of some branch) of f(R(x) - 1). On the other hand, from Theorem 2.6,
we see that aside from the branch cut at a pole at the singularities
of
are limited to those of C 2 (x), C
Thus, unless there is unexpected cancellation, for each pole ff
of C 2 (x), there will be a triple of poles at ff=(1 in G(x).
Now consider the asymptotic enumeration of fully commutative involutions. Again,
given the explicit formulas (4.2), (4.3), (4.4), and (4.5), it is routine to show that
A FC
In the following, fi and fl retain their meanings from Theorem 5.1.
Theorem 5.4. We have
(a)
(b)
(c)
F FC
(d)
F FC
Proof. Consider the generating series -
Theorem 4.3.
In the case of Fn , we see from (4.8) that the singularities of -
G(x) consist of branch cuts
at together with simple poles at (the zeroes of
(the zeroes of In absolute value, the
smallest of these occur at that the asymptotic
behavior of
F FC
fi fi is determined by the local behavior of -
G(x) at
specifically, we have
F FC
F FC
Using (4.8) and the fact that Q(fl \Gamma3=2
and a similar calculation (details omitted) yields
In the case of En , we see from (4.10) that the singularities of -
G(x) consist of branch
cuts at together with simple poles at
and the square roots of
the zeroes of Continuing the notation from the
proof of Theorem 5.1, the poles occurring closest to the origin are at
2). Thus we have
Using (4.10) and the fact that Q(ffi 1=2
and a similar calculation can be used to determine c \Gamma ; we omit the details.
Appendix
n    A_n      B_n      D_n      E_n      F_n       H_n
9    16796    53481    29171    44199    153584    182720
Table 1: The number of fully commutative elements. 2

n    A_n     B_n     D_n     E_n     F_n     H_n
9    252     637     381     443     968     1014
Table 2: The number of fully commutative involutions.
2 The parenthetical entries correspond to cases in which the group in question is either reducible or
isomorphic to a group listed elsewhere.
--R
Some combinatorial properties of Schubert polynomials
"Groupes et Alg'ebres de Lie, Chp. IV-VI,"
"Advanced Combinatorics,"
"A Hecke Algebra Quotient and Properties of Commutative Elements of a Weyl Group,"
Structure of a Hecke algebra quotient
"Combinatorial Enumeration,"
"Modular Representations of Hecke Algebras and Related Algebras,"
"Reflection Groups and Coxeter Groups,"
On the fully commutative elements of Coxeter groups
Some combinatorial aspects of reduced words in finite Coxeter groups
--TR
--CTR
Sara C. Billey , Gregory S. Warrington, Kazhdan-Lusztig Polynomials for 321-Hexagon-Avoiding Permutations, Journal of Algebraic Combinatorics: An International Journal, v.13 n.2, p.111-136, March 2001 | coxeter group;braid relation;reduced word |
292363 | An Iterative Perturbation Method for the Pressure Equation in the Simulation of Miscible Displacement in Porous Media. | The miscible displacement problem in porous media is modeled by a nonlinear coupled system of two partial differential equations: the pressure-velocity equation and the concentration equation. An iterative perturbation procedure is proposed and analyzed for the pressure-velocity equation, which is capable of producing as accurate a velocity approximation as the mixed finite element method, and which requires the solution of symmetric positive definite linear systems. Only the velocity variable is involved in the linear systems, and the pressure variable is obtained by substitution. Trivially applying perturbation methods can only give an error $O(\epsilon)$, while our iterative scheme can improve the error to $O(\epsilon^m)$ at the $m$th iteration level, where $\epsilon$ is a small positive number. Thus the convergence rate of our iterative procedure is $O(\epsilon)$, and consequently a small number of iterations is required. Theoretical convergence analysis and numerical experiments are presented to show the efficiency and accuracy of our method. | Introduction
Miscible displacement occurs, for example, in the tertiary oil-recovery process which can enhance
hydrocarbon recovery in the petroleum reservoir. This process involves the injection of a solvent
at injection wells with the intention of displacing resident oil to production wells. The resident
oil may have been left behind after primary production by reservoir pressure and secondary
production by waterflooding. Since the tertiary process requires expensive chemicals and the
performance of the displacement is not guaranteed, its numerical simulation plays an important
role in determining whether enough additional oil is recovered to make the expense worthwhile
and in optimizing the recovery process of hydrocarbon.
Department of Mathematics/Institute of Applied Mathematics, University of British Columbia, Vancouver,
British Columbia, Canada V6T 1Z2. E-mail address: [email protected]
y Department of Mathematics, Wayne State University, Detroit, Michigan 48202, USA. E-mail address:
[email protected] or [email protected]
Mathematically, miscible displacement in porous media is modeled by a nonlinear coupled
system of two partial differential equations with appropriate boundary and initial conditions.
The pressure-velocity equation is elliptic, while the concentration equation is parabolic but
normally convection-dominated. The concentration equation is derived from the conservation
of mass which involves the Darcy velocity of the fluid mixture, but the pressure variable does
not appear in the concentration equation. Thus a good approximation of the concentration
equation requires accurate solution for the velocity variable. Mixed finite element methods
have been applied [9, 10, 11, 15, 16, 25] to the pressure-velocity equation, which can yield the
velocity one order more accurate than the finite difference or element methods. However, the
finite-dimensional spaces for velocity and pressure need to satisfy the Babuska-Brezzi condition,
and the resulting linear system has a nonpositive definite coefficient matrix. Besides, the number
of degrees of freedom in the linear system doubles that of finite difference or element methods.
Thus efforts have been made to improve the performance of mixed finite element methods
[5, 18, 12].
The purpose of this paper is to propose and analyze an iterative perturbation method for the
pressure-velocity equation, which is capable of producing as accurate velocity approximation as
the mixed finite element method, and requires the solution of symmetric positive definite linear
systems. Only the velocity variable is involved in the linear systems and the pressure variable is
obtained by substitution without solving any linear systems at each iteration level. Aside from
this, the finite element spaces for our velocity and pressure variables need not satisfy the so-called
Babuska-Brezzi condition. The iterative perturbation method is a variant of the augmented
Lagrangian method [18, 5, 6] applied directly to the continuous differential problem, which can
also be viewed as the sequential regularization method [2, 20, 3] applied to stationary problems.
Unlike the augmented Lagrangian method using spectral analysis to discuss the convergence and
its rate for the discretized problems, we use the method of asymptotic expansion directly for
the differential problem following the idea in the sequential regularization method [2, 20]. The
asymptotic method is easier to use for more general and more complicated problems than spectral
analysis since the former applies readily to non-symmetric and infinite-dimensional operators.
Trivially applying perturbation methods (e.g. penalty methods) can only give an error O(\epsilon),
where \epsilon is a small positive perturbation parameter. We will prove that our iterative scheme can
improve the error to O(\epsilon^m) at the m-th iteration level. In other words, the convergence rate
of our iterative procedure is O(\epsilon). Theoretical convergence analysis and numerical experiments
show that the number of iterations is extremely small, usually two or three.
The organization of the rest of the paper is as follows. In Section 2, we describe our iterative
perturbation method for the time-discretized problem at the partial differential equation level.
In Section 3 we show its convergence and the rate of convergence. Then in Section 4 we give a
fully-discretized version for the iterative perturbation method for the pressure-velocity equation
and a Galerkin method for the concentration equation. Finally, in Section 5 we present numerical
examples to demonstrate the effectiveness and accuracy of our method, and in Section 6 we give
some remarks and future directions.
2 The Iterative Perturbation Method
Consider the miscible displacement of one incompressible fluid by another in a porous reservoir
\Omega \subset R^2 over a time period J = [0, T]. Let p and u denote the pressure and Darcy velocity
of the fluid mixture, and c the concentration of the invading fluid. Then the mathematical
model is a coupled nonlinear system of partial differential equations
u = -a(x, c) (\nabla p - \gamma(x, c) \nabla d),   (x, t) \in \Omega \times J,   (1)
div u = q,   (x, t) \in \Omega \times J,   (2)
\phi \frac{\partial c}{\partial t} + u \cdot \nabla c - div(D \nabla c) = g(x, t, c),   (x, t) \in \Omega \times J,   (3)
with the boundary conditions
u \cdot \nu = 0   on \partial\Omega \times J,   (4)
D \nabla c \cdot \nu = 0   on \partial\Omega \times J,   (5)
and initial condition
c(x, 0) = c_0(x),   x \in \Omega,   (6)
where a(x, c) is the mobility tensor of the fluid mixture, \gamma(x, c) and d(x) are the gravity
coefficient and vertical coordinate, q is the imposed external rate of flow, \phi(x) is the porosity of the rock,
D(x, t) is the coefficient of molecular diffusion and mechanical dispersion of one fluid into the
other, g(x, t, c) is a known linear function of c representing sources, and \nu is the exterior
normal to the boundary \partial\Omega.
assume that the mean value of q is zero and for uniqueness we impose that p have mean value
zero.
In recent years much attention has been devoted to the numerical simulation of this problem.
In this paper we are interested in solving the velocity-pressure equation (1)-(2) using an iterative
perturbation method (IPM) for the time-discretized problem. The iterative perturbation method
considered herein is a variant of augmented Lagrangian methods [18, 5] and sequential
regularization methods [2, 20]. We will analyze the IPM using the technique presented in [2, 20],
since the analysis gives the convergence of the iterative procedure and its convergence rate at
the same time, and is applicable to non-symmetric problems.
After a time discretization, we obtain the following system for u and p at current time step:
u = -a(x, \tilde{c}) (\nabla p - \gamma(x, \tilde{c}) \nabla d),   (x, t) \in \Omega \times J,   (7)
div u = q,   (x, t) \in \Omega \times J,   (8)
where \tilde{c} is an approximation of c assumed to be known here. Taking \epsilon to be a small positive
number, we replace the system (1), (2), (4) by the following iterative perturbation method:
for s = 1, 2, ..., find u^s and p^s such that
a(\tilde{c})^{-1} u^s - \frac{1}{\epsilon} \nabla(div u^s) = \gamma(\tilde{c}) \nabla d - \nabla p^{s-1} - \frac{1}{\epsilon} \nabla q,   (x, t) \in \Omega \times J,   (9)
u^s \cdot \nu = 0   on \partial\Omega \times J,   (10)
and
p^s = p^{s-1} + \frac{1}{\epsilon} (q - div u^s),   (x, t) \in \Omega \times J,   (11)
where the initial guess p^0 is required to satisfy the zero mean value property
\int_\Omega p^0 \, dx = 0; then each
p^s has mean value zero by (11) and (10). We note that, if we take p^0 \equiv 0, each iteration is a
kind of penalty method (probably without its meaning in the optimization context).
Our iterative perturbation method (IPM) has the following salient features:
1. We solve a small system (9)-(10) for velocity u, and get pressure p from (11) directly
(a discrete sketch of this update is given after this list). We
will show that the accuracy of such a method is O(\epsilon^s) at the s-th iteration level. Note
that the system (9)-(10) is well-posed since, unlike the usual penalty method, we need not
take \epsilon very small.
2. The velocity-pressure equation was recently solved by the mixed finite element method
[9, 10, 11, 15, 16, 25], in which the resulting linear system has a non-positive definite
coefficient matrix, and the discrete spaces for u and p need to satisfy the Babuska-Brezzi
condition. While in the IPM procedure, the system (9)-(10) leads to a symmetric positive
definite coefficient matrix. Since u and p are obtained from equations (9)-(10) and (11)
separately, compatibility conditions between the discrete spaces for u and p are not needed.
3. When the standard finite element method [7, 8, 13, 14, 17, 23, 24] is applied for the pressure
equation, the velocity need to be obtained by finite differencing the pressure variable,
which gives less accuracy. The velocity in our method is obtained directly without finite
differencing. Note that the accuracy of the approximate velocity is important, since the
concentration equation involves the velocity only.
4. We will see that in \S\S 4 and 5 the discrete version of our IPM scheme (9)-(11) gives the
same accuracy for the velocity as the mixed method and requires the solution of well-conditioned
linear systems like Galerkin methods. Note that our numerical experiments
will show that a few (usually two or three) iterations are enough for the IPM scheme.
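To make the structure of the iteration concrete, here is a minimal linear-algebra sketch
(Python/NumPy) of the discrete analogue of (9)-(11): an iterated-penalty / augmented-Lagrangian
update for an abstract saddle-point system, in which only a symmetric positive definite
velocity system is solved and the pressure is updated by substitution. The matrices and
right-hand sides below are random placeholders, not the finite element matrices of Section 4,
and the exact form of the update reflects our reading of the scheme.

import numpy as np

rng = np.random.default_rng(0)
n_u, n_p = 12, 5                        # toy numbers of velocity / pressure unknowns
A = rng.standard_normal((n_u, n_u))
A = A @ A.T + n_u * np.eye(n_u)         # SPD matrix playing the role of a(c)^{-1}
B = rng.standard_normal((n_p, n_u))     # full-rank matrix playing the role of the divergence
f = rng.standard_normal(n_u)
q = rng.standard_normal(n_p)

# reference solution of the saddle-point system  A u + B^T p = f,  B u = q
K = np.block([[A, B.T], [B, np.zeros((n_p, n_p))]])
sol = np.linalg.solve(K, np.concatenate([f, q]))
u_ref, p_ref = sol[:n_u], sol[n_u:]

eps = 0.01
p = np.zeros(n_p)                       # initial pressure guess p^0
for s in range(1, 5):
    # symmetric positive definite velocity-only solve (analogue of (9)-(10))
    u = np.linalg.solve(A + B.T @ B / eps, f - B.T @ p + B.T @ q / eps)
    # pressure update by substitution, no linear solve (analogue of (11))
    p = p + (B @ u - q) / eps
    print(s, np.linalg.norm(u - u_ref), np.linalg.norm(p - p_ref))

In this toy setting the printed errors drop by roughly a factor proportional to \epsilon at each
step, which is the O(\epsilon^s) behaviour claimed for the IPM and explains why two or three
iterations suffice in practice.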
We now describe some notation to be used throughout the rest of the paper. As usual, we
use kuk p and kukH m to denote the standard norms in the Sobolev spaces L
respectively, where 1 the subscript p in the
norm notation when 2. In addition, we define the divergence space
with the following two norms:
We shall denote by (\Delta; \Delta) and h\Delta; \Deltai the inner product and duality
in\Omega and on \Gamma, respectively.
For a normed linear space B with norm k \Delta kB and a sufficiently regular function
we define
ff
2 and kgk L 1
If [ff; we simplify the notation as kgk respectively. We shall also
denote by M and K generic constants, which may be different at different occurrences. We use
M for continuous problems and K for discrete problems.
3 Convergence Analysis
Before stating our convergence theorem, we first give two lemmas.
Lemma 1 There exists a unique solution \{u, p\}, where u \in H(div) and p \in H^1(\Omega), to the
problem
2\Omega \Theta J;
div
2\Omega \Theta J;
where p and q have mean value zero. Furthermore, there exists a constant M such that the
following estimate holds:
Proof: Substituting (12) into (13) and (14), we see that p satisfies a Poisson equation with
a Neumann boundary condition prescribing a ∂p/∂ν on Γ.
Noting the zero mean value of p and using the standard results of the Poisson equation, we
obtain the uniqueness and existence of p and the estimate
An application of the trace inequality [19] leads to the inequality (15) for p. The existence,
uniqueness and the estimate (15) for u then follow directly.
Writing (9)-(11) as
2\Omega \Theta J;
div
2\Omega \Theta J;
we can obtain a similar estimate for u s and p s from Lemma 1:
Lemma 2 For the solution of the following system:
a(~c)
2\Omega \Theta J;
we have the stability estimate:
Proof: Multiplying both sides of the first equation by u and applying Green's formula
leads to (20).
We are now ready to describe our convergence theorem and prove it using the method of
asymptotic expansion.
Theorem 1. Let {u, p} be the solution of system (7), (8), and (4), and let {u^s, p^s} be the solution of
(9)-(11). Then there exists a constant M, independent of s and ε, such that
Proof: We first consider the case
in (9). Comparing the coefficients of like powers of ffl, we thus have
5div
5div
where the boundary condition (10). The equation (22) has infinitely
many solutions in general. We should choose u 10 not only to satisfy (22), but also to ensure
that the solution of (23) exists. A choice of u 10 is the exact solution u of (7), (8), and (4), i.e.
where p is the exact solution of (7), (8), and (4), and satisfying zero mean value condition.
Hence, from Lemma 1, we have has the following form
5 div u
The equation (28) suggests that we choose u 11 and a corresponding p 11 (with zero mean value)
to satisfy
div
According to Lemma 1, u 11 and p 11 exist and have the bound:
Generally, assuming that u_{1(i-1)} and p_{1(i-1)} have been found for i ≥ 2, we choose u_{1i} and p_{1i}
(with zero mean value) satisfying
div
Applying Lemma 1 and (32), we obtain that all u 1i and p 1i exist uniquely and
Next we estimate the remainder of the asymptotic expansion up to the m-th power of ε. Denote
the partial sum and the remainder by
Then, from (25)-(27), (29)-(31), and (33)-(35), w 1m satisfies, for m - 1,
Then, using (32), (36) and Lemma 2, we obtain
Noting that using (36), we thus have proved the estimate (21) for u 1 . On the other
hand, (11) can be rewritten as
div
Using (26), (30), (34) and (39), we have
. By taking m - 2, this proves (21) for
Now we look at the second iteration
Note that (40) gives us a series expansion for p 1 . Plugging the expansions of u 2 and p 1 into
and comparing like powers of ffl we obtain
div
5 div u
5 div u
Again, u_{20} is required to satisfy the boundary condition (10). As in the case s = 1, we
choose u_{20} ≡ u and thus have
5 div u
This suggests that we construct u 21 and p 21 to satisfy
div
where p 21 has mean value zero. Obviously u is the solution of (44)-(46).
In general, similar to the case of (with zero mean value) to satisfy
div
m. By the same procedure as in the case of we obtain the error equations
similar to (37) and (38) with an addition of a remainder term 5(p
1m ) on the right-hand
side. simulating the proof of Lemma 2 and using (40) (i.e. kp
we have
where the remainder w 2m satisfies the same estimate (39) as w 1m . Noting u 20 j u and u
we obtain (21) for u 2 . Then, using (11), (40), (45), (48) and the estimate of w 2m , we conclude
that
By taking m ≥ 3, this gives (21) for p^2, since p_{21} ≡ 0.
We can repeat this procedure, and by induction, conclude the results of the theorem.
Remark 2 It is not difficult to see that are a solution to a problem in the
form of (12)-(14). Then using the estimates (15) we may obtain a better error estimates for p:
From Theorem 1 we see that the convergence rate of our iterative scheme (9)-(11) is O(Mε)
for some constant M. This implies that the number of iterations needed to achieve a prescribed accuracy
is very small. The fast convergence of our method makes it dramatically different from penalty
methods.
4 The Galerkin Approximation
In this section, we approximate the velocity-pressure iterative perturbation scheme (9)-(10) and
the concentration equation (3) by using the standard Galerkin method.
4.1 The Approximation Scheme
\Gammag and Wg. The variational
form of (9)-(10) can be written into the following: find
such that
(a
The weak form of the concentration equation (3) reads: find
1(\Omega\Gamma such that
@t
For h_u > 0 and an integer k ≥ 0, with respect to the velocity-pressure equation, we introduce
finite element spaces W_h ⊂ W and Y_h ⊂ Y associated with a quasi-regular
triangulation of Ω into triangles or rectangles of diameter less than h_u. Similarly, we
denote by Z_h ⊂ H^1(Ω) the finite-dimensional space for the concentration equation with the grid
size h_c and approximation index l. Assume that the following approximation properties hold:
z h 2Z h
where K is a constant. The space W h can be taken to be the vector part of the Raviart-Thomas
[22] space of index k, or Brezzi-Douglas-Marini [4] space of index k + 1.
Given a partition 0 = t_0 < t_1 < ... < t_N = T of J, write Δt_n = t_{n+1} - t_n. Let {p^n, u^n, c^n} denote {p, u, c} and {P^n, U^n, C^n} its approximation at
time level t_n. We define our approximation scheme at time t_n by the following.
Step 1: Given C^n, find {U^{n+1}, P^{n+1}} as follows. Take the initial guess P^0 = 0 and compute
iteratively U^s ∈ W_h and P^s, s = 1, 2, ..., such that
(a
where q_h is a smoothed source and sink function. Let U^{n+1} = U^s and P^{n+1} = P^s.
Step 2: With U^{n+1} known, find C^{n+1} ∈ Z_h such that
\Deltat n
Note that in Step 1 of the scheme, the initial guess could be more efficiently taken as P^0 = P^n.
Our numerical experiments will show that the number of iterations can usually be taken to be
2 or 3 for a range of the perturbation parameter ε down to 10^{-5}.
For convection-dominated displacement problems, sharp fluid interfaces move along characteristic
or near-characteristic directions. Thus the modified method of characteristics [8, 11, 15,
16, 23, 24, 26] may be applied to treat the concentration equation. In this case, the accurate
Darcy velocity computed using the IPM helps to determine the characteristic direction more
precisely.
4.2
For our error analysis below we shall make use of the following elliptic projections of fu; cg :
(a
(div (R u
such that
(D(u)r(R
and q is the right-hand side function of equation (2) [9]. This - function
is chosen to assure coercivity of the projection form. Then, it can be shown that [9, 10]
ck
ck
From (56) and the definition of Y and Y h , for Y we can also find fR
We now start our error analysis by combining (52), (58) and (61) to get
(a
Then we see that
In view of (63), (65) and (68) we have
Presuming that kp
u , we thus have (for
On the other hand, following the way of getting (67) and combining (52), (11), (58) and (59),
we get
(a
It is not difficult to see that p s presuming p Hence we can find R p
satisfying (66). Since -
h we can take w 0 2 W h such that div w
according to [1, Lemma 3.3]
(assuming\Omega is a bounded, Lipschitz-continuous, and connected
domain), we have
Hence we take in (71) and obtain
In view of (66), (69) and (72), we have
Thus the error estimates (70) and (74) are reduced to bounding C \Gamma Rc. We denote that
Combining (53), (60), (62) and taking z = e n+1 as a test function we obtain the error
equation
\Deltat n
re n+1 )
\Deltat n
\Deltat n
re
We now estimate the right-hand side terms of the error equation (75). For T 1 we have by
Since
c \Deltat n kc t k 2
we see that
\Deltat
Assume that D(u) is Lipschitz [9], by (63), (64), (65), and (70) we have
c kck 2
where j is a small positive number. Obviously,
We rewrite T 5 in the following form that
re
is bounded, from (63), (64), (65) and (70) we obtain
c kck 2
Note that g is a linear function of c, we have
c kck 2
Finally, we see that the left-hand side of (75) dominates2\Deltat n
positive number.
Substituting (76)-(82) into (75) and choosing j small we obtain the following error inequality
\Deltat
where
c kck 2
Applying Gronwall's lemma to (84) we see that
\Deltat n
We are now ready to state our main results for scheme (58)-(60).
Theorem 2. Let {p, u, c} be the solution of the continuous problem (1)-(4), and {P, U, C} the solution of the
scheme (58)-(60) with s iterations at each time step. Then there exists a constant K such that, for Δt
sufficiently small,
c kck L
\Deltat n kC
c kck L
This theorem tells us that for a sufficiently small perturbation parameter ε, the error estimates
for the velocity, pressure and concentration are optimal.
Figure 1. One element with velocity on each edge and pressure at the center.
5 Numerical Experiments
In this section, we present some numerical examples to show how well our iterative scheme
performs, and how the parameter ε affects the number of iterations and accuracy. For simplicity,
we will just consider the pressure-velocity equation, since the concentration equation has been
analyzed previously [9, 10, 11, 14, 15, 16, 17, 23, 24, 25].
Consider the elliptic problem with Neumann boundary condition
div
where Ω is a square and Γ its boundary. A more general domain Ω will not present technical
problems. For simplicity, we also assume that the coefficient a is a scalar.
The approximation scheme takes the form: find U^s ∈ W_h, for s = 1, 2, ..., such that
(a
Partition the domain Ω into a set of squares of side length h. We take the space W_h to be
the vector part of the Raviart-Thomas [22] space of index 0. Thus
where P k is the set of one variable polynomials of order less than or equal to k. Consequently,
the approximate pressure P s lies in the space of piecewise constants. Partitioning the domain
into triangles or rectangles, or applying higher order approximation polynomials can be treated
analogously.
Let U^s_α denote the constant value of the flux in the positive x- or y-direction on the edge α,
α = L, R, B, T (representing left, right, bottom, and top, respectively), of each element; see Figure 1.
Let w_α denote the corresponding basis function (on the standard reference square). Applying
the trapezoidal rule to (88), we obtain an equation in which q_α is the value of q at the midpoint of edge α. Similarly, letting
the test function range over the edge basis functions and simplifying, we obtain the following linear system for each element.
R U s
Note that equation (89) has the following discrete version on each element:
From equations (91)-(94) we can easily form the element stiffness matrix. Then, assembling
all the element matrices and taking into account the boundary condition, we obtain the global stiffness
matrix. The force vector is obtained in an analogous way.
To test the convergence, the following stopping criterion will be adopted
where k is a positive integer and ‖·‖_∞ denotes the discrete L^∞ norm. When this criterion is
satisfied, the iterative process is stopped and the solution at iteration s is adopted.
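As a small illustration, and assuming that criterion (96) compares two successive iterates in the discrete maximum norm against a tolerance of the form 10^{-k} (the exact formula is not reproduced above), such a test could be coded as follows; the function name and the default value of k are our own.

import numpy as np

def stopping_test(U_new, U_old, k=4):
    """Relative change between successive iterates, measured in the discrete max norm."""
    change = np.max(np.abs(U_new - U_old))
    scale = max(np.max(np.abs(U_new)), 1.0e-30)   # guard against division by zero
    return change / scale <= 10.0 ** (-k)

Within the iteration of Step 1, the loop would exit as soon as stopping_test(U_s, U_prev) returns True.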
We now apply the iterative procedure (91)-(95) to some test problems on a Sun SPARCstation
IPC, with computation in the C++ data type double. We use the stopping criterion (96). Both
velocity and pressure errors are reported for the iterates against the exact
solutions in the L^∞ norm, although in the stopping criterion (96) errors are measured for
successive iterates only. The initial guesses are always chosen to be zero, so all relative errors are 1.00 before the
iterative procedure starts.
Example 1: Let velocity u and pressure p satisfy
div
The true solutions for velocity u and pressure p are given in terms of sin(πx) and sin(πy).
The pressure p and external flow rate div u are chosen in such a way that they both have
mean value zero.
Example 2: Let velocity u and pressure p satisfy the problem
div
1. The true solutions for
velocity u and pressure p are given by
Example 3: Let velocity u and pressure p satisfy the nonhomogeneous problem
div
y
10. The function g is chosen such that
the true solutions for velocity u and pressure p are given by
For Example 1, the results with two (uniform) grid sizes are shown in Tables 1 through 4.
Although more than 2 iterations are required for the iterative procedure to stop when ε = 10^{-1}
or 10^{-2}, the approximate velocity and pressure at iteration 2 are already accurate enough. For Example
2, the results with two grid sizes are shown in Tables 5 and 6.
The results of Example 3 are shown in Tables 7 and 8. However,
the pressure is less accurate than the velocity in this example. This might be caused by the fact
that the pressure iterates {P^s} converge to the true pressure p only up to a constant, since p has
mean value zero while the {P^s} do not.
Table 1: Numerical results for Example 1. Both velocity and pressure errors are calculated relative
to the exact solutions and shown in the L^∞ norm. Initial guesses are always chosen to be zero, so
all relative errors are 1.00 at iteration 0.
Table 2: Numerical results for Example 1. Both velocity and pressure errors are calculated relative
to the exact solutions and shown in the L^∞ norm. All runs stop after 2 iterations except for the
case ε = 10^{-6}, in which 1 iteration is required.
From Tables 1 through 8 we conclude that our iterative method performs as well as the theory
predicts. In particular, it can achieve good accuracy for velocity, while the linear systems solved
are symmetric and positive definite. Also, the computational work of our method is much smaller
than that of mixed methods, since the number of iterations required is usually very small.
Table 3: Numerical results for Example 1. Both velocity and pressure errors are calculated relative to the exact solutions and shown in the L^∞ norm.
Table 4: Numerical results for Example 1. Both velocity and pressure errors are calculated relative to the exact solutions and shown in the L^∞ norm. All runs stop after 2 iterations except for the case ε = 10^{-6}, in which 1 iteration is required.
Table 5: Numerical results for Example 2. Both velocity and pressure errors are calculated relative to the exact solutions and shown in the L^∞ norm.
Table 6: Numerical results for Example 2. Both velocity and pressure errors are calculated relative to the exact solutions and shown in the L^∞ norm.
Table 7: Numerical results for Example 3. Both velocity and pressure errors are calculated relative to the exact solutions and shown in the L^∞ norm.
Table 8: Numerical results for Example 3. Both velocity and pressure errors are calculated relative to the exact solutions and shown in the L^∞ norm.
6 Concluding Remarks
We have proposed an iterative procedure for the pressure-velocity equation in the numerical
simulation of miscible displacement in porous media. This procedure is first analyzed at the
differential level and then discretized by finite element methods. Theoretical analysis and numerical
experiments show that this procedure converges at the rate O(ε), where ε is a small
positive number. The fast convergence rate and the ease of choosing the relaxation parameter make
our iterative method different from penalty methods and Uzawa's algorithm. Indeed, our numerical
experiments show that two or three iterations are usually enough for a variety of problems and a
wide range of values of ε.
Compared with mixed finite element methods, the discrete version of our scheme can provide
the same accurate approximations for velocity and pressure, which is crucial in reservoir problems
since velocity is intimately involved in the concentration equation. However, in contrast to mixed
finite element methods, our scheme requires only the solution of symmetric and positive definite
linear systems which have a smaller number of degrees of freedom corresponding to the velocity
variable. Since our method can completely decouple the velocity and pressure variables, the
so-called Babuska-Brezzi condition is not needed in constructing the finite dimensional spaces
for velocity and pressure.
In view of the advantages of our iterative method, we can conclude that it can lead to great
savings in computer memory and small execution time of the numerical algorithm. Also, it
has the capability of effectively dealing with heterogeneous and anisotropic media in which the
permeability tensor may be non-diagonal, rapidly-varying and even discontinuous. In [27], one
of the authors conducted numerical experiments for the miscible displacement using the IPM
for the pressure-velocity equation and a modified characteristic method for the concentration
equation.
Finally, we point out that the iterative procedure presented in this paper can be applied
easily to three-dimensional problems.
Acknowledgement: The authors would like to thank Professors Uri Ascher and Jim Douglas, Jr.
for their inspiration and support in the pursuit of this research.
--R
Decomposition of vector spaces and application to the Stokes
Sequential regularization methods for higher index DAEs with constraint singularities: Linear index-2 case
Sequential regularization methods for nonlinear higher index DAEs
Mixed and Hybrid Finite Element Methods
Discontinuous upwinding and mixed finite elements for two-phase flows in reservoir simulation
On the approximation of miscible displacement in porous media by a method of characteristics combined with a mixed method
Inexact and preconditioned Uzawa algorithms for saddle point problems
The Mathematics of Reservoir Simulation
Efficient time-stepping methods for miscible displacement problems in porous media
Simulation of miscible displacement using mixed methods and a modified method of characteristics
Convergence analysis of an approximation of miscible displacement in porous media by mixed finite elements and a modified method of characteristics
Galerkin Methods for miscible displacement problems in porous media
Applications to the Numerical Solution of Boundary-Value Problems
Finite Element Methods for Navier-Stokes Equations
A sequential regularization method for time-dependent incompressible Navier-Stokes equations
Regularization methods for differential equations and their numerical solution
A mixed finite element method for second order elliptic problems
Finite elements with characteristic finite element method for a miscible displacement problem
Time stepping along characteristics with incomplete iteration for a Galerkin approximation of miscible displacement in porous media
Mixed methods with dynamic finite element spaces for miscible displacement in porous media
A characteristic mixed method with dynamic finite element space for convection-dominated diffusion problems
Numerical simulation of miscible displacement in porous media using an iterative perturbation algorithm combined with a modified method of characteristics.
--TR | miscible displacement;iterative method;perturbation method;galerkin method;flow in porous media |
292374 | A Sparse Approximate Inverse Preconditioner for Nonsymmetric Linear Systems. | This paper is concerned with a new approach to preconditioning for large, sparse linear systems. A procedure for computing an incomplete factorization of the inverse of a nonsymmetric matrix is developed, and the resulting factorized sparse approximate inverse is used as an explicit preconditioner for conjugate gradient--type methods. Some theoretical properties of the preconditioner are discussed, and numerical experiments on test matrices from the Harwell--Boeing collection and from Tim Davis's collection are presented. Our results indicate that the new preconditioner is cheaper to construct than other approximate inverse preconditioners. Furthermore, the new technique insures convergence rates of the preconditioned iteration which are comparable with those obtained with standard implicit preconditioners. | Introduction
. In this paper we consider the solution of nonsingular linear systems of the form
    Ax = b,    (1)
where the coefficient matrix A ∈ ℝ^{n×n} is large and sparse. In particular, we are concerned
with the development of preconditioners for conjugate gradient-type methods. It is well-known
that the rate of convergence of such methods for solving (1) is strongly influenced by
the spectral properties of A. It is therefore natural to try to transform the original system
into one having the same solution but more favorable spectral properties. A preconditioner
is a matrix that can be used to accomplish such a transformation. If G is a nonsingular
Dipartimento di Matematica, Universit'a di Bologna, Italy and CERFACS, 42 Ave. G. Coriolis, 31057
Toulouse Cedex, France ([email protected]). This work was supported in part by a grant under the scientific
cooperation agreement between the CNR and the Czech Academy of Sciences.
y Institute of Computer Science, Academy of Sciences of the Czech Republic, Pod vod'arenskou v-e-z'i 2,
([email protected]). The work of this author was supported in
part by grants GA CR No. 201/93/0067 and GA AS CR No. 230401 and by NSF under grant number
INT-9218024.
matrix which approximates A^{-1}, i.e. G ≈ A^{-1}, the linear system
    GAx = Gb    (2)
will have the same solution as system (1) but the convergence rate of iterative methods
applied to (2) may be much higher. Problem (2) is preconditioned from the left, but right
preconditioning is also possible. Preconditioning on the right leads to the transformed linear
system
Once the solution y of (3) has been obtained, the solution of (1) is given by
The choice between left or right preconditioning is often dictated by the choice of the
iterative method. It is also possible to use both forms of preconditioning at once (split
preconditioning), see [3] for further details.
Note that in practice it is not required to compute the matrix product GA (or AG)
explicitly, because conjugate gradient-type methods only necessitate the coefficient matrix
in the form of matrix-vector multiplies. Therefore, applying the preconditioner within a
step of a gradient-type method reduces to computing the action of G on a vector.
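To make this concrete, the toy sketch below applies an explicit approximate inverse G only through matrix-vector products, inside the simplest possible scheme, a preconditioned Richardson iteration x <- x + G(b - Ax); this is not one of the conjugate gradient-type methods considered in this paper, and the example matrices are our own.

import numpy as np

def richardson(matvec_A, apply_G, b, maxit=100, tol=1.0e-10):
    """Iteration x <- x + G*(b - A*x); the preconditioner enters only as a matvec."""
    x = np.zeros_like(b)
    for _ in range(maxit):
        r = b - matvec_A(x)
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        x = x + apply_G(r)
    return x

# Toy data: a diagonally dominant tridiagonal A and the inverse of its diagonal as G.
n = 50
A = 4.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
G = np.diag(1.0 / np.diag(A))
x = richardson(lambda v: A @ v, lambda v: G @ v, np.ones(n))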
Loosely speaking, the closer G is to the exact inverse of A, the higher the rate of
convergence of iterative methods will be. Choosing yields convergence in one
step, but of course constructing such a preconditioner is equivalent to solving the original
problem. In practice, the preconditioner G should be easily computed and applied, so that
the total time for the preconditioned iteration is less than the time for the unpreconditioned
one. Typically, the cost of applying the preconditioner at each iteration of a conjugate
gradient-type method should be of the same order as the cost of a matrix-vector multiply
involving A. For a sparse A, this implies that the preconditioner should also be sparse with
a density of nonzeros roughly of the same order as that of A.
Clearly, the effectiveness of a preconditioning strategy is strongly problem and architecture
dependent. For instance, a preconditioner which is expensive to compute may become
viable if it is to be reused many times, since in this case the initial cost of forming the
preconditioner can be amortized over several linear systems. This situation occurs, for in-
stance, when dealing with time-dependent or nonlinear problems, whose numerical solution
gives rise to long sequences of linear systems having the same coefficient matrix (or a slowly
varying one) and different right-hand sides. Furthermore, preconditioners that are very efficient
in a scalar computing environment may show poor performance on vector and parallel
machines, and conversely.
A number of preconditioning techniques have been proposed in the literature (see, e.g.,
[2],[3] and the references therein). While it is generally agreed that the construction of efficient
general-purpose preconditioners is not possible, there is still considerable interest in
developing methods which will perform well on a wide range of problems and are well-suited
for state-of-the-art computer architectures. Here we introduce a new algebraic preconditioner
based on an incomplete triangular factorization of A \Gamma1 . This paper is the natural
continuation of [8], where the focus was restricted to symmetric positive definite systems
and to the preconditioned conjugate gradient method (see also [5],[7]).
The paper is organized as follows. In §2 we give a quick overview of implicit and explicit
preconditioning techniques, considering the relative advantages as well as the limitations of
the two approaches. In §3 we summarize some recent work on the most popular approach
to approximate inverse preconditioning, based on Frobenius norm minimization. In §4 we
introduce the new incomplete inverse triangular decomposition technique and describe some
of its theoretical properties. A graph-theoretical characterization of fill-in in the inverse
triangular factorization is presented in §5. In §6 we consider the use of preconditioning
on matrices which have been reduced to block triangular form. Implementation details
and the results of numerical experiments are discussed in §§7 and 8, and some concluding
remarks and indications for future work are given in §9. Our experiments suggest that the
new preconditioner is cheaper to construct than preconditioners based on the optimization
approach. Moreover, good rates of convergence can be achieved by our preconditioner,
comparable with those insured by standard ILU-type techniques.
2. Explicit vs. implicit preconditioning. Most existing preconditioners can be
broadly classified as being either of the implicit or of the explicit kind. A preconditioner
is implicit if its application, within each step of the chosen iterative method, requires the
solution of a linear system. A nonsingular matrix M - A implicitly defines an approximate
applying G requires solving a linear system with coefficient
matrix M . Of course, M should be chosen so that solving a system with matrix M is easier
than solving the original problem (1). Perhaps the most important example is provided by
preconditioners based on an Incomplete LU (ILU) decomposition. Here
U where
L and -
U are sparse triangular matrices which approximate the exact L and U factors of
A. Applying the preconditioner requires the solution of two sparse triangular systems (the
forward and backward solves). Other notable examples of implicit preconditioners include
the ILQ, SSOR and ADI preconditioners, see [3].
In contrast, with explicit preconditioning a matrix G - A \Gamma1 is known (possibly as the
product of sparse matrices) and the preconditioning operation reduces to forming one (or
more) matrix-vector product. For instance, many polynomial preconditioners belong to this
class [37]. Other explicit preconditioners will be described in the subsequent sections.
Implicit preconditioners have been intensively studied, and they have been successfully
employed in a number of applications. In spite of this, in the last few years an increasing
amount of attention has been devoted to alternative forms of preconditioning, especially of
the explicit kind. There have been so far two main reasons for this recent trend. In the first
place, shortly after the usage of modern high-performance architectures became widespread,
it was realized that straightforward implementation of implicit preconditioning in conjugate
gradient-like methods could lead to severe degradation of the performance on the new
machines. In particular, the sparse triangular solves involved in ILU-type preconditioning
were found to be a serial bottleneck (due to the recursive nature of the computation), thus
limiting the effectiveness of this approach on vector and parallel computers. It should be
mentioned that considerable effort has been devoted to overcoming this difficulty. As a
result, for some architectures and types of problems it is possible to introduce nontrivial
parallelism and to achieve reasonably good performance in the triangular solves by means
of suitable reordering strategies (see, e.g., [1],[38],[54]). However, the triangular solves
remain the most problematic aspect of the computation, both on shared memory [33] and
distributed memory [10] computers, and for many problems the efficient application of an
implicit preconditioner in a parallel environment still represents a serious challenge.
Another drawback of implicit preconditioners of the ILU-type is the possibility of break-downs
during the incomplete factorization process, due to the occurrence of zero or exceedingly
small pivots. This situation typically arises when dealing with matrices which are
strongly unsymmetric and/or indefinite, even if pivoting is applied (see [11],[49]), and in
general it may even occur for definite problems unless A exhibits some degree of diagonal
dominance. Of course, it is always possible to safeguard the incomplete factorization process
so that it always runs to completion, producing a nonsingular preconditioner, but there
is also no guarantee that the resulting preconditioner will be of acceptable quality. Fur-
thermore, as shown in [23], there are problems for which standard ILU techniques produce
unstable incomplete factors, resulting in useless preconditioners.
Explicit preconditioning techniques, based on directly approximating A \Gamma1 , have been
developed in an attempt to avoid or mitigate such difficulties. Applying an explicit preconditioner
only requires sparse matrix-vector products, which should be easier to parallelize
than the sparse triangular solves, and in some cases the construction of the preconditioner
itself is well-suited for parallel implementation. In addition, the construction of an approximate
inverse is sometimes possible even if the matrix does not have a stable incomplete LU
decomposition. Moreover, we mention that sparse incomplete inverses are often used when
constructing approximate Schur complements (pivot blocks) for use in incomplete block
factorization and other two-level preconditioners, see [2],[3],[12],[15].
Of course, explicit preconditioners are far from being completely trouble-free. Even if
a sparse approximate inverse G is computed, care must be exercised to ensure that G is
nonsingular. For nonsymmetric problems, the same matrix G could be a good approximate
inverse if used for left preconditioning and a poor one if used for right preconditioning,
see [36, p. 96],[45, p. 66],[48]. Furthermore, explicit preconditioners are sometimes not
as effective as implicit ones at reducing the number of iterations, in the sense that there
are problems for which they require a higher number of nonzeros in order to achieve the
same rate of convergence insured by implicit preconditioners. One of the reasons for this
limitation is that an explicit preconditioner attempts to approximate A \Gamma1 , which is usually
dense, with a sparse matrix. Thus, an explicit preconditioner is more likely to work well if
A \Gamma1 contains many entries which are small (in magnitude). A favorable situation is when A
exhibits some form of diagonal dominance, but for such problems implicit preconditioning
is also likely to be very effective. Hence, for problems of this type, explicit preconditioners
can be competitive with implicit ones only if explicitness is fully exploited. Finally, explicit
preconditioners are usually more expensive to compute than implicit ones, although this
difference may become negligible in the common situation where several linear systems with
the same coefficient matrix and different right-hand sides have to be solved. In this case
the time for computing the preconditioner is often only a fraction of the time required for
the overall computation. It is also worth repeating that the construction of certain sparse
approximate inverses can be done, at least in principle, in a highly parallel manner, whereas
the scope for parallelism in the construction of ILU-type preconditioners is more limited.
3. Methods based on Frobenius norm minimization. A good deal of work has
been devoted to explicit preconditioning based on the following approach: the sparse approximate
inverse is computed as the matrix G which minimizes kI \Gamma GAk (or kI \Gamma AGk
for right preconditioning) subject to some sparsity constraint (see [4], Ch. 8 of [2],[16],[43],
[44],[32],[31],[11],[30]). Here the matrix norm is usually the Frobenius norm or a weighted
variant of it, for computational reasons. With this choice, the constrained minimization
problem decouples into n independent linear least squares problems (one for each row, or
column of G), the number of unknowns for each problem being equal to the number of
nonzeros allowed in each row (or column) of G. This immediately follows from the identity
    ‖I - AG‖_F^2 = Σ_{i=1}^n ‖e_i - A g_i‖_2^2,
where e_i is the ith unit vector and g_i is the ith column of G. Clearly, there is considerable
scope for parallelism in this approach. The resulting sparse least squares problems can
be solved, in principle, independently of each other, either by direct methods (as in [44],
[31],[30]) or iteratively ([11],[42]).
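As a minimal illustration of this decoupling, the sketch below computes a right approximate inverse with a prescribed column pattern by solving one small least squares problem per column, min ‖e_i - A g_i‖_2 over the allowed positions. The pattern of A itself is used for G purely as an example, the individual problems are solved densely for readability, and all names are ours; this is not the adaptive procedure of [31],[30].

import numpy as np

def spai_fixed_pattern(A, pattern):
    """Frobenius-norm approximate inverse with a fixed column pattern.

    pattern[i] lists the row indices allowed to be nonzero in column i of G;
    each column solves min ||e_i - A[:, J] g||_2 independently of the others."""
    n = A.shape[0]
    G = np.zeros((n, n))
    for i in range(n):
        J = pattern[i]
        e = np.zeros(n)
        e[i] = 1.0
        g, *_ = np.linalg.lstsq(A[:, J], e, rcond=None)
        G[J, i] = g
    return G

# Example: prescribe for G the sparsity pattern of A itself.
n = 30
A = 3.0 * np.eye(n) + np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
pattern = [np.nonzero(A[:, i])[0] for i in range(n)]
G = spai_fixed_pattern(A, pattern)
print(np.linalg.norm(np.eye(n) - A @ G, "fro"))

In a practical implementation each least squares problem would be restricted to the few rows of A[:, J] that are structurally nonzero, which is what makes the approach attractive in parallel.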
In early papers (e.g. [4],[32],[43]) the sparsity constraint was imposed a priori, and the
minimizer was found relative to a class of matrices with a predetermined sparsity pattern.
For instance, when A is a band matrix with a good degree of diagonal dominance, a banded
approximation to A \Gamma1 is justified, see [18]. However, for general sparse matrices it is very
difficult to guess a good sparsity pattern for an approximate inverse, and several recent
papers have addressed the problem of adaptively defining the nonzero pattern of G in order
to capture "large" entries of the inverse [31],[30]. Indeed, by monitoring the size of each
residual it is possible to decide whether new entries of g i are to be retained or
discarded, in a dynamic fashion. Moreover, the information on the residuals can be utilized
to derive rigorous bounds on the clustering of the singular values of the preconditioned
matrix and therefore to estimate its condition number [31]. It is also possible to formulate
conditions on the norm of the residuals which insure that the approximate inverse will
be nonsingular. Unfortunately, such conditions appear to be of dubious practical value,
because trying to fulfill them could lead to a very dense approximate inverse [16],[11].
A disadvantage of this approach is that symmetry in the coefficient matrix cannot be
exploited. If A is symmetric positive definite (SPD), the sparse approximate inverse will
not be symmetric in general. Even if a preset, symmetric sparsity pattern is enforced, there
is no guarantee that the approximate inverse will be positive definite. This could lead to a
breakdown in the conjugate gradient acceleration. For this reason, Kolotilina and Yeremin
[43],[44] propose to compute an explicit preconditioner of the form G = G_L^T G_L, where G_L
is lower triangular. The preconditioned matrix is then G_L A G_L^T, which is SPD, and the
conjugate gradient method can be applied. The matrix GL is the solution of a constrained
minimization problem for the Frobenius norm of I \Gamma LGL where L is the Cholesky factor
of A. In [43] it is shown how this problem can be solved without explicit knowledge of
any of the entries of L, using only entries of the coefficient matrix A. The same technique
can also be used to compute a factorized approximate inverse of a nonsymmetric matrix by
separately approximating the inverses of the L and U factors. As it stands, however, this
technique requires that the sparsity pattern of the approximate inverse triangular factors
be specified in advance, and therefore is not suitable for matrices with a general sparsity
pattern.
There are additional reasons for considering factorized approximate inverses. Clearly,
with the approximate inverse G expressed as the product of two triangular factors it is
trivial to insure that G is nonsingular. Another argument in favor of this approach is given
in [11], where it is observed that factorized forms of general sparse matrices contain more
information for the same storage than if a single product was stored.
The serial cost for the construction of this type of preconditioner is usually very high,
although the theoretical parallel complexity can be quite moderate [44],[30]. The results of
numerical experiments reported in [44] demonstrate that factorized sparse approximate inverse
preconditioners can insure rapid convergence of the preconditioned conjugate gradient
iteration when applied to certain finite element discretizations of 3D PDE problems
arising in elasticity theory. However, in these experiments the preconditioning strategy is
not applied to the coefficient matrix directly, but rather to a reduced system (Schur comple-
ment) which is better conditioned and considerably less sparse than the original problem.
When the approximate inverse preconditioner is applied directly to the original stiffness
matrix A, the rate of convergence of the PCG iteration is rather disappointing.
A comparison between a Frobenius norm-based sparse approximate inverse preconditioner
and the ILU(0) preconditioner on a number of general sparse matrices has been made
in [30]. The reported results show that the explicit preconditioner can insure rates of convergence
comparable with those achieved with the implicit ILU-type approach. Again, it is
observed that the construction of the approximate inverse is often very costly, but amenable
to parallelization.
Factorized sparse approximate inverses have also been considered by other authors, for
instance by Kaporin [39],[40],[41], whose approach is also based on minimizing a certain matrix
functional and is closely related to that of Kolotilina and Yeremin. In the next sections
we present an alternative approach to factorized sparse approximate inverse preconditioning
which is not grounded in optimization, but is based instead on a direct method of matrix
inversion. As we shall see, the serial cost of forming a sparse approximate inverse with this
technique is usually much less than with the optimization approach, while the convergence
rates are still comparable, on average, with those obtained with ILU-type preconditioning.
4. A method based on inverse triangular factorization. The optimization approach
to constructing approximate inverses is not the only possible one. In this section we
consider an alternative procedure based on a direct method of matrix inversion, performed
incompletely in order to preserve sparsity. This results in a factorized sparse G - A \Gamma1 .
Being an incomplete matrix factorization method, our procedure resembles classical ILU-
type implicit techniques, and indeed we can draw from the experience accumulated in years
of use of ILU-type preconditioning both at the implementation stage and when deriving
theoretical properties of the preconditioner G. This paper continues the work in [8], where
the symmetric positive definite case was studied (see also [5],[7]).
The construction of our preconditioner is based on an algorithm which computes two
sets of vectors fz i g n
, which are A-biconjugate, i.e. such that w T
only if i 6= j. Given a nonsingular matrix A 2 IR n\Thetan , there is a close relationship between
the problem of inverting A and that of computing two sets of A-biconjugate vectors fz i g n
and fw i g n
. If
is the matrix whose ith column is z i and
is the matrix whose ith column is w i , then
It follows that W and Z are necessarily nonsingular and
Hence, the inverse of A is known if two complete sets of A-biconjugate vectors are known.
Note that there are infinitely many such sets. Matrices W and Z whose columns are A-
biconjugate can be explicitly computed by means of a biconjugation process applied to the
columns of any two nonsingular matrices W (0) , Z (0) 2 IR n\Thetan . A computationally convenient
choice is to let W^(0) = Z^(0) = I, i.e. the biconjugation process is applied to the unit basis
vectors. In order to describe the procedure, let a_i^T and c_i^T denote the ith row of A and of A^T,
respectively (i.e., c i is the ith column of A). The basic A-biconjugation procedure can be
written as follows.
THE BICONJUGATION ALGORITHM

    w_i^(0) := z_i^(0) := e_i,  1 ≤ i ≤ n
    for i = 1, 2, ..., n
        for j = i, i+1, ..., n
            p_j^(i-1) := a_i^T z_j^(i-1);   q_j^(i-1) := c_i^T w_j^(i-1)
        if i < n, then for j = i+1, ..., n
            z_j^(i) := z_j^(i-1) - (p_j^(i-1)/p_i^(i-1)) z_i^(i-1)
            w_j^(i) := w_j^(i-1) - (q_j^(i-1)/q_i^(i-1)) w_i^(i-1)
    Return z_i := z_i^(i-1), w_i := w_i^(i-1) and p_i^(i-1) for 1 ≤ i ≤ n.
This algorithm is essentially due to L. Fox, see Ch. 6 of [25]. Closely related methods
have also been considered by Hestenes and Stiefel [35, pp. 426-427],[34] and by Stewart [52].
A more general treatment is given in the recent paper [14]. Geometrically, the procedure
can be regarded as a generalized Gram-Schmidt orthogonalization with oblique projections
and nonstandard inner products, see [6],[14].
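A dense, unoptimized transcription of the process may also help fix the notation; the sketch below is our own, ignores sparsity and pivoting entirely, and simply verifies that on a matrix with nonzero leading principal minors it produces unit upper triangular Z and W and pivots p with W^T A Z = diag(p).

import numpy as np

def biconjugation(A):
    """Dense two-sided biconjugation: returns Z, W, p with W.T @ A @ Z = diag(p)."""
    n = A.shape[0]
    Z = np.eye(n)                       # columns are the z-vectors (initially e_i)
    W = np.eye(n)                       # columns are the w-vectors (initially e_i)
    p = np.zeros(n)
    for i in range(n):
        p[i] = A[i, :] @ Z[:, i]        # p_i = a_i^T z_i
        qi = A[:, i] @ W[:, i]          # q_i = c_i^T w_i (= p_i in exact arithmetic)
        for j in range(i + 1, n):
            Z[:, j] -= (A[i, :] @ Z[:, j]) / p[i] * Z[:, i]
            W[:, j] -= (A[:, i] @ W[:, j]) / qi * W[:, i]
    return Z, W, p

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6)) + 6.0 * np.eye(6)    # nonzero leading minors, generically
Z, W, p = biconjugation(A)
print(np.max(np.abs(W.T @ A @ Z - np.diag(p))))                       # ~ machine precision
print(np.max(np.abs(Z @ np.diag(1.0 / p) @ W.T - np.linalg.inv(A))))  # A^{-1} = Z D^{-1} W^T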
Several observations regarding this algorithm are in order. In the first place we note
that the above formulation contains some redundancy, since in exact arithmetic q_i^(i-1) = p_i^(i-1)
for all i. Therefore, at step i the computation of the dot product q_i^(i-1) = c_i^T w_i^(i-1) may be replaced
by the assignment q_i^(i-1) := p_i^(i-1). Another observation is the fact that the procedure, as
it stands, is vulnerable to breakdown (division by zero), which occurs whenever any of the
quantities p_i^(i-1)
happens to be zero. It can be shown that in exact arithmetic, the
biconjugation algorithm will not break down if and only if all the leading principal minors of
A are nonzero (see below). For any nonsingular matrix A there exists a permutation matrix P
(or Q) such that the procedure applied to PA (or to AQ) will not break down. As in
the LU decomposition with pivoting, such permutation matrices represent row (or column)
interchanges on A which can be performed, if needed, in the course of the computation.
If the biconjugation process can be carried to completion without interchanges, the
resulting Z and W matrices are upper triangular, 1 they both have all diagonal entries equal
to one, and satisfy the identity
We recognize in (5) the familiar LDU decomposition A = LDU , where L is unit lower
triangular, U is unit upper triangular and D is the diagonal matrix with the pivots down
the main diagonal. Because this factorization is unique, we have that the biconjugation
algorithm explicitly computes
and the matrix D, which is exactly the same in (5) and in A = LDU . Hence, the process
produces an inverse triangular decomposition of A or, equivalently, a triangular decomposition
(of the UDL type) of A \Gamma1 . The p i 's returned by the algorithm are the pivots in the
LDU factorization of A. If we denote by \Delta i the ith leading principal minor of A (1 - i - n)
and let Δ_0 := 1, the identity (5) implies that p_i = Δ_i/Δ_{i-1} for 1 ≤ i ≤ n,
showing that the biconjugation algorithm can be performed without breakdowns if and only
if all leading principal minors of A are non-vanishing. In finite precision arithmetic, pivoting
may be required to promote numerical stability.
Once Z, W and D are available, the solution of a linear system of the form (1) can be
computed, by (4), as
    x = Z D^{-1} W^T b.    (6)
1 Note that this is not necessarily true when a matrix other than the identity is used at the outset, i.e. if W^(0) ≠ I or Z^(0) ≠ I.
In practice, this direct method for solving linear systems is not used on account of its cost:
for a dense n \Theta n matrix, the biconjugation process requires about twice the work as the
LU factorization of A. Notice that the cost of the solve phase using (6) is the same as for
the forward and backward solves with the L and U factors.
If A is symmetric, the number of operations in the biconjugation algorithm can be
halved by observing that W must equal Z. Hence, the process can be carried out using
only the rows of A, the z-vectors and the scalars p_j^(i-1)
. The columns of the resulting Z form a set
of conjugate directions for A. If A is SPD, no breakdown can occur (in exact arithmetic),
so that pivoting is not required and the algorithm computes the L^T D L factorization of
A^{-1}. This method was first described in [26]. Geometrically, it amounts to Gram-Schmidt
orthogonalization with the inner product ⟨x, y⟩ := x^T A y applied to the unit vectors e_1, ..., e_n.
It is sometimes referred to as the conjugate Gram-Schmidt process. The method is still
impractical as a direct solver because it requires about twice the work of Cholesky for dense
matrices. However, as explained in [5] and [6], the same algorithm can also be applied to
nonsymmetric systems, resulting in an implicit LDU factorization where only Z and D
are computed. Indeed, it is possible to compute a solution to (1) for any right-hand
side using just Z, D and part of the entries of A. This method has the same arithmetic
complexity as Gaussian elimination when applied to dense problems. When combined with
suitable sparsity-preserving strategies the method can be useful as a sparse direct solver, at
least for some types of problems (see [5],[6]).
For a sparse symmetric and positive definite A, the Z matrix produced by the algorithm
tends to be dense (see the next section), but it can be observed experimentally that very
often, most of the entries in Z have small magnitude. If fill-in in the Z matrix is reduced by
removing suitably small entries in the computation of the z-vectors, the algorithm computes
a sparse matrix -
Z and a diagonal matrix -
D such that
(i.e., a factorized sparse approximate inverse of A). Hence, G can be used as an explicit
preconditioner for the conjugate gradient method. A detailed study of this preconditioning
strategy for SPD problems can be found in [8], where it is proven that the incomplete
inverse factorization exists if A is an H-matrix (analogously to ILU-type factorizations).
The numerical experiments in [8] show that this approach can insure fast convergence of the
PCG iteration, almost as good as with implicit preconditioning of the incomplete Cholesky
type. The construction of the preconditioner itself, while somewhat more expensive than the
computation of the incomplete Cholesky factorization, is still quite cheap. This is in contrast
with the least squares approach described in the previous section, where the construction of
the approximate inverse is usually very time consuming, at least in a sequential environment.
In the remainder of this paper we consider an explicit preconditioning strategy based
on the biconjugation process described above. Sparsity in the Z and W factors of A^{-1} is
preserved by removing "small" fill in the z- and w-vectors. A possibility would be to drop
all newly added fill-in elements outside of a preset sparsity pattern above the main diagonal
in Z and W ; however, for general sparse matrices it is very difficult to guess a reasonable
sparsity pattern, and a drop tolerance is used instead. A trivial extension of the results
in [8] shows that the incomplete biconjugation process (incomplete inverse factorization)
cannot break down, in exact arithmetic, if A is an H-matrix. For more general matrices
it is necessary to safeguard the computation in order to avoid breakdowns. This requires
pivot modifications and perhaps some form of pivoting; we postpone the details to §7.
The incomplete biconjugation algorithm computes sparse unit upper triangular matrices
Z̄ ≈ Z and W̄ ≈ W, and a nonsingular diagonal matrix D̄ ≈ D, such that
    G = Z̄ D̄^{-1} W̄^T ≈ A^{-1}
is a factorized sparse approximate inverse of A which can be used as an explicit precondi-
for conjugate gradient-type methods for the solution of (1).
We conclude this section with a few remarks on properties of the approximate inverse
preconditioner G just described. If A is not an H-matrix, as already mentioned, the construction
of the preconditioner could break down due to the occurrence of zero or extremely
small pivots. However, following [46], we note that there always exists α > 0 such that
A + αI is diagonally dominant, and hence an H-matrix. Therefore, if the incomplete bicon-
jugation algorithm breaks down, one could try to select α > 0 and re-attempt the process
on the shifted matrix A' = A + αI. Here α should be large enough to insure the existence
of the incomplete inverse factorization, but also small enough so that A' is close to A. This
approach has several drawbacks: for ill-conditioned matrices, the quality of the resulting
preconditioner is typically poor; furthermore, the breakdown that prompts the shift may
occur near the end of the biconjugation process, and the preconditioner may have to be
recomputed several times before a satisfactory value of ff is found. A better strategy is to
perform diagonal modifications only as the need arises, shifting pivots away from zero if
their magnitude is less than a specified threshold (see §7 for details).
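A sketch of the kind of safeguard just mentioned (the threshold value and its scaling are our own choices): each time a pivot is computed inside the incomplete process, it can be shifted away from zero when its magnitude falls below a prescribed threshold, for example as follows.

def safeguard_pivot(p_i, threshold=1.0e-8):
    """Shift a pivot away from zero if its magnitude is below the threshold."""
    if abs(p_i) < threshold:
        return threshold if p_i >= 0.0 else -threshold
    return p_i

# Inside an incomplete biconjugation loop one would use, e.g.,
#     p[i] = safeguard_pivot(A[i, :] @ Z[:, i])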
If A is an M-matrix, it follows from the results in [8] that G is a nonnegative matrix.
Moreover, it is easy to see that componentwise the following inequalities hold:
    O ≤ D_A^{-1} ≤ G ≤ A^{-1},    (7)
where D_A is the diagonal part of A. Furthermore, if G_1 and G_2 are two approximate
inverses of the M-matrix A produced by the incomplete biconjugation process and the drop
tolerance used for G_1 is greater than or equal to the drop tolerance used for G_2, then
O ≤ G_1 ≤ G_2 ≤ A^{-1}. The same is true if sparsity patterns are used to determine the nonzero
structure in Z̄ and W̄ and the patterns for G_2 include the patterns for G_1. This monotonicity property is
shared by other sparse approximate inverses, see for example Ch. 8 in [2]. We note that
property (7) is important if the approximate inverse is to be used within an incomplete
block factorization of an M-matrix A, because it insures that all the intermediate matrices
produced in the course of the incomplete factorization preserve the M-matrix property (see
[2, pp. 263-264]).
Finally, after discussing the similarities, we point to a difference between our incomplete
inverse factorization and the ILU-type factorization of a matrix. The incomplete factorization
of an M-matrix A induces a splitting A = L̄Ū - R which is a regular splitting, and
therefore convergent: ρ(I - (L̄Ū)^{-1}A) < 1, where ρ(·) denotes the spectral radius of a
matrix ([47],[55]). The same is not true, in general, for our incomplete factorization.
If one considers the induced splitting A = G^{-1} - R, this splitting need
not be convergent. An example is given by the symmetric M-matrix
A =B @
For this matrix, the incomplete inverse factorization with a drop tolerance T (i.e.,
intermediate fill-in is dropped if smaller than T in absolute value) produces an approximate
inverse G such that ρ(I - GA) ≈ 1.215 > 1. This shows that the approximate decomposition
cannot be obtained, in general, from an incomplete factorization of A. In this sense, the
incomplete inverse factorization is not algebraically equivalent to an incomplete LDU factorization
performed on A.
5. Fill-in in the biconjugation algorithm. In this section we give a characterization
of the fill-in occurring in the factorized inverse obtained by the biconjugation algorithm.
These results may serve as a guideline to predict the structure of the factorized approximate
inverse, and have an impact on certain aspects of the implementation.
It is well-known that structural nonzeros in the inverse matrix A^{-1} can be characterized
by the paths in the graph of the original matrix A (see [24],[29]). The following lemma states
necessary and sufficient conditions for a new entry (fill-in) to be added in one of the z-vectors
at the ith step of the biconjugation algorithm. A similar result holds for the w-vectors. We
make use of the standard no-cancellation assumption.
Lemma 5.1. Let 1
z (i\Gamma1)
if and only if l - i, z (i\Gamma1)
li and, at the same time, at least one of the two following
conditions holds:
Proof. Suppose that z (i\Gamma1)
0: Directly from the update formula for the
z-vectors we see that z (i\Gamma1)
li 6= 0 and l - i, since z (i\Gamma1)
lj becomes
nonzero in the ith step then clearly p (i\Gamma1)
j must be nonzero. But
z (i\Gamma1)
kj a ik
and we get the result. The opposite implication is trivial. 2
Figures
5.1 through 5.6 provide an illustration of the previous lemma. Figure 5.1 shows
the nonzero structure of the matrix FS760 1 of order 760 from the Harwell-Boeing
collection [21]. Figures 5.2-6 show the structure of the factor Z at different stages of the
biconjugation algorithm. These pictures show that in the initial steps, when most of the
entries of Z are still zero, the nonzeros in Z are induced by nonzeros in the corresponding
positions of A. A similar situation occurs, of course, for the process which computes W .
In Figure 5.7 we show the entries of Z which are larger than 10^{-10} in absolute value; in
Figure 5.8 we show the incomplete factor Z̄ obtained with a drop tolerance. It
can be seen how well the incomplete process is able to capture the "large" entries in the
complete factor Z. The figures were generated using the routines for plotting sparse matrix
patterns from SPARSKIT [50].
Figure 5.1-2: Structure of the matrix FS760 1 (left) and of the factor Z (right) after 20 steps of the biconjugation process.
Figure 5.3-4: Structure of Z after 70 steps (left) and 200 steps (right) of the biconjugation process.
Figure 5.5-6: Structure of Z after 400 steps (left) and 760 steps (right) of the biconjugation process.
Figure 5.7-8: Structure of entries in Z larger than 10^{-10} (left) and structure of the incomplete factor Z̄ with drop tolerance (right).
A sufficient condition to have a fill-in in the matrix Z after some steps of the biconju-
gation algorithm is given by the following Lemma.
Lemma 5.2. Let E) be a bipartite graph with and such that
If for some indices i l there is a path
in B, then z (i p )
Proof. We use induction on p. Let
0: Of course,
z (i 1 \Gamma1)
and from Lemma 5.1 we get z (i 1 )
Suppose now that Lemma 5.2 is true for all l ! p. Then, z (i
a
using the no-cancellation assumption
we also have z (i p )
The following theorem gives a necessary and sufficient condition for a nonzero entry to
appear in position (l; j), l ! j, in the inverse triangular factor.
Theorem 5.3. Let 1 only if for some p - 1 there are
Proof. We first show that the stated conditions are sufficient. By Lemma 5.1, the nonzeros
a
imply that z (i 1 )
l 1 j is also nonzero. If are done. Otherwise, z (i 2 \Gamma1)
and a
0: Taking into account that z l 2 we get that z (i 2 )
is
nonzero. Repeating these arguments inductively we finally get z (i p )
l
Consequently,
z (i)
Assume now that z lj 6= 0. Lemma 5.1 implies that at least one of the following two
conditions holds: either there exists li 0 6= 0, or there
exist indices i such that a i 00 k 6= 0, z (i 00 \Gamma1)
li 00 6= 0: In the
former case we have the necessary conditions. In the latter case we can apply Lemma 5.1
inductively to z (i 00 \Gamma1)
After at most j inductive steps we obtain the conditions. 2
Clearly, the characterization of fill-in in the inverse triangular factorization is less transparent
than the necessary and sufficient conditions which characterize nonzeros in the nonfactorized inverse.
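Because the lemmas above are stated under the no-cancellation assumption, the structural fill-in of Z can also be predicted simply by running the biconjugation update on the sparsity pattern alone; the small sketch below (ours) does exactly that, representing each column of Z by the set of its structurally nonzero row indices.

import numpy as np

def symbolic_z_structure(A):
    """Structural nonzeros of the factor Z under the no-cancellation assumption."""
    n = A.shape[0]
    cols = [{j} for j in range(n)]            # column j of Z starts as the unit vector e_j
    for i in range(n):
        row_i = set(np.nonzero(A[i, :])[0])   # structure of the ith row of A
        for j in range(i + 1, n):
            if row_i & cols[j]:               # p_j^{(i-1)} structurally nonzero
                cols[j] |= cols[i]            # z_j is updated by a multiple of z_i
    return cols

A = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 0., 4., 1.],
              [1., 0., 0., 4.]])
for j, s in enumerate(symbolic_z_structure(A)):
    print("column", j, "of Z:", sorted(s))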
6. Preconditioning for block triangular matrices. Many sparse matrices arising
in real-world applications may be reduced to block triangular form (see Ch. 6 in [20]). In
this section we discuss the application of preconditioning techniques to linear systems with
a block (lower) triangular coefficient matrix, closely following [30].
The reduction to block triangular form is usually obtained with a two-step procedure,
as outlined in [20]. In the first step, the rows of A are permuted to bring nonzero entries on
the main diagonal, producing a matrix PA. In the second step, symmetric permutations
are used to find the block triangular form [53]. The resulting matrix can be represented as
    \begin{pmatrix} A_{11} \\ A_{21} & A_{22} \\ \vdots & & \ddots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{pmatrix}
where the diagonal blocks A ii are assumed to be irreducible. Because A is nonsingular, the
diagonal blocks A ii must also be nonsingular.
Suppose that we compute approximate inverses of the diagonal blocks A_{ii} using
the incomplete biconjugation algorithm, so that A_{ii}^{-1} ≈ G_{ii} = Z̄_{ii} D̄_{ii}^{-1} W̄_{ii}^T. Then
the inverse of A is approximated as follows (cf. [30]):
A
22
A k1 A k2 \Delta \Delta
QP:
The preconditioning step in a conjugate gradient-type method requires the evaluation
of the action of G on a vector, i.e. the computation of z = Gd for a given vector d, at each
step of the preconditioned iterative method. This can be done by the block back-substitution
    z̄_1 = G_{11} d̄_1,    z̄_i = G_{ii} ( d̄_i - Σ_{j=1}^{i-1} A_{ij} z̄_j ),   i = 2, ..., k,
where d̄ = (d̄_1^T, ..., d̄_k^T)^T and z̄ = (z̄_1^T, ..., z̄_k^T)^T, with the partitioning of z̄ and
d̄ induced by the block structure of Q(PA)Q^T. The computation
of z = G^T d, which is required by certain preconditioned iterative methods, is
accomplished in a similar way.
With this approach, fill-in is confined to the approximate inverses of the diagonal blocks,
often resulting in a more sparse preconditioner. Notice also that the approximate inverses
G ii can be computed in parallel. The price to pay is the loss of part of the explicitness
when the approximate inverse preconditioner is applied, as noted in [30].
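In practice the application of this block preconditioner is just a short loop over the diagonal blocks; the sketch below (our own, with dense blocks for readability) performs the back-substitution z̄_1 = G_{11} d̄_1, z̄_i = G_{ii}(d̄_i - Σ_{j<i} A_{ij} z̄_j) described above.

import numpy as np

def apply_block_preconditioner(G_diag, A_off, d_parts):
    """Block back-substitution with approximate inverses of the diagonal blocks.

    G_diag[i]      : approximate inverse of the diagonal block A_{ii}
    A_off[(i, j)]  : off-diagonal block A_{ij}, j < i (missing keys are zero blocks)
    d_parts[i]     : i-th piece of the vector d, partitioned conformally."""
    z_parts = []
    for i, d_i in enumerate(d_parts):
        r = d_i.copy()
        for j in range(i):
            if (i, j) in A_off:
                r = r - A_off[(i, j)] @ z_parts[j]
        z_parts.append(G_diag[i] @ r)
    return z_parts

# Tiny example with two diagonal blocks and one coupling block.
A11 = np.array([[4.0, 1.0], [1.0, 4.0]])
A22 = np.array([[5.0]])
A21 = np.array([[1.0, 0.0]])
G = [np.linalg.inv(A11), np.linalg.inv(A22)]   # exact inverses, for illustration only
d = [np.array([1.0, 2.0]), np.array([3.0])]
z = apply_block_preconditioner(G, {(1, 0): A21}, d)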
For comparison purposes, we apply the same scheme with ILU preconditioning. Specifically, we approximate A by the block lower triangular matrix whose diagonal blocks are the incomplete factorizations L̄_ii Ū_ii and whose off-diagonal blocks A_21, ..., A_k1, A_k2, ... are those of the block triangular form, where each diagonal block A_ii is approximated by an ILU decomposition A_ii ≈ L̄_ii Ū_ii. Applying
the preconditioner requires the solution of a block lower triangular linear system with right-hand side d̄ at each step of the
preconditioned iteration. This can be done with the back-substitution
    L̄_ii Ū_ii z̄_i = d̄_i − Σ_{j<i} A_ij z̄_j,   i = 1, ..., k,
with the same partitioning of z̄ and d̄ as above. The use of transposed ILU preconditioning
is similar.
With this type of ILU block preconditioning we introduce some explicitness in the
application of the preconditioner. Again, note that the ILU factorizations of the diagonal
blocks can be performed in parallel.
We will see in the section on numerical experiments that reduction to the block triangular
form influences the behavior of the preconditioned iterations in different ways depending
on whether approximate inverse techniques or ILU-type preconditioning are used.
7. Implementation aspects. It is possible to implement the incomplete inverse factorization
algorithm in §4 in at least two distinct ways. The first implementation is similar in
spirit to the classical submatrix formulation of sparse Gaussian elimination as represented,
for instance, in [19],[57]. This approach relies on sparse incomplete rank-one updates of
the matrices Z̄ and W̄, applied in the form of outer vector products. These updates are
the most time-consuming part of the computation. In the course of the updates, new fill-in
elements whose magnitude is less than a prescribed drop tolerance T are dropped. In this
approach, dynamic data structures have to be used for the Z̄ and W̄ matrices. Note that
at step i of the incomplete inverse factorization, only the ith row a T
and the ith column c T
are required. The matrix A is stored in static data structures both by rows and by columns
(of course, a single array is needed for the numerical values of the entries of A).
For this implementation to be efficient, some additional elbow room is necessary. For
instance, in the computation of the incomplete Z̄ factor the elbow room was twice the
space anticipated for storing the nonzeros in the factor itself. As we are looking for a
preconditioner with about the same number of nonzeros as the original matrix, the estimated
number of nonzeros in Z̄ is half the number of nonzeros in the original matrix A. For each
column of Z̄ we give an initial prediction of fill-in based on the results of §5. Thus, the
initial structure of Z̄ is given by the structure of the upper triangular part of A. Of course,
W is handled similarly. If the space initially allocated for a given column is not enough, the
situation is solved in a way which is standard when working with dynamic data structures,
by looking for a block of free space at the end of the active part of the dynamic data
structure large enough to contain the current column, or by a garbage collection (see [57]).
Because most of the fill-in in Z̄ and W̄ appears in the late steps of the biconjugation process,
we were able to keep the amount of dynamic data structure manipulations at relatively low
levels. In the following, this implementation will be referred to as the DDS implementation.
Despite our efforts to minimize the amount of symbolic manipulations in the DDS im-
plementation, some of its disadvantages such as the nonlocal character of the computations
and a high proportion of non-floating-point operations still remain. This is an important
drawback of submatrix (right-looking, undelayed) algorithms using dynamic data structures
when no useful structural prediction is known and no efficient block strategy is used. Even
when all the operations are performed in-core, the work with both the row and column
lists in each step of the outer cycle is rather irregular. Therefore, for larger problems, most
operations are still scattered around the memory and are out-of-cache. As a consequence,
it is difficult to achieve high efficiency with the code, and any attempt to parallelize the
computation of the preconditioner in this form will face serious problems (see [57] for a
discussion of the difficulties in parallelizing sparse rank-one updates).
For these reasons we considered an alternative implementation (hereafter referred to
as SDS) which only makes use of static data structures, based on a left-looking, delayed
update version of the biconjugation algorithm. This amounts to a rearrangement of the
computations, as shown below. For simplicity we only consider the Z factor, and assume
no breakdown occurs:
(1) Let z_i^(0) := e_i, i = 1, ..., n
(2) For i = 2, ..., n do
(3)   For j = 1, ..., i − 1 do
(4)     p_i^(j−1) := a_j^T z_i^(j−1)
(5)     z_i^(j) := z_i^(j−1) − ( p_i^(j−1) / p_j^(j−1) ) z_j^(j−1)
(6)   End do
(7)   p_i^(i−1) := a_i^T z_i^(i−1)
(8) End do
This procedure can be implemented with only static data structures, at the cost of increasing
the number of floating-point operations. Indeed, in our implementation we found
it necessary to recompute the dot products p_j^(j−1) = a_j^T z_j^(j−1) if they are used more than
once for updating subsequent columns. This increase in arithmetic complexity is more or
less pronounced, depending on the problem and on the density of the preconditioner. On
the other hand, this formulation greatly decreases the amount of irregular data structure
manipulations. It also appears better suited to parallel implementation, because the dot
products and the vector updates in the innermost loop can be done in parallel. Notice that
with SDS, it is no longer true that a single row and column of A are used at each step of the
outer loop. It is worth mentioning that numerically, the DDS and SDS implementations of
the incomplete biconjugation process are completely equivalent.
The SDS implementation is straightforward. Suppose the first j − 1 steps have been
completed. In order to determine which columns of the already determined part of Z̄ play
a role in the rank-one updates used to form the jth column of Z̄ we only need a linked
list scanning the structure of the columns of A. This linked list is coded similarly to the
mechanism which determines the structure of the jth row of the Cholesky factor L in the
numerical factorization in SPARSPAK (see [27],[13]).
In addition to the approximate inverse preconditioner, we also coded the standard row
implementation of the classical ILU(0) preconditioner (see, e.g., [50]). We chose a no-fill
implicit preconditioner because we are mostly interested in comparing preconditioners with
a nonzero density close to that of the original matrix A.
On input, all our codes for the computation of the preconditioners check whether the
coefficient matrix has a zero-free diagonal. If not, row reordering of the matrix is used
to permute nonzeros on the diagonal. For both the ILU(0) and the approximate inverse
factorization, we introduced a simple pivot modification to avoid breakdown. Whenever
some diagonal element in any of our algorithms to compute a preconditioner was found to
be small, in our case less in absolute value than the IEEE machine precision ε ≈ 2.2 × 10^{-16},
we increased it to 10^{-3}. We have no special reasons for this choice, other than it worked well
in practice. It should be mentioned that in the numerical experiments, this safeguarding
measure was required more often for ILU(0) than for the approximate inverse factorization.
For the experiments on matrices which can be nontrivially reduced to block triangular
form, we used the routine MC13D from MA28 [19] to get the block triangular form.
8. Numerical experiments. In this section we present the results of numerical experiments
on a range of problems from the Harwell-Boeing collection [21] and from Tim
Davis' collection [17]. All matrices used were rescaled by dividing their elements by the
absolute value of their largest nonzero entry. No other scaling was used. The right-hand
side of each linear system was computed from the solution vector x of all ones, the choice
used, e.g., in [57].
We experimented with several iterative solvers of the conjugate gradient type. Here we
present results for three selected methods, which we found to be sufficiently representative:
van der Vorst's Bi-CGSTAB method (denoted BST in the tables), the QMR method of
Freund and Nachtigal, and Saad and Schultz's GMRES (restarted every 20 steps, denoted
G(20) in the tables) with Householder orthogonalization [56]. See [3] for a description of
these methods, and the report [9] for experiments with other solvers.
The matrices used in the experiments come from reservoir simulation (ORS*, PORES2,
SAYLR* and SHERMAN*), chemical kinetics (FS5414), network flow (HOR131), circuit
simulation (JPWH991, MEMPLUS and ADD*), petroleum engineering (WATT* matrices)
and incompressible flow computations (RAEFSKY*, SWANG1). The order N and number
NNZ of nonzeros for each test problem are given in Table 1, together with the number
of iterations and computing times for the unpreconditioned iterative methods. A † means
that convergence was not attained in 1000 iterations for Bi-CGSTAB and QMR, and 500
iterations for GMRES(20).
Table 1: Test problems (N = order of matrix, NNZ = nonzeros in matrix) and convergence results (iterations and time) for the iterative methods without preconditioning.
All tests were performed on a SGI Crimson workstation with RISC processor R4000
using double precision arithmetic. Codes were written in standard Fortran 77 and compiled
with the optimization option -O4. CPU time is given in seconds and it was measured using
the standard function dtime.
The initial guess for the iterative solvers was always x_0 = 0. The stopping criterion
used was a fixed tolerance on ||r_k||, where r_k is the (unpreconditioned) updated residual. Note that
because r_0 = b, we have that 1 ≤ ||r_0||_∞ ≤ nzr, where nzr denotes the maximum
number of nonzeros in a row of A.
The following tables present the results of experiments with the ILU(0) preconditioner
and with the approximate inverse preconditioner based on the biconjugation process (here-
after referred to as AIBC). Observe that the number of nonzeros in the ILU(0) preconditioner
is equal to the number NNZ of nonzeros in the original matrix, whereas for the AIBC
preconditioner fill-in is given by the total number of nonzeros in the factors Z̄, W̄ and D̄.
In the tables, the number of nonzeros in AIBC is denoted by Fill. Right preconditioning
was used for all the experiments.
The comparison between the implicit and the explicit preconditioner is based on the
amount of fill and on the rate of convergence as measured by the number of iterations.
These two parameters can realistically describe the scalar behavior of the preconditioned
iterative methods. Of course, an important advantage of the inverse preconditioner, its
explicitness, is not captured by this description.
The accuracy of the AIBC preconditioner is controlled by the value of the drop tolerance
T . Smaller drop tolerances result in a more dense preconditioner and very often (but not
always) in a higher convergence rate for the preconditioned iteration. For our experiments
we consider relatively sparse preconditioners. In most cases we were able to adjust the
value of T so as to obtain an inverse preconditioner with a nonzero density close to that
of A (and hence of the ILU(0) preconditioner). Due to the scaling of the matrix entries,
the choice very often the right one. We also give results for the approximate
inverse obtained with a somewhat smaller value of the drop tolerance, in order to show how
the number of iterations can be reduced by allowing more fill-in in the preconditioner. For
some problems we could not find a value of T for which the number of nonzeros in AIBC
is close to NNZ. In these cases the approximate inverse preconditioner tended to be either
very dense or very sparse.
In
Table
2 we give the timings for the preconditioner computation, iteration counts and
timings for the three iterative solvers preconditioned with ILU(0). The same information is
given in Table 3 for the approximate inverse preconditioner AIBC. For AIBC we give two
timings for the construction of the preconditioner, the first for the DDS implementation
using dynamic data structures and the second for the SDS implementation using only static
data structures.
Table 2: Time to form the ILU(0) preconditioner (P-time), number of iterations and time for Bi-CGSTAB, QMR and GMRES(20) with ILU(0) preconditioning.
MATRIX P-time | ILU Its: BST QMR G(20) | ILU Time: BST QMR G(20)
RAEFSKY1 2.457
Table 3: Time to form the AIBC preconditioner (P-time) using DDS and SDS implementations, number of iterations and time for Bi-CGSTAB, QMR and GMRES(20) with AIBC preconditioning.
MATRIX Fill | P-time: DDS SDS | AIBC Its: BST QMR G(20) | AIBC Time: BST QMR G(20)
JPWH991 7063 0.31 0.26 15 27 28 0.24 0.67 0.78
5204
48362 0.68 2.63 33 43 64 2.61 5.39 8.63
26654 0.89 2.45
It appears from these results that the ILU(0) and AIBC preconditioners are roughly
equivalent from the point of view of the rate of convergence, with ILU(0) having a slight
edge. On many problems the two preconditioners give similar results. There are a few cases,
like PORES2, for which ILU(0) is much better than AIBC, and others (like MEMPLUS)
where the situation is reversed. For some problems it is necessary to allow a relatively
high fill in the approximate inverse preconditioner in order to have a convergence rate
comparable with that ensured by ILU(0) (cf. SAYLR4), but there are cases where a very
sparse AIBC gives excellent results (see the ADD or the RAEFSKY matrices). It follows
that the timings for the iterative part of the solution process are pretty close, on average,
for the two preconditioners.
We also notice that using a more dense approximate inverse preconditioner (obtained
with a smaller value of T ) nearly always reduces the number of iterations, although this
does not necessarily mean a reduced computing time since it takes longer to compute the
preconditioner and the cost of each iteration is increased.
Concerning the matrix PORES2, for which our method gives poor results, we observed
that fill-in in the W̄ factor was very high. We tried to use different drop tolerances for the
two factors (the one for W̄ being larger than the one used for Z̄) but this did not help. It
was observed in [31] that finding a sparse right approximate inverse for PORES2 is very
hard and a left approximate inverse should be approximated instead. Unfortunately, our
method produces exactly the same approximate inverse (up to transposition) for A and
A T , therefore we were not able to cope with this problem effectively. We experienced a
similar difficulty with the W̄ factor for the matrix SHERMAN2. On the other hand, for
SHERMAN3 we did not face any of the problems reported in [30] and convergence with the
AIBC preconditioner was smooth.
As for the time required to compute the preconditioners, it is obvious that ILU(0) can be
computed more quickly. On the other hand, the computation of the AIBC preconditioner is
not prohibitive. There are problems for which computing AIBC is only two to three times
more expensive than computing ILU(0). More important, our experiments with AIBC
show that the overall solution time is almost always dominated by the iterative part, unless
convergence is extremely rapid, in which case the iteration part takes slightly less time than
the computation of the preconditioner.
This observation suggests that our approximate inverse preconditioner is much cheaper
to construct, in a sequential environment, than approximate inverse preconditioners based
on the Frobenius norm approach described in §3. Indeed, if we look at the results presented
in [30] we see that the sequential time required to construct the preconditioner accounts for a
huge portion, often in excess of 90%, of the overall computing time. It is worth emphasizing
that the approach based on Frobenius norm minimization and the one we propose seem to
produce preconditioners of similar quality, in the sense that they are both comparable with
ILU(0) from the point of view of fill-in and rates of convergence, at least on average.
As for the different implementations of AIBC, we see from the results in Table 3 that for
larger problems, the effect of additional floating-point operations in the SDS implementation
is such that the DDS implementation is actually faster. Nevertheless, as already observed
the implementation using static data structures may be better suited for parallel architectures. Because in this paper we only consider a scalar implementation, in the remaining
experiments we limit ourselves to the timings for the DDS implementation of AIBC.
In all the experiments (excluding the ones performed to measure the timings presented
in the tables) we monitored also the "true" residual ||b − Ax_k||_2. In general, we found that
the discrepancy between this and the norm of the updated residual was small. However,
we found that for some very ill-conditioned matrices in the Harwell-Boeing collection (not
included in the tables) this difference may be very large. For instance, for some of the LNS*
and WEST* matrices, we found that ||b − Ax_k||_2 was orders of magnitude larger than ||r_k||_2 for the final value of r_k. This
happened both with the ILU(0) and with the approximate inverse preconditioner, and we
regarded this as a failure of the preconditioned iterative method.
We present in Tables 4 and 5 the results of some experiments on matrices which have
been reduced to block lower triangular form. We compared the number of iterations of
the preconditioned iterative methods and their timings for the block approximate inverse
preconditioner and for the block ILU(0) preconditioner as described in §6. Since some of
the matrices have only trivial block lower triangular form (one block, or two blocks with one
of the blocks of dimension one for some matrices) we excluded them from our experiments.
In
Table
4 we give for each matrix the number NBL of blocks and the results of experiments
with ILU(0). In Table 5 we give analogous results for the AIBC preconditioner. The
amount of fill-in (denoted by Fill) for AIBC is computed as the fill-in in the approximate
inverses of the diagonal blocks plus the number of nonzero entries in the off-diagonal blocks.
Table 4: Time to compute the block ILU preconditioner (P-time), number of iterations and time for Bi-CGSTAB, QMR and GMRES(20) with block ILU(0) preconditioning.
Table 5: Time to compute the block AIBC preconditioner (P-time), number of iterations and time in seconds for Bi-CGSTAB, QMR and GMRES(20) with block AIBC preconditioning.
MATRIX Fill P-time | Block AIBC Its: BST QMR G(20) | Block AIBC Time: BST QMR G(20)
It is clear that in general the reduction to block triangular form does not lead to a
noticeable improvement in the timings, at least in a sequential implementation. We observe
that when the block form is used, the results for ILU(0) are sometimes worse. This can
probably be attributed to the permutations, which are known to cause in some cases a
degradation of the rate of convergence of the preconditioned iterative method [22]. A
notable exception is the matrix WATT2, for which the number of iterations is greatly
reduced. On the other hand, the results for the block approximate inverse preconditioner
are mostly unchanged or somewhat better. Again, matrix WATT2 represents an exception:
this problem greatly benefits from the reduction to block triangular form. In any case,
permutations did not adversely affect the rate of convergence of the preconditioned iterative
method. This fact suggests that perhaps the approximate inverse preconditioner is more
robust than ILU(0) with respect to reorderings.
To gain more insight on how permutations of the original matrix can influence the
quality of both types of preconditioners, we did some experiments where the matrix A was
permuted using the minimum degree algorithm on the structure of A + A T (see [28]). We
applied the resulting permutation to A symmetrically to get PAP T , in order to preserve
the nonzero diagonal. Tables 6 and 7 present the results for the test matrices having trivial
block triangular form. The corresponding preconditioners are denoted by ILU(0)-MD and
AIBC-MD, respectively.
Table 6: Time to compute the ILU(0) preconditioner (P-time) for A permuted according to the minimum degree algorithm on A + A^T, number of iterations and time for Bi-CGSTAB, QMR and GMRES(20) with ILU(0)-MD preconditioning.
MATRIX P-time | ILU-MD Its: BST QMR G(20) | ILU-MD Time: BST QMR G(20)
26 43 47 2.18 4.57 6.45
Table 7: Time to compute the AIBC preconditioner (P-time) for A permuted by the minimum degree algorithm on A + A^T, number of iterations and time for Bi-CGSTAB, QMR and GMRES(20) with AIBC-MD preconditioning.
MATRIX Fill P-time | AIBC-MD Its: BST QMR G(20) | AIBC-MD Time: BST QMR G(20)
7152 0.31
43 48 0.38 1.23 1.38
19409 0.58 104 95 † 2.91 4.14 †
The results in Table 6 show that for some problems, especially those coming from PDEs,
minimum degree reordering has a detrimental effect on the convergence of the iterative
solvers preconditioned with ILU(0). In some cases we see a dramatic increase in the number
of iterations. This is in analogy with the observed fact (see, e.g., [22]) that when the
minimum degree ordering is used, the no-fill incomplete Cholesky decomposition of an SPD
matrix is a poor approximation of the coefficient matrix, at least for problems arising from
the discretization of 2D PDEs. The convergence of the conjugate gradient method with such
a preconditioner (ICCG(0)) is much slower than if the natural ordering of the unknowns
was used. Here we observe a similar phenomenon for nonsymmetric linear systems. Note
the rather striking behavior of matrix ADD20, which benefits greatly from the minimum
degree reordering (this matrix arises from a circuit model and not from the discretization
of a PDE).
It was also observed in [22] that the negative impact of minimum degree on the rate
of convergence of PCG all but disappears when the incomplete Cholesky factorization of
A is computed by means of a drop tolerance rather than by position. It is natural to ask
whether the same holds true for the approximate inverse preconditioner AIBC, which is
computed using a drop tolerance. The results in Table 7 show that this is indeed the case.
For most of the test problems the number of iterations was nearly unaffected (or better)
and in addition we note that the minimum degree ordering helps in preserving sparsity in
the incomplete inverse factors. While this is usually not enough to decrease the computing
times, the fact that it is possible to reduce storage demands for the approximate inverse
preconditioner without negatively affecting the convergence rates might become important
for very large problems.
We conclude this section with some observations concerning the choice of the drop
tolerance T . In all our experiments we used a fixed value of T throughout the incomplete
biconjugation process. However, relative drop tolerances, whose value is adapted from step
to step, could also be considered (see [57] for a thorough discussion of the issues related to
the choice of drop tolerances in the context of ILU). We have observed that the amount of
fill-in is distributed rather unevenly in the course of the approximate inverse factorization. A
large proportion of nonzeros is usually concentrated in the last several columns of Z̄ and W̄.
For some problems with large fill, it may be preferable to switch to a larger drop tolerance
when the columns of the incomplete factors start filling-in strongly. Conversely, suppose
we have computed an approximate inverse preconditioner for a certain value of T , and we
find that the preconditioned iteration is converging slowly. Provided that enough storage is
available, one could then try to recompute at least some of the columns of Z̄ and W̄ using
a smaller value of T . Unfortunately, for general sparse matrices there is no guarantee that
this will result in a preconditioner of improved quality. Indeed, allowing more nonzeros in
the preconditioner does not always result in a reduced number of iterations.
Finally, it is worthwhile to observe that a dual threshold variant of the incomplete
inverse factorization could be adopted, see [51]. In this approach, a drop tolerance is
applied but a maximum number of nonzeros per column is specified and enforced during
the computation of the preconditioner. In this way, it is possible to control the maximum
storage needed by the preconditioner, which is important for an automated implementation.
This approach has not been tried yet, but we hope to do so in the near future.
9. Conclusions and future work. In this paper we have developed a sparse approximate
inverse preconditioning technique for nonsymmetric linear systems. Our approach is
based on a procedure to compute two sets of biconjugate vectors, performed incompletely
to preserve sparsity. This algorithm produces an approximate triangular factorization of
A^{-1}, which is guaranteed to exist if A is an H-matrix (similar to the ILU factorization).
The factorized sparse approximate inverse is used as an explicit preconditioner for
conjugate gradient-type methods. Applying the preconditioner only requires sparse matrix-vector
products, which is of considerable interest for use on parallel computers.
The new preconditioner was used to enhance the convergence of different iterative
solvers. Based on extensive numerical experiments, we found that our preconditioner can
ensure convergence rates which are comparable, on average, with those from the standard ILU(0)
preconditioner. While the approximate inverse factorization is more time-consuming
to compute than ILU(0), its cost is not prohibitive, and is typically dominated by
the time required by the iterative part. This is in contrast with other approximate inverse
preconditioners, based on Frobenius norm minimization, which produce similar convergence
rates but are very expensive to compute.
It is possible that in a parallel environment the situation will be reversed, since the
preconditioner construction with the Frobenius norm approach is inherently parallel. How-
ever, there is some scope for parallelization also in the inverse factorization on which our
method is based: for instance, the approximate inverse factors Z̄ and W̄ can be computed
largely independently of each other. Clearly, this is a point which requires further research,
and no conclusion can be drawn until parallel versions of this and other approximate inverse
preconditioners have been implemented and tested.
Our results point to the fact that the quality of the approximate inverse preconditioner
is not greatly affected by reorderings of the coefficient matrix. This is important in practice
because it suggests that we may use permutations to increase the potential for parallelism or
to reduce the amount of fill in the preconditioner, without spoiling the rate of convergence.
The theoretical results on fill-in in §5 provide guidelines for the use of pivoting strategies for
enhancing the sparsity of the approximate inverse factors, and this is a topic that deserves
further research.
Based on the results of our experiments, we conclude that the technique introduced
in this paper has the potential to become a useful tool for the solution of large sparse
nonsymmetric linear systems on modern high-performance architectures. Work on a parallel
implementation of the new preconditioner is currently under way. Future work will also
include a dual threshold implementation of the preconditioner computation.
Acknowledgments
. We would like to thank one of the referees for helpful comments and
suggestions, and Professor Miroslav Fiedler for providing reference [24]. The first author
gratefully acknowledges the hospitality and excellent research environment provided by the
Institute of Computer Science of the Czech Academy of Sciences.
--R
Parallel Implementation of Preconditioned Conjugate Gradient Methods for Solving Sparse Systems of Linear Equations.
Iterative Solution Methods.
Templates for the Solution of Linear Systems.
Parallel algorithms for the solution of certain large sparse linear systems.
A Direct Row-Projection Method for Sparse Linear Systems
A direct projection method for sparse linear systems.
An explicit preconditioner for the conjugate gradient method.
A sparse approximate inverse preconditioner for the conjugate gradient method.
A sparse approximate inverse preconditioner for nonsymmetric linear systems.
Krylov methods preconditioned with incompletely factored matrices on the CM-2
Approximate inverse preconditioners for general sparse matrices.
Approximate inverse techniques for block-partitioned matrices
User's guide for SPARSPAK-A: Waterloo sparse linear equations package
Block preconditioning for the conjugate gradient method.
Approximate inverse preconditionings for sparse linear systems.
Sparse matrix collection.
Decay rates for inverses of band matrices.
Direct Methods for Sparse Matrices.
Users' guide for the Harwell-Boeing sparse matrix collection
The effect of ordering on preconditioned conjugate gradients.
A stability analysis of incomplete LU factorizations.
Inversion of bigraphs and connection with the Gauss elimination.
An Introduction to Numerical Linear Algebra.
Notes on the solution of algebraic linear simultaneous equations.
Computer Solution of Large Sparse Positive Definite Systems.
The evolution of the minimum degree algorithm.
Predicting structure in sparse matrix computations.
On approximate-inverse preconditioners
Parallel preconditioning with sparse approximate inverses.
Parallel preconditioning and approximate inverses on the Connection Machine.
A parallel preconditioned conjugate gradient package for solving sparse linear systems on a Cray Y-MP
Inversion of matrices by biorthogonalization and related results.
Method of conjugate gradients for solving linear systems.
The Theory of Matrices in Numerical Analysis.
Polynomial preconditioning for conjugate gradient calculations.
The efficient parallel iterative solution of large sparse linear sys- tems
Explicitly preconditioned conjugate gradient method for the solution of unsymmetric linear systems.
New convergence results and preconditioning strategies for the conjugate gradient method.
Factorized sparse approximate inverse (FSAI) preconditionings for solving 3D FE systems on massively parallel computers II: Iterative construction of FSAI preconditioners.
Factorized sparse approximate inverse preconditioning I: Theory.
Factorized sparse approximate inverse preconditioning II: Solution of 3D FE systems on massively parallel computers.
Krylov Methods for the Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations
An incomplete factorization technique for positive definite linear systems.
An iterative solution method for linear systems of which the coefficient matrix is a symmetric M-matrix
Some properties of approximate inverses of matrices.
Preconditioning techniques for nonsymmetric and indefinite linear systems.
SPARSKIT: A basic tool kit for sparse matrix computations.
ILUT: A dual threshold incomplete LU factorization.
Conjugate direction methods for solving systems of linear equations.
High performance preconditioning.
Matrix Iterative Analysis.
Implementation of the GMRES method using Householder transformations.
Computational Methods for General Sparse Matrices.
--TR
--CTR
Kai Wang , Jun Zhang, Multigrid treatment and robustness enhancement for factored sparse approximate inverse preconditioning, Applied Numerical Mathematics, v.43 n.4, p.483-500, December 2002
Claus Koschinski, New methods for adapting and for approximating inverses as preconditioners, Applied Numerical Mathematics, v.41 n.1, p.179-218, April 2002
Stephen T. Barnard , Luis M. Bernardo , Horst D. Simon, An MPI Implementation of the SPAI Preconditioner on the T3E, International Journal of High Performance Computing Applications, v.13 n.2, p.107-123, May 1999
N. Guessous , O. Souhar, Multilevel block ILU preconditioner for sparse nonsymmetric M-matrices, Journal of Computational and Applied Mathematics, v.162 n.1, p.231-246, 1 January 2004
Matthias Bollhfer , Volker Mehrmann, Some convergence estimates for algebraic multilevel preconditioners, Contemporary mathematics: theory and applications, American Mathematical Society, Boston, MA, 2001
Michele Benzi , Miroslav Tma, A parallel solver for large-scale Markov chains, Applied Numerical Mathematics, v.41 n.1, p.135-153, April 2002
Mansoor Rezghi , S. Mohammad Hosseini, An ILU preconditioner for nonsymmetric positive definite matrices by using the conjugate Gram-Schmidt process, Journal of Computational and Applied Mathematics, v.188 n.1, p.150-164, 1 April 2006
M. H. Koulaei , F. Toutounian, On computing of block ILU preconditioner for block tridiagonal systems, Journal of Computational and Applied Mathematics, v.202 n.2, p.248-257, May, 2007
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | approximate inverses;incomplete factorizations;sparse linear systems;conjugate gradient-type methods;sparse matrices;preconditioning |
292377 | Approximate Inverse Preconditioners via Sparse-Sparse Iterations. | The standard incomplete LU (ILU) preconditioners often fail for general sparse indefinite matrices because they give rise to "unstable" factors L and U. In such cases, it may be attractive to approximate the inverse of the matrix directly. This paper focuses on approximate inverse preconditioners based on minimizing ||I-AM||F, where AM is the preconditioned matrix. An iterative descent-type method is used to approximate each column of the inverse. For this approach to be efficient, the iteration must be done in sparse mode, i.e., with "sparse-matrix by sparse-vector" operations. Numerical dropping is applied to maintain sparsity; compared to previous methods, this is a natural way to determine the sparsity pattern of the approximate inverse. This paper describes Newton, "global," and column-oriented algorithms, and discusses options for initial guesses, self-preconditioning, and dropping strategies. Some limited theoretical results on the properties and convergence of approximate inverses are derived. Numerical tests on problems from the Harwell--Boeing collection and the FIDAP fluid dynamics analysis package show the strengths and limitations of approximate inverses. Finally, some ideas and experiments with practical variations and applications are presented. | Introduction
. The incomplete LU factorization preconditioners were originally
developed for M-matrices that arise from the discretization of very simple partial
differential equations of elliptic type, usually in one variable. For the rather common
situation where the matrix A is indefinite, standard ILU factorizations may face several
difficulties, the best known of which is the encounter of a zero pivot. However,
there are other problems that are just as serious. Consider an incomplete factorization
of the form
    A = LU + E,   (1.1)
where E is the error. The preconditioned matrices associated with the different forms
of preconditioning are similar to
    L^{-1} A U^{-1} = I + L^{-1} E U^{-1}.   (1.2)
What is sometimes missed is the fact that the error matrix E in (1.1) is not as
important as the preconditioned error matrix L^{-1} E U^{-1} shown in (1.2) above. When
the matrix A is diagonally dominant, L and U are typically well conditioned, and the
size of L^{-1} E U^{-1} remains confined within reasonable limits, typically with a clustering
of its eigenvalues around the origin. On the other hand, when the original matrix is
not diagonally dominant, L^{-1} or U^{-1} may have very large norms, causing the error
to be very large and thus adding large perturbations to the identity matrix.
This form of instability was studied by Elman [14] in a detailed analysis of ILU and
MILU preconditioners for finite difference matrices. It can be observed experimentally
that ILU preconditioners can be very poor when L^{-1} or U^{-1} are large, and that this
situation often occurs for indefinite problems, or problems with large nonsymmetric
parts.
One possible remedy that has been proposed is stabilized or perturbed incomplete
factorizations, for example [15] and the references in [25]. A numerical comparison
with these preconditioners will be given later. In this paper, we consider trying to
find a preconditioner that does not require solving a linear system. For example,
we can precondition the original system with a sparse matrix M that is a direct
approximation to the inverse of A. Sparse approximate inverses are also necessary
for incomplete block factorizations with large sparse blocks, as well as several other
applications, also described later.
We focus on methods of finding approximate inverses based on minimizing the
Frobenius norm of the residual matrix I \Gamma AM , first suggested by Benson and Frederickson
[5, 6]. Consider the minimization of
to seek a right approximate inverse. An important feature of this objective function
is that it can be decoupled as the sum of the squares of the 2-norms of the individual
columns of the residual matrix I \Gamma AM
in which e j and m j are the j-th columns of the identity matrix and of the matrix
M , respectively. Thus, minimizing (1.4) is equivalent to minimizing the individual
functions
This is clearly useful for parallel implementations. It also gives rise to a number of
different options.
The minimization in (1.5) is most often performed directly by prescribing a sparsity
pattern for M and solving the resulting least squares problems. Grote and Simon
choose M to be a banded matrix with 2p emphasizing
the importance of the fast application of the preconditioner in a CM-2 implementa-
tion. This choice of structure is particularly suitable for banded matrices.
Cosgrove, D'iaz and Griewank [10] select the initial structure of M to be diagonal
and then use a procedure to improve the minimum by updating the sparsity pattern of
M . New fill-in elements are chosen so that the fill-in contributes a certain improvement
while minimizing the number of new rows in the least squares subproblem. In similar
work by Grote and Huckle [18], the reduction in the residual norm is tested for each
candidate fill-in element, but fill-in may be introduced more than one at a time.
In other related work, Kolotilina and Yeremin [23] consider symmetric, positive
definite systems and construct factorized sparse approximate inverse preconditioners
which are also symmetric, positive definite. Each factor implicitly approximates the
inverse of the lower triangular Cholesky factor of A. The structure of each factor is
chosen to be the same as the structure of the lower triangular part of A. In their more
recent work [24], fill-in elements may be added, and their locations are chosen such
that the construction and application of the approximate inverse is not much more
expensive on a model hypercube computer. Preconditioners for general systems may
be constructed by approximating the left and right factors separately.
This paper is organized as follows. In §2, we present several approximate inverse
algorithms based on iterative procedures, as well as describe sparse-sparse implementation
and various options. We derive some simple theoretical results for approximate
inverses and the convergence behavior of the algorithms in §3. In §4, we show the
strengths and limitations of approximate inverse preconditioners through numerical
tests with problems from the Harwell-Boeing collection and the FIDAP fluid dynamics
analysis package. Finally, in §5, we present some ideas and experiments with practical
variations and applications of approximate inverses.
2. Construction of the approximate inverse via iteration. The sparsity
pattern of an approximate inverse of a general matrix should not be prescribed, since
an appropriate pattern is usually not known beforehand. In contrast to the previous
work described above, the locations and values of the nonzero elements are determined
naturally as a side-effect of utilizing an iterative procedure to minimize (1.3)
or (1.5). In addition, elements in the approximate inverse may be removed by a numerical
dropping strategy if they contribute little to the inverse. These features are
clearly necessary for general sparse matrices. In xx2.1 and 2.2 we briefly describe two
approaches where M is treated as a matrix in its entirety, rather than as individual
columns. We found, however, that these methods converge more slowly than if the
columns are treated separately. In the remaining sections, we consider this latter
approach and the various options that are available.
2.1. Newton iteration. As an alternative to directly minimizing the objective
function (1.3), an approximate inverse may also be computed using an iterative process
known as the method of Hotelling and Bodewig [20]. This method, which is modeled
after Newton's method for solving f(x) ≡ 1/x − a = 0, has many similarities to our
descent methods which we describe later. The iteration takes the form
    M_{k+1} = M_k ( 2I − A M_k ).
For convergence, we require that the spectral radius of I − A M_0 be less than one, and
if we choose an initial guess of the form M_0 = α A^T, convergence is achieved if 0 < α < 2 / ρ(A A^T).
In practice, we can follow Pan and Reif [27] and use
    M_0 = A^T / ( ||A||_1 ||A||_∞ )
for the right approximate inverse. As the iterations progress, M becomes denser and
denser, and a natural idea here is to perform the above iteration in sparse mode [26],
i.e., drop some elements in M or else the iterations become too expensive. In this
case, however, the convergence properties of the Newton iteration are lost. We will
show the results of some numerical experiments in §4.
2.2. Global iteration. In this section we describe a 'global' approach to minimizing
(1.3), where we use a descent-type method, treating M as an unknown sparse
matrix. The objective function (1.3) is a quadratic function on the space of n × n
matrices, viewed as objects in R^{n^2}. The actual inner product on the space of matrices
with which the function (1.4) is associated is
    ⟨X, Y⟩ = tr( Y^T X ).
One possible descent-type method we may use is steepest descent, which we will describe
later. In the following, we will call the array representation of an n^2 vector X
the n × n matrix whose column vectors are the successive n-vectors of X.
In descent algorithms a new iterate M_new is defined by taking a step along a
selected direction G, i.e.,
    M_new = M + α G,
in which α is selected to minimize the objective function associated with M_new. This
is achieved by taking
    α = ⟨R, AG⟩ / ⟨AG, AG⟩ = tr( R^T AG ) / tr( (AG)^T AG ),   (2.2)
where R = I − AM is the residual matrix. Note that the denominator may be
computed as ||AG||_F^2. After each of these descent steps is taken, the resulting matrix
will tend to become denser. It is therefore essential to apply some kind of numerical
dropping, either to the new M or to the search direction G before taking the descent
step. In the first case, the descent nature of the step is lost, i.e., it is no longer
guaranteed that F(M_new) ≤ F(M), while in the second case, the fill-in in M is more
difficult to control. We will discuss both these alternatives in §2.5.
The simplest choice for the descent direction G is to take it to be the residual
matrix R = I − AM, where M is the new iterate. The corresponding descent algorithm
is referred to as the Minimal Residual (MR) algorithm. In the simpler case where
numerical dropping is applied to M , our global Minimal Residual algorithm will have
the following form.
Algorithm 2.1. (Global Minimal Residual descent algorithm)
1. Select an initial M
2. Until convergence do
3. Compute G := I − AM
4. Compute α by (2.2)
5. Compute M := M + α G
6. Apply numerical dropping to M
7. End do
Another popular choice is to take G to be the direction of steepest descent, i.e.,
the direction opposite to the gradient. Thinking in terms of n 2 vectors, the gradient
of F can be viewed as an n^2 vector g such that
    F(x + e) = F(x) + (g, e) + o( ||e|| ),
where (·, ·) is the usual Euclidean inner product. If we represent all vectors as 2-
dimensional n × n arrays, then the above relation is equivalent to
    F(M + E) = F(M) + ⟨G, E⟩ + o( ||E|| ),
where G is the array representation of g. This allows us to determine the gradient as an operator on arrays, rather than n^2
vectors, as is done in the next proposition.
Proposition 2.2. The array representation of the gradient of F with respect to
M is the matrix
    G = −2 A^T R,
in which R = I − AM is the residual matrix.
Proof. For any matrix E we have
    F(M + E) − F(M) = tr[ (R − AE)^T (R − AE) ] − tr( R^T R )
                    = − tr[ (AE)^T R + R^T (AE) − (AE)^T (AE) ]
                    = −2 tr( (AE)^T R ) + tr( (AE)^T AE ).
Thus, the differential of F applied to E is the inner product of −2 A^T R with E plus
a second order term. The gradient is therefore simply −2 A^T R.
The steepest descent algorithm consists of simply replacing G in line 3 of the MR
algorithm described above by G := A^T R. This algorithm can be very slow in some
cases, since it is essentially a steepest descent-type algorithm applied to the normal
equations.
In either global steepest descent or minimal residual, we need to form and store
the G matrix explicitly. The scalars ||AG||_F^2 and tr(R^T AG) can be computed from the
successive columns of AG, which can be generated, used, and discarded. Therefore,
we need not store the matrix AG.
We will show the results of some numerical experiments with this global iteration
and compare them with other methods in §4.
2.3. Implementation of sparse mode MR and GMRES. We now describe
column-oriented algorithms which consist of minimizing the individual objective functions
(1.5). We perform this minimization by taking a sparse initial guess and solving
approximately the n linear subproblems
    A m_j = e_j,   j = 1, 2, ..., n,   (2.3)
with a few steps of a nonsymmetric descent-type method, such as MR or untruncated
GMRES. For this method to be efficient, the iterative method must work in sparse
mode, i.e., m j is stored and operated on as a sparse vector, and the Arnoldi basis in
GMRES is kept in sparse format.
In the following MR algorithm, n i iterations are used to solve (2.3) approximately
for each column, giving an approximation to the j-th column of the inverse of A.
Each initial m j is taken from the columns of an initial guess, M 0 . Again, we assume
numerical dropping is applied to M . In the GMRES version of the algorithm, we
never use restarting since n_i is typically very small. Also, a variant called
FGMRES [31] which allows an arbitrary Arnoldi basis, is actually used in this case.
Algorithm 2.3. (Minimal Residual iteration)
1. Set M = M_0
2. For each column j = 1, ..., n do
3. Define m_j := M e_j
4. For i = 1, ..., n_i do
5. r_j := e_j − A m_j
6. α_j := ( r_j, A r_j ) / ( A r_j, A r_j )
7. m_j := m_j + α_j r_j
8. Apply numerical dropping to m_j
9. End do
10. End do
Thus, the algorithm computes the current residual r_j and then minimizes the
residual norm || e_j − A m_{j,new} ||_2 in the set m_j + span{ r_j }.
In the sparse implementation of MR and GMRES, the matrix-vector product,
SAXPY, and dot product kernels now all entirely involve sparse vectors. The matrix-vector
product is much more efficient if the sparse matrix is stored by columns since
all the entries do not need to be traversed. Efficient codes for all these kernels may
be constructed which utilize a full n-length work vector [11].
Columns from an initial guess M 0 for the approximate inverse are used as the
initial guesses for the iterative solution of the linear subproblems. There are two
obvious choices: M_0 = α I and M_0 = α A^T. The scale factor α is chosen to minimize
the spectral radius ρ( I − α A M_0 ). Denoting the initial guess as M_0 = α G and writing
    ∂F(M_0) / ∂α = 0
leads to
    α = tr( A G ) / tr( (A G)^T (A G) ).
The transpose initial guess is more expensive to use because it is denser than the
identity initial guess. However, for very indefinite systems, this guess immediately
produces a symmetric positive definite preconditioned system, corresponding to the
normal error equations. Depending on the structure of the inverse, a denser initial
guess is often required to involve more of the matrix A in the computation. Interest-
ingly, the cheaper the computation, the more it uses only 'local' information, and the
less able it may be to produce a good approximate inverse.
The choice of initial guess also depends to some degree on 'self-preconditioning'
which we describe next. Additional comments on the choice of initial guess will be
presented there.
2.4. Self-preconditioning. The approximate solution of the linear subproblems
using an iterative method suffers from the same problems as solving the
original problem if A is indefinite or poorly conditioned. However, the linear systems
may be preconditioned with the columns that have already been computed. More
precisely, each system (2.3) for approximating column j may be preconditioned with
a matrix M̃_0, where the first j − 1 columns of M̃_0 are the m_k that already have been computed,
and the remaining columns are the initial guesses for the m_k, j ≤ k ≤ n.
This suggests that it is possible to define outer iterations that sweep over the
matrix, as well as inner iterations that compute each column. On each subsequent
outer iteration, the initial guess for each column is the previous result for that column.
This technique usually results in much faster convergence of the approximate inverse.
Unfortunately with this approach, the parallelism of constructing the columns of
the approximate inverse simultaneously is lost. However, there is another variant of
self-preconditioning that is easier to implement and more easily parallelizable. Quite
simply, all the inner iterations are computed simultaneously and the results of all the
columns are used as the self-preconditioner for the next outer iteration. Thus, the
preconditioner for the inner iterations changes only after each outer iteration. The
performance of this variant usually lies between full self-preconditioning and no self-
preconditioning. A more reasonable compromise is to compute blocks of columns in
parallel, and some (inner) self-preconditioning may be used.
Self-preconditioning is particularly valuable for very indefinite problems when
combined with a scaled transpose initial guess; the initial preconditioned system AM 0
is positive definite, and the subsequent preconditioned systems somewhat maintain
this property, even in the presence of numerical dropping. Self-preconditioning with
a transpose initial guess, however, may produce worse results if the matrix A is
very ill-conditioned. In this case, the initial worsening of the conditioning of the
system is too severe, and the alternative scaled identity initial guess should be used
instead. We have also found cases where self-preconditioning produces worse results,
usually for positive definite problems; this is not surprising, since the minimizations
would progress very well, only to be hindered by self-preconditioning with a poor
approximate inverse in the early stages. Numerical evidence of these phenomena will
be provided in x4.
Algorithm 2.4 implements the Minimal Residual iteration with self-preconditioning.
In the algorithm, n_o outer iterations and n_i inner iterations are used. Again, M = M_0
initially. We have also indicated where numerical dropping might be applied.
Algorithm 2.4. (Self-preconditioned Minimal Residual iteration)
1. Start: M = M_0
2. For it = 1, 2, ..., n_o do
3. For each column j = 1, ..., n do
4. Define s := m_j = M e_j
5. For i = 1, ..., n_i do
6. r := e_j − A s
7. z := M r
8. q := A z
9. α := ( r, q ) / ( q, q )
10. s := s + α z
11. Apply numerical dropping to s
12. End do
13. Update j-th column of M: m_j := s
14. End do
15. End do
In a FORTRAN 77 implementation, M is stored as n sparse vectors, each holding
up to lfil entries. M is thus constructed in place.
The multiple outer iterations used in constructing the approximate inverse suggests
the use of factorized updates. Factorized matrices can express denser matrices
than the sum of their numbers of elements alone. Suppose that one outer iteration
has produced the approximate inverse M 1 . Then a second outer iteration tries to
find M 2 , an approximate inverse to AM 1 . In general, after i outer iterations, we are
looking for the update M_{i+1} which minimizes
    min_{M_{i+1}} || I − A M_1 M_2 · · · M_i M_{i+1} ||_F.
It is also possible to construct factorized approximate inverses of the form
    min_{M_{i+1}} || I − M_{i+1} A M_1 M_2 · · · M_i ||_F,
which alternate from left to right factors. This latter form is reminiscent of the
symmetric form of Kolotilina and Yeremin [23].
Since the product M_1 M_2 · · · is never formed explicitly, the factorized approach
effectively uses less memory for the preconditioner at the cost of multiplying with each
factor for each matrix-vector multiplication. This approach may be suitable for very
large problems, where memory rather than solution time is the limiting factor. The
implementation, however, is much more complex, since a sequence of matrices needs
to be maintained.
2.5. Numerical dropping strategies. There are many options for numerical
dropping. So far, to ease the presentation, we have only discussed the case where
dropping is performed on the solution vectors or matrices. Section 2.5.1 discusses
this case in more detail, while x2.5.2 discusses the case where dropping is applied to
the search directions. In the latter case, the descent property of the algorithms is
maintained.
2.5.1. Dropping in the solution. When dropping is performed on the solution,
we have options for
1. when dropping is performed, and
2. which elements are dropped.
In the previous algorithms, we have made the first point precise; however, there are
other alternatives. For example, dropping may be performed only after M or each
column of M is computed. Typically this option is too expensive, but as a compromise,
dropping may be performed at the end of a few inner iterations, before M is updated,
namely before step 13 in Algorithm 2.4. Interestingly, we found experimentally that
this option is not always better.
In GMRES, the Krylov basis vectors are kept sparse by dropping elements just
after the self-preconditioning step, before the multiplication by A.
To address which elements are dropped, we can utilize a dual threshold strategy
based on a drop tolerance, droptol, and the maximum number of elements per column,
lfil. By limiting the maximum number of elements per column, the maximum storage
for the preconditioner is known beforehand.
The drop tolerance may be applied directly to the elements to be dropped: i.e.,
elements are dropped if their magnitude is smaller than droptol. However, we found
that this strategy could cause spoiling of the minimization, i.e., the residual norm
may increase after several steps, along with a deterioration of the quality of the
preconditioner.
If dropping small elements in m_j is sub-optimal, one may ask the question whether
or not dropping can be performed more optimally. A simple perturbation analysis
will help understand the issues. We denote by m_j the current column, and by m̃_j the
perturbed column formed by adding the sparse column d in the process of numerical
dropping. The new column and corresponding residual are therefore
    m̃_j = m_j + d,   r̃_j = e_j − A m̃_j = r_j − A d.
The square of the residual norm of the perturbed m_j is given by
    || r̃_j ||_2^2 = || r_j ||_2^2 − 2 ( A^T r_j, d ) + || A d ||_2^2.
Recall that −2 A^T r_j is the gradient of the function (1.5). As is expected from standard
results in optimization, if d is in the direction opposite to the gradient, and if it is
small enough, we can achieve a decrease of the residual norm. Spoiling occurs when
( A^T r_j, d ) is close to zero, so that for practical sizes of ||d||_2 the term ||Ad||_2^2 becomes dominant,
causing an increase in the residual norm.
Consider specifically the situation where only one element is dropped, and assume
that all the columns A e_i of A have been pre-scaled so that ||A e_i||_2 = 1. In this case,
d = −m_{ij} e_i, and the above equation becomes
    || r̃_j ||_2^2 = || r_j ||_2^2 + 2 m_{ij} ( A^T r_j )_i + m_{ij}^2.
A strategy could therefore be based on attempting to make the function
    ρ_{ij} = m_{ij}^2 + 2 m_{ij} ( A^T r_j )_i   (2.8)
nonpositive, a condition which is easy to verify. This suggests selecting elements to
drop in m j only at indices i where the selection function (2.8) is zero or negative.
However, note that this is not entirely rigorous since in practice a few elements are
dropped at the same time. Thus we do not entirely perform dropping via numerical
values alone. In a two-stage process, we first select a number of candidate elements
to be dropped based only on the numerical size as determined by a certain tolerance.
Among these, we drop all those that satisfy the condition ρ_{ij} ≤ 0, or we can keep those lfil elements that have the largest ρ_{ij}.
Another alternative is based on attempting to achieve maximum reduction in the
function (2.8). Ideally, we wish to have
    m_{ij} = − ( A^T r_j )_i,
since this will achieve the 'optimal' reduction in (2.8), namely −( (A^T r_j)_i )^2.
This leads to the alternative strategy of dropping elements in positions i of m_j where
| m_{ij} + ( A^T r_j )_i | are the smallest. We found, however, that this strategy produces
poorer results than the previous one, and neither of these strategies completely eliminates
spoiling.
2.5.2. Dropping in the search direction. Dropping may be performed on
the search direction G in Algorithm 2.1, or equivalently in r j and z in Algorithms
2.3 and 2.4 respectively. In these cases, the descent property of the algorithms is
maintained, and the problem of spoiling is avoided.
Starting with a sparse initial guess, the allowed number of fill-ins is gradually
increased at each iteration. For an MR-like algorithm, the search direction d is derived
by dropping entries from the residual direction r. So that the sparsity pattern of the
solution x is controlled, d is chosen to have the same sparsity pattern as x, plus one
new entry, the largest entry in absolute value. No drop tolerance is used. Minimization
is performed by choosing the step-length as $\alpha = (r, A d)\,/\,(A d, A d)$,
and thus the residual norm for the new solution is guaranteed to be not more than the
previous residual norm. In contrast to Algorithm 2.3, the residual may be updated
with very little cost. The iterations may continue as long as the residual norm is
larger than some threshold, or a set number of iterations may be used.
If A is indefinite, the normal equations residual direction A T r may be used as the
search direction, or simply to determine the location of the new fill-in. It is interesting
to note that the largest entry in A T r gives the greatest residual norm reduction in a
one-dimensional minimization. When fill-in is allowed to increase gradually using this
search direction, this technique becomes very similar to the adaptive selection scheme
of [18]. The effect is also similar to self-preconditioning with a transpose initial guess.
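To spell out the one-dimensional claim (a worked restatement of the standard argument, again assuming the columns of A are scaled so that $\|A e_i\|_2 = 1$): a single fill-in at position i with optimal step length gives
$$\min_{\alpha}\, \|r - \alpha\, A e_i\|_2^2 \;=\; \|r\|_2^2 - \frac{\left(e_i^T A^T r\right)^2}{\|A e_i\|_2^2} \;=\; \|r\|_2^2 - \left(e_i^T A^T r\right)^2 ,$$
so the largest possible reduction is obtained at the index where $|(A^T r)_i|$ is largest.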
At the end of each iteration, it is possible to use a second stage that exchanges
entries in the solution with new entries if this causes a reduction in the residual norm.
This is required if the sparsity pattern in the approximate inverse needs to change as
the approximations progress. We have found this to be necessary, particularly for very
unstructured matrices, but have not yet found a strategy that is genuinely effective
[7]. As a result, approximations using numerical dropping in the solution are often
better, even though the scheme just described has a stronger theoretical justification,
similar to that of [18]. This also shows that the adaptive scheme of [18] may benefit
from such an exchange strategy.
Algorithm 2.5 implements a Minimal Residual-like algorithm with this numerical
dropping strategy. The number of inner iterations is usually chosen to be lfil or
somewhat larger.
Algorithm 2.5. (Self-preconditioned MR algorithm with dropping in search direction)
1. Start: choose an initial guess M
2. For each column j = 1, ..., n do
3.    m_j := M e_j
4.    r_j := e_j - A m_j
5.    For i = 1, ..., n_i do
6.       t := M r_j
7.       Choose d to be t with the same pattern as m_j, plus (if one exists) one entry which is the largest remaining entry of t in absolute value
8.       q := A d
9.       alpha := (r_j, q) / (q, q)
10.      m_j := m_j + alpha d
11.      r_j := r_j - alpha q
12.   End do
13. End do
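The following NumPy sketch of Algorithm 2.5 uses dense arithmetic for clarity (a production version would keep the columns and residuals in sparse storage); it follows the reconstructed steps above, the default initial guess is a scaled identity rather than the scaled transpose also used in the text, and all function and variable names are ours.

import numpy as np

def approx_inverse_mr_drop_search(A, lfil, M0=None, self_precondition=True):
    """Self-preconditioned MR with dropping in the search direction (cf. Algorithm 2.5).

    Each column's pattern grows by at most one entry per inner iteration, and
    the step length alpha guarantees a non-increasing residual norm.
    """
    n = A.shape[0]
    # A simple scaled-identity initial guess; the text also uses scaled A^T.
    M = (np.eye(n) / np.linalg.norm(A, 1)) if M0 is None else M0.copy()
    for j in range(n):
        e = np.zeros(n); e[j] = 1.0
        m = M[:, j].copy()
        r = e - A @ m                                # r_j := e_j - A m_j
        for _ in range(lfil):                        # number of inner iterations ~ lfil
            t = M @ r if self_precondition else r   # t := M r_j (self-preconditioning)
            d = np.where(m != 0.0, t, 0.0)           # d has the same pattern as m_j ...
            new = np.where(m == 0.0, t, 0.0)
            if np.any(new != 0.0):                   # ... plus the largest remaining entry of t
                i = int(np.argmax(np.abs(new)))
                d[i] = t[i]
            q = A @ d
            if not np.any(q):
                break
            alpha = (r @ q) / (q @ q)
            m += alpha * d
            r -= alpha * q
        M[:, j] = m
    return M

With dropping applied this way the residual update in the inner loop is exact, so no extra matrix-vector product is needed to recompute r_j, which is the cost advantage mentioned above.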
If dropping is applied to the unpreconditioned residual, then economical use of this
approximate inverse technique is not limited to approximating the solution to linear
systems with sparse coefficient matrices or sparse right-hand sides. An approximation
may be found, for example, to a factorized matrix, or a dense operator which may
only be accessed with a matrix-vector product. Such a need may arise, for instance,
when preconditioning row projection systems. These approximations are not possible
with other existing approximate inverse techniques.
We must mention here that any adaptive strategy such as this one for choosing the
sparsity pattern makes massive parallelization of the algorithm more difficult. If, for
instance, each processor has the task of computing a few columns of the approximate
inverse, it is not known beforehand which columns of A must be fetched into each
processor.
2.6. Cost of constructing the approximate inverse. The cost of computing
the approximate inverse is relatively high. Let n be the dimension of the linear system, $n_o$ be the number of outer iterations, and $n_i$ be the number of inner iterations ($n_i \approx$ lfil in Algorithm 2.5).
We approximate the cost by the number of sparse matrix-sparse vector multiplications
in the sparse mode implementation of MR and GMRES. Profiling for a
few problems shows that this operation accounts for about three-quarters of the time
when self-preconditioning is used. The remaining time is used primarily by the sparse
dot product and sparse SAXPY operations, and in the case of sparse mode GMRES,
the additional work within this algorithm.
If Algorithm 2.4 is used, two sparse mode matrix-vector products are used, the
first one for computing the residual; three are required if self-preconditioning is used.
In Algorithm 2.5 the residual may be updated easily and stored, or recomputed as in
Algorithm 2.4. Again, an additional product is required for self-preconditioning. The cost is simply $n\, n_o\, n_i$ times the number of these sparse mode matrix-vector multiplications. Each multiplication is cheap, depending on the sparseness of the columns in M. Dropping in the search directions, however, is slightly more expensive because, although the vectors are sparser at the beginning, it typically requires many more inner iterations (e.g., one for each fill-in).
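Summarizing the count just given as a formula (our notation):
$$\text{work} \;\approx\; n\, n_o\, n_i \times c_{\rm mv} \times (\text{cost of one sparse matrix--sparse vector product}),\qquad c_{\rm mv} = \begin{cases} 2 & \text{without self-preconditioning},\\ 3 & \text{with self-preconditioning}.\end{cases}$$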
In the Newton iteration, two sparse matrix-sparse matrix products are required, although the convergence rate may be doubled with a form of Chebyshev acceleration [28].
Global iterations without self-preconditioning require three matrix-matrix products.
These costs are comparable to the column-oriented algorithms above.
3. Theoretical considerations. Theoretical results regarding the quality of
approximate inverse preconditioners are difficult to establish. However, we can prove
a few rather simple results for general approximate inverses and the convergence
behavior of the algorithms.
3.1. Nonsingularity of M . An important question we wish to address is whether
or not an approximate inverse obtained by the approximations described earlier can
be singular. It cannot be proved that M is nonsingular unless the approximation
is accurate enough, typically to a level that is impractical to attain. This is a difficulty
for all approximate inverse preconditioners, except for triangular factorized
forms described in [23].
The drawback of using an M that is possibly singular is the need to check the solution, or the actual residual norm, at the end of the linear iterations. In practice,
we have not noticed premature terminations due to a singular preconditioned system,
and this is likely a very rare event.
We begin this section with an easy proposition.
Proposition 3.1. Assume that A is nonsingular and that the residual of the approximate inverse M satisfies the relation
$$\|I - AM\| < 1$$
for some consistent matrix norm. Then M is nonsingular.
Proof. The result follows immediately from the equality
$$AM = I - (I - AM)$$
and the well-known fact that if $\|N\| < 1$, then $I - N$ is nonsingular. We note that the result is true in particular for the Frobenius norm, which, although not an induced matrix norm, is consistent.
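A standard consequence of the same argument, stated here for reference (it is not in the text, but follows from the Neumann series): if $\|I - AM\| < 1$ for a consistent norm, then
$$\|(AM)^{-1}\| \le \frac{1}{1 - \|I - AM\|}, \qquad \|M^{-1}\| = \|(AM)^{-1} A\| \le \frac{\|A\|}{1 - \|I - AM\|}.$$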
It may sometimes be the case that AM is poorly balanced and as a result I \Gamma AM
can be large. Then balancing AM can yield a smaller norm and possibly a less
restrictive condition for the nonsingularity of M . It is easy to extend the previous
result as follows.
Corollary 3.2. Assume that A is nonsingular and that there exist two nonsingular diagonal matrices $D_1$, $D_2$ such that
$$\|I - D_1 A M D_2\| < 1$$
for some consistent matrix norm. Then M is nonsingular.
Proof. Applying the previous result to the product $(D_1 A)(M D_2)$ implies that $D_1 A M D_2$ is nonsingular, from which the result follows.
Of particular interest is the 1-norm. Each column is obtained independently by requiring a condition on the residual norm of the form
$$\|e_j - A m_j\| \le \tau_j .$$
We typically use the 2-norm, since we measure the magnitude of the residual $I - AM$ using the Frobenius norm. However, using the 1-norm for a stopping criterion allows us to prove a number of simple results. We will assume in the following that we require a condition of the form
$$\|e_j - A m_j\|_1 \le \tau_j \tag{3.5}$$
for each column. Then we can prove the following result.
Proposition 3.3. Assume that the condition (3.5) is imposed on each computed column of the approximate inverse, and let $\tau = \max_j \tau_j$. Then:
1. any eigenvalue $\lambda$ of the preconditioned matrix AM is located in the disc $|\lambda - 1| \le \tau$;
2. if $\tau < 1$, then M is nonsingular;
3. if any k columns of M, with $k \le n$, are linearly dependent, then at least one residual associated with one of these columns has a 1-norm $\ge 1$.
Proof. To prove the first property we invoke Gershgorin's theorem on the matrix $AM = I - R$, where each column of R is the residual vector $r_j = e_j - A m_j$. The column version of Gershgorin's theorem, see e.g., [30, 17], asserts that all the eigenvalues of the matrix $I - R$ are located in the union of the disks centered at the diagonal elements $1 - r_{jj}$ and with radius $\sum_{i \ne j} |r_{ij}|$. In other words, each eigenvalue $\lambda$ must satisfy at least one inequality of the form
$$|\lambda - (1 - r_{jj})| \le \sum_{i \ne j} |r_{ij}| ,$$
from which we get
$$|\lambda - 1| \le \sum_{i} |r_{ij}| = \|r_j\|_1 \le \tau .$$
Therefore, each eigenvalue is located in the disk of center 1 and radius $\tau$. The second property is a restatement of the previous proposition and follows also from the first property.
To prove the last point we assume without loss of generality that the first k columns are linearly dependent. Then there are k scalars $\alpha_i$, not all zero, such that
$$\sum_{i=1}^{k} \alpha_i m_i = 0 . \tag{3.7}$$
We can assume also without loss of generality that the 1-norm of the vector of $\alpha$'s is equal to one (this can be achieved by rescaling the $\alpha$'s). Multiplying (3.7) through by A yields
$$\sum_{i=1}^{k} \alpha_i A m_i = 0 ,$$
which gives
$$\sum_{i=1}^{k} \alpha_i e_i = \sum_{i=1}^{k} \alpha_i (e_i - A m_i) = \sum_{i=1}^{k} \alpha_i r_i .$$
Taking the 1-norms of each side, we get
$$1 = \Big\|\sum_{i=1}^{k} \alpha_i e_i\Big\|_1 = \Big\|\sum_{i=1}^{k} \alpha_i r_i\Big\|_1 \le \sum_{i=1}^{k} |\alpha_i|\, \|r_i\|_1 \le \max_{1\le i\le k} \|r_i\|_1 .$$
Thus at least one of the 1-norms of the residuals $r_i$ must be $\ge 1$.
We may ask the question as to whether similar results can be shown with other
norms. Since the other norms are equivalent we can clearly adapt the above results
in an easy way. For example, $\|r_j\|_1 \le \sqrt{n}\, \|r_j\|_2$.
However, the resulting statements would be too weak to be of any practical value. We
can exploit the fact that since we are computing a sparse approximation, the number
p of nonzero elements in each column is small, and thus we replace the scalar n in the
above inequalities by p [18].
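For instance, under the assumption that each residual column $r_j = e_j - A m_j$ has at most p nonzero entries, the Cauchy--Schwarz inequality gives
$$\|r_j\|_1 \le \sqrt{p}\, \|r_j\|_2 ,$$
so enforcing $\|e_j - A m_j\|_2 \le \tau_j/\sqrt{p}$ during the column iterations is enough to guarantee the 1-norm condition (3.5).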
We should point out that the result does not tell us anything about the degree
of sparsity of the resulting approximate inverse M . It may well be the case that in
order to guarantee nonsingularity, we must have an M that is dense, or nearly dense.
In fact, in the particular case where the norm in the proposition is the 1-norm, it has
been proved by Cosgrove, Díaz and Griewank [10] that the approximate inverse may
be structurally dense, in that it is always possible to find a sparse matrix A for which
M will be dense if kI \Gamma AMk 1 ! 1.
Next we examine the sparsity of M and prove a simple result for the case where
an assumption of the form (3.5) is made.
Proposition 3.4. Let $B = A^{-1}$ and assume that a given element $b_{ij}$ of B satisfies the inequality
$$|b_{ij}| > \tau \max_{k} |b_{ik}| , \tag{3.9}$$
then the element $m_{ij}$ is nonzero.
Proof. From the equality $AM = I - R$ we have $M = B(I - R) = B - BR$. Thus,
$$m_{ij} = b_{ij} - (e_i^T B)\, r_j ,$$
and
$$\left| (e_i^T B)\, r_j \right| \le \max_k |b_{ik}|\; \|r_j\|_1 \le \tau \max_k |b_{ik}| .$$
Thus, if the condition (3.9) is satisfied, we must have $m_{ij} \ne 0$. This tells us
that if R is small enough, then the nonzero elements of M are located in positions
corresponding to the larger elements in the inverse of A. The following negative result
is an immediate corollary.
Corollary 3.5. Let $\tau$ be defined as in Proposition 3.3. If the nonzero elements of $B = A^{-1}$ are $\tau$-equimodular, in that every nonzero element satisfies
$$|b_{ij}| > \tau \max_{k} |b_{ik}| ,$$
then the nonzero sparsity pattern of M includes the nonzero sparsity pattern of $A^{-1}$. In particular, if $A^{-1}$ is dense and its elements are $\tau$-equimodular, then M is also dense. The smaller the value of $\tau$, the more likely the condition of the corollary will
be satisfied. Another way of stating the corollary is that we will be able to compute
accurate and sparse approximate inverses only if the elements of the actual inverse
have variations in size. Unfortunately, this is difficult to verify in advance.
3.2. Case of a nearly singular A. Consider first a singular matrix A, with
a singularity of rank one, i.e., the eigenvalue 0 is single. Let z be an eigenvector
associated with this eigenvalue. Then, each subsystem (2.3) that is being solved by
MR or GMRES will provide an approximation to the system, except that it cannot
resolve the component of the initial residual associated with the eigenvector z. In
other words, the iteration may stagnate after a few steps. Let us denote by P the
spectral projector associated with the zero eigenvalue, by $m_0$ the initial guess to the system (2.3), and by $r_0 = e_j - A m_0$ the initial residual. For each column j, we would have at the end of the iteration an approximate solution of the form
$$m_j = m_0 + \delta ,$$
whose residual is
$$r_j = e_j - A m_j = P r_0 + \left[ (I - P)\, r_0 - A \delta \right] .$$
The term $P r_0$ cannot be reduced by any further iterations. Only the norm of $(I - P)\, r_0 - A \delta$ can be reduced by selecting a more accurate $\delta$. The MR algorithm can also break down when $A r_j$ vanishes, causing a division by zero in the computation of the scalar $\alpha_j$ in step 6 of Algorithm 2.3, although this is not a problem with GMRES.
An interesting observation is that in case A is singular, M is not well defined. Adding a rank-one matrix $z v^T$ to M will indeed yield the same residual, since $A z = 0$ implies $I - A(M + z v^T) = I - AM$.
Assume now that A is nearly singular, in that there is one eigenvalue $\epsilon$ close to zero with an associated eigenvector z. Note that for any vector v we have
$$I - A(M + z v^T) = I - AM - \epsilon\, z v^T .$$
If z and v are of norm one, then the residual is perturbed by a magnitude of $\epsilon$. Viewed from another angle, we can say that for a perturbation of order $\epsilon$ in the residual, the approximate inverse can be perturbed by a matrix of norm close to one.
3.3. Eigenvalue clustering around zero. We observed in many of our experiments
that often the matrix M obtained in a self-preconditioned iteration would
admit a cluster of eigenvalues around the origin. More precisely, it seems that if at
some point an eigenvalue of AM moves very close to zero, then this singularity tends
to persist in the later stages in that the zero eigenvalue will move away from zero
only very slowly. These eigenvalues seem to slow-down or even prevent convergence.
In this section, we attempt to analyze this phenomenon. We examine the case where
at a given intermediate iteration the matrix M becomes exactly singular. We start
by assuming that a global MR iteration is taken, and that the preconditioned matrix
AM is singular, i.e., there exists a nonzero vector z such that $AM z = 0$.
In our algorithms, the initial guess for the next (outer) iteration is the current M ,
so the initial residual is $R_0 = I - AM$. The matrix $M'$ resulting from the next self-preconditioned iteration, either by a global MR or GMRES step, will have a residual of the form
$$I - A M' = \rho(AM)\, (I - AM), \tag{3.10}$$
in which $\rho(t) = 1 - t\, s(t)$ is the residual polynomial. Multiplying (3.10) to the right by the eigenvector z yields
$$(I - A M')\, z = \rho(AM)\, (I - AM)\, z = \rho(AM)\, z = z .$$
As a result we have
$$A M'\, z = 0 ,$$
showing that z is an eigenvector of $A M'$ associated with the eigenvalue zero.
This result can be extended to column-oriented iterations. First, we assume that
the preconditioning M used in self-preconditioning all n inner iterations in a given
outer loop is fixed. In this case, we need to exploit a left eigenvector w of AM
associated with the eigenvalue zero. Proceeding as above, let $m'_j$ be the new j-th column of the approximate inverse. We have
$$e_j - A m'_j = \rho_j(AM)\, (e_j - A m_j), \tag{3.11}$$
where $\rho_j$ is the residual polynomial associated with the MR or GMRES algorithm for the j-th column, and is of the form $\rho_j(t) = 1 - t\, s_j(t)$. Multiplying (3.11) to the left by the eigenvector $w^T$ yields
$$w^T (e_j - A m'_j) = w^T \rho_j(AM)\, (e_j - A m_j) = w^T (e_j - A m_j) .$$
As a result $w^T A m'_j = w^T A m_j$, which can be rewritten as $w^T A M' e_j = w^T A M e_j = 0$. This gives
$$w^T A M' = 0 ,$$
establishing the same result on the persistence of a zero eigenvalue as for the global
iteration.
We finally consider the general column-oriented MR or GMRES iterations, in
which the self-preconditioner is updated from one inner iteration to the next. We can
still write
Let M 0 be the new approximate inverse resulting from updating only column j. The
residual associated with M 0 has the same columns as those of the residual associated
with M except for the j-th column which is given above. Therefore
If w is again a left eigenvector of AM associated with the eigenvalue zero, then
multiplying the above equality to the left by w T yields
showing once more that the zero eigenvalue will persist.
3.4. Convergence behavior of self-preconditioned MR. Next we wish to
consider the convergence behavior of the algorithms for constructing an approximate
inverse. We are particularly interested in the situation where self-preconditioning is
used, but no numerical dropping is applied.
3.4.1. Global MR iterations. When self-preconditioning is used in the global
MR iteration, the matrix which defines the search direction is $Z_k = M_k R_k$, where $R_k$ is the current residual. Therefore, the algorithm (without dropping) is as follows:
1. $R_k := I - A M_k$
2. $Z_k := M_k R_k$
3. $\alpha_k := \langle R_k, A Z_k \rangle \,/\, \langle A Z_k, A Z_k \rangle$
4. $M_{k+1} := M_k + \alpha_k Z_k$
At each step the new residual matrix $R_{k+1}$ satisfies the relation
$$R_{k+1} = R_k - \alpha_k A Z_k = R_k - \alpha_k A M_k R_k . \tag{3.12}$$
Our first observation is that $R_k$ is a polynomial in $R_0$. This is because, from the above relation,
$$R_{k+1} = R_k - \alpha_k (I - R_k)\, R_k .$$
Thus, by induction,
$$R_k = p_{2^k}(R_0) ,$$
in which $p_j$ is a certain polynomial of degree j. Throughout this section we use the notation
$$B_k \equiv A M_k = I - R_k .$$
The following recurrence is easy to infer from (3.12),
$$B_{k+1} = B_k + \alpha_k B_k (I - B_k) .$$
Note that $B_{k+1}$ is also a polynomial of degree $2^{k+1}$ in $B_0$. In particular, if the initial $B_0$
(equivalently R 0 ) is symmetric, then all subsequent R k 's and B k 's are also symmetric.
This is achieved when the initial M is a multiple of $A^T$, i.e., when $M_0 = \alpha_0 A^T$.
We are now ready to prove a number of simple results.
Proposition 3.6. If the self-preconditioned MR iteration converges, then it does
so quadratically.
Proof. Define, for any $\alpha$,
$$R(\alpha) = I - A(M_k + \alpha Z_k) = R_k - \alpha\, (I - R_k)\, R_k .$$
Recall that $\alpha_k$ achieves the minimum of $\|R(\alpha)\|_F$ over all $\alpha$'s. In particular,
$$\|R_{k+1}\|_F = \|R(\alpha_k)\|_F \le \|R(1)\|_F = \|R_k^2\|_F \le \|R_k\|_F^2 .$$
This proves quadratic convergence at the limit.
The following proposition is a straightforward generalization to the matrix case
of a well-known result [13] concerning the convergence of the vector Minimal Residual
iteration.
Proposition 3.7. Assume that at a given step k, the matrix B k is positive
definite. Then, the following relation holds,
$$\|R_{k+1}\|_F \;\le\; \|R_k\|_F\, \sin \angle(R_k, B_k R_k), \tag{3.16}$$
with
$$\cos \angle(R, BR) \;\equiv\; \frac{\langle R, BR \rangle}{\|R\|_F\, \|BR\|_F} \;\ge\; \frac{\lambda_{\min}\!\left(\tfrac12 (B + B^T)\right)}{\sigma_{\max}(B)}, \tag{3.17}$$
in which $\lambda_{\min}\!\left(\tfrac12 (B + B^T)\right)$ is the smallest eigenvalue of $\tfrac12 (B + B^T)$ and $\sigma_{\max}(B)$ is the largest singular value of B.
Proof. Start with
By construction, the new residual R k+1 is orthogonal to AZ k , in the sense of the h\Delta; \Deltai
inner product, and as a result, the second term in the right-hand side of the above
equation vanishes. Noting that AZ
F
F
The result (3.16) follows immediately.
To derive (3.17), note that
in which r i is the i-th column of R, and similarly
For each i we have
and
The result follows after substituting these relations in the ratio (3.17).
Note that because of (3.16) the Frobenius norm of $R_{k+1}$ is bounded from above for all k, specifically, $\|R_{k+1}\|_F \le \|R_0\|_F$ for all k. A consequence is that the largest singular value of $B_k$ is also bounded from above. Specifically, we have
$$\sigma_{\max}(B_k) = \sigma_{\max}(I - R_k) \le 1 + \|R_k\|_F \le 1 + \|R_0\|_F .$$
Assume now that $M_0 = \alpha_0 A^T$, so that all matrices $B_k$ are symmetric. If, in addition, each $B_k$ is positive definite with its smallest eigenvalue bounded from below by a positive constant, then $B_k$ will converge to the identity matrix. Further, the convergence will be quadratic at the limit.
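The global iteration of this subsection is short enough to state as code; the NumPy sketch below (our naming) performs the four steps listed above and records $\|I - AM_k\|_F$, which should decrease roughly quadratically once the iteration starts to converge. The initial scaling of $A^T$ is one simple choice, not prescribed by the text.

import numpy as np

def global_self_preconditioned_mr(A, n_outer=10, alpha0=None):
    """Global self-preconditioned MR iteration (no dropping), Section 3.4.1."""
    n = A.shape[0]
    # M_0 = alpha_0 * A^T keeps B_k = A M_k symmetric when A is symmetric.
    alpha0 = 1.0 / np.linalg.norm(A @ A.T, 2) if alpha0 is None else alpha0
    M = alpha0 * A.T
    history = []
    for _ in range(n_outer):
        R = np.eye(n) - A @ M            # step 1: residual matrix
        Z = M @ R                        # step 2: self-preconditioned search direction
        AZ = A @ Z
        denom = np.sum(AZ * AZ)          # <AZ, AZ> (Frobenius inner product)
        if denom == 0.0:
            break
        alpha = np.sum(R * AZ) / denom   # step 3: <R, AZ> / <AZ, AZ>
        M = M + alpha * Z                # step 4
        history.append(np.linalg.norm(np.eye(n) - A @ M, 'fro'))
    return M, history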
3.4.2. Column-oriented MR iterations. The convergence result may be extended
to the case where each column is updated individually by exactly one step
of the MR algorithm. Let M be the current approximate inverse at a given sub-
step. The self-preconditioned MR iteration for computing the j-th column of the
next approximate inverse is obtained by the following sequence of operations:
1. $r_j := e_j - A m_j$
2. $t_j := M r_j$
3. $\alpha_j := (r_j, A t_j) / (A t_j, A t_j)$
4. $m_j := m_j + \alpha_j t_j$
Note that $\alpha_j$ can be written as
$$\alpha_j = \frac{(r_j, B r_j)}{(B r_j, B r_j)} ,$$
where we define
$$B \equiv AM$$
to be the preconditioned matrix at the given substep. We now drop the index j to
simplify the notation. The new residual associated with the current column is given by
$$r_{\rm new} = r - \alpha\, A M r = r - \alpha\, B r .$$
We use the orthogonality of the new residual against $AMr$ to obtain
$$\|r_{\rm new}\|_2^2 = \|r\|_2^2 - \alpha\, (r, B r) .$$
Replacing $\alpha$ by its value defined above we get
$$\|r_{\rm new}\|_2^2 = \|r\|_2^2 \left( 1 - \frac{(r, B r)^2}{(B r, B r)\,(r, r)} \right) .$$
Thus, at each inner iteration, the residual norm for the j-th column is reduced according to the formula
$$\|r_{\rm new}\|_2 = \|r\|_2\, \sin \angle(r, B r), \tag{3.21}$$
in which $\angle(u, v)$ denotes the acute angle between the vectors u and v. Assuming that each column converges, the preconditioned matrix B will converge to the identity. As a result of this, the angle $\angle(r, B r)$ will tend to $\angle(r, r) = 0$, and therefore the convergence ratio $\sin \angle(r, B r)$ will also tend to zero, showing superlinear convergence.
We now consider equation (3.21) more carefully in order to analyze more explicitly the convergence behavior. We will denote by R the residual matrix $R = I - AM$. We observe that
$$\sin \angle(r, B r) \;=\; \min_{\beta} \frac{\|r - \beta B r\|_2}{\|r\|_2} \;\le\; \frac{\|(I - B)\, r\|_2}{\|r\|_2} \;\le\; \|I - AM\|_2 .$$
This results in the following statement.
Proposition 3.8. Assume that the self-preconditioned MR algorithm is employed
with one inner step per iteration and no numerical dropping. Then the 2-norm of each
residual of the j-th column is reduced by a factor of at least $\|I - AM\|_2$, where M is the approximate inverse before the current step, i.e.,
$$\|r_j^{\rm new}\|_2 \le \|I - AM\|_2\, \|r_j\|_2 . \tag{3.22}$$
In addition, the Frobenius norm of the residual matrices $R_k = I - A M_k$ obtained after each outer iteration satisfies
$$\|R_{k+1}\|_F \le \|R_k\|_F^2 .$$
As a result, when the algorithm converges, it does so quadratically.
Proof. Inequality (3.22) was proved above. To prove quadratic convergence, we
first transform this inequality by using the fact that $\|X\|_2 \le \|X\|_F$ to obtain
$$\|r_j^{\rm new}\|_2 \le \|I - AM\|_F\, \|r_j\|_2 .$$
Here the k index corresponds to the outer iteration and the j-index to the column.
We note that the Frobenius norm is reduced for each of the inner steps corresponding
to the columns, and therefore $\|I - AM\|_F \le \|R_k\|_F$ at every substep. This yields
$$\|r_j^{\rm new}\|_2^2 \le \|R_k\|_F^2\, \|r_j\|_2^2 ,$$
which, upon summation over j, gives
$$\|R_{k+1}\|_F^2 \le \|R_k\|_F^2 \sum_j \|r_j\|_2^2 = \|R_k\|_F^4 .$$
This completes the proof.
It is also easy to show a similar result for the following variations:
1. MR with an arbitrary number of inner steps,
2. GMRES(m) for an arbitrary m.
These follow from the fact that the algorithms deliver an approximate column which
has a smaller residual than what we obtain with one inner step MR.
We emphasize that quadratic convergence is guaranteed only at the limit and
that the above theorem does not prove convergence. In the presence of numerical
dropping, the proposition does not hold.
4. Numerical experiments and observations. Experiments with the algorithms
and options described in x2 were performed with matrices from the Harwell-Boeing
sparse matrix collection [12], and matrices extracted from example problems
in the FIDAP fluid dynamics analysis package [16]. The matrices were scaled so that
the 2-norm of each column is unity. In each experiment, we report the number of
GMRES(20) steps to reduce the initial residual of the right-preconditioned linear system by $10^{-5}$. A zero initial guess was used, and the right-hand side was constructed so that the solution is a vector of all ones. A dagger ($\dagger$) in the tables below indicates
that there was no convergence in 500 iterations. In some tables we also show the value
of the Frobenius norm (1.3). Even though this is the function that we minimize, we
see that it is not always a reliable measure of GMRES convergence. All the results
are shown as the outer iterations progress. In Algorithm 2.4 (dropping in solution
vectors) one inner iteration was used unless otherwise indicated; in algorithm 2.5
(dropping in residual vectors) one additional fill-in was allowed per iteration. Various
codes in FORTRAN 77, C++, and Matlab were used, and run in 64-bit precision on
Sun workstations and a Cray C90 supercomputer.
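For reference, the protocol just described can be reproduced along the following lines with SciPy (our naming; M is an explicit sparse approximate inverse, SciPy's gmres applies M as the preconditioner, which approximates the right-preconditioned setting of the text, and the residual-tolerance keyword is rtol in recent SciPy versions and tol in older ones).

import numpy as np
import scipy.sparse.linalg as spla

def count_gmres_steps(A, M, tol=1e-5, restart=20, maxiter=500):
    """Count preconditioned GMRES(20) iterations, as in the experiments of Section 4."""
    n = A.shape[0]
    b = A @ np.ones(n)                      # right-hand side chosen so the solution is all ones
    counter = {'k': 0}
    def cb(res_norm):                       # called once per iteration with the residual norm
        counter['k'] += 1
    M_op = spla.LinearOperator((n, n), matvec=lambda v: M @ v)
    x, info = spla.gmres(A, b, rtol=tol, restart=restart, maxiter=maxiter,
                         M=M_op, callback=cb, callback_type='pr_norm')
    return counter['k'] if info == 0 else None   # None plays the role of the dagger entries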
We begin with a comparison of Newton, 'global' and column-oriented iterations.
Our early numerical experiments showed that in practice, Newton iteration converges
very slowly initially and is more adversely affected by numerical dropping. Global
iterations were also worse than column-oriented iterations, perhaps because a single
ff defined by (2.2) is used, as opposed to one for each column in the column-oriented
case.
Table 4.1 gives some numerical results for the WEST0067 matrix from the
Harwell-Boeing collection; the number of GMRES iterations is given as the number
of outer iterations increases. The MR iteration used self-preconditioning with a
scaled transpose initial guess. Dropping based on numerical values in the intermediate
solutions was performed on a column-by-column basis, although in the Newton
and global iterations this restriction is not necessary. In the presence of dropping
we did not find much larger matrices where Newton iteration gave convergent
GMRES iterations. Scaling each iterate $M_i$ by $1/\|A M_i\|_1$ did not alleviate
the effects of dropping. The superior behavior of global iterations in the presence of
dropping in Table 4.1 was not typical.
Table 4.1. WEST0067: Newton, global, and column MR iterations.
dropping
Newton y 414 158 100 41
Global 228 102 25
MR
Newton 463 y 435 y 457
Global
MR 281 120 86 61 43
The eigenvalues of the preconditioned WEST0067 matrix are plotted in Fig. 4.1,
both with and without dropping, using column-oriented MR iterations. As the iterations
proceed, the eigenvalues of the preconditioned system become closer to 1.
Numerical dropping has the effect of spreading out the eigenvalues. When dropping
is severe and spoiling occurs, we have observed two phenomena: either dropping causes
some eigenvalues to become negative, or some eigenvalues stay clustered around the
origin.
(a) no dropping. (b) no dropping. (c), (d) with dropping.
Fig. 4.1. Eigenvalues of preconditioned system, WEST0067
Next we show some results on matrices that arise from solving the fully-coupled
Navier-Stokes equations. The matrices were extracted from the FIDAP package at the
final nonlinear iteration of each problem in their Examples collection. The matrices are
from 2-dimensional finite element discretizations using 9-node quadrilateral elements
for velocity and temperature, and linear discontinuous elements for pressure.
Table 4.2 lists some statistics about all the positive definite matrices from the
collection. The combination of ill-conditioning and indefiniteness of the other matrices
was too difficult for our methods, and their results are not shown here.
All the matrices are also symmetric, except for Example 7. None of the matrices
could be solved with ILU(0) or ILUT [32], a threshold incomplete LU factorization,
Table 4.2. FIDAP Example matrices.
Example n nnz
Flow past a circular cylinder
7 1633 54543 Natural convection in a square cavity
9 3363 99471 Jet impingement in a narrow channel
flow over multiple steps in a channel
13 2568 75628 Axisymmetric flow through a poppet valve
of a liquid in an annulus
radiation heat transfer in a cavity
even with large amounts of fill-in. Our experience with these matrices is that they
produce unstable L and U factors in (1.2).
Table 4.3 shows the results of preconditioning with the approximate inverse, using
dropping in the residual search direction. Since the problems are very ill-conditioned
but positive definite, a scaled identity initial guess with no self-preconditioning was
used. The columns show the results as the iterations and fill-in progress. Convergent
GMRES iterations could be achieved even with lfil as small as 10, showing that an
approximate inverse preconditioner much sparser than the original matrix is possible.
Table 4.3. Number of GMRES iterations vs. lfil.
9 203 117 67 51
28 26 24 24
For comparison, we solve the same problems using perturbed ILU factorizations.
Perturbations are added to the inverse of diagonal elements to avoid small pivots,
and thus control the size of the elements in the L and U factors. We use a two-level
block ILU strategy called BILU(0)-SVD($\alpha$), that uses a modified singular value decomposition to invert the blocks. When a block $D = U \Sigma V^T$ needs to be inverted, it is replaced by the perturbed inverse $V \tilde\Sigma^{-1} U^T$, where $\tilde\Sigma$ is $\Sigma$ with its singular values thresholded by $\alpha \sigma_1$, a factor of the largest singular value.
Table 4.4 shows the results, using a block size of 4. The method is very successful
for this set of problems, showing results comparable to approximate inverse precon-
ditioning, but with less work to compute the preconditioner. None of the problems
converged, however, for $\alpha = 0.1$, and there was no single $\alpha$ that gave the best result for all problems.
We now show our main results in Table 4.5 for several standard matrices in
the Harwell-Boeing collection. All the problems are nonsymmetric and indefinite,
except for SHERMAN1 which is symmetric, negative definite. In addition, SAYLR3
is singular. SHERMAN2 was reordered with reverse Cuthill-McKee to attempt to
change the sparsity pattern of the inverse. Again, we show the number of GMRES
iterations to convergence against the number of outer iterations used to compute the
approximate inverse. A scaled transpose initial guess was used. When columns in
the initial guess contained more than lfil nonzeros, dropping was applied to the guess.
Table 4.4. BILU(0)-SVD($\alpha$) preconditioner.
Example    $\alpha = 0.3$    $\alpha = 1.0$
9 28 72
Numerical dropping was applied to the intermediate vectors in the solution, retaining
lfil nonzeros and using no drop tolerance.
Table 4.5. Number of iterations vs. $n_o$.
Matrix
self-preconditioned or unself-preconditioned
For problems SHERMAN2, WEST0989, GRE1107 and NNC666, the results become
worse as the outer iterations progress. This spoiling effect is due to the fact that
the descent property is not maintained when dropping is applied to the intermediate
solutions. This is not the case when dropping is applied to the search direction, as
seen in Table 4.3.
Except for SAYLR3, the problems that could not be solved with ILU(0) also could
not be solved with BILU(0)-SVD(ff), nor with ILUTP, a variant of ILUT more suited
to indefinite problems since it uses partial pivoting to avoid small pivots [29]. ILUTP
also substitutes $(\delta + 10^{-4})$ times the norm of the row when it is forced to take a zero pivot, where $\delta$ is the drop tolerance. ILU factorization strategies simply do not apply
in these cases.
We have shown the best results after a few trials with different parameters. The
method is sensitive to the widely differing characteristics of general matrices, and
apart from the comments we have already made for selecting an initial guess and
whether or not to use self-preconditioning, there is no general set of parameters that
works best for constructing the approximate inverse.
The following two tables illustrate some different behaviors that can be seen for
three very different matrices. LAPL0324 is a standard symmetric positive definite
2-D Laplacian matrix of order 324. WEST0067 and PORES3 are both indefinite;
WEST0067 has very little structure, while PORES3 has a symmetric pattern. Table
4.6 shows the number of GMRES(20) iterations and Table 4.7 shows the Frobenius
norm of the residual matrix against the number of outer iterations that were used to
compute the approximate inverse.
Table 4.6. Number of iterations vs. $n_o$.
Matrix lfil init
WEST0067 none A T p 130
none A T u 484 481 y 472 y
none I p y y y y y
43
none A T u
none I p
PORES3 none A T p y y y y y
none A T u y y 274 174 116
Table 4.7. $\|I - AM\|_F$ vs. $n_o$.
Matrix lfil init
WEST0067 none A T p 4.43 3.21 2.40 1.87 0.95
none A T u 6.07 6.07 6.07 6.07 6.07
none I p 8.17 8.17 8.17 8.17 8.17
LAPL0324 none A T p 7.91 5.69 4.25 3.12 2.23
none A T u 6.62 4.93 4.00 3.41 3.00
none I p 5.34 4.21 3.53 3.08 2.75
PORES3 none A T p 10.78 9.30 8.25 7.66 7.16
none A T u 12.95 12.02 11.48 10.82 10.20
5. Practical variations and applications. Approximate inverses can be expensive
to compute for very large and difficult problems. However, their best potential
is in combinations with other techniques. In essence, we would like to apply these
techniques to problems that are either small, or for which we start close to a good
solution in a certain sense.
We saw in Table 4.5 that approximate inverses work well with small matrices,
most likely because of their local nature. In the next section, we show how smaller
approximate inverses may be used effectively in incomplete block tridiagonal factorizations
5.1. Incomplete block tridiagonal factorizations. Incomplete factorization
of block tridiagonal matrices has been studied extensively in the past decade [1, 2,
3, 4, 9, 21, 22], but there have been very few numerical results reported for general
sparse systems. Banded or polynomial approximations to the pivot blocks have been
primarily used in the past, for systems arising from finite difference discretizations
of partial differential equations. There are currently very few options for incomplete
factorizations of block matrices that require approximate inversion of general large,
sparse blocks.
The inverse-free form of block tridiagonal factorization is
$$A \approx (D + L_A)\, D^{-1}\, (D + U_A), \tag{5.1}$$
where $L_A$ is the strictly lower block triangular part of the coefficient matrix A, $U_A$ is the corresponding upper part, and D is a block diagonal matrix whose blocks $D_i$ are defined by the recurrence
$$D_i = A_{ii} - A_{i,i-1}\, D_{i-1}^{-1}\, A_{i-1,i}, \tag{5.2}$$
starting with $D_1 = A_{11}$. The factorization is made incomplete by using approximate
inverses rather than the exact inverse in (5.2). This inverse-free form only requires
matrix-vector multiplications in the preconditioning operation.
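A sketch of how the recurrence (5.2) is made incomplete is given below; approx_inverse stands for any of the column-oriented routines of Section 2 (for instance the MR sketch given earlier), the block partitioning is assumed to be given, and this is our illustration of the idea rather than the authors' code.

import numpy as np

def btif_blocks(A_blocks, approx_inverse):
    """Incomplete block tridiagonal factorization using approximate inverses.

    A_blocks[i] = (A_lower, A_diag, A_upper) holds the blocks of block row i
    (A_lower is None for the first row, A_upper is None for the last).
    Returns the pivot blocks D_i of the recurrence (5.2) together with sparse
    approximate inverses E_i of them, so that applying the preconditioner
    only needs matrix-vector products with the E_i.
    """
    D, E = [], []
    for i, (low, diag, up) in enumerate(A_blocks):
        if i == 0:
            Di = diag.copy()                               # D_1 = A_11
        else:
            up_prev = A_blocks[i - 1][2]                   # A_{i-1,i}
            Di = diag - low @ (E[i - 1] @ up_prev)         # exact inverse replaced by E_{i-1}
        D.append(Di)
        E.append(approx_inverse(Di))
    return D, E

A block forward substitution with $(D + L_A)$ then reads $x_1 = E_1 b_1$ and $x_i = E_i (b_i - A_{i,i-1} x_{i-1})$, and similarly for the backward sweep, so the preconditioning operation is indeed matrix-vector multiplications only.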
We illustrate the use of approximate inverses in these factorizations with Example
19 from FIDAP, the largest nonsymmetric matrix in the collection
259879). The problem is an axisymmetric 2D developing pipe flow, using the two-equation
k-ffl model for turbulence. A constant block size of 161 was used, the smallest
block size that would yield a block tridiagonal system (the last block has size 91).
Since the matrix arises from a finite element problem, a more careful selection of the
partitioning may yield better results. In the worst case, a pivot block may be singular;
this would cause difficulties for several approximate inverse techniques such as [23] if
the sparsity pattern is not augmented. In our case, a minimal residual solution in the
null space would be returned.
Since the matrix contains different equations and variables, the rows of the system
were scaled by their 2-norms, and then their columns were scaled similarly. A Krylov
subspace size for GMRES of 50 was used. Table 5.1 first illustrates the solution with
BILU(0)-SVD(ff) with a block size of 5 for comparison. The infinity-norm condition
of the inverse of the block LU factors is estimated with $\|(LU)^{-1} e\|_\infty$, where e is the
vector of all ones. This condition estimate decreases dramatically as the perturbation
is increased.
Table 5.1. Example 19, BILU(0)-SVD($\alpha$).
$\alpha$      condition estimate      GMRES steps
0.500      129.      87
1.000      96.       337
Table 5.2 shows the condition estimate, number of GMRES steps to convergence,
timings for setting up the preconditioner and the iterations, and the number of nonzeros
in the preconditioner. The method BTIF denotes the inverse-free factorization
(5.1), and may be used with several approximate inverse techniques. MR-s(lfil) and
MR-r(lfil) denote the minimal residual algorithm using dropping in the solution and
residual vectors, respectively, and LS is the least squares solution using the sparsity
pattern of the pivot block as the sparsity pattern of the approximate inverse. The
MR methods used lfil of 10, and specifically, 3 outer and 1 inner iteration for MR-s,
and lfil iterations for MR-r. Self-preconditioning and transpose initial guesses were
used. LS used the DGELS routine in LAPACK to compute the least squares solu-
tion. The experiments were carried out on one processor of a Sun Sparcstation 10.
The code for constructing the incomplete block factorization is somewhat inefficient
in two ways: it transposes the data structure of the pivot block and the inverse (to
use column-oriented algorithms), and it counts the number of nonzeros in the sparse
matrix-matrix multiplication before performing the actual multiplication.
Table 5.2. Example 19, block tridiagonal incomplete factorization.
cond. GMRES CPU time
est. steps precon solve total precon
The timings show that BTIF-MR-s(10) is comparable to BILU(0)-SVD(0.5) but
uses much less memory. Although the actual number of nonzeros in the matrix is
259 879, there were 39 355 block nonzeros required in BILU(0), and therefore almost a
million entries that needed to be stored. BILU(0) required more time in the iterations
because the preconditioner was denser, and needed to operate with much smaller
blocks. The MR methods produced approximate inverses that were sparser than
the original pivot blocks. The LS method produces approximate inverses with the
same number of nonzeros as the pivot blocks, and thus required greater storage and
computation time. The solution was poor, however, possibly because the second,
third, and fourth pivot blocks were poorly approximated. In these cases, at least one
local least squares problem had linearly independent columns. No pivot blocks were
singular.
5.2. Improving a preconditioner. In all of our previous algorithms, we sought
a matrix M to make AM close to the identity matrix. To be more general, we can
seek instead an approximation to some matrix B. Thus, we consider the objective function
$$F(M) = \|B - AM\|_F^2 , \tag{5.3}$$
in which B is some matrix to be defined. Once we find a matrix M whose objective function (5.3) is small enough, then the preconditioner for the matrix A is defined by $M B^{-1}$, i.e., a solve with B followed by a multiplication with M. This implies that B is a matrix which is easy to invert, or rather, that solving systems with B should be inexpensive. At one extreme, when $B = A$, the best M is the identity matrix, but solves with B are expensive. At the other extreme we find our standard situation, which corresponds to $B = I$, and which is characterized by trivial B-solves but expensive-to-obtain M matrices. In between these two extremes there are a
number of appealing compromises, perhaps the simplest being the block diagonal of
A.
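To make (5.3) concrete, the column-oriented MR iteration only needs its target changed from $e_j$ to the j-th column of B; the minimal variant below (our naming, with no dropping or self-preconditioning shown) illustrates this, and the resulting preconditioning operation for A is then $v \mapsto M(B^{-1} v)$, a solve with B followed by a multiplication with M.

import numpy as np

def improve_preconditioner(A, B, n_steps=5):
    """Minimize ||B - A M||_F column by column with plain MR steps (cf. (5.3))."""
    n = A.shape[0]
    M = np.zeros((n, n))
    for j in range(n):
        m = np.zeros(n)
        r = B[:, j] - A @ m                  # residual now targets b_j, not e_j
        for _ in range(n_steps):
            q = A @ r                        # search direction d = r (no dropping here)
            denom = q @ q
            if denom == 0.0:
                break
            alpha = (r @ q) / denom
            m += alpha * r
            r -= alpha * q
        M[:, j] = m
    return M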
Another way of viewing the concept of approximately minimizing (5.3) is that of
improving a preconditioner. Here B is an existing preconditioner, for example, an
LU factorization. If the factorization gives an unsatisfactory convergence rate, it is
difficult to improve it by attempting to modify the L and U factors. One solution
would be to discard this factorization and attempt to recompute a fresh one, possibly
with more fill-in. Clearly, this may be wasteful especially in the case when this process
must be iterated a few times due to persistent failures.
For a numerical example of improving a preconditioner, we use approximate inverses
to improve the block-diagonal preconditioners for the ORSREG1, ORSIRR1
and matrices. The experiments used dropping on numerical values with
In Table 5.3, block size is the block size of the block-diagonal
preconditioner, and block precon is the number of GMRES iterations required
for convergence when the block-diagonal preconditioner is used alone. The number of
GMRES iterations is shown against the number of outer iterations used to improve
the preconditioner.
Table 5.3. Improving a preconditioner.
block block
Matrix
Besides these applications, we have used approximate inverse techniques for several
other purposes. Like in (5.3), we can generalize our problem to minimize
$$F(x) = \|b - A x\|_2^2 ,$$
where b is a right-hand side and x is an approximate sparse solution. The right-hand
side b does not need to be sparse if dropping is used in the search direction. Sparse
approximate solutions to linear systems may be used in forming preconditioners, for
example, to form a sparse approximation to a Schur complement or its inverse. See
and [8] for more details.
6. Conclusion. This paper has described an approach for constructing approximate
inverses via sparse-sparse iterations. The sparse mode iterations are designed
to be economical, however, their cost is still not competitive with ILU factorizations.
Other approximate inverse techniques that use adaptive sparsity selection schemes
also suffer from the same drawback. However, several examples show that these preconditioners
may be applied to cases where other existing options, such as perturbed
ILU factorizations, fail.
More importantly, our conclusion is that the greatest value of sparse approximate
inverses may be their use in conjunction with other preconditioners. We demonstrated
this with incomplete block factorizations and improving block diagonal pre-
conditioners. They have also been used successfully for computing sparse solutions
when constructing preconditioners, and one variant has the promise of computing
approximations to operators that may be effectively dense.
Two limitations of approximate inverses in general are their local nature, and
the question of whether or not an inverse can be approximated by a sparse matrix.
Their local nature suggests that their use is more effective on small problems, for
example the pivot blocks in incomplete factorizations, or else large amounts of fill-in
must be allowed. In current work, Tang [33] couples local inverses over a domain in
a Schur complement approach. Preliminary results are consistently better than when
the approximate inverse is applied directly to the matrix, and its effect has similarities
to [7].
In trying to ensure that there is enough variation in the entries of the inverse for
a sparse approximation to be effective, we have tried reordering to reduce the profile
of a matrix. In a very different technique, Wan et al. [34] compute the approximate
inverse in a wavelet space, where there may be greater variations in the entries of the
inverse, and thus permit a better sparse approximation.
Acknowledgments
. The authors are grateful to the referees for their comments
which substantially improved the quality of this paper. The authors also wish to
acknowledge the support of the Minnesota Supercomputer Institute which provided
the computer facilities and an excellent environment to conduct this research.
--R
Incomplete block matrix factorization preconditioning methods.
On some versions of incomplete block-matrix factorization iterative methods
On approximate factorization methods for block matrices suitable for vector and parallel processors
Iterative solution of large scale linear systems
Iterative solution of large sparse linear systems arising in certain multidimensional approximation problems
Approximate inverse techniques for block-partitioned matrices
Block preconditioning for the conjugate gradient method
Direct Methods for Sparse Matrices
Sparse matrix test problems
Variational iterative methods for non-symmetric systems of linear equations
A stability analysis of incomplete LU factorizations
FIDAP: Examples Manual
Matrix Computations
Parallel preconditioning with sparse approximate inverses
Parallel preconditioning and approximate inverses on the Connection Machine
The Theory of Matrices in Numerical Analysis
On a family of two-level preconditionings of the incomplete block factorization type
Modified block-approximate factorization strategies
Private communication
Efficient parallel solution of linear systems
An improved Newton iteration for the generalized inverse of a matrix
Preconditioning techniques for indefinite and nonsymmetric linear systems
Effective sparse approximate inverse preconditioners.
Fast wavelet-based sparse approximate inverse preconditioners
--TR
--CTR
Davod Khojasteh Salkuyeh , Faezeh Toutounian, BILUS: a block version of ILUS factorization, The Korean Journal of Computational & Applied Mathematics, v.15 n.1-2, p.299-312, May 2004
Philippe Guillaume , Yousef Saad , Masha Sosonkina, Rational approximation preconditioners for sparse linear systems, Journal of Computational and Applied Mathematics, v.158 n.2, p.419-442, 15 September
Prasanth B. Nair , Arindam Choudhury , Andy J. Keane, Some greedy learning algorithms for sparse regression and classification with mercer kernels, The Journal of Machine Learning Research, 3, 3/1/2003
Kai Wang , Jun Zhang, Multigrid treatment and robustness enhancement for factored sparse approximate inverse preconditioning, Applied Numerical Mathematics, v.43 n.4, p.483-500, December 2002
M. Sosonkina , Y. Saad , X. Cai, Using the parallel algebraic recursive multilevel solver in modern physical applications, Future Generation Computer Systems, v.20 n.3, p.489-500, April 2004
N. Guessous , O. Souhar, Multilevel block ILU preconditioner for sparse nonsymmetric M-matrices, Journal of Computational and Applied Mathematics, v.162 n.1, p.231-246, 1 January 2004
Edmond Chow , Michael A. Heroux, An object-oriented framework for block preconditioning, ACM Transactions on Mathematical Software (TOMS), v.24 n.2, p.159-183, June 1998
T. Tanaka , T. Nodera, Effectiveness of approximate inverse preconditioning by using the MR algorithm on an origin 2400, Proceedings of the third international conference on Engineering computational technology, p.115-116, September 04-06, 2002, Stirling, Scotland
Oliver Brker , Marcus J. Grote, Sparse approximate inverse smoothers for geometric and algebraic multigrid, Applied Numerical Mathematics, v.41 n.1, p.61-80, April 2002
Edmond Chow, Parallel Implementation and Practical Use of Sparse Approximate Inverse Preconditioners with a Priori Sparsity Patterns, International Journal of High Performance Computing Applications, v.15 n.1, p.56-74, February 2001
J. Martnez , G. Larrazbal, Wavelet-based SPAI pre-conditioner using local dropping, Mathematics and Computers in Simulation, v.73 n.1, p.200-214, 6 November 2006
Michele Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics, v.182 n.2, p.418-477, November 2002 | approximate inverse;threshold dropping strategies;preconditioning;krylov subspace methods |
292401 | Perturbation Analyses for the Cholesky Downdating Problem | New perturbation analyses are presented for the block Cholesky downdating problem. These show how changes in R and X alter the Cholesky factor U. There are two main cases for the perturbation matrix $\Delta R$ in R: (1) $\Delta R$ is a general matrix; (2) $\Delta R$ is an upper triangular matrix. For both cases, first-order perturbation bounds for the downdated Cholesky factor U are given using two approaches --- a detailed "matrix--vector equation" analysis which provides tight bounds and resulting true condition numbers, which unfortunately are costly to compute, and a simpler "matrix equation" analysis which provides results that are weaker but easier to compute or estimate. The analyses more accurately reflect the sensitivity of the problem than previous results. As $X\rightarrow 0$, the asymptotic values of the new condition numbers for case (1) have bounds that are independent of $\kappa_2(R)$ if $R$ was found using the standard pivoting strategy in the Cholesky factorization, and the asymptotic values of the new condition numbers for case (2) are unity. Simple reasoning shows this last result must be true for the sensitivity of the problem, but previous condition numbers did not exhibit this. | Introduction
. Let $A \in \mathbb{R}^{n\times n}$ be a symmetric positive definite matrix. Then there exists a unique upper triangular matrix $R \in \mathbb{R}^{n\times n}$ with positive diagonal elements such that $A = R^T R$. This factorization is called the Cholesky factorization, and R is called the Cholesky factor of A.
In this paper we give perturbation analyses of the following problem: given an upper triangular matrix $R \in \mathbb{R}^{n\times n}$ and a matrix $X \in \mathbb{R}^{k\times n}$ such that $R^T R - X^T X$ is positive definite, find an upper triangular matrix $U \in \mathbb{R}^{n\times n}$ with positive diagonal elements such that
$$U^T U = R^T R - X^T X . \tag{1.1}$$
is referred to as the downdated Cholesky factor. The block Cholesky downdating
problem has many important applications, and the case for k=1 has been extensively
studied in the literature (see [1, 2, 3, 8, 11, 12, 16, 17, 18]).
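For orientation, the following NumPy sketch computes the downdated factor directly from the definition (1.1); this is only a mathematical restatement of the problem, since the algorithms analyzed in the literature cited above avoid forming $R^T R - X^T X$ explicitly for numerical reasons, and the function name is ours.

import numpy as np

def block_cholesky_downdate(R, X):
    """Return upper triangular U with U^T U = R^T R - X^T X  (definition (1.1)).

    R is n-by-n upper triangular, X is k-by-n, and R^T R - X^T X is assumed
    positive definite.  np.linalg.cholesky returns a lower triangular factor,
    so its transpose is the upper triangular U.
    """
    C = R.T @ R - X.T @ X
    L = np.linalg.cholesky(C)       # raises LinAlgError if C is not positive definite
    return L.T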
Perturbation results for the single Cholesky downdating problem were presented
by Stewart [18]. Eldén and Park [10] made an analysis for block downdating. But
these two papers just considered the case that only R or X is perturbed. More
complete analyses, with both R and X being perturbed, were given by Pan [15] and
Sun [20]. Pan [15] gave first order perturbation bounds for single downdating. Sun [20]
gave strict as well as first order perturbation bounds for single downdating, and first order perturbation bounds for block downdating.
The main purpose of this paper is to establish new first order perturbation results
and present new condition numbers which more closely reflect the true sensitivity of
This research was partially supported by NSERC of Canada Grant OGP0009236.
y School of Computer Science, McGill University, Montreal, Quebec, Canada, H3A 2A7,
([email protected]), ([email protected]).
the problem. In Section 2 we will give the key result of Sun [20], and a new result using
the approach of these earlier papers. In Section 3 we present new perturbation results,
first by the straightforward matrix equation approach, then by the more detailed and
tighter matrix-vector equation approach. The basic ideas behind these two approaches
were discussed in Chang, Paige and Stewart [6, 7]. We give numerical results and
suggest practical condition estimators in Section 4.
Previous papers implied the change $\Delta R$ in R was upper triangular, and Sun [20] said this, but neither he nor the others made use of this. In fact a backward stable algorithm for computing U given R and X would produce the exact result $U + \Delta U$ for nearby data $R + \Delta R$ and $X + \Delta X$, but it is not clear that $\Delta R$ would be upper triangular - the form of the equivalent backward error $\Delta R$ would depend on the
algorithm, and if it were upper triangular, it would require a rounding error analysis
to show this. Thus for completeness it seems necessary to consider two separate cases
- upper triangular \DeltaR and general \DeltaR. We do this throughout Sections 3-4, and
get stronger results for upper triangular \DeltaR than in the general case.
In any perturbation analysis it is important to examine how good the results
are. In Section 3.2 we produce provably tight bounds, leading to the true condition
numbers (for the norms chosen). The numerical example in Section 4 indicates how
much better the results of this new analysis can be compared with some earlier ones,
but a theoretical understanding is also desirable. By considering the asymptotic case
as 0, the results simplify, and are easily understandable. We show the new
results have the correct properties as X ! 0, in contrast to earlier results.
Before proceeding, let us introduce some notation. Let $B = (b_{ij}) \in \mathbb{R}^{n\times n}$; then up(B), sut(B), slt(B) and diag(B) are defined by
$$\mathrm{diag}(B) = \mathrm{diag}(b_{11}, \ldots, b_{nn}), \qquad \mathrm{sut}(B) = \text{the strictly upper triangular part of } B,$$
$$\mathrm{slt}(B) = \text{the strictly lower triangular part of } B, \qquad \mathrm{up}(B) = \mathrm{sut}(B) + \tfrac{1}{2}\,\mathrm{diag}(B).$$
2. Previous perturbation results, and an improvement. The condition
number for downdating presented by Pan [15] involves the square of the condition
number of R, - 2
proposed new condition
numbers which are simple and proportional to - 2 (R). The condition number for the
block Cholesky downdating problem proposed in [20] is
(\Gamma) is the smallest singular value
of \Gamma. Notice that for fixed 1. Now we use a
similar approach to derive a new bound, from which Sun's bound follows.
First we derive some relationships among U , R, X and \Gamma.
From (1.1) obviously we have
From (1.1) it follows that
so that taking the 2-norm gives
From (1.1) we have
which, combined with (2.2), gives
From (1.1) we have
which, combined with (2.2), gives
s
By (2.2) we have
Finally from (2.4) we see
To derive first order perturbation results we consider the perturbed version of
where U , U+ \DeltaU and R are upper triangular matrices with positive diagonal elements.
when \DeltaR and \DeltaX are sufficiently small, (2.7) has a unique solution \DeltaU .
Multiplying out the two sides of (2.7) and ignoring second order terms, we obtain a
linear matrix equation for the first order approximation d
\DeltaU to \DeltaU :
U T d
\DeltaU T
In fact it is straightforward to show d
U(0), the rate of change of U(-) at
so d
\DeltaU also has a precise meaning. From (2.8), we have
d
(R
Notice since d
\DeltaU U \Gamma1 is upper triangular, it follows, with (1.2), that
d
(R
But for any symmetric matrix B,
F \Gamma2 (b 2
Thus from (2.9) we have
\DeltaU
pkU \GammaT (R T \DeltaR
which, combined with (2.2) and (2.3), gives
\DeltaU k F -
resulting in the new perturbation bound for relative changes
\DeltaU k F
pp
which leads to the condition numbers for the Cholesky downdating problem:
pp
for U with respect to relative changes in R and X, respectively. Notice from (2.1)
. So we can define a new overall condition number
Rewriting (2.10) as
\DeltaU
and combining it with (2.5) and (2.6), gives Sun's bound
\DeltaU k F
We have seen the right hand side of (2.10) is never worse than that of (2.12), so
Although fi 2 is a minor improvement on fi 1 , it is still not what we want. We can
see this from the asymptotic behavior of these "condition numbers". The Cholesky
factorization is unique, so as X ! 0, U ! R, and X T \DeltaX ! 0 in (2.8). Now for any
upper triangular perturbation \DeltaR in R, \DeltaU ! \DeltaR, so the true condition number
should approach unity. Here
(R). The next section shows how we can
overcome this inadequacy.
3. New perturbation results. In Section 2 we saw the key to deriving first
order perturbation bounds for U in the block Cholesky downdating problem is the
equation (2.8). We will now analyze it in two new approaches. The two approaches
have been used in the perturbation analyses of the Cholesky factorization, the QR factorization
(see Chang, Paige and Stewart [6, 7]), and LU factorization (see Chang [4]
and Stewart [19]). The first approach, the refined matrix equation approach, gives a
clear improvement on the previous results, while the second, the matrix-vector equation
approach, gives a further improvement still, which leads to the true condition
numbers for the block Cholesky downdating problem.
3.1. Refined matrix equation analysis. In the last section we used (2.8) to
produce the matrix equation (2.9), and derived the bounds directly from this. We
now look at this approach more closely.
Let D n be the set of all n \Theta n real positive definite diagonal matrices. For any
U . Note that for any matrix B we have
First with no restriction on \DeltaR we have from (2.9)
d
U
so taking the F-norm gives
\DeltaU
It is easy to show for any B 2 R n\Thetan (see Lemma 5.1 in [7])
g. Thus from (3.1) we have
\DeltaU
(kU \GammaT R T \DeltaR -
U
(using (2:2); (2:3))
which is an elegant result in the changes alone. It leads to the following perturbation
bound in terms of relative changes
\DeltaU
Although here it would be simpler to just define an overall condition number, for later
comparisons it is necessary for us to define the following two quantities as condition
numbers for U with respect to relative changes in R and X, respectively (here subscript
G refers to general \DeltaR, and later the subscript T will refer to upper triangular \DeltaR):
c RG (R; X)
where
c RG (R; X;D) j
Then an overall condition number can be defined as
c G (R; X;D);
where
Obviously we have
c G (R;
Thus with these, we have from (3.3) that
\DeltaU k F
if we take become (2.10), and
It is not difficult to give an example to show fi 2 can be arbitrarily larger than c G (R; X),
as can be seen from the following asymptotic behaviour.
It is shown in [7, x5.1, (5.14)] that with an appropriate choice of D,
has a bound which is a function of n only, if R was found using the standard pivoting
strategy in the Cholesky factorization, and in this case, we see the condition number
c G (R; X) of the problem here is bounded independently of - 2 (R) as X ! 0, for general
\DeltaR. At the end of this section we give an even stronger result when X ! 0 for the
case of upper triangular \DeltaR. Note in the case here that fi 2 in (2.11) can be made as
large as we like, and thus arbitrarily larger than c G (R; X).
In the case where \DeltaR is upper triangular , we can refine the analysis further. From
we have
d
Notice with (1.3) and (1.4)
U \GammaT R T \DeltaRU
But for any upper triangular matrix T we have
so that if we define T
up[diag(U \GammaT R T
Thus from (3.12), (3.13) and (3.14) we obtain
d
\DeltaU
As before, let
U , where . From (3.15) it follows that
\DeltaU
U \GammaT \DeltaR T
Then, applying (3.2) to this, we get the following perturbation bound
\DeltaU k F
(3.
Comparing (3.16) with (3.3) and noticing (2.3), we see the sensitivity of U with respect
to changes in X does not change, so c X (R; X) defined in (3.4) can still be regarded as
a condition number for U with respect to changes in X. But we now need to define
a new condition number for U with respect to upper triangular changes in R, that is
(subscript T indicates upper triangular \DeltaR)
c RT (R; X)
c RT (R; X;D);
where
Thus an overall condition number can be defined as
where
c T (R;
Obviously we have
With these, we have from (3.16) that
\DeltaU k F
What is the relationship between c T (R; X) and c G (R;
n \Theta n upper triangular matrix observe the following two facts:
are the eigenvalues of T , so that
which gives
(Note: In fact we can prove a slightly sharper inequality ksut(T
Therefore
c RT (R;
n)
n)
using (2:2))
n)c RG (R; X;D);
so that
c RT (R; X)
n)c RG (R; X):
Thus we have from (3.8) and (3.18)
n)c G (R; X):
On the other hand, c T (R; X) can be arbitrarily smaller than c G (R; X). This can be
seen from the asymptotic behaviour, which is important in its own right. As
since
so for upper triangular changes in R, whether pivoting was used in finding R or not,
Thus when X ! 0, the bound in (3.19) reflects the true sensitivity of the problem.
For the case of general \DeltaR, if we do not use pivoting it is straightforward to make
c G (R; X) in (3.7) arbitrarily large even with
3.2. Matrix-vector equation analysis. In the last subsection, based on the structure of ΔR, we gave two perturbation bounds using the so-called refined matrix equation approach. Also based on the structure of ΔR, we can now obtain provably sharp, but less intuitive, results by viewing the matrix equation (2.8) as a large matrix-vector equation. For any matrix C ∈ R^{n×n}, denote by c_j^{(i)} the vector of the first i elements of the jth column c_j. With this, we define ("u" denotes "upper")
  uvec(C) = [c_1^{(1)}; c_2^{(2)}; ...; c_n^{(n)}] ∈ R^{n(n+1)/2}.
It is the vector formed by stacking the columns of the upper triangular part of C into one long vector.
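As a concrete illustration of this stacking operator, here is a small NumPy sketch; the helper name uvec is ours, the paper only defines the operator mathematically:

import numpy as np

def uvec(C):
    """Stack the columns of the upper triangular part of C:
    for column j (1-based) take its first j entries, giving a
    vector of length n(n+1)/2."""
    n = C.shape[0]
    return np.concatenate([C[:j + 1, j] for j in range(n)])

C = np.arange(16, dtype=float).reshape(4, 4)
print(uvec(C))   # length 4*5/2 = 10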
First assume \DeltaR is a general real n \Theta n matrix. It is easy to show (2.8) can be
rewritten into the following matrix-vector form (cf [7])
WU uvec( d
2 \Theta n(n+1)
r 11
and YX 2 R n(n+1)
Since U is nonsingular, WU is also, and from (3.22)
uvec( d
U YX vec(\DeltaX);
so taking the 2-norm gives
\DeltaU
resulting in the following perturbation bound
\DeltaU k F
-RG (R; X)
where
Now we would like to show
Before showing this, we will prove a more general result. Suppose from (2.8) we are
able to obtain a perturbation bound of the form
\DeltaU
- ff R
(3.
where ff R and ff X , two functions of R and X, are other measures of the sensitivity of
the Cholesky downdating problem with respect to changes in R and X. Let
Then from (3.23) and (3.28) we have
U ZR vec(\DeltaR)k 2
- ff R
Notice \DeltaR can be any (sufficiently small) n \Theta n real matrix, so we must have
which gives
Similarly, we can show
Notice since (3.9) is a particular case of (3.28), (3.27) follows. Thus we have from
(3.8) and (3.26)
The above analysis shows for general \DeltaR, -RG (R; X) and -X (R; X) are optimal
measures of the sensitivity of U with respect to changes in R and X, respectively, and
thus the bound (3.24) is optimal. So we propose -RG (R; X) and -X (R; X) as the true
condition numbers for U with respect to general changes in R and X, respectively,
and - CDG (R; X) as the true overall condition number of the problem in this case.
It is easy to observe that if X → 0 then U → R, so that κ_CDG(R,X) → ‖W_R⁻¹ Z_R‖₂, where W_R is just W_U with each entry u_ij replaced by r_ij. If R was found using the standard pivoting strategy in the Cholesky factorization, then ‖W_R⁻¹ Z_R‖₂ has a bound which is a function of n alone (see [5] for a proof). So in this case our condition number κ_CDG(R,X) also has a bound which is a function of n alone as X → 0.
Remark 1: Our numerical experiments suggest that c_G(R,X) is usually a good approximation to κ_CDG(R,X). But the following example, in which ε is a small positive number, shows that c_G(R,X) can sometimes be arbitrarily larger than κ_CDG(R,X); it is not difficult to verify this for the example.
On the other hand, c_G(R,X) has an advantage over κ_CDG(R,X): it can be quite easy to estimate, since all we need do is choose a suitable D in c_G(R,X;D). We consider how to do this in the next section. In contrast, κ_CDG(R,X) is, as far as we can see, unreasonably expensive to compute or estimate.
Now we consider the case where \DeltaR is upper triangular. (2.8) can now be rewritten
as the following matrix-vector form
WU uvec( d
2 \Theta n(n+1)
2 and YX 2 R n(n+1)
2 \Thetakn are defined as before, and WR 2
R n(n+1)
2 \Theta n(n+1)
2 is just WU with each entry u ij replaced by r ij . Since U is nonsingular,
WU is also, and from (3.30)
uvec( d
U YX vec(\DeltaX);
so taking the 2-norm gives
\DeltaU
which leads to the following perturbation bound
\DeltaU k F
where
Note -X (R; X) is the same as that defined in (3.25).
As before, we can show that for the case where \DeltaR is upper triangular, - RT (R; X)
and -X (R; X) are optimal measures of the sensitivity of U with respect to changes in
R and X, respectively, and thus the bound (3.32) is optimal. In particular, we have
In fact - RT (R; X) -RG (R; X) can also be proved directly by the fact that the columns
of WR form a proper subset of the columns of ZR , and the second inequality has been
proved before. Thus we have from (3.8), (3.18), (3.26) and (3.33)
By the above analysis, we propose - RT (R; X) and -X (R; X) as the true condition
numbers for U with respect to changes in R and X, respectively, and - CDT (R; X) as
the true overall condition number, in the case that \DeltaR is upper triangular.
If as well X → 0, then since U → R we have W_U⁻¹W_R → I, and κ_CDT(R,X) → 1.
So in this case the Cholesky downdating problem becomes very well conditioned no
matter how ill-conditioned R or U is.
Remark 2: Numerical experiments also suggest c T (R; X) is usually a good approximation
to - CDT (R; X). But sometimes c T (R; X) can be arbitrarily larger than
- CDT (R; X). This can also be seen from the example in Remark 1. In fact, it is not
difficult to obtain
Like - CDG (R; X), - CDT (R; X) is difficult to compute or estimate. But c T (R; X) is easy
to estimate, which is discussed in the next section.
4. Numerical tests and condition estimators. In Section 3 we presented new first order perturbation bounds for the downdated Cholesky factor U, using first the refined matrix equation approach and then the matrix-vector equation approach. We defined κ_CDG(R,X) for general ΔR, and κ_CDT(R,X) for upper triangular ΔR, as the true overall condition numbers of the problem. We also gave two corresponding practical but weaker condition numbers, c_G(R,X) and c_T(R,X), for the two ΔR cases.
We would like to choose D such that c_G(R,X;D) and c_T(R,X;D) are good approximations to c_G(R,X) and c_T(R,X), respectively. We see from (3.5), (3.6) and (3.17) that we want to find D such that κ₂(D⁻¹U) is close to its infimum.
By a well-known result of van der Sluis [21], κ₂(D⁻¹U) will be nearly minimal when the rows of D⁻¹U are equilibrated. But this could lead to a large i_D. So a reasonable compromise is to choose D to equilibrate U as far as possible while keeping i_D ≈ 1. Specifically, take
D accordingly, and use a standard condition estimator to estimate κ₂(D⁻¹U).
Notice that (2.4) provides an expression for σ_n. Usually k, the number of rows of X, is much smaller than n, so σ_n(Γ) can be computed in O(n²) operations. If k is not much smaller than n, then we use a standard norm estimator to estimate ‖XR⁻¹‖ in O(n²). Similarly ‖U‖₂ and ‖R‖₂ can be estimated in O(n²). So finally c_G(R,X;D) can be estimated in O(n²). Estimating c_T(R,X;D) is not as easy as estimating c_G(R,X;D). The part ‖diag(RU⁻¹)‖ of c_RT(R,X;D) can easily be computed in O(n), since diag(RU⁻¹) = diag(r_11/u_11, ..., r_nn/u_nn). The remaining part of c_RT(R,X;D) can roughly be estimated in O(n²), using the fact that ‖RU⁻¹‖_F can be estimated by a standard norm estimator in O(n²). The value of ‖XU⁻¹‖ appearing in c_X(R,X;D) can be calculated (if k ≪ n) or estimated by a standard estimator in O(n²). All the remaining values ‖R‖₂, ‖X‖₂ and ‖U‖₂ can also be estimated by a standard norm estimator in O(n²). Hence c_RT(R,X;D), c_X(R,X;D), and thus c_T(R,X;D), can be estimated in O(n²). For standard condition estimators and norm estimators, see Chapter 14 of [14].
The relationships among the various overall condition numbers for the Cholesky
downdating problem presented in Section 2 and Section 3 are as follows.
n)c G (R; X)
Now we give one numerical example to illustrate these. The example, quoted from Sun [20], uses the matrices R (with entries 0.240, ..., 2.390) and X given there. The results obtained using MATLAB are shown in Table 4.1 for various values of the parameter in the example:

Table 4.1
c_G(R,X;D)   3.60e+03  3.61e+02  3.79e+01  1.79e+01  1.78e+01  1.78e+01
c_T(R,X;D)   2.12e+03  2.12e+02  1.79e+01  1.07e+00  1.00e+00  1.00e+00
Note in Table 4.1 how β₁ and β₂ can be far worse than the true condition numbers κ_CDG(R,X) and κ_CDT(R,X), although β₂ is not as bad as β₁. We also observe that c_G(R,X;D) and c_T(R,X;D) are very good approximations to κ_CDG(R,X) and κ_CDT(R,X), respectively. When X becomes small, all of the condition numbers decrease. The asymptotic behaviour of c_G(R,X;D), c_T(R,X;D), κ_CDG(R,X) and κ_CDT(R,X) agrees with our theoretical results: when X → 0, κ_CDG(R,X) will be bounded in terms of n, since here R corresponds to the Cholesky factor of a correctly pivoted A, and c_T(R,X), κ_CDT(R,X) → 1.
Acknowledgement. We would like to thank Ji-guang Sun for suggesting to us that the approach described in (a draft version of) [6] might apply to the Cholesky downdating problem.
--R
Analysis of a recursive least squares hyperbolic rotation algorithm for signal processing
Accurate downdating of least squares solutions
A note on downdating the Cholesky factorization
PhD Thesis
A perturbation analysis for R in the QR factorization
New perturbation analyses for the Cholesky factorization
Perturbation analyses for the QR factorization
Block downdating of least squares solutions
Perturbation analysis for block downdating of a Cholesky decomposition
Numerical computations for univariate linear models
Methods for modifying matrix factor- izations
Matrix computations
Accuracy and Stability of Numerical Algorithms
A perturbation analysis of the problem of downdating a Cholesky factorization
Least squares modification with inverse factorizations: parallel implications
The effects of rounding error on an algorithm for downdating a Cholesky factor- ization
On the Perturbation of LU and Cholesky Factors
Perturbation analysis of the Cholesky downdating and QR updating problems
Condition numbers and equilibration of matrices
--TR | asymptotic condition;downdating;cholesky factorization;perturbation analysis;sensitivity;condition |
292410 | Condition Numbers of Random Triangular Matrices. | Let Ln be a lower triangular matrix of dimension n each of whose nonzero entries is an independent N(0,1) variable, i.e., a random normal variable of mean 0 and variance 1. It is shown that kn, the 2-norm condition number of Ln, satisfies \begin{equation*} \sqrt[n]{\kn} \rightarrow 2 \:\:\: \text{\it almost surely} \end{equation*} as $n\rightarrow\infty$. This exponential growth of kn with n is in striking contrast to the linear growth of the condition numbers of random dense matrices with n that is already known. This phenomenon is not due to small entries on the diagonal (i.e., small eigenvalues) of Ln. Indeed, it is shown that a lower triangular matrix of dimension $n$ whose diagonal entries are fixed at 1 with the subdiagonal entries taken as independent N(0,1) variables is also exponentially ill conditioned with the 2-norm condition number kn of such a matrix satisfying \begin{equation*} \sqrt[n]{\kn}\rightarrow 1.305683410\ldots \:\:\:\text{\it almost surely} \end{equation*} as $n\rightarrow\infty$. A similar pair of results about complex random triangular matrices is established. The results for real triangular matrices are generalized to triangular matrices with entries from any symmetric, strictly stable distribution. | Introduction
Random dense matrices are well-conditioned. If each of the n 2 entries of a
matrix of dimension n is an independent N(0; 1) variable, Edelman has shown
This work was supported by NSF Grant DMS-9500975CS and DOE Grant DE-FG02-
y Department of Computer Science, Cornell University, Ithaca, NY 14853 (diva-
[email protected] and [email protected])
Figure 1: Empirical cumulative density functions of κ_n^{1/n}, for triangular and unit triangular matrices respectively, with several values of n, obtained from 1000 random matrices for each n. The random entries are N(0,1) variables. The higher values of n correspond to the steeper curves. In the limit n → ∞, the cdfs converge to Heaviside step functions with jumps at the dashed lines.
that the probability density function (pdf) of κ_n/n, where κ_n is the 2-norm condition number of such a matrix, converges pointwise to a limiting function as n → ∞. Since the distribution of κ_n/n is essentially independent of n in the limit n → ∞, we can say that the condition numbers of random dense matrices grow only linearly with n. Using this pdf, it can be shown, for example, that E(log(κ_n/n)) converges to a constant.
In striking contrast, the condition number of a random lower triangular matrix L_n, a matrix of dimension n all of whose diagonal and subdiagonal entries are independent N(0,1) variables, grows exponentially with n. If κ_n is the 2-norm condition number of L_n (defined as ‖L_n‖₂‖L_n⁻¹‖₂), we show that
  κ_n^{1/n} → 2  almost surely
as n → ∞ (Theorem 4.3). Figure 1a illustrates this result.
The matrices that arise in the experiments reported in Figure 1 are so ill-conditioned
that the standard method of finding the condition number using
the SVD [10] fails owing to rounding errors. A numerically stable approach for
computing the condition number, which was used to generate the figures, is to
find the inverse of the triangular matrix explicitly using the standard algorithm
for triangular inversion, and then find the norms of the matrix and its inverse
independently.
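A minimal NumPy/SciPy sketch of this procedure (explicit triangular inversion followed by independent norm computations); the dimension and trial count below are illustrative choices of ours, not those of Figure 1:

import numpy as np
from scipy.linalg import solve_triangular

def cond_triangular(L):
    # Invert the triangular matrix explicitly by back substitution,
    # then compute the two 2-norms independently.
    Linv = solve_triangular(L, np.eye(L.shape[0]), lower=True)
    return np.linalg.norm(L, 2) * np.linalg.norm(Linv, 2)

n, trials = 200, 50
roots = [cond_triangular(np.tril(np.random.randn(n, n))) ** (1.0 / n)
         for _ in range(trials)]
print(np.median(roots))   # slowly approaches 2 as n grows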
The exponential growth of κ_n with n is not due to small entries on the diagonal, since the probability of a diagonal entry being exponentially small is itself exponentially small. For a further demonstration that the diagonal entries do not cause the exponential growth of κ_n, we consider condition numbers of unit triangular matrices, i.e., triangular matrices with ones on the diagonal. If κ_n is the condition number of a unit lower triangular matrix of dimension n with subdiagonal entries taken as independent N(0,1) variables, then
  κ_n^{1/n} → 1.305683410...  almost surely
as n → ∞ (Theorem 5.3). Obviously, in this case the ill-conditioning has nothing to do with the diagonal entries (i.e., the eigenvalues), since they are all equal to 1. Section 6 discusses the relationship of the exponential ill-conditioning of random unit triangular matrices to the stability of Gaussian elimination with partial pivoting.
We will use Ln to refer to triangular matrices of various kinds - real or
complex, with or without a unit diagonal. But Ln always denotes a lower
triangular matrix of dimension n. If the entries of Ln are random variables, they
are assumed to be independent. Thus, if we merely say that Ln has entries from
a certain distribution, those entries are not only identically distributed but also
independent. Of course, only the nonzero entries of Ln are chosen according to
that distribution. The condition number always refers to the 2-norm condition
number. However, all our results concerning the limits lim n!1 n
-n apply to
all the L p norms, and the L p norms
differ by at most a factor of n. The 2-norm condition number of Ln , defined as
denoted by -n . The context will make clear the distribution
of the entries of Ln .
The analyses and discussions in this paper are phrased for lower, not upper,
triangular matrices. However, all the theorems are true for upper triangular
matrices as well, as is obvious from the fact that a matrix and its transpose
have the same condition number.
We obtain similar results for triangular matrices with entries chosen from the complex normal distribution Ñ(0,σ²); by this we denote the complex normal distribution of mean 0 and variance σ², obtained by taking the real and imaginary parts as independent N(0,σ²/2) variables. Let L_n denote a triangular matrix with Ñ(0,1) entries. Then κ_n^{1/n} converges almost surely to a constant smaller than 2 as n → ∞ (Theorem 7.3). Thus triangular matrices with complex normal entries tend to have smaller condition numbers than triangular matrices with real normally distributed entries.
Similarly, let L_n denote a unit lower triangular matrix with Ñ(0,1) subdiagonal entries. Then κ_n^{1/n} converges almost surely to a constant as n → ∞ (Theorem 7.4). Thus unit triangular matrices with complex normal entries tend to have slightly bigger condition numbers than unit triangular matrices with real normal entries.
Our results are similar in spirit to results obtained by Silverstein for random dense matrices [14]. Consider a matrix of dimension n × (yn), where y ∈ [0,1], each of whose n²y entries is an independent N(0,1) variable. Denote its largest and smallest singular values by σ_max and σ_min, respectively. It is shown in [14] that
  σ_min/√n → 1 − √y  and  σ_max/√n → 1 + √y  almost surely
as n → ∞. The complex analogues of these results can be found in [4]. The technique used in [14] is a beautiful combination of what is now known as the Golub-Kahan bidiagonalization step in computing the singular value decomposition with the Gerschgorin circle theorem and the Marcenko-Pastur semicircle law. The techniques used in this paper are more direct.
The exponential growth of κ_n = ‖L_n‖₂‖L_n⁻¹‖₂ is due to the second factor. We outline the approach for determining the rate of exponential growth of κ_n by assuming L_n triangular with N(0,1) entries. In Section 2, we derive the joint probability density function for the entries in any column of L_n⁻¹ (Proposition 2.1). If T_k is the 2-norm of the column of L_n⁻¹ with k nonzero entries, both positive and negative moments of T_k are explicitly derived in Section 3 (Lemma 3.2). These moments allow us to deduce that κ_n^{1/n} converges to 2 almost surely (Theorem 4.3). A similar approach is used to determine the limit of κ_n^{1/n} for unit triangular matrices and for matrices with Ñ(0,1) entries (Theorems 5.3, 7.3, and 7.4 respectively).
The same approach is used more generally to determine the limit of κ_n^{1/n} as n → ∞ for L_n with entries drawn from any symmetric, strictly stable distribution (Theorems 8.3 and 8.5). These theorems are specialized to the Cauchy distribution, which is symmetric and strictly stable, in Theorems 8.4 and 8.6.
2 Inverse of a Random Triangular Matrix
Consider the lower triangular matrix L_n whose diagonal entries are α_11, ..., α_nn and whose subdiagonal entries are −α_ij for i > j, where each α_ij is an independent N(0,1) variable. Consider L_n⁻¹, and denote by t_1, ..., t_k the first k entries in the first column of L_n⁻¹. The t_i satisfy the
Figure 2: Entries of L_n⁻¹ on the same solid line in (a) have the same pdf. Sets of entries of L_n⁻¹ in the boxes in (b) have the same jpdf.
following relations:
  t_1 = 1/α_11,   t_k = (α_k1 t_1 + α_k2 t_2 + ⋯ + α_k,k−1 t_{k−1}) / α_kk,   k ≥ 2.   (2.1)
This system of equations can be interpreted as a system of random recurrence relations. The first entry t_1 is the reciprocal of an N(0,1) variable. The kth entry t_k is obtained by summing the previous entries t_1, ..., t_{k−1} with independent N(0,1) variables as coefficients, and dividing that sum by an independent N(0,1) variable.
Next, consider an arbitrary column of L_n⁻¹ and denote the first k entries of that column from the diagonal downwards by t_1, ..., t_k. The entries t_i satisfy random recurrence relations similar in form to (2.1), but the α_ij are a different block of entries of L_n for different columns. For example, any diagonal entry of L_n⁻¹ is the reciprocal of an N(0,1) variable; the kth diagonal entry is 1/α_kk.
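As a quick numerical check of the sign convention in (2.1), the following sketch (variable names are ours) builds L_n, inverts it with NumPy, and recomputes the first column of the inverse from the recurrence:

import numpy as np

n = 8
alpha = np.random.randn(n, n)                       # the alpha_ij
L = np.diag(np.diag(alpha)) - np.tril(alpha, -1)    # diagonal alpha_kk, subdiagonal -alpha_ki
t_inv = np.linalg.inv(L)[:, 0]                      # first column of L^{-1}

t = np.zeros(n)                                     # same column via the recurrence (2.1)
t[0] = 1.0 / alpha[0, 0]
for k in range(1, n):
    t[k] = alpha[k, :k] @ t[:k] / alpha[k, k]
print(np.allclose(t, t_inv))                        # True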
These observations can be represented pictorially. Every entry of L \Gamma1
n at
a fixed distance from the diagonal has the same probability density function
(pdf). We may say that the matrix L \Gamma1
n , like Ln , is "statistically Toeplitz." See
Figure 2a. Moreover, if we consider the first k entries of a column of L \Gamma1
n from
the diagonal downwards, those k entries will have the same joint probability
density function (jpdf) irrespective of the column. See Figure 2b. The different
columns of L \Gamma1
, however, are by no means independent.
Our arguments are stated in terms of the columns of L \Gamma1
n . However, rows
and columns are indistinguishable in this problem; we could equally well have
framed the analysis in terms of rows.
Denote the jpdf of t_1, ..., t_k by f_k(t_1, ..., t_k). In the next proposition, a recursive formula for f_k is derived. For simplicity, we introduce the further notation T_k = (t_1² + ⋯ + t_k²)^{1/2}. Throughout this section, L_n is the random triangular matrix of dimension n with N(0,1) entries.
Proposition 2.1. The jpdf f_k satisfies the following recurrence:
  f_1(t_1) = (2π)^{-1/2} t_1^{-2} exp(-1/(2t_1²)),   (2.2)
  f_k(t_1, ..., t_k) = (1/π) · T_{k-1}/(T²_{k-1} + t_k²) · f_{k-1}(t_1, ..., t_{k-1}),  k ≥ 2.   (2.3)
Proof. The t_k are defined by the random recurrence (2.1). The expression for f_1 is easy to get: if x is an N(0,1) variable, its pdf is (2π)^{-1/2} e^{-x²/2}, and the change of variable t_1 = 1/x gives (2.2).
To obtain the recursive expression (2.3) for f_k, consider the variable τ_k obtained by summing the variables t_1, ..., t_{k-1} with coefficients α_k1, ..., α_k,k-1, which are independent N(0,1) variables. For fixed values of t_1, ..., t_{k-1}, the variable τ_k, being a sum of random normal variables, is itself a random normal variable of mean 0 and variance T²_{k-1}. Therefore the jpdf of τ_k and t_1, ..., t_{k-1} is known explicitly. By (2.1), the variable t_k can be obtained as τ_k/α, where α is an independent N(0,1) variable. Writing down the jpdf of α, τ_k and t_1, ..., t_{k-1}, changing the variable τ_k to t_k = τ_k/α, and integrating out α, we obtain
(2.3), i.e., f_k is given by (2.3).
Note that the form of the recurrence for f k in Proposition 2.1 mirrors the
random recurrence (2.1) for obtaining t k from the previous entries t
In the following corollary, an explicit expression for f k in terms of the t i is
stated.
Corollary 2.2. For k > 1, the jpdf f_k is given explicitly by f_k(t_1, ..., t_k) = f_1(t_1) ∏_{j=2}^{k} T_{j−1} / (π (T²_{j−1} + t_j²)).
3 Moments of T k
In this section and the next, Ln continues to represent a triangular matrix
of dimension n with N(0; 1) entries. As we remarked earlier, the exponential
growth of
is due to the second factor kL \Gamma1
. Since the
2-norm of column
n has the same distribution as Tn\Gammai , we derive
formulas for various moments of T k with the intention of understanding the
exponential growth of kL \Gamma1
In the lemma below, we consider the expected value E(T -
positive
and negative values of -. By our notation, j. The notation
is used to reduce clutter in the proof. As usual, R k denotes the real
Euclidean space of dimension k.
The next lemma is stated as a recurrence to reflect the structure of its proof.
Lemma 3.2 contains the same information in a simpler form.
Lemma 3.1. For any real - ! 1, E(T -
k ) is given by the following recurrence:
E(T -
E(T -
E(T -
dx
For
k ) is infinite.
Proof. To obtain (3.1), use T and the pdf of t 1 given by Equation (2.2).
It is easily seen that the integral is convergent if and only if - ! 1.
Next, assume k ? 1. By definition,
E(T -
Z
Using the recursive equation (2.3) for f k , and writing T k in terms of t k and
E(T -
Z
Z
R
By the substitution t the inner integral with respect to dt k can be
reduced to
dx
Inserting this in the multiple integral (3.3) gives the recursive equation (3.2) for
E(T -
k ). It is easily seen that the integral in (3.2) is convergent if and only if
dx
Beginning with the substitution in (3.4), it can be shown that
is the beta function. The relevant expression for
the beta function B(x; y) is Equation (6.2.1) in [1]. Also, if x is chosen from
the standard Cauchy distribution, then We do not need
in terms of the beta function, however; the integral expression (3.4) suffices
for our purposes. Lemma 3.1 can be restated in a more convenient form using
Lemma 3.2. For
- for a finite positive constant C - . Also,
Proof. The expression for E(T -
k ) is a restatement of Lemma 3.1. By elementary
and by the form of the integral in (3.4),
and
Lemma 3.2 implies that the positive moments of T k grow exponentially with
k while the negative moments decrease exponentially with k.
Obtaining bounds for P now a simple matter.
Lemma 3.3. For
Proof.
\Gamma- to obtain an expression for E(T \Gamma-
apply Markov's inequality
[2].
Lemma 3.4. For k - 1,
Proof. As in Lemma 3.3, - ? 0 implies that P
Again, the proof can be completed by obtaining an expression for E(T -
using
Lemma 3.2 followed by an application of Markov's inequality.
4 Exponential Growth of - n
We are now prepared to derive the first main result of the paper, namely, n
almost surely as entries. In
the sequel, a.s. means almost surely as n ! 1. The definition of almost sure
convergence for a sequence of random variables can be found in most textbooks
on probability, for example [2]. Roughly, it means that the convergence holds
for a set of sequences of measure 1.
Lemma 4.1. ‖L_n‖₂^{1/n} → 1 almost surely as n → ∞.
Proof. The proof is easy. We provide only an outline. The squared Frobenius norm of L_n, ‖L_n‖²_F, is a sum of n(n+1)/2 squares of N(0,1) variables, each with mean 1. By the strong law of large numbers, ‖L_n‖²_F / (n(n+1)/2) → 1 almost surely. The proof can be completed using the inequalities n^{-1/2}‖L_n‖_F ≤ ‖L_n‖₂ ≤ ‖L_n‖_F.
The proof of Lemma 4.2 is very similar to the proofs of several standard
results in probability, for example the strong law of large numbers [2, p. 80].
Lemma 4.2. As n !1, for any 0
almost surely.
Proof. By Lemma 4.1, it suffices to show that
a:s:
We consider the lower bound first. The 2-norm of the first column of L \Gamma1
n , which
has the same distribution as Tn , is less than or equal to kL \Gamma1
. Therefore, for
Using Lemma 3.3 with
\Gamma-
where
\Gamma- (fl \Gamma1=-
ffl is finite. The first
lemma [2] can be applied to obtain
infinitely often as n
Taking the union of the sets in the above equation over all rational ffl in (0; 1)
and considering the complement of that union, we obtain
\Gamma- as n
In other words, fl \Gamma1=-
a.s.
The upper bound can be established similarly. At least one of the columns of
must have 2-norm greater than or equal to n \Gamma1=2 kL \Gamma1
. Since the 2-norm
of column k + 1 has the same distribution as Tn\Gammak ,
Bounding each term in the summation using Lemma 3.4 gives
Lemma 3.2, the largest term in the summand occurs when
From this point, the proof can be completed in the same manner as the proof
of the lower bound.
Theorem 4.3. For random triangular matrices with N(0,1) entries, κ_n^{1/n} → 2 almost surely as n → ∞.
Proof. By an inequality sometimes called Lyapunov's [11, p. 144] [2], (E|Y|^β)^{1/β} ≤ (E|Y|^α)^{1/α} for any real β < α. Thus the bounding intervals of Lemma 4.2 shrink as λ decreases from 1 to 0. A classical theorem [11, p. 139] says that these intervals actually shrink to the single point
  lim_{λ→0} γ_λ^{1/λ} = exp( (1/π) ∫_{-∞}^{∞} log√(1+x²) / (1+x²) dx ).
The exact value of this limit can be evaluated to 2 using the substitution x = tan θ followed by complex integration [3, p. 121]. Thus κ_n^{1/n} → 2 almost surely.
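A numerical sanity check of this limit, assuming (as the proof indicates) that it equals the exponential of the expected value of log√(1+x²) under the standard Cauchy density:

import numpy as np
from scipy.integrate import quad

# E[ log sqrt(1 + x^2) ] under the standard Cauchy density 1/(pi(1+x^2))
val, _ = quad(lambda x: 0.5 * np.log1p(x * x) / (np.pi * (1.0 + x * x)),
              -np.inf, np.inf)
print(np.exp(val))   # 2.0000..., matching Theorem 4.3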
Theorem 4.3 holds in exactly the same form if the nonzero entries of Ln are
independent N(0; oe 2 ) variables rather than N(0; 1) variables, since the condition
number is invariant under scaling.
Our approach to Theorem 4.3 began by showing that E(T -
- for
both positive and negative -. Once these expressions for the moments of T k
were obtained, our arguments did not depend in an essential way on how the
recurrence was computed. The following note summarizes the asymptotic information
about a recurrence that can be obtained from a knowledge of its
moments.
be a sequence of random variables. If E(jt exponentially
with n at the rate - n
almost surely as
Similarly, if E(jt exponentially with n at the rate - n
- as
almost surely as n !1. Thus, knowledge
of any positive moment of t n yields an upper bound on n
knowledge of any negative moment yields a lower bound.
5 Unit Triangular Matrices
So far, we have considered triangular matrices whose nonzero entries are inde-
pendent, real N(0; 1) variables. In this section and in Section 7, we establish the
exponential growth of the condition number for other kinds of random triangular
matrices with normally distributed entries. The key steps in the sequence of
lemmas leading to the analogs of Theorem 4.3 are stated but not proved. The
same techniques used in Sections 2, 3 and 4 work here too.
Let Ln be a unit lower triangular matrix of dimension n with N(0; oe 2 ) subdiagonal
entries. Let s be the first k entries from the diagonal downwards
of any column of L \Gamma1
n . The entries s i satisfy the recurrence
are N(0; oe 2 ) variables. The notation S
is used below.
Proposition 5.1. The jpdf of s by the recur-
rence
exp(\Gammas 2
exp(\Gammas 2
and the fact that s
Lemma 5.2. For any real -, E(S -
The note at the end of Section 4 provides part of the link from Lemma 5.2
to the following theorem about -n .
Theorem 5.3. For random unit triangular matrices with N(0,σ²) subdiagonal entries, κ_n^{1/n} converges almost surely to a limit C(σ) as n → ∞; the expression for log C(σ) involves the Euler constant γ.
Proof. To evaluate the constant K appearing in the expression for C(σ), we used integral 4.333 of [8].
In contrast to the situation in Theorem 4.3, the constant that κ_n^{1/n} converges to in Theorem 5.3 depends on σ. This is because changing σ scales only the subdiagonal entries of the unit triangular matrix L_n while leaving the diagonal entries fixed at one. For the case σ = 1 discussed in the Introduction, numerical integration shows that the constant is C(1) = 1.305683410....
6 A Comment on the Stability of Gaussian
Elimination
The conditioning of random unit triangular matrices has a connection with the
phenomenon of numerical stability of Gaussian elimination. We pause briefly
to explain this connection.
For decades, the standard algorithm for solving general systems of linear
equations has been Gaussian elimination (with "partial" or row pivot-
ing). This algorithm generates an "LU factorization"
permutation matrix, L is unit lower triangular with subdiagonal entries - 1 in
absolute value, and U is upper triangular.
In the mid-1940s it was predicted by Hotelling [12] and von Neumann [9]
that rounding errors must accumulate exponentially in elimination algorithms
of this kind, causing instability for all but small dimensions. In the 1950s,
Wilkinson developed a beautiful theory based on backward error analysis that,
while it explained a great deal about Gaussian elimination, confirmed that for
certain matrices, exponential instability does indeed occur [17]. He showed that
amplification of rounding errors by factors on the order of kL may take place,
and that for certain matrices, kL \Gamma1 k is of order 2 n . Thus for certain matrices,
rounding errors are amplified by O(2 n ), causing a catastrophic loss of n bits of
precision.
Despite these facts, the experience of fifty years of computing has established
that from a practical point of view, Hotelling and von Neumann were wrong:
Gaussian elimination is overwhelmingly stable. In fact, it is not clear that a
single matrix problem has ever led to an instability in this algorithm, except for
the ones produced by numerical analysts with that end in mind, although Foster
and Wright [18] have devised problems leading to instability that plausibly
"might have arisen" in applications. The reason appears to be statistical: the
matrices A for which kL \Gamma1 k is large occupy an exponentially small proportion of
the space of all matrices, so small that such matrices "never" arise in practice.
Experimental evidence of this phenomenon is presented in [16].
This raises the question, why are matrices A for which kL \Gamma1 k is large so
rare? It is here that the behavior of random unit triangular matrices is rele-
vant. A natural hypothesis would be that the matrices L generated by Gaussian
elimination are, to a reasonable approximation, random unit triangular matrices
with off-diagonal entries of a size dependent on the dimension n. If such
matrices could be shown to be almost always well-conditioned, then the stability
of Gaussian elimination would be explained.
We have just shown, however, that unit triangular matrices are exponentially
ill-conditioned. Thus this attempted explanation of the stability of Gaussian
elimination fails, and indeed, the same argument suggests that Gaussian
elimination should be unstable in practice as well as in the worst case. The
resolution of this apparent paradox is that the matrices L produced by Gaussian
elimination are far from random. The signs of the entries of these matrices are
correlated in special ways that have the effect of keeping kL almost always
very small. For example, it is reported in [16] that a certain random matrix A
with led to kL
was taken to be the same matrix
but with the signs of its subdiagonal entries randomized, the result became
From a comparison of Theorem 5.3 with half a century of the history of
Gaussian elimination, then, one may conclude that unit triangular factors of
random dense matrices are very different from random unit triangular matrices.
An explanation of this difference is offered in [15] along the following lines. If
A is random, then its column spaces are randomly oriented in n-space. This
implies that the same holds approximately for the column spaces of L. That
condition, in turn, implies that large values kL \Gamma1 k can arise only exponentially
rarely.
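The contrast described above can be reproduced with a few lines of NumPy/SciPy; the dimension and the sign-randomization scheme below are our own illustrative choices, not the exact experiment of [16]:

import numpy as np
from scipy.linalg import lu

n = 200
A = np.random.randn(n, n)
_, L, _ = lu(A)                                   # unit lower triangular factor, |l_ij| <= 1
print(np.linalg.norm(np.linalg.inv(L), 2))        # typically modest

S = np.sign(np.random.randn(n, n))                # randomize the subdiagonal signs
M = np.eye(n) + S * np.tril(np.abs(L), -1)
print(np.linalg.norm(np.linalg.inv(M), 2))        # typically many orders of magnitude larger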
7 Complex Matrices
We now consider matrices with complex entries. Let Ln be a lower triangular
matrix with ~
entries. The complex distribution ~
defined in
the
Introduction
. Let t denote the first k entries from the diagonal
downwards of any column of L \Gamma1
n . The quantities t k satisfy (2.1), but the ff ij
are now independent ~
by R k .
Proposition 7.1. The jpdf of r by the recur-
rence
r 2; (7.1)
h
for
Proof. We sketch only the details that do not arise in the proof of Proposition
2.1. If x and y are independent N(0; oe 2 ) variables,
r cos(') and
r sin('), then r and ' are independent. Moreover, the distribution of r is
Poisson with the pdf
Consider the sum - taken as independent
~
variables. For fixed are independent.
To see their independence, we write out the equations for Re(- k ) and Im(- k ) as
follows:
Re(ff ki
Re(ff ki )Im(t
The linear combinations of Re(ff ki ) and Im(ff ki ) in these two equations can be
realized by taking inner products with the two vectors
The independence of Re(- k ) and Im(- k ) is a consequence of the orthogonality
of v and w, i.e., (v; and the invariance of the jpdf of independent,
identically distributed normal variables under orthogonal transformation [13].
Thus for fixed t the real and imaginary parts of - k are independent
normal variables of mean 0 and variance R k\Gamma1 =2. By Equation (7.3), the pdfs
of are given byR
exp(\Gammax=R
for positive x; y. The expression (7.2) for h k can now be obtained using r
Lemma 7.2. For any - ! 1, E(R -
r 2\Gamma-
Z 1dx
The constant - in Lemma 7.2 can be reduced to
ever, as with fl - in Section 3, the integral expression for - is more convenient
for our purposes. As before, the note at the end of Section 4 is an essential part
of the link from the previous lemma to the following theorem about -n .
Theorem 7.3. For random triangular matrices with complex ~
as n !1,
almost surely:
Theorem 7.3 holds unchanged if the entries are ~
As with
Theorem 4.3, this is because the condition number is invariant under scaling.
let Ln be a unit lower triangular matrix of dimension n with ~
subdiagonal entries. We state only the final theorem about -n .
Theorem 7.4. For random unit triangular matrices with complex ~
tries, as n !1,
almost surely,
where Ei is the exponential integral. If this limit is denoted by C(oe), then
being the Euler constant.
Proof. To obtain K, we evaluated
using the Laplace transform of log(x) given by integral 4.331.1 of [8]. The
explicit formula involving Ei(oe \Gamma2 ) was obtained using integral 4.337.2 of [8].
For
-n converges to
8 Matrices with Entries from Stable Distribution
The techniques used to deduce Theorem 4.3 require that we first derive the
joint density function of the t k , defined by recurrence (2.1), as was done in
Proposition 2.1. That proposition made use of the fact that when the ff ki are
independent and normally distributed, and the t i are fixed, the sum
is also normally distributed. This property of the normal distribution holds for
any stable distribution.
A distribution is said to be stable, if for X i chosen independently from that
distribution,
has the same distribution as c n X has the same distribution as
are constants [6, p. 170]. If the distribution
is said to be strictly stable. As usual, the distribution is symmetric if X has
the same distribution as \GammaX . A symmetric, strictly stable distribution has
exponent a if c standard result of probability theory says that any
stable distribution has an exponent 0 ! a - 2. The normal distribution is stable
with exponent
The techniques used for triangular matrices with normal entries work more
generally when the entries are drawn from a symmetric, strictly stable distri-
bution. Let Ln be a unit lower triangular matrix with entries chosen from a
symmetric, strictly stable distribution. Denote the pdf of that stable distribution
by OE(x). The recurrence for the entries s i of the inverse L \Gamma1
n is given by
(5.1), but ff ki are now independent random variables with the density
function OE(x).
The proposition, the lemma and the theorem below are analogs of Proposition
5.1, Lemma 5.2, and Theorem 5.3 respectively. If the exponent of the
stable distribution is a, denote (js 1 j a
Proposition 8.1. If OE(x) is the density function of a symmetric, strictly stable
distribution with exponent a, the jpdf of s
the recurrence
and the fact that s
Proof. The proof is very similar to the proof of Proposition 2.1. We note that
if ff ki are independent random variables with the pdf OE(x), and the s i
are fixed, then the sum
has the pdf OE(x=S pg. 171].
Lemma 8.2. For any real -, E(S -
Theorem 8.3. For random unit triangular matrices with entries from a sym-
metric, strictly stable distribution with density function OE(x) and exponent a, as
a
almost surely:
Theorem 5.3 is a special case of Theorem 8.3 when OE(x) is the density function
for the symmetric, strictly stable distribution N(0; oe 2 ). Another notable
symmetric, strictly stable distribution is the Cauchy distribution with the density
function
The exponent a for the Cauchy distribution is 1 [6]. Using Theorem 8.3 we
obtain,
Theorem 8.4. For random unit triangular matrices with entries from the standard
Cauchy distribution, as n !1,
almost surely.
Numerical integration shows the constant to be
A similar generalization can be made for triangular matrices without a unit
diagonal. However, the analog of Theorem 8.3 for such matrices involves not
OE(x), but the density function /(x) of the quotient obtained by taking
z as independent variables with the pdf OE. The distribution / can be difficult
to compute and work with.
let Ln be a triangular matrix with entries chosen from a symmetric,
strictly stable distribution with the density function OE(x). We state only the
final theorem about -n .
Theorem 8.5. For random triangular matrices with entries from a symmetric,
strictly stable distribution with density function OE(x) and exponent a, as n !1,
a
almost surely,
where /(x) is the density function of the quotient of two independent variables
with the density function OE(x).
Theorem 4.3 is a special case of Theorem 8.5 when OE(x) is the density function
of the distribution N(0; oe 2 ). The /(x) corresponding to N(0; oe 2 ) is the
standard Cauchy distribution. To apply Theorem 8.5 for the Cauchy distribu-
tion, we note that
log jxj
is the density function of the quotient if the numerator and the denominator
are independent Cauchy variables. Therefore, Theorem 8.5 implies
Theorem 8.6. For random triangular matrices with entries from the standard
Cauchy distribution, as n !1,
almost surely.
The constant of convergence in Theorem 8.6 is
9
Summary
Below is a summary of the exponential growth factors lim_{n→∞} κ_n^{1/n} that we have established for triangular matrices with normal entries:
  Real triangular:                 2              (Theorem 4.3)
  Real unit triangular, σ = 1:     1.305683410... (Theorem 5.3)
  Complex triangular:              √e             (Theorem 7.3)
  Complex unit triangular, σ = 1:  see Theorem 7.4
The theorems about unit triangular matrices with normally distributed, real or complex entries apply for any variance σ², not just σ² = 1. The constants of convergence for any symmetric, strictly stable distribution were derived in Theorems 8.3 and 8.5. Those two theorems were specialized to the Cauchy distribution in Theorems 8.4 and 8.6.
Similar results seem to hold more generally, i.e., even when the entries of
the random triangular matrix are not from a stable distribution. Moreover, the
complete knowledge of moments achieved in Lemma 3.2 and its analogs might be
enough to prove stronger limit theorems than Theorem 4.3 and its analogs. We
will present limit theorems and results about other kinds of random triangular
matrices in a later publication. We will also discuss the connection between
random recurrences and products of random matrices, and the pseudospectra
of infinite random triangular matrices.
We close with two figures that illustrate the first main result of this paper,
namely, for random triangular matrices with N(0; 1) entries, n
almost
surely as n !1 (Theorem 4.3). Figure 3 plots the results of a single run of the
random recurrence (2.1) to 100; 000 steps, confirming the constant 2 to about
two digits. The expense involved in implementing the full recurrence (2.1) for
so many steps would be prohibitive. However, since t k grows at the rate 2 k , we
need include only a fixed number of terms in (2.1) to compute t k to machine
precision. For the figure, we used 200 terms, although half as many would have
been sufficient. Careful scaling was necessary to avoid overflow while computing
this figure.
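A sketch of the truncated, rescaled recurrence described above; the window length, rescaling period and thresholds are our choices and not necessarily those used for the figure:

import numpy as np

m, N = 200, 100_000                         # window length and number of steps
t = np.array([1.0 / np.random.randn()])     # t_1
log_scale = 0.0                             # accumulated log of the rescalings

for k in range(2, N + 1):
    w = t[-m:]                                            # keep only the last m terms
    t_new = np.random.randn(w.size) @ w / np.random.randn()
    t = np.append(t, t_new)[-m:]
    if k % 100 == 0:                                      # periodic rescaling against overflow
        s = np.linalg.norm(t)
        t /= s
        log_scale += np.log(s)

print(np.exp((log_scale + np.log(abs(t[-1]))) / N))       # close to 2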
Figure
4 plots the condition number of a single random triangular matrix
for each dimension from 1 to 200. The exponential trend at the rate 2 n is clear,
but as in Figure 1, the convergence as n !1 is slow.
Acknowledgements
We thank D. Coppersmith, P. Diaconis, H. Kesten, A. Odlyzko, J. Sethna, and
H. Wilf for helpful discussions. We are especially grateful to Prof. Diaconis for
introducing us to stable distributions.
Figure 3: Illustration of Theorem 4.3. After 100,000 steps of the random recurrence (2.1), |t_n|^{1/n} has settled to within 1% of its limiting value 2. The implementation is explained in the text.
Figure 4: Another illustration of Theorem 4.3. Each cross is obtained by computing the condition number κ_n for one random triangular matrix of dimension n with N(0,1) entries. The solid line represents 2^n.
--R
Handbook of Mathematical Func- tions
Probability and Measure
Functions of One Complex Variable
Eigenvalues and Condition Numbers of Random Matrices
Eigenvalues and condition numbers of random matrices
An Introduction to Probability Theory and Its Applications
Gaussian elimination with partial pivoting can fail in practice
Table of Integrals
Numerical inverting of matrices of high order
Matrix Computations
Some new methods in matrix calculation
Random Matrices and the Statistical Theory of Energy Levels
The smallest eigenvalue of a large-dimensional Wishart matrix
Analysis of direct methods of matrix inversion
A collection of problems for which Gaussian elimination with partial pivoting is unstable
--TR
--CTR
R. Barrio , B. Melendo , S. Serrano, On the numerical evaluation of linear recurrences, Journal of Computational and Applied Mathematics, v.150 n.1, p.71-86, January
Kenneth S. Berenhaut , Daniel C. Morton, Second-order bounds for linear recurrences with negative coefficients, Journal of Computational and Applied Mathematics, v.186 n.2, p.504-522, 15 February 2006 | exponentially nonnormal matrices;matrix condition numbers;random triangular matrices;strong limit theorems |
292837 | Continuous Time Matching Constraints for Image Streams. | Corresponding image points of a rigid object in a discrete sequence of images fulfil the so-called multilinear constraint. In this paper the continuous time analogue of this constraint, for a continuous stream of images, is introduced and studied. The constraint links the Taylor series expansion of the motion of the image points with the Taylor series expansion of the relative motion and orientation between the object and the camera. The analysis is done both for calibrated and uncalibrated cameras. Two simplifications are also presented for the uncalibrated camera case. One simplification is made using an affine reduction and the so-called kinetic depths. The second simplification is based upon a projective reduction with respect to the image of a planar configuration. The analysis shows that the constraint involving second-order derivatives are needed to determine camera motion. Experiments with real and simulated data are also presented. | Introduction
A central problem in scene analysis is the analysis
of 3D-objects from a number of its 2D-images,
obtained by projections. In this paper, the case of
rigid point configurations with known correspondences
is treated. The objective is to calculate the
shape of the object using the shapes of the images
and to calculate the camera matrices, which give
the camera movement.
One interesting question is to analyse the multilinear
constraints that exist between corresponding
points in an image sequence. It is well
known that corresponding points in two images
fulfil a bilinear constraint, known as the epipo-
This work has been done within the ESPRIT Reactive
LTR project 21914, CUMULI and the Swedish Research
Council for Engineering Sciences (TFR), project 95-64-222
lar constraint (Stefanovic 1973, Longuet-Higgins
1981). This constraint is conveniently represented
by a 3 \Theta 3 matrix called the essential matrix
in the calibrated case and the fundamental matrix
in the uncalibrated case. The continuous analogue
of this constraint has also been treated in the
literature. In this case corresponding representations
involve the so called in-nitesimal epipole or
the focus of expansion and the axis of rotation.
This has been studied by photogrammetrists in
the calibrated case and recently by Faugeras and
Vieville in the uncalibrated case, cf. (Vieville and
Faugeras 1995). The continuous time analogue
can be derived from the bilinear constraint as the
limit when the time dioeerence tends to zero. Since
the bilinear constraint involves two time instants,
the continuous time analogue involves the Taylor
expansion to the first order, the so-called one-jet.
The goal of this work is to emphasise the connection
between the discrete time constraints and
corresponding continuous time constraints, and to
derive the continuous time analogue of multilinear
constraints.
The multilinear camera-image motion constraints
have been treated in several recent conference pa-
pers, (Faugeras and Mourrain 1995, Heyden and
Åström 1995, Triggs 1995). These derivations involve
dioeerent types of mathematical techniques
and are represented using dioeerent mathematical
objects, e.g. Grassman-Cayley algebra, Joint
Grassmannian etc. In this paper we will use a
matrix formulation, such that the image-motion
constraint will be of the type that the rank of a
certain matrix is less than full, i.e. that all subde-
terminants of this matrix are zero. The elements
of this matrix will depend on the motion of the
image point and the relative motion and orientation
between the camera and the object configu-
ration as expressed by the so called camera projection
matrix. There are other ways of formulating
the same constraint. However, this particular
choice of formulation has some nice advantages:
• Using this formulation, it is easier to show that all multilinear constraints can be derived from the bi- and trilinear constraints.
• The differences and the similarities between the uncalibrated and calibrated cases are clearer.
• The parameters in the multilinear constraints are closely linked to the camera parameters describing the projection of points in 3D onto each image plane.
• The continuous time case can easily be derived from the discrete time case.
In this paper we introduce and study the
continuous time analogue of these multilinear
constraints. The nth order constraint involves the
Taylor expansion up to order n of the image point
motion and a similar expansion for the motion of
the camera relative to the scene.
These constraints have the same applications as
their discrete counterparts, i.e. they can be used
as matching constraints to find points that are
moving rigidly with respect to the camera. They
can also be used to calculate relative motion with
no a priori knowledge about the scene.
We would like to emphasise the derivation of
these constraints. Some experimental validation
has been included, but needless to say, more experiments
are needed in order to validate the potential
of these types of constraints.
The paper is organised as follows. Some basic
notations are introduced in Section 2. Sections
3, 4 and 5 contain background material on
the ambiguity in the choice of coordinate system,
camera matrix parametrisation and a short derivation
of the discrete time constraints. The continuous
time analogue of these constraints is derived
in Section 6. These constraints are studied
in Sections 7, 8 and 9 with respect to coordinate
ambiguity, motion observability and motion esti-
mation. The latter is illustrated with simulated
and real data in Section 10. Then follows a short
discussion and conclusions in Section 11.
2. Camera Geometry and Notation
The pinhole camera model is used and formulated
using projective geometry.
A point in three dimensions with Euclidean co-ordinates
represented as a point in
the three-dimensional projective space using homogeneous
coordinates
\Theta
U x U y U z 1
T . In
projective geometry two representations are considered
as the same point if one is a multiple of
the other, (Faugeras 1993).
Projection onto the image plane is conveniently
represented in the camera projection matrix for-
mulation
where - is the unknown depth and u is the image
position, also in homogeneous coordinates
\Theta
possibly corrected for the internal
calibration if this is known. A priori knowledge
about the camera and the camera-object motion,
give information about the structure of the camera
projection matrix P .
\Theta
I
uncalibrated, (2)
\Theta
I
known A, (3)
\Theta
I
known
\Theta
T denotes the unknown
position of the camera focus, R a rotation matrix
describing the orientation of the camera, A a
matrix representing the unknown internal calibration
parameters and I the 3 \Theta 3 identity matrix.
Thus in the three situations above there are 11,
6, and 3 degrees of freedom respectively in each
camera matrix.
In the uncalibrated case when the internal calibration
matrix A is unknown it is convenient to
Continuous Time Matching Constraints for Image Streams 3
of the camera. In other words we think of the
camera as having a position T and a generalised
orientation -
Q. The position determines how object
points are projected onto the viewing sphere
and the orientation -
rearranges these directions.
Both orientation and position of the camera will
change over time. In the continuous time case we
will use the notation ( -
for the
orientation and position of the camera at time t,
. In the discrete
time case the camera at time t will be represented
by ( -
Z. The image position u(t) of
a point U at time t is thus
\Theta -
Alternatively, the notation
and T will be used. The projection is then expressed
as
I
Capital U is used to denote object points. Corresponding
lower case letter u is used to denote
the corresponding image point. Subscripts, e.g.
are used to denote dioeerent points. The superscripts
with parenthesis are used to denote co-
eOEcients of the Taylor series expansion, i.e.
Boldface 0 denotes a zero matrix, usually 3 \Theta 1 or
2 \Theta 1.
3. Uniqueness of Solutions
The overall goal is to calculate the camera matrices
P (t) and to reconstruct the coordinates of
the objects U given only the image positions u(t),
such that the camera equation
is ful-lled. The camera motion is assumed to be
smooth. The solution for P (t) and U can only
be given up to an unknown choice of coordinate
system in the world. This is easiest seen in the
uncalibrated case. Let B be any non-singular 4\Theta4
matrix. Change the projection matrices according
to
and change object points according to
Now
U) is another solution with dioeerent
coordinates that also ful-l the camera equation
For the case of uncalibrated cameras there are
15 degrees of freedom in choosing a projective co-ordinate
system.
For the case of known internal calibration there
is an arbitrary choice of orientation, origin and
scale (7 degrees of freedom).
In the case of known external and internal orientation
we are only allowed to make changes of
the type
\Deltay
i.e. one may choose the origin arbitrarily and also
the overall scale.
It is important to keep this problem of non-uniqueness
in mind. The question of choosing a
particular or canonical choice of coordinate sy-
stem, as discussed in (Luong and Vieville 1994),
can be important when designing numerical algo-
rithms. This is commented further upon in Section
7.
4. Simplifications in the Uncalibrated Case
In the uncalibrated case we can simplify the problem
by partially -xing the object and image co-ordinate
systems. Two such simpli-cations are
of special interest. These are explained in detail
in (Heyden and Åström 1995, Heyden and
Åström 1997), and will be briefly described here.
4.1. Affine Reduction
If the same three points can be seen in a sequence
of images, they can be used to simplify the problem
according to the following theorem.
Theorem 1. Let u 1 (t), u 2 (t) u 3 (t) be the images
of three points U 1
and U 3
. Choose an
object coordinate system where U
Choose image
coordinate systems at time t so that u 1
Then the camera projection matrices can be written
Q(t) is a diagonal
matrix and T (t) is the position of the camera
at time t.
Proof: The choice of coordinate systems gives
three constraints on the camera matrix P (t),
Since P (t)U j is the jth column of the matrix P (t),
it follows that the -rst three columns of P (t) form
a diagonal matrix, i.e.
By aOEne normalisation of three corresponding
points in an image sequence, the analysis of the
remaining points can be made almost as if the cameras
were calibrated and with the same rotation.
The unknown elements in the diagonal matrices
correspond to the so called kinetic depths of the
three image points relative to the camera motion,
cf. (Sparr 1994, Heyden 1995). This is basically
the same idea as the relative aOEne coordinates in
(Shashua and Navab 1996). Using projective geometry
one can think of this as de-ning the plane
through the three points as the plane at in-nity
and also partially locking the orientation of this
plane by these three points.
4.2. Projective Reduction
Further simpli-cations can be obtained if four or
more coplanar points, e.g. points belonging to a
planar curve, are detected in a subsequence of
images.
Theorem 2. Let u 1 (t), u 2 (t), u 3 (t) and u 4 (t) be
the images of four coplanar points U 1
and U 4
so that no three of them are collinear.
Choose an object coordinate system where U
and U Choose image coordinate
systems at time t so that u 1
Then the camera projection matrices can be written
\Theta
I
, where T (t) is the position
of the camera at time t.
Proof: The -rst 3 \Theta 3 block matrix -
Q(t) of P (t)
acts as the identity on
i.e.
where - denotes equality up to scale. According
to the assumptions, the four points
constitute a projective
basis for the projective plane. Therefore, it
follows that -
I .
By projective alignment of the images of at least
four coplanar points the problem can thus be
analysed as if both internal calibration and orientation
of the camera are known at all times.
The motion of the camera can be described by
the pair (Q(t); T (t)) or ( -
(or -
describes the generalised orientation of
the camera and T (t) (or -
describes the position
of the camera at time t. Depending on which
assumptions and simpli-cations we have made,
the matrices Q(t) (or -
Q(t)) lie on dioeerent manifolds
ffl Traditional uncalibrated setting: The matrices
Q(t) are arbitrary but nonsingular and
two matrices are considered equal if one is
a (positive) multiple of the other. There are
eight degrees of freedom.
AOEnely reduced setting: The matrices Q(t)
are diagonal and nonsingular and two matrices
are considered equal if one is a (positive)
multiple of the other. There are two degrees
of freedom.
Projectively reduced setting: The matrices
Q(t) are identity matrices. There are no
degrees of freedom. Alternatively we may
require: The matrices Q(t) are multiples of
the identity matrix and two matrices are considered
equal if one is a (positive) multiple of
the other.
Calibrated setting: The matrices Q(t) are ort-
hogonal. There are three degrees of freedom.
Alternatively we may require: The matrices
are multiples of orthogonal matrices and
two matrices are considered equal if one is a
(positive) multiple of the other.
These manifolds are non-linear and have a so
called Lie group structure under matrix multipli-
cation. The corresponding Lie algebra will be of
importance. Unlike the Lie group the Lie algebra
is a linear subspace of all 3 \Theta 3 matrices q:
ffl Traditional uncalibrated setting: The matrices
q(t) are arbitrary and two matrices are
considered equal if their dioeerence is a multiple
of the identity matrix. There are eight
degrees of freedom.
AOEnely reduced setting: The matrices q(t)
are diagonal and two matrices are conside-
Continuous Time Matching Constraints for Image Streams 5
red equal if their dioeerence is a multiple of
the identity matrix. There are two degrees of
Projectively reduced setting: The matrices
q(t) are zero matrices. There are no degrees
of freedom.
Calibrated setting: The matrices q(t) are an-
tisymmetric. There are three degrees of freedom
These Lie algebras are obtained from the Lie groups using the exponential map Q = exp(q). The different Lie groups and Lie algebras are summarised in Table 1. Since two matrices in the Lie group are considered to be equal if one is a multiple of the other, it is often convenient to choose a specific representative. One such choice of representative is to always scale the matrix so that the determinant is one. Similarly, two matrices in the Lie algebra are considered to be equal if their difference is a multiple of the identity. A unique representative can be chosen by demanding that the trace of the matrix is zero. This fits in nicely with the exponential map, since

\[
\det(\exp(q)) = \exp(\operatorname{tr}(q)),
\]

where det denotes the determinant and tr the trace of a matrix.
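As a small worked example of this identity (our own illustration, using the affinely reduced setting with a trace-free representative):

\[
q = \operatorname{diag}(a, b, -a-b) \;\Rightarrow\;
\exp(q) = \operatorname{diag}(e^{a}, e^{b}, e^{-a-b}), \qquad
\det\big(\exp(q)\big) = e^{a}\, e^{b}\, e^{-a-b} = 1 = \exp(\operatorname{tr} q),
\]

so the trace-zero representative in the Lie algebra corresponds exactly to the determinant-one representative in the Lie group.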
5. Multilinear Constraints in the Discrete Time Case

In order to understand the matching constraints in the continuous time case, it is necessary to take a look at the corresponding constraints in the discrete time case. For a more thorough treatment, see (Heyden and Åström 1995). Alternative formulations of the same type of constraints can be found in (Faugeras and Mourrain 1995, Triggs 1995).

We start with the definition.

Definition 1. The nth order discrete multilinear constraint is the rank condition (8) on the matrix formed from the camera matrices P(t_0), ..., P(t_n) and the corresponding image points; this matrix is written out explicitly in the proof of Theorem 3 below.

Table 1. The Lie groups and their corresponding Lie algebras in the four different settings.

    Setting        Lie Group   Lie Algebra
    Uncalibrated   Arbitrary   Arbitrary
    Aff. red.      Diagonal    Diagonal
    Proj. red.     Identity    Zero
    Calibrated     Rotation    Antisymmetric
Theorem 3. In a sequence of discrete images, corresponding points with coordinates u(t_0), ..., u(t_n) obey the nth order discrete multilinear constraint. This means that there exists a solution to (8) that holds for every corresponding point sequence.

Proof: This can be seen by lining up each projection constraint \lambda(t_i) u(t_i) = P(t_i) U in a linear matrix equation

\[
\begin{pmatrix}
P(t_0) & u(t_0) & 0      & \cdots & 0 \\
P(t_1) & 0      & u(t_1) & \cdots & 0 \\
\vdots &        &        & \ddots &   \\
P(t_n) & 0      & 0      & \cdots & u(t_n)
\end{pmatrix}
\begin{pmatrix}
-U \\ \lambda(t_0) \\ \lambda(t_1) \\ \vdots \\ \lambda(t_n)
\end{pmatrix} = 0 .
\]

Since this system has a non-trivial solution, the leftmost matrix cannot have full rank.
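As a concrete illustration (our own worked example, not part of the original derivation): for n = 1 the matrix above is 6 x 6, and the rank condition reduces to a single determinant condition,

\[
\det
\begin{pmatrix}
P(t_0) & u(t_0) & 0 \\
P(t_1) & 0      & u(t_1)
\end{pmatrix} = 0 ,
\]

which, when expanded, is the familiar bilinear (epipolar) constraint between two views.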
6. Continuous Time Analogue of Multilinear Constraints

The multilinear constraints in the continuous time case can be derived using Taylor series expansions of the time dependent functions in (5). Use the Taylor series expansions

\[
P(t) = \sum_i P^{(i)} t^i, \qquad
u(t) = \sum_i u^{(i)} t^i, \qquad
\lambda(t) = \sum_i \lambda^{(i)} t^i .
\]

Recall that superscript (i) denotes the ith coefficient in the Taylor series expansion, e.g. \lambda^{(k)} is the kth derivative of \lambda divided by k!. Substituting the Taylor series expansions into (10) and identifying the coefficients of t^i for i = 0, ..., n gives

\[
\begin{pmatrix}
P^{(0)} & u^{(0)} & 0       & \cdots & 0 \\
P^{(1)} & u^{(1)} & u^{(0)} & \cdots & 0 \\
\vdots  &         &         & \ddots &   \\
P^{(n)} & u^{(n)} & u^{(n-1)} & \cdots & u^{(0)}
\end{pmatrix}
\begin{pmatrix}
-U \\ \lambda^{(0)} \\ \lambda^{(1)} \\ \vdots \\ \lambda^{(n)}
\end{pmatrix} = 0 .
\]

Since this set of equations has a non-trivial solution, the leftmost matrix cannot have full rank.
Definition 2. The nth order continuous constraint is the condition that the matrix M above, formed from the Taylor coefficients P^{(i)} and u^{(i)}, does not have full rank.

Theorem 4. In a continuous sequence of images, the image coordinates u^{(0)} and their derivatives up to order n at the same instant of time obey the nth order continuous constraint.

Proof: Follows from the derivation above.

Remember that u^{(1)} = du/dt, and similarly for the other variables. This means in particular that u^{(1)} has the meaning of image point velocity.
The continuous time constraints can be simplified somewhat by partially choosing a coordinate system according to P^{(0)} = [ I | 0 ] (which implies Q^{(0)} = I and T^{(0)} = 0). By multiplying the big matrix M from the right by a suitable full-rank matrix S we obtain the matrix MS in (16). Since the matrix S has full rank, it follows that rank(MS) = rank(M). By elimination of the first three rows and columns of the matrix MS in (16), the constraint (15) is simplified to (17).
Definition 3. The continuous analogues of the bilinear and trilinear constraints are defined as the nth order constraint above with n = 1 and n = 2, respectively. The analogue of the bilinear constraint is thus the first order continuous constraint, the rank condition (18), and the analogue of the trilinear constraint is the second order continuous constraint, the rank condition (19).
7. Choice of Coordinate System

In the previous sections we have used the option to partially fix the coordinate system to simplify the problem. Sometimes it is useful to choose a canonical object coordinate frame to obtain a canonical coordinate representation of the reconstructed object and projection matrices.

In the calibrated case a specific coordinate system can be determined by setting the initial orientation to Q(t_0) = I and fixing the overall scale by |T^{(1)}(t_0)| = 1.

In the uncalibrated case there are four things to consider when choosing a projective coordinate system in the reconstruction:

1. The position of the plane at infinity, 3 d.o.f.
2. The individual points at the plane at infinity, 8 d.o.f.
3. The origin, 3 d.o.f.
4. The scale, 1 d.o.f.
One way of doing this in the discrete time case is to lock the individual points at the plane at infinity by choosing Q(t_0) = I, to lock the origin by T(t_0) = 0, and the scale by |T(t_1)| = 1. The position of the plane at infinity can be chosen by choosing a specific Q(t_1). This cannot, however, be done arbitrarily. The matrix Q(t_1) must fulfil the bilinear constraint, as discussed in (Luong and Vieville 1994). The question of choosing a canonical coordinate system (and thereby choosing a specific plane at infinity) is simpler in the affinely and projectively reduced settings. The plane at infinity is determined by the three or more points that are used in the reduction. A canonical coordinate system can then be chosen by Q(t_0) = I, T(t_0) = 0 and |T(t_1)| = 1, similar to the calibrated case.
Similarly, in the continuous time case one possible choice is to take P^{(0)} = [ I | 0 ] (which implies Q^{(0)} = I and T^{(0)} = 0), as discussed in the simplification above, and then to take |T^{(1)}| = 1 and tr Q^{(1)} = 0. This determines the choice of coordinate system in the calibrated, affinely reduced and projectively reduced settings. In the uncalibrated case there is a further choice of the plane at infinity. This choice can be made by choosing one Q^{(1)} that fulfils the first order continuous constraint.

Since Q(t) is undetermined up to a scalar factor, it is possible to enforce uniqueness if we require det Q(t) = 1 for every t. This condition is of course fulfilled for Q(t) = I. A Taylor series expansion of Q(t) gives

\[
\det Q(t) = \det\!\big( I + Q^{(1)} t + Q^{(2)} t^2 + \dots \big)
= 1 + \operatorname{tr}(Q^{(1)})\, t + \dots , \tag{20}
\]

which can be seen by expanding the determinant. Thus we have tr Q^{(1)} = 0. The coefficients of t^k in (20) are complicated expressions in the Q^{(i)}. However, it can be seen that the coefficient of t^k can be written as tr Q^{(k)} plus terms involving Q^{(i)} for i < k. This means that we can ensure uniqueness if we simply require tr Q^{(k)} = 0 for every k. The price we have to pay for this simplification is that det Q(t) then depends on t, i.e. det Q(t) is not equal to 1 in general.
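To make the structure of these coefficients explicit, the following second order expansion (a standard identity, added here only for illustration) shows the pattern:

\[
\det\!\big(I + Q^{(1)} t + Q^{(2)} t^2 + \dots\big)
= 1 + \operatorname{tr}(Q^{(1)})\, t
+ \Big[\operatorname{tr}(Q^{(2)}) + \tfrac{1}{2}\big((\operatorname{tr} Q^{(1)})^2 - \operatorname{tr}\big((Q^{(1)})^2\big)\big)\Big]\, t^2 + O(t^3),
\]

so the coefficient of t^2 is indeed tr Q^{(2)} plus terms involving only Q^{(1)}.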
Assume again that Q^{(0)} has been chosen to be the unit matrix. It also follows from the matrix exponential Q(t) = exp(q(t)) that Q^{(1)} = q^{(1)}, so Q^{(1)} has the Lie algebra properties listed in Table 1. However, the relations between q^{(i)} and Q^{(i)} for i > 1 are more complicated.
8. Motion Observability

What can be said about the observability of the full motion of the camera? For example, is it possible to determine camera motion uniquely, up to a choice of coordinate system, using only the first order continuous constraint?

8.1. Motion Observability from the First Order Continuous Constraint

Does the first order continuous constraint determine camera motion uniquely up to a choice of coordinate system? Using the first order continuous constraint we can calculate T^{(1)}(t) up to an unknown scale factor and Q^{(1)}(t) up to an arbitrary choice of the plane at infinity. Since only the direction of T^{(1)}(t) can be obtained at each time instant, it is not possible to reconstruct T uniquely. Thus, the first order continuous constraint is not enough to determine motion.
8.2. Motion Observability from the Second Order Continuous Constraint

If T^{(1)}(t) and Q^{(1)}(t) are known and T^{(1)}(t) != 0, then T^{(2)}(t) and Q^{(2)}(t) are uniquely determined by the second order continuous constraint (19). One can think of this as T^{(2)}(t) and Q^{(2)}(t) being functions of T^{(1)}(t), Q^{(1)}(t), T(t), Q(t) and the image motion at this time instant, i.e. as a system of differential equations. It is a well known fact that this kind of differential equation can be solved at least locally, given a set of initial conditions on T and Q. These initial conditions are determined by choosing a coordinate system and at the same time fulfilling the first order continuous constraint at the initial time. Thus the full motion of the camera is observable from the second order continuous constraints if T^{(1)}(t) != 0. Since the full camera motion (T(t), Q(t)) can be calculated uniquely up to a choice of coordinate system, it is also possible to calculate derivatives of all orders. Thus all continuous multilinear constraints follow from the first and second order continuous constraints.
9. Estimation of Motion Parameters using the Continuous Time Constraints

Study again the simplification of the continuous time constraint in (17). One use of this constraint is to calculate (Q^{(1)}, T^{(1)}), (Q^{(2)}, T^{(2)}), ... from the motion of the points in the image, u^{(i)}. Typically, the relative noise increases with increasing order of differentiation. We therefore expect u^{(2)} to be noisier than u^{(1)}, which in turn is noisier than u^{(0)}. It is therefore natural to estimate motion parameters in steps. First estimate (Q^{(1)}, T^{(1)}) using the first order continuous constraint (18). Then try to estimate (Q^{(2)}, T^{(2)}) using (Q^{(1)}, T^{(1)}) and the second order continuous constraint (19). Repeat this as long as the level of noise permits.

There are some numerical difficulties with using the continuous time constraints to estimate motion parameters. First of all, it can be quite difficult to obtain good estimates of the image point positions and their derivatives. Secondly, since these estimates are noisy, this needs to be modelled and taken into account. This affects the way motion parameters should be estimated.
9.1. Using the First Order Continuous Constraint

The first order continuous constraint (18) involves the first order derivatives of the camera motion, T^{(1)} and Q^{(1)}.

One major difference between the discrete time case and the continuous time case is that the derivatives of the orientation, Q^{(1)}, lie on a linear manifold. As an example take the calibrated case. Whereas in the discrete case the motion parameters involve a rotation matrix Q(t_1), the continuous time analogue involves an antisymmetric matrix Q^{(1)}. It is easier to parametrise the set of antisymmetric matrices (since this is a linear subspace of all matrices) than to parametrise the set of rotation matrices (which is a non-linear manifold).

The velocity T^{(1)} also lies on a linear manifold. It can, however, only be determined up to scale. If T^{(1)} is known, the problem of determining Q^{(1)} is linear, so it can quite easily be solved with linear methods. On the other hand, if Q^{(1)} is known, the problem of determining T^{(1)} can be formulated and solved in a linear fashion. This suggests a fast two-step iterative method: guess Q^{(1)}; holding Q^{(1)} fixed, solve for T^{(1)}; holding T^{(1)} fixed, solve for Q^{(1)}; and so on. This method has been tried and in most cases it does seem to converge nicely. It should be noted, however, that there are no guarantees that this method will converge.
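To make the alternation concrete in our own notation (the exact residual is the one defined by (18), which is not reproduced here), note that the first order constraint is bilinear in the pair (T^{(1)}, Q^{(1)}): for each observed point i it yields a residual r_i(T^{(1)}, Q^{(1)}) that is linear in T^{(1)} for fixed Q^{(1)} and linear in Q^{(1)} for fixed T^{(1)}. The two-step method therefore alternates two linear least squares problems,

\[
T^{(1)} \leftarrow \arg\min_{\|T\| = 1} \sum_i r_i\big(T, Q^{(1)}\big)^2,
\qquad
Q^{(1)} \leftarrow \arg\min_{Q} \sum_i r_i\big(T^{(1)}, Q\big)^2,
\]

where Q ranges over the appropriate Lie algebra from Table 1 (e.g. the antisymmetric matrices in the calibrated setting).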
An alternative and more robust method is to tessellate the sphere of directions T^{(1)} in S^2. For each T^{(1)}, solve for Q^{(1)} linearly and store the residual. Choose the pair (T^{(1)}, Q^{(1)}) that gave the lowest residual.

Either of these two methods can be refined by taking the resulting motion parameters as an initial estimate of (T^{(1)}, Q^{(1)}) in a non-linear least squares minimisation. An advantage of this refinement is that it allows more sophisticated error measures, e.g. the maximum likelihood estimate, that take into account the quality of the estimates of u and u^{(1)}. Another advantage of this refinement is that it allows for an analysis of the stability of the solution through the analysis of the residuals at the optimum.
9.2. Using the Second Order Continuous Constraint

The second order continuous constraint (19) involves the first and second order derivatives of the camera motion. As for the first order continuous constraint, it should be possible to use a two-step iterative method: guess (Q^{(1)}, T^{(1)}); holding these fixed, solve for (Q^{(2)}, T^{(2)}); holding (Q^{(2)}, T^{(2)}) fixed, solve for (Q^{(1)}, T^{(1)}); and so on. It should be noted that the convergence properties of these methods have not been studied.

Another approach might be to estimate (Q^{(1)}, T^{(1)}) using the first order continuous constraint and, while holding these fixed, to estimate (Q^{(2)}, T^{(2)}) using the second order continuous constraint.

Close to a solution the method could be refined by non-linear maximisation of a likelihood function.
10. Experiments

To illustrate the continuous constraints, we have used iterative methods as described above. We consider the first order continuous constraint in 3D to 2D projection and the second order constraint in an industrial application of 2D to 1D projection.
10.1. First Order Constraints on Image Streams

An image sequence of an indoor scene has been used; see Figure 1, where one image of the sequence is shown. The whole sequence contains more than 200 images. To illustrate the applicability of the continuous constraints, we have used only 2 images. Points have been extracted using a corner detector, and we have used 28 points with correspondences. The affinely reduced coordinates have been calculated from the images, giving u(0) and u(h), where h denotes the time increment between the different exposures. We have used u^{(0)} = u(0). The derivatives have been computed from image 1 and image 2 using the difference approximation u^{(1)} ~ (u(h) - u(0))/h.

Using the iterative approach in the affinely reduced setting with only 10 iterations, starting from an initial estimate of Q^{(1)}, we obtain a solution fulfilling the first order constraint. This solution can be compared to the solution obtained in the discrete time case between u(0) and u(h). Here \bar{Q}(h) should be compared with the first order prediction obtained from Q^{(1)}, and \bar{T}(h) with \bar{T}^{(1)}. The angle between \bar{T}^{(1)} and \bar{T}(h) is 2.8 degrees.
Fig. 1. One image in the sequence used in the continuous case.

10.2. Second Order Constraints on Angle Streams

The continuous multilinear constraints have an interesting practical application to the vision system of an autonomous vehicle. The vehicle is equipped with a calibrated camera with a one-dimensional retina. It can only see specific beacons or points in a horizontal plane. Let (x, y) be the coordinates of such a beacon and let, as before, this point be represented by a vector U = [ x y 1 ]^T. Then the measured image point is a direction vector u according to the camera equation \lambda(t) u(t) = P(t) U, where the camera matrix P(t) is a 2 x 3 calibrated camera matrix built from a 2 x 2 rotation matrix R(t) and a translation vector T(t). Let the Taylor expansions of R(t) and T(t) be R(t) = \sum_i R^{(i)} t^i and T(t) = \sum_i T^{(i)} t^i.
Fig. 2. a: A laser guided vehicle. b: A laser scanner or angle meter.
Using this notation, the continuous time analogue of the trilinear constraint (19) becomes a pair of determinant conditions; the second of these, (26), is studied in detail below. The matrix R is a 2 x 2 rotation matrix. Assuming that the Taylor expansion of the rotation angle \phi(t) is \phi(t) = r_0 + r_1 t + r_2 t^2 + ..., the first derivative of R is r_1 times a 90 degree rotation of R^{(0)}, and the second Taylor coefficient R^{(2)} contains a term proportional to r_2. The second order continuous constraint in (26) does not, however, involve the term r_2 in R^{(2)}. This can be seen by adding r_2 times the last column to the second column in (26).

The second order continuous constraint in (26) will now be studied in more detail. Introduce two vectors w and z, together with auxiliary variables a_1 and a_2, such that the determinant of M can be written as an expression that is linear in w and linear in z. A tentative algorithm to find w and z has been investigated.
Algorithm 1
1. Start with a crude estimate of r_1 and r_2.
2. For all image directions u_i, calculate the corresponding V_i. The vector w should be orthogonal to every such vector, so w can be found as the left null space of the matrix formed from the V_i. In practice, w is found by using a singular value decomposition of M.
3. Once w is approximately known, z can be found as the right null space of the matrix whose rows are w^T V_1, w^T V_2, ....
4. Repeat steps 2 and 3.
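As a practical note (a standard numerical fact, not specific to this paper): if the stacked matrix in step 2 has the singular value decomposition M = U \Sigma V^T, then the left null space vector w is taken as the column of U associated with the smallest singular value, and the right null space vector z in step 3 is, analogously, the column of V associated with the smallest singular value of that step's matrix.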
The algorithm has been implemented and tested experimentally. The results are illustrated with simulated data. In these simulations the angles to fifteen beacons were calculated during a period of one second. Gaussian noise of different standard deviations (0.1, 0.5, ...) was added to the angle measurements. In the real application the standard deviation is approximately 0.2 mrad. The angle measurements in a two second period were used to estimate the Taylor coefficients of the measured image directions u_i using standard regression techniques. These Taylor coefficients were then used to calculate the Taylor coefficients of the motion (r_1, r_2 and the translation coefficients) using the algorithm above. The experiment was repeated 100 times. The standard deviations of the estimated motion parameters, for the true motion used in the simulation, are shown in Table 2.
Table 2. Standard deviation of estimated motion parameters for different noise levels when using Algorithm 1.
11. Discussion and Conclusions

In this paper the simplified formulation of the multilinear forms that exist between the images in a sequence has been used to derive similar constraints in the continuous time case. The new formulation makes it easier to analyse the matching constraints in image sequences. It has been shown that these constraints contain information in the first and second order only. This representation is fairly close to the representation of the motion and it is easy to generalise to different settings. Four such settings, calibrated, uncalibrated, affinely reduced and projectively reduced, are described in the paper.

The continuous constraints can be used to design filters to estimate structure and motion from image sequences. Using only the first order constraint it is possible to estimate the direction of the movement of the camera. Having only this information it is not possible to build up the whole camera movement. Using the second order constraint it is possible to estimate the second order derivative of the camera movement with a scale consistent with the first order derivative. This information can be used to build up the camera movement. This is analogous to the discrete time case, where the trilinearities are needed in order to estimate the camera movement if only multilinear constraints between consecutive images are used.

The first example above shows that the first order continuous constraint is comparable to the discrete case. However, the continuous constraints are sensitive to noise, because estimating the image velocities amplifies the noise present in the images. The higher order continuous constraints are even more sensitive, because they involve higher order derivatives. Using filtering techniques to estimate the derivatives from image coordinates in more than two images could reduce the influence of noise.

The second example illustrates the applicability of higher order constraints in an industrial application. An industrial vehicle is guided by a one-dimensional visual system. It is interesting to note that the first order constraint in this case gives no information about the motion. Higher order constraints are essential.

We believe that the continuous constraints are important theoretical tools. We also believe that they are necessary to analyse image sequences with high temporal sampling frequency.
Acknowledgements
The authors thank Tony Lindeberg and Lars
Bretzner at KTH for help with corner detection
and tracking.
--R
Workshop on Intelligent Autonomous Vehicles
on Computer Vision
the velocity case
Topological and Geometrical Aspects of
PhD thesis
Department of Medical and Physiological Physics
What can be seen in three dimensions
with an uncalibrated stereo rig?
On the geometry
and algebra of the point and line correspondences between
Reconstruction from image sequences
by means of relative depths
Computer
for sequences of images
of Visual Scenes
forms for sequences of images.
in Image and Vision Computing.
A computer algorithm for
reconstructing a scene from two projections
Canonic representations
for the geometries of multiple projective views
Conf. on Computer Vision
Relative affine
structure: Canonical model for 3d from 2d geometry
and applications
Machine Intell
A common framework for kinetic depth
reconstruction and motion for deformable objects
Motion analysis
with a camera with unknown and possibly varying
--TR | multilinear constraint;calibrated camera;structure;motion;optical flow;uncalibrated |
292853 | The Hector Distributed Run-Time Environment. | AbstractHarnessing the computational capabilities of a network of workstations promises to off-load work from overloaded supercomputers onto largely idle resources overnight. Several capabilities are needed to do this, including support for an architecture-independent parallel programming environment, task migration, automatic resource allocation, and fault tolerance. The Hector distributed run-time environment is designed to present these capabilities transparently to programmers. MPI programs can be run under this environment on homogeneous clusters with no modifications to their source code needed. The design of Hector, its internal structure, and several benchmarks and tests are presented. | INTRODUCTION
AND PREVIOUS WORK
A. Networks of Workstations
Networked workstations have been available for many years. Since a modern network of
workstations represents a total computational capability on the order of a supercomputer, there is
strong motivation to use a network of workstations (NOW) as a type of low-cost supercomputer.
Note that a typical institution has access to a variety of computer resources that are network-inter-
connected. These range from workstations to shared-memory multiprocessors to high-end parallel
"supercomputers". In the context of this paper, a "network of workstations" or "NOW" is considered
to be a network of heterogeneous computational resources, some of which may actually be worksta-
tions. A run-time system for parallel programs on a NOW must have several important properties.
First, scientific programmers needed a way to run supercomputer-class programs on such a system
with little or no source-code modifications. The most common way to do this is to use an architecture-independent
parallel programming standard to code the applications. This permits the same
program source code to run on a NOW, a shared-memory multiprocessor, and the latest generations
of parallel supercomputers, for example. Two major message-passing standards have emerged that
can do this, PVM [1],[2] and MPI [3], as well as numerous distributed shared memory (DSM) systems
Second, the ability to run large jobs on networked resources only proves attractive to workstation
users if their individual workstations can still be used for more mundane tasks, such as word proces-
This work was funded in part by NSF Grant No. EEC-8907070 Amendment 021 and by ONR
Grant No. N00014-97-1-0116.
sing. Task migration is needed to "offload" work from a user's workstation and return the station to
its "owner". This ability also permits dynamic load balancing and fault tolerance. (Note that, in this
paper, a "task" is a piece of a parallel program, and a "program" or "job" is a complete parallel pro-
gram. In the message-passing model used here, a program is always decomposed into multiple communicating
tasks. A "platform" or "host" is a computer that can run jobs if idle, and can range from a
workstation to an SMP.)
Third, a run-time environment for NOW's needs the ability to track the availability and relative
performance of resources as programs run, because it needs the information to conduct ongoing performance
optimizations. This is true for several reasons. The relative speed of nodes in a network of
workstations, even of homogeneous workstations, can vary widely. Workstation availability is a
function of individual users, and users can create an external load that must be worked around. Programs
themselves may run more efficiently if redistributed certain ways.
Fourth, fault tolerance is extremely important in systems that involve dozens to hundreds of
workstations. For example, if there is a 1% chance a single workstation will go down overnight, then
there is only a (0.99)^75 (approximately 0.47) chance that a network of 75 stations will stay up overnight.
A complete run-time system for NOW computing must therefore include an architecture-independent
coding interface, task migration, automatic resource allocation, and fault-tolerance. It is
also desirable for this system to support all of these features with no source-code modifications.
These individual components already exist in various forms, and a review of them is in order. It is the
goal of this work to combine these individual components into a single system.
B. Parallel Programming Standards
A wide variety of parallel and distributed architectures exist today to run parallel programs.
Varying in their degree of interconnection, interconnection topology, bandwidth, latency, and scale
of geographic distribution, they offer a wide range of performance and cost trade-offs and of applicability
to solve different classes of problems. They can generally be characterized on the basis of
their support for physical shared memory.
Two major classes of parallel programming paradigms have emerged from the two major classes
of parallel architectures. The shared-memory model has its origins in programming tightly coupled
processors and offers relative convenience. The message-passing model is well suited for loosely
coupled architectures and coarse-grained applications. Programming models that implement these
paradigms can further be classified by their manifestation-either as extensions to existing programming
languages or as custom languages.
Because our system is intended for a network of workstations, it was felt that a message-pass-
ing-based parallel programming paradigm more closely reflected the underlying physical structure.
It was also felt that this paradigm should be expressed in existing scientific programming languages
in order to draw on an existing base of scientific programmers. Both the PVM and MPI standards
support these goals, and the MPI parallel programming environment was selected. Able to be called
from C or FORTRAN programs, it provides a robust set of extensions that can send and receive messages
between "tasks" working in parallel [3].
While a discussion of the relative merits of PVM and MPI is outside the scope of this paper, this
decision was partially driven by ongoing work in MPI implementations that had already been conducted
at Mississippi State and Argonne National Laboratory [4]. A more detailed discussion of the
taxonomy of parallel paradigms and systems, and of the rationale for our decision, can be found in
[5].
One desirable property of a run-time system is for its services to be offered "transparently" to
applications programmers. Programs written to a common programming standard should not have
to be modified to have access to more advanced run-time-system features. This level of transparency
permits programs to maintain conformity to the programming standard and simplifies code development
C. Computing Systems
Systems that can harness the aggregate performance of networked resources have been under
development for quite some time. For example, one good review of cluster computing systems, conducted
in 1995 and prepared at Syracuse University, listed 7 commercial and 13 research systems
[6]. The results of the survey, along with a comparison to the Hector environment discussed in this
paper, are summarized in [7]. It was found that the Hector environment had as many features as some
of the other full-featured systems (such as Platform Computing's LSF, Florida State's DQS, IBM's
Load Leveler, Genias' Codine, and the University of Wisconsin-Madison's Condor), and that its
support for the simultaneous combination of programmer-transparent task migration and load-bal-
ancing, fully automatic and dynamic load-balancing, support for MPI, and job suspension was
unique. Additionally, the commitment to programmer transparency has led to the development of
extensive run-time information-gathering about the programs as they run, and so the breadth and
depth of the information that is gathered is unique. It should also be added that its support for typical
commercial features, such as GUI and configuration management tools, was noticeably lacking, as
Hector is a research project.
It should also be mentioned at this point that there are (at least) two other research projects using
the name "Hector" doing work in distributed computing and multiprocessing. The first is the well-known
Hector multiprocessor project at the University of Toronto [8],[9]. The second is a system
for supporting distributed objects in Python at the CRC for Distributed Systems Technology at the
University of Queensland [10]. The Hector environment described in this paper is unrelated to either
Three other research systems can allocate tasks across NOW's and have some degree of support
for task migration. Figure 1 summarizes these systems.
Figure 1: Comparison of Existing Task Allocators. The figure compares Condor/CARMI, Prospero, MIST, and DQS with respect to: fully dynamic processor allocation and reallocation; stopping only the task under migration; user-transparent load balancing; user-transparent fault tolerance; support for MPI; requiring no modifications to existing MPI/PVM programs; and use of an existing operating system.
One such system is a special version of Condor [11], named CARMI [12]. CARMI can allocate
PVM tasks across idle workstations and can migrate PVM tasks as workstations fail or become
"non-idle". It has two limitations. First, it cannot claim newly available resources. For example, it
does not attempt to move work onto workstations that have become idle. Second, it checkpoints all
tasks when one task needs to migrate [13]. Stopping all tasks when only one task needs to migrate
slows program execution since only the migrating task must be stopped.
Another automated allocation environment is the Prospero Resource Manager, or PRM [14].
Each parallel program run under PRM has its own job manager. Custom-written for each program,
the job manager acts like a "purchasing agent" and negotiates the acquisition of system resources as
additional resources are needed. PRM is scheduled to use elements of Condor to support task migration
and checkpointing and uses information gathered from the job and node managers to reallocate
resources. Notice that use of PRM requires a custom allocation "program" for each parallel applica-
tion, and future versions may require modified operating systems and kernels.
The MIST system is intended to integrate several development efforts and develop an automated
task allocator [15]. Because it uses PRM to allocate tasks, the user must custom-build the allocation
scheme for each program. MIST is built on top of PVM, and PVM's support of "indirect commu-
nications" can potentially lead to administrative overhead, such as message forwarding, when a task
has been migrated [16].
The implementation of MPI that Hector uses, with a globally available list of task hosts, does not
incur this overhead. (Note that MPI does not have indirect communications, and so Hector does not
have any overhead to support it.) Every task sends its messages directly to the receiving task, and the
only overhead required after a task has migrated is to notify every other task of the new location. As
will be shown below, this process has very little overhead, even for large parallel applications.
The Distributed Queuing System, or DQS, is designed to manage jobs across multiple computers
simultaneously [17]. It can support one or more queue masters which process user requests for resources
on a first-come, first-served basis. Users prepare small batch files to describe the type of
machine needed for particular applications. (For example, the application may require a certain
amount of memory.) Thus resource allocation is performed as jobs are started. It currently has no
built-in support for task migration or fault tolerance, but can issue commands to applications that
can migrate and/or checkpoint themselves. It does support both PVM and MPI applications.
The differences between these systems and Hector highlight two of the key differences among
cluster computing systems. First, there is a trade-off between task migration mechanisms that are
programmer-written versus those that are supported automatically by the environment. Second,
there is a trade-off between centralized and decentralized decision-making and information-gathering
D. Task Migration in the Context of Networked Resources
Two strategies have emerged for creating the program's ``checkpoint'' or task migration image.
First, checkpointing routines can be written by the application programmer. Presumably he or she is
sufficiently familiar with the program to know when checkpoints can be created efficiently (for ex-
ample, places where the running state is relatively small and free of local variables) and to know
which variables are "needed" to create a complete checkpoint. Second, checkpointing routines can
transfer the entire program state automatically onto another machine. The entire address space and
registers are copied and carefully restored.
By way of comparison, user-written checkpointing routines have some inherent "space" advan-
tages, because the state's size is inherently minimized, and they may have cross-platform compatibility
advantages, if the state is written in some architecture-independent format. User-written routines
have two disadvantages. First, they add coding burden onto the programmer, as he or she must
not only write but also maintain checkpointing routines. Second, checkpoints are only available at
certain, predetermined places in the program. Thus the program cannot be checkpointed immediately
on demand.
It would appear from the Syracuse survey of systems [6] that most commercial systems only support
user-written checkpointing for checkpointing and migration. One guesses that this is so because
user-written checkpointing is much easier for resource management system developers-re-
sponsibility for correct checkpointing is transferred to the applications programmer. As noted in
earlier discussion, research systems such as PRM and DQS, also have user-written checkpointing,
and at least one system (not discussed in the Syracuse report) has used this capability to perform
cross-architecture task migration [18].
Condor and Hector use the complete-state-transfer method. This form of state transfer inherently
only works across homogeneous platforms, because it involves actual replacement of a program's
state. It is also completely transparent to the programmer, requiring no modifications or additions to
the source code.
An alternate approach is to modify the compiler in order to retain necessary type information and
pointer information. These two pieces of information (which data structures contain pointers and
what the pointers "point to") are needed if migration is to be accomplished across heterogeneous
platforms. At least one such system (the MSRM Library at Louisiana State University) has been
implemented [19]. The MSRM approach may make automatic, cross-platform migration possible,
at the expense of requiring custom compilers and of increased migration time.
Once a system has a correct and consistent task migration capability, it is simple to add check-
pointing. By having tasks transfer their state to disk (instead of to another task) it becomes possible
to create a checkpoint of the program. This can be used for fault recovery and for job suspension.
Thus both Hector and Condor provide checkpoint-and-rollback capability.
However task migration is accomplished, there are two technical issues to deal with. First, the
program's state must be transferred completely and correctly. Second, in the case of parallel pro-
grams, any communications in progress must continue consistently. That is, the tasks must "agree"
on the status of communications among themselves. Hector's solutions to these issues are discussed
in section II.B.
E. Automatic Resource Allocation
There is a trade-off between a centralized allocation mechanism, in which all tasks and programs
are scheduled centrally and the policy is centrally designed and administered, and a competitive,
distributed model, in which programs compete and/or bid for needed resources and the bidding
agents are written as part of the individual programs. Besides some of the classic trade-offs between
centralized and distributed processing (such as overhead and scalability), there is an implied trade-off
of the degree of support required by the applications programmer and of the intelligence with
which programs can acquire the resources they need. The custom-written allocation approach
places a larger burden on the applications programmer, but permits more well-informed acquisition
of needed resources. Since the overall goal of Hector is to minimize programmer burden, it does not
use any a priori information or any custom-written allocation policies. This is discussed further in
section II.C.
The degree to which a priori applications information can boost run-time performance has been
explored for some time [20]. For example, Nguyen et al. have shown that extracting run-time information
can be minimally intrusive and can substantially improve the performance of a parallel job
scheduler [21]. Their approach used a combination of software and special-purpose hardware on a
parallel computer to measure a program's speedup and efficiency and then used that information
to improve program performance. However, Nguyen's work is only relevant for applications
that can vary their own number of tasks in response to some optimization. Many parallel applications
are launched with a specific number of tasks that does not vary as the program runs. Addition-
ally, it requires the use of the KSR's special-purpose timing hardware.
Gibbons proposed a simpler system to correlate run-times to different job queues [22]. Even this
relatively coarse measurement was shown to improve scheduling, as it permits a scheduling model
that more closely approaches the well-known Shortest Job First (SJF) algorithm. Systems can develop
reasonably accurate estimates of a job's completion time based on historical traces of other
programs submitted to the same queue. Since this information is coarse and gathered historically, it
cannot be used to improve the performance of a single application at run-time. (It can, however,
improve the efficiency of the scheduler that manages several jobs at once.)
Some recent results by Feitelson and Weil have shown the surprising result that user estimates of
run-time can make the performance of a job scheduler worse than the performance with no estimates
at all [23]. While the authors concede that additional work is needed in the area, it does highlight that
user-supplied information can be unreliable, which is an additional reason why Hector does not use
it.
These approaches have shown the ability of detailed performance information to improve job
scheduling. However, to summarize, these approaches have several shortcomings. First, some of
them require special-purpose hardware. Second, some systems require user modifications to the
applications program in order to keep track of relevant run-time performance information. Third,
the information that is gathered is relatively coarse. Fourth, some systems require applications that
can dynamically alter the number of tasks in use. Fifth, user-supplied information can be not only
inaccurate but also misleading. The goal of Hector's resource allocation infrastructure is to overcome
these shortcomings.
There is another trade-off in degrees of support for dynamically changing workloads and computational
resource availability. The ideal NOW distributed run-time system can automatically allocate
jobs to available resources and move them around during the program run, both in order to
maximize performance and in order to release workstations back to users. Current clustering systems
support this goal to varying degrees. For example, some systems launch programs when
enough resources (such as enough processors) become available. This is the approach taken by
IBM's LoadLeveler, for example [6]. Other systems can migrate jobs as workstations become busy,
such as Condor [11]. It appears that, as of the time of the Syracuse survey, only Hector attempts to
move jobs onto newly idle resources as well.
F. Goals and Objectives of Hector
Because of the desire to design a system that supports fully transparent task migration, fully automatic
and dynamic resource allocation, and transparent fault tolerance, the Hector distributed run-time
environment is now being developed and tested at Mississippi State University. These requirements
necessitated the development of a task-migration method and a modified MPI implementation
that would continue correct communications during task migration. A run-time infrastructure
that could gather and process run-time performance information was simultaneously created. The
primary aim of this paper is to discuss these steps in more detail, as well as the steps needed to add
support for fault tolerance.
Hector is designed to use a central decision maker, called the "master allocator" or MA to perform
the optimizations and make allocation decisions. It uses a small task running on each candidate
processor to detect idle resources and monitor the performance of programs during execution. These
tasks are called "slave allocators" or SA's.
The amount of overhead associated with an SA is an important design consideration. An individual
SA currently updates its statistics every 5 seconds. (This time interval is a compromise between
timeliness and overhead.) This process takes about 5 ms on a Sun Sparcstation 5, and so corresponds
to an extra CPU load of 0.1% [24]. The process of reading the task's CPU usage adds 581 µs per task
every time that the SA updates its statistics (every 5 s). Adding the reading of detailed usage information
therefore adds about 0.01% CPU load per task. For example, an SA supervising 5 MPI
tasks will add a CPU load of 0.15%.
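For concreteness, these figures follow from simple arithmetic: 5 ms of work every 5 s is 5/5000 = 0.1% of one CPU; 581 µs per task every 5 s is roughly 0.01% per task; and 0.1% + 5 x 0.01% gives the quoted 0.15% for an SA supervising 5 tasks.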
When an MPI program is launched, individual pieces or "tasks" are allocated to available machines
and migrated as needed. The SA's and the MA communicate to maintain awareness of the
state of all running programs. The structure is diagrammed below in Figure 2.
Figure 2: Structure of Hector Running MPI Programs. The diagram shows slave allocators exchanging performance information, system information, and commands with the master allocator and with the other slave allocators.
Key design features and the design process are described below in section II and benchmarks and
tests that measure Hector's performance are described in section III. This paper concludes with a
discussion of future plans.
II. GOALS, OBSTACLES, AND ACCOMPLISHMENTS
A. Ease of Use
A system must be easy to use if it is to gain widespread acceptance. In this context, ease of use
can be supported two different ways. First, adherence to existing, widely accepted standards allows
programmers to use the environment with a minimal amount of extra training. Second, the complexities
of task allocation and migration and of fault tolerance should be "hidden" from unsophisticated
scientific programmers. That is, scientific programmers should be able to write their programs and
submit them to the resource management system without having to provide additional information
about their program.
1. Using Existing Standards
Hector runs on existing workstations and SMP's using existing operating systems, currently Sun
systems running SunOS or Solaris and SGI systems running Irix. Several parts of the system, such as
task migration and correctness of socket communications, would be simpler to support if modifications
were made to the operating system. However, this would dramatically limit the usefulness of
the system in using existing resources, and so the decision was made not to modify the operating
system. The MPI and PVM standards provide architecture-independent parallel coding capability
in both C and FORTRAN. MPI and PVM are supported on a wide and growing body of parallel
architectures ranging from networks of workstations to high-end SMP's and parallel mainframes.
Since these represent systems that have gained and are gaining widespread acceptance, there already
exists a sizable body of programmers that can use it. Hector supports MPI as its coding standard.
2. Total Transparency of Task Allocation and Fault Tolerance
Experience at the Mississippi State NSF Engineering Research Center indicates that most scientific
programmers are unwilling (or unable) to provide such information as program "size", estimated
rrun-time, or communications topology. This situation exists for two reasons. First, such programmers
are solving a physical problem and so programming is a means to an end. Second, they
may not have enough detailed knowledge about the internal workings of computers to provide information
useful to computer engineers and scientists.
Hector is therefore designed to operate with no a priori knowledge of the program to be
executed. This considerably complicates the task allocation process, but is an almost-necessary step
in order to promote "transparency" of task allocation to the programmer and, as a result, ease of use
to the scientific programmer. Not currently supported, future versions of Hector may be able to
benefit from user-supplied a priori information.
A new implementation of MPI, named MPI-TM, has been created to support task migration [25]
and fault tolerance. MPI-TM is based on the MPICH implementation of MPI [4]. In order to run
with these features, a programmer merely has to re-link the application with the Hector-modified
version of MPI. The modified MPI implementation and the Hector central decision-maker handle
allocation and migration automatically. The programmer simply writes a "normal" MPI program
and submits it to Hector for execution.
Hector exists as the MPI-TM library and a collection of executables. The library is linked with
applications and provides a self-migration facility, a complete MPI implementation, and an interface
to the run-time system. The executables include the SA, MA, a text-based command-line interface
to the MA, and a rudimentary Motif-based GUI. Its installation is roughly as complicated as
installing a new MPI implementation and a complete applications package.
3. Support for Multiple Platforms
Hector is supported on Sun computers running SunOS and Solaris and on SGI computers running
Irix. The greatest obstacle under Solaris is its dynamic linker which, due to its ability to link at
run-time, can create incompatible versions of the same executable file. This creates the undesirable
situation that migration is impossible between nearly, but not completely, identical machines, and
has the consequence of dividing the Sun computers into many, smaller clusters.
This situation exists because of the combination of two factors. First, Hector performs automat-
ic, programmer-transparent task migration without compiler modifications. Thus it cannot move
pointers and must treat the program's state as an unalterable binary image. Second, dynamically
linked programs may map system libraries and their associated data segments to different virtual
addresses in runs of one program on different machines. The solution adopted by Condor is to re-write
the linker (more accurately, to replace the Solaris dynamic linker with a custom-written one) to
make migration of system library data segments possible [26]. This option is under consideration in
Hector, but is not currently supported.
B. Task Migration
1. Correct State Transfer
The state of a running program, in a Unix environment, can be considered in six parts. First, the
actual program text may be dynamically linked, and has references to data that may be statically or
dynamically located. Second, the program's static data is divided into initialized and uninitialized
sections. Third, any use of dynamically allocated data is stored in "the heap". Fourth, the program's
stack grows as subroutines and functions are called, and is used for locally visible data and dynamic
data storage. Fifth, the CPU contains internal registers, usually used to hold results of intermediate
calculations. Sixth, the Unix kernel maintains some system-level information about the program,
such as file descriptors. This is summarized below in Figure 3.
Figure 3: State of a Program During Execution. The user-visible state consists of the program text, static data, heap, stack, and CPU registers; kernel structures (in kernel memory) are not visible to the user. Wrapper functions keep track of kernel information in a place visible to the user.
The first five parts of the state can, in principle, be transferred between two communicating user-level
programs. One exception occurs when programs are dynamically linked, as parts of the program
text and data may not reside at the same virtual address in two different instantiations of the
same program. As discussed above, this matter is under investigation.
The sixth part of a program's state, kernel-related information, is more difficult to transfer because
it is "invisible" to a user-level program. This information may include file descriptors and
pointers, signal handlers, and memory-mapped files. Without kernel source code, it is almost impossible
to read these structures directly. If the operating system is unmodified, the solution is to
create "wrapper" functions that let the program keep track of its own kernel-related structures.
All user code that modifies kernel structures must pass through a "trap" interface. (Traps are the
only way user-level code can execute supervisor-level functions.) The Unix SYSCALL.H file documents
all of the system calls that use traps, and all other system calls are built on top of them. One
can create a function with the same name and arguments as a system call, such as open(). The arguments
to the function are passed into an assembly-language routine that calls the system trap proper-
ly. The remainder of the function keeps track of the file descriptor, path name, permissions, and
other such information.
The lseek() function can keep track of the location of the file pointer. Calls that change the file
pointer (such as read() and write() ) also call the instrumented lseek(), so that file pointer information
is updated automatically. This permits migrated tasks to resume reading and writing files at the proper
place.
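As a rough illustration of this bookkeeping (our own sketch; the function and structure names are hypothetical and not taken from the Hector sources, and the real wrappers keep the original system-call names and issue the underlying traps from assembly), a wrapper can record the arguments of open() and the offset maintained through lseek() in a user-visible table:

    /* Hypothetical sketch of Hector-style wrappers: only the user-visible
     * bookkeeping is illustrated; the C library calls stand in for the traps. */
    #include <fcntl.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <string.h>

    #define MAX_TRACKED_FDS 256

    struct fd_record {            /* user-visible copy of kernel file state */
        int   in_use;
        char  path[1024];
        int   flags;
        off_t offset;
    };
    static struct fd_record fd_table[MAX_TRACKED_FDS];

    int tracked_open(const char *path, int flags, mode_t mode)
    {
        int fd = open(path, flags, mode);          /* real system call */
        if (fd >= 0 && fd < MAX_TRACKED_FDS) {
            fd_table[fd].in_use = 1;
            fd_table[fd].flags  = flags;
            fd_table[fd].offset = 0;
            strncpy(fd_table[fd].path, path, sizeof(fd_table[fd].path) - 1);
        }
        return fd;
    }

    off_t tracked_lseek(int fd, off_t off, int whence)
    {
        off_t pos = lseek(fd, off, whence);         /* real system call */
        if (pos != (off_t)-1 && fd >= 0 && fd < MAX_TRACKED_FDS)
            fd_table[fd].offset = pos;              /* remember the file pointer */
        return pos;
    }

    /* After migration, the restarted process replays this table: it re-opens
     * each recorded path with the recorded flags and seeks to the recorded
     * offset, restoring the kernel state that could not be copied directly. */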
It was discovered that the MPI environment for which task migration was being added [5] also
uses signals and memory mapping. (The latter is due to the fact that gethostbyname() makes a call to
mmap.) All system calls that affect signal handling and memory mapping are replaced with wrapper
functions as well.
The task migration system requires knowledge of a running program's image in a particular operating
system, the development of a small amount of assembly language, and reliance on certain
properties pertaining to signal-handling, and these all affect the portability of the task migration sys-
tem. The assembly language is needed because this is the only way to save and restore registers and
call traps. Since the task migration routine is inside a signal handler, it is also necessary for the re-started
program to be able to exit the signal handler coherently.
Other systems that perform a similar style of system-supported task migration, such as MIST
and Condor, have also been ported to Linux, Alpha, and HP environments [27],[15]. This seems to
indicate that this style of task migration is reasonably portable among Unix-based operating sys-
tems, probably because these different operating systems have strong structural similarities. It is
interesting to note that no system-level migration support for Windows/NT-based systems has been
reported.
The exact sequence of steps involved in the actual state transfer are described in more detail in
[25]. Two tests confirm this method's speed and stability, and are described below in section III.
2. Keeping MPI Intact: A Task Migration Protocol
The state restoration process described above is not guaranteed to preserve communications on
sockets. This is because at any point in the execution of a program, fragments of messages may reside
in the kernel's buffers on either the sending or receiving side. The solution is to notify all tasks
when a single task is about to migrate. Each task that is communicating with the task under migration
sends an "end-of-channel" message to the migrating task and then closes the socket that connects
them. The tasks then mark the migrating task as "under migration" in its table of tasks, and attempts
to initiate communications will block until migration is complete. Once the task under migration
receives all of its "end-of-channel" messages, it can be assured that no messages are trapped in the
buffers. That is, it knows that all messages reside in its data segment, and so it can be migrated
"safely". Once the state has been transferred, another global update is needed so that other
tasks know its new location and know that communications can be resumed with it. Tasks that are not
migrating remain able to initiate connections and communicate with one another.
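The following fragment sketches, in simplified form, how a non-migrating task might react when it is told that peer task m is about to migrate. All names, message framing, and the task-table layout here are hypothetical illustrations, not the actual MPI-TM code:

    /* Hypothetical sketch of the end-of-channel (EOC) handshake as seen from
     * a task that is NOT migrating. */
    #include <unistd.h>

    #define MAX_TASKS 512
    #define EOC_MSG   0x454F43        /* "EOC" marker, illustrative value */

    enum task_state { TASK_UP, TASK_UNDER_MIGRATION };

    struct task_entry {
        int  sock;                    /* socket to that task, -1 if closed   */
        int  state;                   /* TASK_UP or TASK_UNDER_MIGRATION     */
        char host[256];               /* location, updated after migration   */
    };
    static struct task_entry task_table[MAX_TASKS];

    void peer_is_migrating(int m)
    {
        if (task_table[m].sock >= 0) {
            int marker = EOC_MSG;
            /* After this write nothing more is sent on the channel, so once
             * the migrating task has read up to the marker it knows no
             * message fragments remain in the kernel buffers. */
            write(task_table[m].sock, &marker, sizeof(marker));
            close(task_table[m].sock);
            task_table[m].sock = -1;
        }
        /* Any attempt to initiate communication with m now blocks until the
         * global table update announces its new location. */
        task_table[m].state = TASK_UNDER_MIGRATION;
    }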
The MPI-1.1 standard (the "original" MPI standard) only permits static task tables. That is, the
number of tasks used by a parallel program is fixed when the program is launched. (It is important to
note that the static number of tasks in a program is an MPI-1.1 limit, not a limitation of Hector. This
is also one of the important differences between PVM and MPI-1.1.) Thus updates to this table do
not require synchronization with MPI and do not "confuse" an MPI program. The MPI-2 standard
(a newer standard currently in development) permits dynamically changing task tables, but, with
proper use of critical sections, task migration will not interfere with programs written under the
MPI-2 standard.
A series of steps is needed to update the task table globally and atomically. Hector's MA and
SA's are used to provide synchronization and machine-to-machine communications during migration
and task termination. The exact sequence of steps required to synchronize tasks and update the
communication status consistently is described in detail in [25]. It should be noted that if the MA
crashes in the middle of a migration, the program will deadlock, because the MA is used for global
synchronization and to guarantee inter-task consistency.
3. Task Termination Protocol
Task termination presents another complication. If a task is migrating while or after another task
terminates, the task under migration never receives an "end-of-channel" message from the terminated
task. Two measures are taken to provide correct program behavior. First, the MA limits each
MPI program to only one migration or only one termination at a time. It can do this because of the
handshaking needed both to migrate and to terminate. Second, a protocol involving the SA's and
MA's was developed to govern task termination and is described below.
1. A task preparing to terminate notifies its SA. The task can receive and process table updates
and requests for end-of-channel (EOC) messages, but will block requests to migrate. It cannot be
allowed to migrate so that the MA can send it a termination signal.
2. The SA notifies the MA that the task is ready to terminate.
3. Once all pending migrations and terminations have finished, the MA notifies the SA that the
task "has permission" to terminate. It will then block (and enqueue) further termination and migration
requests until this termination has ended.
4. The SA notifies the task.
5. The task sends the SA a final message before exiting.
6. The SA notifies the MA that the task is exiting, and so the MA can permit other migrations
and terminations.
Notice that an improperly written program may attempt to communicate with a task after the task
has ended. In the world of message-passing-based parallel programming, this is a programmer's
mistake. Behavior of the program is undefined at this point, and the program itself will deadlock
under Hector. (The program deadlocks, not Hector.)
4. Minimizing Migration Time
The operating system already has one mechanism for storing a program's state. A core dump
creates a file that has a program's registers, data segment, and stack. The first version of state transfer
used this capability to move programs around. There are two advantages to this approach. First, it is
built into the operating system. Second, there are symbolic debuggers and other tools that can extract
useful information from core files.
There are some disadvantages to this approach. First, multiple network transfers are needed if
the disk space is shared over a network. This means that the state is actually copied multiple times.
Second, the speed of transfer is limited further by the speed of the disk and by other, unrelated programs
sharing that disk.
One way around all of these shortcomings is to transfer the state directly over the network. Originally
implemented by the MIST team [15], network state transfer overcomes these disadvantages.
The information is written over the network in slightly modified core-file format. (The only modification
is that unused stack space is not transmitted. There is no other penalty for using the SunOS
core file format.) The information is written over a network socket connection by the application
itself, instead of being written to a file by the operating system. Notice that this retains the advantage
of core-file tool compatibility. Experiments show that it is over three times faster [25], as will be
shown below.
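A minimal sketch of the idea (our own illustration; the real implementation writes a SunOS-core-compatible header and runs inside a signal handler) is simply to write each live region of the address space down an already-connected socket:

    /* Hypothetical sketch: stream the data segment and the live part of the
     * stack directly to a receiving process over a connected socket. */
    #include <sys/types.h>
    #include <unistd.h>
    #include <stddef.h>

    struct region { void *start; size_t len; };

    static int send_all(int sock, const void *buf, size_t len)
    {
        const char *p = buf;
        while (len > 0) {
            ssize_t n = write(sock, p, len);
            if (n <= 0) return -1;              /* transfer failed */
            p += n; len -= (size_t)n;
        }
        return 0;
    }

    int send_state(int sock, const struct region *regions, int nregions)
    {
        /* A real implementation first sends a core-file-style header with the
         * register contents and the address/length of each region, so the
         * receiver can map the data back to the correct virtual addresses. */
        for (int i = 0; i < nregions; i++)
            if (send_all(sock, regions[i].start, regions[i].len) < 0)
                return -1;
        return 0;
    }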
C. Automatic Resource Allocation
1. Sources of Information
Hector's overall goal is to attempt to minimize programmer overhead. In the context of awareness
of program behavior, this incurs the expense of not having access to potentially beneficial program-specific
information. This approach was used based on experiences with scientific programmers
within the authors' research center, who are unwilling to invest time and effort to use new systems
because of the perceived burden of source-code modifications. This approach dictates that
Hector be able to operate with no a priori applications knowledge, which, in turn, increases the requirement
for the depth and breadth of information that is gathered at run-time. This lack of a priori
information makes Hector's allocation decision-making more difficult. However, experiments
confirm that the information it is able to extract at run-time can improve performance, and its ability
to exploit newly idle resources is especially helpful.
2. Structure
There is a range of ways that resource allocation can be structured, from completely centrally
located to completely distributed. Hector's resource allocation uses features of both. The decision-making
portion (and global synchronization) resides in the MA and is therefore completely central-
ized. The advantage of a single, central allocation decision-maker is that it is easier to modify and
test different allocation strategies.
Since the UNIX operating system will not permit signals to be sent between hosts, it is necessary
to have an executive process running on each candidate host. Since it is necessary to have such pro-
cesses, they can be used to gather performance information as well. Thus its information-gathering
and information-execution portions are fully distributed, being carried out by the SA's.
3. Collecting Information
There are two types of information that the master allocator needs in order to make decisions.
First, it needs to know about the resources that are potentially available, such as which hosts to consider
and how powerful they are. Second, it needs to know how efficiently and to what extent these
resources are being used, such as how much external (non-Hector) load there is and how much load
the various MPI programs under its control are imposing.
The relative performance of each candidate host is determined by the slave allocator when it is
started. (It does so by running the Livermore Loops [28], which actually measure floating point per-
formance.) It is also possible for the slave allocator to measure disk availability and physical
memory size, for example. This information is transmitted to the master allocator, which maintains
its own centralized database of this information.
Current resource usage is monitored by analyzing information from the kernel of each candidate
processor. Allocation algorithms draw on idle time information, CPU time information, and the percentage
of CPU time devoted to non-Hector-related tasks. The percentage of CPU time is used to
detect "external workload", such as an interactive user logging in, which is a criterion for automatic
migration.
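As a rough illustration of the external-workload criterion, the following sketch derives the fraction of CPU time consumed by non-Hector processes from two successive samples of per-host counters. The counter fields and the sampling mechanism are assumptions; the SA obtains the underlying numbers from the kernel of each candidate host.

/* Illustrative sketch: estimate the external (non-Hector) share of CPU time
 * from two successive samples of per-host counters.  The cpu_sample fields
 * and the way they are obtained from the kernel are assumptions. */
struct cpu_sample {
    double total;    /* CPU time consumed by all processes so far (seconds) */
    double idle;     /* accumulated idle time (seconds)                     */
    double hector;   /* CPU time consumed by Hector tasks on this host      */
};

/* Fraction of the sampling interval spent on external work, in [0, 1]. */
double external_load(struct cpu_sample prev, struct cpu_sample now) {
    double busy   = now.total  - prev.total;
    double idle   = now.idle   - prev.idle;
    double hector = now.hector - prev.hector;
    double wall   = busy + idle;                /* length of the interval    */
    if (wall <= 0.0) return 0.0;
    double frac = (busy - hector) / wall;       /* non-Hector CPU share      */
    if (frac < 0.0) frac = 0.0;
    if (frac > 1.0) frac = 1.0;
    return frac;
}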
The Hector MPI library contains additional, detailed self-instrumentation that logs the amount
of computation and communication time each task expends. This data is gathered by the SA's by
using shared-memory and is forwarded to the MA. A more detailed discussion of this agent-based
approach to information-gathering, as well as testing and results, may be found in [29].
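The self-instrumentation can be pictured as a thin timing wrapper around communication calls, with the accumulated totals placed where the SA can read them. The wrapper below is only a hedged illustration; Hector's library instruments the MPI implementation internally rather than through an explicit wrapper like this, and the shared-memory plumbing is omitted.

/* Illustrative timing wrapper (not the actual Hector instrumentation): time
 * an MPI call and accumulate the result in a per-task counter that the SA
 * would read via shared memory. */
#include <mpi.h>

static double comm_seconds = 0.0;   /* per-task communication time total */

int timed_send(void *buf, int count, MPI_Datatype type,
               int dest, int tag, MPI_Comm comm) {
    double t0 = MPI_Wtime();
    int rc = MPI_Send(buf, count, type, dest, tag, comm);
    comm_seconds += MPI_Wtime() - t0;   /* charge elapsed time to communication */
    return rc;
}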
4. Making Decisions
One of the primary advantages of this performance-monitoring-based approach is its ability to
claim idle resources rapidly. As will be shown below, tests on busy workstations during the day show
that migrating to newly available resources can reduce run time and promote more effective use of
workstations. Further implementation and testing of more sophisticated allocation policies are also
under way.
D. Fault Tolerance
The ability to migrate tasks in mid-execution can be used to suspend tasks. In fact, fault tolerance
has historically been one major motivation for task migration. In effect, each task transfers its
state into a file to "checkpoint" the program. When a node failure has been detected, the files can be
used to "roll back" the program to the state of the last checkpoint. While all calculations between the
checkpoint and node failure are lost, the calculations up to the checkpoint are not, which may represent
a substantial time savings. Also, known unrelated failures and/or routine maintenance may occur
or be needed in the middle of a large program run, and so the ability to suspend tasks is helpful.
It can be shown that in order to guarantee program correctness, all tasks must be checkpointed
consistently [13]. That is, the tasks must be at a consistent point in their execution and in their message
exchange status. For example, all messages in transit must be fully received and transmission of
any new messages must be suspended. As was the case with migration and termination, a series of
steps are needed to checkpoint and to roll back parallel programs.
1. Checkpointing Protocol
The following steps are taken to checkpoint a program.
1. The MA decides to checkpoint a program for whatever reason. (This is currently supported
as a manual user command, and may eventually be done on a periodic basis.) It waits until all pending
migrations and terminations have finished, and then it notifies all tasks in the program, via the
SA's, to prepare for checkpointing.
2. The tasks send end-of-channel (EOC) messages to all connected tasks, and then receive
EOC's from all connected tasks. Again, this guarantees that there are no messages in transit.
3. Once all EOC's have been exchanged, the task notifies its SA that it is ready for checkpointing
and informs the SA of the size of its state information. This information is passed on to the MA.
4. Once the MA has received confirmation from every task, it is ready to begin the actual check-pointing
process. It notifies each task when the task is to begin checkpointing.
5. After each task finishes transmitting its state (or writing a file), it notifies the MA. Note that it
is possible for more than one task to checkpoint at a time, and experiments with the ordering of
checkpointing are described below.
6. After all tasks have checkpointed, the MA writes out a small "bookkeeping" file which contains
state information pertinent to the MA and SA's. (For example, it contains the execution time to
the point of checkpointing, so that the total execution time will be accurate if the job is rolled back to
that checkpoint.)
7. The MA broadcasts either a "Resume" or "Suspend" command to all tasks. The tasks either
resume execution or stop, respectively. The former is used to create a "backup copy" of a task in the
event of future node failure. The latter is used if it is necessary to remove a job temporarily from the
system.
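The task-side portion of steps 2 through 5 of this protocol can be sketched as follows. The end-of-channel tag, the SA interface calls, and send_state_to_server are hypothetical names used only for illustration; the real EOC exchange goes through Hector's modified MPI internals rather than plain tagged messages.

/* Illustrative sketch of a task's part in steps 2-5 of the checkpoint
 * protocol; the tag value and every helper name are hypothetical. */
#include <mpi.h>
#include <stddef.h>

#define TAG_EOC 9001                      /* assumed end-of-channel tag         */

extern int  connected[];                  /* ranks this task communicates with  */
extern int  nconnected;
extern void sa_notify_ready(size_t state_size);   /* step 3                     */
extern void sa_wait_for_go(void);                 /* step 4                     */
extern void send_state_to_server(void);           /* step 5 (or write a file)   */
extern void sa_notify_done(void);

void task_checkpoint(size_t state_size) {
    /* Step 2: drain the channels.  Send an EOC to every connected task and
     * wait for an EOC from each of them, so that no application message is
     * in transit when the state is captured. */
    for (int i = 0; i < nconnected; i++)
        MPI_Send(NULL, 0, MPI_BYTE, connected[i], TAG_EOC, MPI_COMM_WORLD);
    for (int i = 0; i < nconnected; i++)
        MPI_Recv(NULL, 0, MPI_BYTE, connected[i], TAG_EOC, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    sa_notify_ready(state_size);          /* step 3: report readiness and size  */
    sa_wait_for_go();                     /* step 4: MA orders the checkpoint   */
    send_state_to_server();               /* step 5: transmit the state         */
    sa_notify_done();                     /*         ... and confirm             */
}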
2. Rollback Protocol
The following steps are taken to roll back a checkpointed program.
1. The MA is given the name of a "checkpoint file" that provides all necessary information to
restart the program.
2. It allocates tasks on available workstations, just as if the program were being launched.
3. Based on its allocation, it notifies the SA on the first machine.
4. The SA restarts the task from the "state file", the name of which is found in the checkpoint
file and sent to the SA.
5. The task notifies its SA that it restarted properly and waits for a "table update".
6. Once the MA receives confirmation of one task's successful restart, it notifies the SA of the
next task. It continues to do this until all tasks have restarted.
Task rollback is sequential primarily for performance reasons. The file server that is reading the
checkpoints and sending them over sockets to the newly launched tasks will perform more efficiently
if only one checkpoint is sent at a time.
7. As confirmation arrives at the MA, it builds a table similar to that used by the MPI tasks
themselves. It lists the hostnames and Unix PID's of all the tasks in the parallel program. Once all
tasks have restarted, this table is broadcast to all tasks. Note that this broadcast occurs via the Hector
run-time infrastructure and is "invisible" to the MPI program. It does not use an MPI broadcast, as
MPI is inactive during rollback.
8. Each task resumes normal execution once it receives its table update, and so the entire program
is restarted.
3. The Checkpoint Server
As is the case with task migration, there are two ways to save a program's state. One way is for
each program to write directly to a checkpoint file. The other way is to launch a "checkpoint server"
on a machine with a large amount of physically mounted disk space. (The latter concept was first
implemented by the Condor group [30].) Each task transmits its state via the network directly to the
server, and the server writes the state directly to its local disk. The reason each task cannot write its
state to its local disk is obvious-if the machine crashes, the "backup copy" of the state would be lost
as well.
The checkpoint server method is expected to be faster, because it uses direct socket connections
and local disk writes, which are more efficient than writing files over a network. Note that many of
the local disk caching strategies used by systems like NFS do not work well for checkpoints, because
checkpoint files are typically written once and not read back [30]. Different, novel scheduling strategies
for checkpoint service are described and tested below.
4. Other Issues
The MA is not fault-tolerant. That is, the MA represents a single point of failure. The SA's have
been modified to terminate themselves, and the tasks running under them, if they lose contact with
the MA. (This feature was intentionally added because total termination of programs distributed
across dozens of workstations can be quite tedious unless it is automated.) If this feature is disabled,
then SA's and their tasks could continue working without the MA, although all task migrations and
job launches would cease, and job termination would deadlock. The checkpoints collected by the
system will enable a job to be restarted after the MA and SA's have been restarted, and so the check-
point-and-rollback-based fault tolerance can tolerate a fault in the run-time infrastructure.
Another approach to solve this problem, and to support more rapid fault tolerance, would be to
incorporate an existing group communication library (such as Isis or Horus [31]) and use its mes-
sage-duplication facility. One possible design is described in [32].
Means of rapid fault detection can also be added to future versions of Hector [32]. Each SA sends
performance information to the MA periodically. (The current period is 5 seconds, which may grow
as larger tests are performed.) If the SA does not send a performance update after some suitable
timeout, it can be assumed that the node is not running properly, and all jobs on that node can be
rolled back. This strategy will detect heavily overloaded nodes as well as catastrophically failed
nodes.
III. BENCHMARKS AND TESTS
A. Ease of Use
As an example, an existing computational fluid dynamics simulation was obtained, because it
was large and complex, having a total data size of around 1 GByte and about 13,000 lines of Fortran
source code. The simulation had already been parallelized, coded in MPI, and tested on a parallel
computer for correctness. With no modifications to the source code, the program was relinked and
run completely and correctly under Hector. This highlights its ability to run "real", existing MPI
programs.
B. Task Migration
Two different task migration mechanisms were proposed, implemented, and tested. The first
used "core dump" to write out a program's state for transfer to a different machine. The second
transferred the information directly over a socket connection. In order to compare their relative
speeds, tasks of different sizes were migrated a number of times each between two Sparcstation 10s connected by
ethernet. The tests were run during normal daily operations at the
Engineering Research Center [25]. The results are shown below in Figure 4.
Figure 4: Time to migrate tasks of different sizes (time to migrate, in seconds, versus program size in kbytes, for core file transfer and network transfer)
Consider, for example, the program of size 2.4 Mbytes. The "core dump" version migrated in
about 12.6 to 13.1 seconds. This is an information transfer rate of roughly 1.5 MBit/sec. The "net-
work state transfer" version migrated in 4.1 seconds, or an information rate of about 4.6 MBit/sec.
The latter is close to the practical limit of ordinary 10 Mbit ethernet under typical conditions, which
shows that task transfer is limited by network bandwidth. Overall, network state transfer was about
3.2 times faster.
The same test program was run with minimal size and with network state transfer. The task migrated
over 400,000 times without error, and only stopped when a file server went down due to an
unrelated fault. (The file server crash had the effect of locking up the local machine, and so Hector
became locked up as well, as the fork-and-exec needed to launch a new task could not be executed.)
The overhead associated with broadcasting table updates was measured by migrating tasks inside
increasingly large parallel programs. This test was run between a Sparcstation 5 and an ether-
net-connected Sparcstation 10 on separate subnets, and was run under normal daily operating conditions
at the Engineering Research Center.
The program did the following. First, one task establishes communications with some number
of other tasks. Second, that task is migrated. As explained above, this entails sending notification,
receiving end-of-channel messages, and transferring the state of the program. Third, the newly migrated
task re-establishes communications with all other tasks. The time from ordering the task to
migrate to re-establishing all connections was measured and the test was repeated 50 times. The total
number of tasks in the program ranged from 10 to 50, and the results are shown below in Figure 5.
The crosses are actual data points, and the line is a linear regression best-fit through the data. A
linear regression was chosen because the time to migrate is a linear function of both the number of
other tasks and the size of the migration image.
Figure 5: Time to migrate a task as the number of other tasks increases (time to migrate, in seconds, versus the number of other tasks)
The slope of the best-fit line corresponds to the incremental amount of time required to notify
one other task of impending migration, wait for the EOC message, and then notify the other task that
migration was completed. Since the program size changed only negligibly as the number of tasks
increased, the time to migrate was effectively constant. The results are shown in Figure 5, and the
slope corresponds to about 75 ms per task per migration. That is, the process of notifying, receiving
end-of-channel, and re-establishing communications takes, on the average, about 75 ms over 10
Mbit/s ethernet. The Y-intercept is roughly the time it took the task to migrate, and is about 184 ms.
If the transfer took place at 10 Mbit/s, this corresponds to a migration image size of about 230
Kbytes.
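The interpretation of the fit can be checked with a few lines of arithmetic: at a nominal 10 Mbit/s, an intercept of 184 ms corresponds to roughly 230 Kbytes of transferred image, and the slope gives the incremental per-task cost. The following throwaway program (not part of Hector) simply reproduces that arithmetic.

/* Back-of-the-envelope check of the Figure 5 regression: the intercept
 * approximates the raw transfer time, the slope the per-task cost. */
#include <stdio.h>

int main(void) {
    double intercept_s = 0.184;     /* s     : best-fit Y-intercept            */
    double slope_s     = 0.075;     /* s     : per-task cost per migration     */
    double link_bits   = 10e6;      /* bit/s : nominal 10 Mbit ethernet        */

    double image_bytes = intercept_s * link_bits / 8.0;   /* about 230 Kbytes  */
    printf("implied migration image: %.0f Kbytes\n", image_bytes / 1000.0);
    printf("per-task overhead      : %.0f ms\n", slope_s * 1000.0);
    return 0;
}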
C. Automatic Resource Allocation
In one early test, an MPI program was run under two different scenarios to determine the performance
benefits of "optimal" scheduling over "first come, first served" scheduling [5]. A three-process
matrix multiply program was used, and a 1000 x 1000 matrix was supplied as the test case.
The machines used in the test ranged from a 40 MHz SPARC-based system to a 4-processor 70 MHz
Hyper-SPARC-based system. All were free of external loads, except as noted below. (The matrix
multiply is a commonly used test program because its data size is quite scalable and the program's
structure is simple and easily modified.)
First, tests were run to validate the advantages of using a processor's load as a criterion for task
migration. The third workstation was loaded with an external workload. The runtime dropped from
1316 seconds to 760 seconds when the third task was migrated onto an idle workstation.
Second, tests were run to show the advantages of migrating tasks in the event faster workstations
become available. When a faster machine became available two minutes into the computation, the run-time
dropped from 1182 seconds to 724 seconds. Algorithms used by other task allocation systems
would not have migrated the slow tasks once the program was launched.
More extensive tests were run once the master allocator was more completely developed. The
same matrix multiply program was used, and the allocation policy was switched between two varia-
tions. The first variation only migrated tasks when workstations became busy, usually from exter-
nal, interactive loads. The second variation considered migrations whenever a workstation became
idle. A test system was written that switched the MA between these two allocation policies on alternate
program runs, and a total of 108 runs were made during several days at the research center on 3
fairly well-used Sparcstation 2's.
When the "only-migrate-when-busy" policy was used, the run times varied from 164 to 1124
seconds. When the "also-migrate-when-idle" policy was used, the run times varied from 154 to
282 seconds. In the latter case, the program averaged 0.85 task migrations per run to idle worksta-
tions. While migrations to idle workstations had little effect on minimum run times, they dramatically
reduced the maximum run time. The average run time dropped from 359 seconds to 177 seconds
due to this reduction in maximum run time. The distribution of run times is shown below in Figure 6.
Figure 6: Distribution of run times for the two task migration policies (number of trials versus run time, for the first and second policies)
This improvement in run time was noticed in cases where relatively small programs were being
migrated. As programs increase in size, the penalty for migration increases. For example, runs of a
fairly large electromagnetics simulation showed a noticeable increase in run time as the number of
migrations increased [7].
One point of interest is that interactive users were sometimes unaware that their machines were also
being used to run jobs under Hector (including this paper's primary author). Most users are accustomed
to a small wait for applications such as word processors to page in from virtual memory. Since
the time to migrate tasks off of their machines was on the order of the time required to page in pro-
grams, the additional interactive delay was not noticed. It should be noted that this was only the case
for small test programs, as truly rapid migration is only practical when the program state fits within
physical memory. As was shown in tests with larger programs, once the program size exceeds physical
memory the process of migration causes thrashing [7].
More extensive testing, showing runs of a variety of applications on both Sun and SGI workstation
clusters and runs alongside actual student use can be found in [7].
D. Fault Tolerance
A series of tests were run to evaluate the relative performance of different means of creating and
scheduling checkpoints. The same matrix multiply program was used to test different combinations
of strategies. The matrices used were 100x100, 500x500, and 1000x1000 for the small, medium,
and large checkpoint sizes, respectively. The total checkpoint sizes were 2 Mbytes, 11 Mbytes, and
Mbytes, respectively. The following tests were also run during the day and under normal network
usage conditions.
First, two strategies were tested that involved writing directly to a checkpoint file using NFS. All
tasks were commanded to checkpoint simultaneously ("All-at-Once") or each task was commanded
to checkpoint by itself ("One-at-a-Time").
Second, three server-based strategies were evaluated for overall effectiveness. First, the server
was notified of a single task to checkpoint, and so tasks were checkpointed one at a time. Second, the
server was notified of all tasks and their sizes, but only one task at a time was permitted to checkpoint.
(This is labelled "combined" in the figures below.) Third, the server was notified of all tasks, and
then all tasks were ordered to checkpoint simultaneously.
The individual tasks were instrumented to report the amount of time required to send the data
either to a file (over NFS) or to a checkpoint server. The size of the program state and the time to
checkpoint the state were used to calculate bandwidth usage, and are shown below for both cases-
without (Figure 7) and with (Figure 8) a checkpoint server.
Strategy Small Medium Large
All-at-Once 2.21 1.46 1.45
One-at-a-Time 4.83 5.12 5.55
Figure 7: Bandwidth Usage of Different Strategies with No Server (MBit/sec)
Strategy Small Medium Large
All-at-Once 2.37 1.55 1.71
Combined 6.57 7.15 7.25
One-at-a-Time 7.26 7.47 7.20
Figure 8: Bandwidth Usage of Different Strategies with a Server (MBit/sec)
The results are easy to understand. First, using NFS instead of a server involves some bandwidth
penalty. Using a server permits data to be transferred without additional protocol overheads. Se-
cond, the all-at-once strategy clogs the ethernet and substantially reduces its performance. The
combined and one-at-a-time strategies have essentially identical performance from the point of
view of the parallel program because each task sends its data to the server without interference from
other tasks.
The server works by forking a separate process for each task to be checkpointed. The process
"mallocs" space for the entire state, reads the state over a socket, and writes the state to disk. (It
knows the size of the program state because the task notifies its SA of the size when it indicates readiness
to checkpoint in step 3 of the checkpoint protocol. This information is passed, via the MA, to the
checkpoint server.) Data were captured by the checkpoint server itself, and the results are shown
below in Figure 9. The individual points are some of the actual data points, and the lines are linear
regression curves fit through the actual data. Each policy was tested repeatedly over the range of sizes
shown on the graph.
Figure 9: Time to checkpoint the program, in seconds, versus the total size of the program checkpoint in Kbytes, under the different checkpoint server policies (One-at-a-Time, Combined, and All-at-Once)
The reciprocal of the slope of each line corresponds to the bandwidth obtained by that strategy.
They are summarized in Figure 10.
Strategy Bandwidth (MBit/sec)
One-at-a-Time 5.43
Combined 6.86
All-at-Once 4.87
Figure 10: Impact of Different Server Strategies
The combined strategy makes best use of available bandwidth, and the "all-at-once" strategy
has the worst performance. This is typical for ethernet-based communications. In fact, the penalty
shown here is less than that expected by the authors. The conclusion is that other traffic on the network
at the time of program runs already has a significant impact, and that using the "all-at-once"
strategy compounds the problem.
Note that the time to checkpoint, as measured by the server and shown in Figure 9, includes the
time to fork processes and malloc space, the time to read the data from the network, and the time to
write it to disk. The "bandwidth" is actually a function of all of these times. This is the reason why
the combined strategy shows improved performance-the process of forking new processes and
making space in memory for the state images is done in parallel. Also notice that when the time
required to write to disk and fork tasks is added, the bandwidth of the combined strategy only drops
slightly, which indicates that the overhead of the fork-and-malloc process is small.
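The per-task handling in the server described above (fork a child, allocate space for the entire state, read it from the socket, write it to local disk) can be sketched as below. Socket setup, error handling, and file naming are simplified, and the function and parameter names are illustrative rather than taken from the actual server.

/* Illustrative sketch of the checkpoint server's per-task work: fork a child
 * that buffers the entire state from the socket and writes it to local disk.
 * The state size is known in advance because each task reports it to its SA
 * in step 3 of the checkpoint protocol. */
#include <stdlib.h>
#include <unistd.h>
#include <fcntl.h>
#include <sys/types.h>

static int read_all(int fd, char *buf, size_t len) {
    while (len > 0) {
        ssize_t n = read(fd, buf, len);
        if (n <= 0) return -1;                /* error or premature close      */
        buf += n; len -= (size_t)n;
    }
    return 0;
}

void serve_one_checkpoint(int conn_fd, size_t state_size, const char *path) {
    if (fork() != 0) {                        /* parent: return to accept()    */
        close(conn_fd);
        return;
    }
    char *state = malloc(state_size);         /* space for the whole image     */
    if (state != NULL && read_all(conn_fd, state, state_size) == 0) {
        int out = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0600);
        if (out >= 0) {
            write(out, state, state_size);    /* local disk write (return
                                                 value unchecked in sketch)    */
            close(out);
        }
    }
    free(state);
    close(conn_fd);
    _exit(0);                                 /* child exits                   */
}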
IV. FUTURE WORK AND CONCLUSIONS
A. Testing of Dynamic Load-Balancing
Some initial concepts for dynamic load balancing have been implemented. These are undergoing
testing, and will probably be modified as testing proceeds. Since only one function is needed to
optimize tasks, it is very simple to provide alternate optimization strategies for testing.
B. Job Priority
Currently all jobs run at the same priority level. Some concept of "priority" needs to be added in
future versions to permit job scheduling more like that found in industry. The combination of priority-based
decision making and built-in fault tolerance will be explored as the basis of a fault-toler-
ant, highly reliable computing system.
C. Conclusions
Running parallel programs efficiently on networked computer resources requires architecture-independent
coding standards, task migration, resource awareness and allocation, and fault toler-
ance. By modifying an MPI implementation and developing task migration technology, and by developing
a complete run-time infrastructure to gather performance information and make optimiza-
tions, it became possible to offer these services transparently to unmodified MPI programs. The
resulting system features automatic and fully dynamic load-balancing, task migration, the ability to
release and reclaim resources on demand, and checkpoint-and-rollback-based fault tolerance. It
has been demonstrated to work with large, complex applications on Sun and SGI workstations and
SMP's. Tests confirm that its ability to reclaim idle resources rapidly can have beneficial effects on
program performance. A novel checkpoint server protocol was also developed and tested, making
fault tolerance more efficient. Since it combines and supports, to varying degree, all of the features
needed for NOW parallel computing, it forms the basis for additional work in distributed computing.
V.
--R
"Visualization and debugging in a heterogeneous environment,"
PVM: Parallel Virtual Machine, Cambridge, Mass.: The MIT Press
"Monitors, Message, and Cluster: The p4 Parallel Program System,"
"Hector: Automated Task Allocation for MPI"
"Cluster Computing Review"
"Using Hector to Run MPI Programs over Networked Workstations"
"Experiences with the Hector Multiprocessor"
"Experiences with the Hector Multiprocessor"
"Hector: Distributed Objects in Python"
"The Condor Distributed Processing System"
"Providing Resource Management Services to Parallel Applica- tions"
"Consistent Checkpoints of PVM Applications"
"The Prospero Resource Manager: A Scalable Framework for Processor Allocation in Distributed Systems"
"MIST: PVM with Transparent Migration and Checkpointing"
"MPVM: A Migration Transparent Version of PVM"
DQS User Manual - DQS Version 3.1
"Portable Checkpointing and Recovery"
"Memory Space Representation for Heterogeneous Network Process Migration"
"Theory and Practice in Parallel Job Scheduling"
"Using Runtime Measured Workload Characteristics in Parallel Processing Scheduling"
"A Historical Application Profiler for Use by Parallel Schedulers"
"Utilization and Predictability in Scheduling the IBM SP2 with Backfilling"
"User-Transparent Run-Time Performance Optimization"
"A Task Migration Implementation for the Message-Passing Interface"
"Checkpoint and Migration of UNIX Processes in the Condor Distributed Processing System"
Page, http://www.
The Livermore Fortran Kernels: A Computer Test Of The Numerical Performance Range
"An Agent-Based Architecture for Dynamic Resource Management"
"Managing Checkpoints for Parallel Programs"
"Software-based Replication for Fault Tolerance"
"An Architecture for Rapid Distributed Fault Tolerance"
--TR
--CTR
Samuel H. Russ , Katina Reece , Jonathan Robinson , Brad Meyers , Rajesh Rajan , Laxman Rajagopalan , Chun-Heong Tan, Hector: An Agent-Based Architecture for Dynamic Resource Management, IEEE Concurrency, v.7 n.2, p.47-55, April 1999
Angela C. Sodan , Lin Han, ATOP-space and time adaptation for parallel and grid applications via flexible data partitioning, Proceedings of the 3rd workshop on Adaptive and reflective middleware, p.268-276, October 19-19, 2004, Toronto, Ontario, Canada
Hyungsoo Jung , Dongin Shin , Hyuck Han , Jai W. Kim , Heon Y. Yeom , Jongsuk Lee, Design and Implementation of Multiple Fault-Tolerant MPI over Myrinet (M^3), Proceedings of the 2005 ACM/IEEE conference on Supercomputing, p.32, November 12-18, 2005
Kyung Dong Ryu , Jeffrey K. Hollingsworth, Exploiting Fine-Grained Idle Periods in Networks of Workstations, IEEE Transactions on Parallel and Distributed Systems, v.11 n.7, p.683-698, July 2000 | task migration;load balancing;fault tolerance;resource allocation;parallel computing |
292856 | A Fault-Tolerant Dynamic Scheduling Algorithm for Multiprocessor Real-Time Systems and Its Analysis. | AbstractMany time-critical applications require dynamic scheduling with predictable performance. Tasks corresponding to these applications have deadlines to be met despite the presence of faults. In this paper, we propose an algorithm to dynamically schedule arriving real-time tasks with resource and fault-tolerant requirements on to multiprocessor systems. The tasks are assumed to be nonpreemptable and each task has two copies (versions) which are mutually excluded in space, as well as in time in the schedule, to handle permanent processor failures and to obtain better performance, respectively. Our algorithm can tolerate more than one fault at a time, and employs performance improving techniques such as 1) distance concept which decides the relative position of the two copies of a task in the task queue, 2) flexible backup overloading, which introduces a trade-off between degree of fault tolerance and performance, and resource reclaiming, which reclaims resources both from deallocated backups and early completing tasks. We quantify, through simulation studies, the effectiveness of each of these techniques in improving the guarantee ratio, which is defined as the percentage of total tasks, arrived in the system, whose deadlines are met. Also, we compare through simulation studies the performance our algorithm with a best known algorithm for the problem, and show analytically the importance of distance parameter in fault-tolerant dynamic scheduling in multiprocessor real-time systems. | Introduction
Real-time systems are defined as those systems in which the correctness of the system depends not only
on the logical result of computation, but also on the time at which the results are produced [22]. (This work was supported by the Indian National Science Academy and the Department of Science and Technology.) Real-time systems are broadly classified into three categories, namely, (i) hard real-time systems, in which the
consequences of not executing a task before its deadline may be catastrophic, (ii) firm real-time systems,
in which the result produced by the corresponding task ceases to be useful as soon as the deadline expires,
but the consequences of not meeting the deadline are not very severe, and (iii) soft real-time systems,
in which the utility of results produced by a task with a soft deadline decreases over time after the
deadline expires [25]. Examples of hard real-time systems are avionic control and nuclear plant control.
Online transaction processing applications such as airline reservation and banking are examples for firm
real-time systems, and telephone switching system and image processing applications are examples for
soft real-time systems.
The problem of scheduling of real-time tasks in multiprocessor systems is to determine when and on
which processor a given task executes [22, 25]. This can be done either statically or dynamically. In
static algorithms, the assignment of tasks to processors and the time at which the tasks start execution
are determined a priori. Static algorithms are often used to schedule periodic tasks with hard deadlines.
However, this approach is not applicable to aperiodic tasks whose characteristics are not known a priori.
Scheduling such tasks require a dynamic scheduling algorithm.
In dynamic scheduling, when a new set of tasks (which correspond to a plan) arrive at the system, the
scheduler dynamically determines the feasibility of scheduling these new tasks without jeopardizing the
guarantees that have been provided for the previously scheduled tasks. A plan is typically a set of actions
that has to be either done fully or not at all. Each action could correspond to a task and these tasks
may have resource requirements, and possibly may have precedence constraints. Thus, for predictable
executions, schedulability analysis must be done before a task's execution is begun. For schedulability
analysis, tasks' worst case computation times must be taken into account. A feasible schedule is generated
if the timing constraints, and resource and fault-tolerant requirements of all the tasks in the new set can
be satisfied, i.e., if the schedulability analysis is successful. If a feasible schedule cannot be found, the
new set of tasks (plan) is rejected and the previous schedule remains intact. In case of a plan getting
rejected, the application might invoke an exception task, which must be run, depending on the nature
of the plan. This planning allows admission control and results in reservation-based system. Tasks are
dispatched according to this feasible schedule. Such a type of scheduling approach is called dynamic
planning based scheduling [22], and Spring kernel [27] is an example for this. In this paper, we use
dynamic planning based scheduling approach for scheduling of tasks with hard deadlines.
The demand for more and more complex real-time applications, which require high computational
needs with timing constraints and fault-tolerant requirements, have led to the choice of multiprocessor
systems as a natural candidate for supporting such real-time applications, due to their potential for high
performance and reliability. Due to the critical nature of the tasks in a hard real-time system, it is
essential that every task admitted in the system completes its execution even in the presence of failures.
Therefore, fault-tolerance is an important issue in such systems. In real-time multiprocessor systems,
fault-tolerance can be provided by scheduling multiple versions of tasks on different processors. Four
different models (techniques) have evolved for fault-tolerant scheduling of real-time tasks, namely, (i)
Triple Modular Redundancy (TMR) model [12, 25], (ii) Primary Backup (PB) model [3], (iii) Imprecise
Computational (IC) model [11], and (iv) (m; k)-firm deadline model [23].
In the TMR approach, three versions of a task are executed concurrently and the results of these
versions are voted on. In the PB approach, two versions are executed serially on two different processors,
and an acceptance test is used to check the result. The backup version is executed (after undoing the
effects of primary version) only if the output of the primary version fails the acceptance test, either
due to processor failure or due to software failure. In the IC model, a task is divided into mandatory
and optional parts. The mandatory part must be completed before the task's deadline for acceptable
quality of result. The optional part refines the result. The characteristics of some real-time tasks can be
better characterised by (m; k)-firm deadlines in which m out of any k consecutive tasks must meet their
deadlines. The IC model and (m; k)-firm task model provide scheduling flexibility by trading off result
quality to meet task deadlines.
Applications such as automatic flight control and industrial process control require dynamic scheduling
with fault-tolerant requirements. In a flight control system, the controllers often activate tasks depending
on what appears on their monitor. Similarly, in an industrial control system, the robot which monitors
and controls various processes may have to perform path planning dynamically which results in activation
of aperiodic tasks. Another example, taken from [3], is a system which monitors the condition of several
patients in the intensive care unit (ICU) of a hospital. The arrival of patients to the ICU is dynamic.
When a new patient (plan) arrives, the system performs admission test to determine whether the new
patient (plan) can be admitted or not. If not, alternate action like employing a nurse can be carried out.
The life criticality of such an application demands that the desired action to be performed even in the
presence of faults.
In this paper, we address the scheduling of dynamically arriving real-time tasks with PB fault-tolerant
requirements on to a set of processors and resources in such a way that the versions of the tasks are
feasible in the schedule. The objective of any dynamic real-time scheduling algorithm is to improve the
guarantee ratio [24] which is defined as the percentage of tasks, arrived in the system, whose deadlines
are met.
The rest of the paper is structured as follows. Section 2 discusses the system model. In Section 3,
related work and motivations for our work are presented. In Section 4, we propose an algorithm for
fault-tolerant scheduling of real-time tasks, and also propose some enhancements to it. In Section 5, the
performance of the proposed algorithm together with its enhancements is studied through simulation,
and also compared with an algorithm proposed recently in [3]. Finally, in Section 6, we make some
concluding remarks.
System Model
In this section, we first present the task model, followed by scheduler model, and then some definitions
which are necessary to explain the scheduling algorithm.
2.1 Task Model
1. Tasks are aperiodic, i.e., the task arrivals are not known a priori. Every task T i has the attributes:
arrival time (a_i), ready time (r_i), worst case computation time, and deadline (d_i).
2. The actual computation time of a task T i , denoted as c i , may be less than its worst case computation
time due to the presence of data dependent loops and conditional statements in the task code, and
due to architectural features of the system such as cache hits and dynamic branch prediction.
The worst case execution time of a task is obtained based on both static code analysis and the
average of execution times under possible worst cases. There might be cases in which the actual
computation time of a task may be more than its worst case computation time. There are techniques
to handle such situations. One such technique is "Task Pair" scheme [28] in which the worst case
computation time of a task is added with the worst case computation time of an exception task.
If the actual computation time exceeds the (original) worst case computation time, the exception
task is invoked.
3. Resource constraints: A task might need some resources such as data structures, variables, and
communication buffers for its execution. Each resource may have multiple instances. Every task
can have two types of accesses to a resource: a) exclusive access, in which case, no other task can
use the resource with it or b) shared access, in which case, it can share the resource with another
task (the other task also should be willing to share the resource). Resource conflict exists between
two tasks T i and T j if both of them require the same resource and one of the accesses is exclusive.
4. Each task T i has two versions, namely, primary copy and backup copy. The worst case computation
time of a primary copy may be more than that of its backup. The other attributes and resource
requirements of both the copies are identical.
5. Each task encounters at most one failure either due to processor failure or due to software failure,
i.e., if the primary fails, its backup always succeeds.
6. Tasks are non-preemptable, i.e., when a task starts execution on a processor, it finishes to its
completion.
7. Tasks are not parallelizable, which means that a task can be executed on only one processor. This
necessitates that the sum of the worst case computation times of the primary and backup copies be less
than or equal to (d_i - r_i), so that both the copies of a task can be scheduled within this interval.
8. The system has multiple identical processors which are connected through a shared medium.
9. Faults can be transient or permanent, and are independent, i.e., correlated failures are not considered.
10. There exists a fault-detection mechanism such as acceptance tests to detect both processor failures
and software failures.
Most complex real-time applications have both periodic and aperiodic tasks. The dynamic planning
based scheduling approach used in this paper is also applicable to such real-time applications as described
below. The system resources (including processors) are partitioned into two sets, one for periodic tasks
and the other for aperiodic tasks. The periodic tasks are scheduled by a static table-driven scheduling
approach [22] onto the resource partition corresponding to periodic tasks and the aperiodic tasks are
scheduled by a dynamic planning based scheduling approach [21, 22, 13] onto the resource partition
corresponding to aperiodic tasks.
Tasks may have precedence constraints. Ready times and deadlines of tasks can be modified such
that they comply with the precedence constraints among them. Dealing with precedence constraints
is equivalent to working with the modified ready times and deadlines [11]. Therefore, the proposed
algorithm can also be applied to tasks having precedence constraints among them.
2.2 Scheduler Model
In a dynamic multiprocessor scheduling, all the tasks arrive at a central processor called the scheduler,
from where they are distributed to other processors in the system for execution. The communication
between the scheduler and the processors is through dispatch queues. Each processor has its own dispatch
queue. This organization, shown in Fig.1, ensures that the processors will always find some tasks (if there
are enough tasks in the system) in the dispatch queues when they finish the execution of their current
tasks. The scheduler will be running in parallel with the processors, scheduling the newly arriving tasks,
and periodically updating the dispatch queues. The scheduler has to ensure that the dispatch queues
are always filled to their minimum capacity (if there are tasks left with it) for this parallel operation.
This minimum capacity depends on the worst case time required by the scheduler to reschedule its tasks
upon the arrival of a new task [24]. The scheduler arrives at a feasible schedule based on the worst case
computation times of tasks satisfying their timing, resource, and fault-tolerant constraints.
The use of one scheduler for the whole system makes the scheduler a single point of failure. The
scheduler can be made fault-tolerant by employing modular redundancy technique in which a backup
scheduler runs in parallel with the primary scheduler and both the schedulers perform an acceptance
test. The dispatch queues will be updated by one of the schedulers which passes the acceptance test. A
simple acceptance test for this is to check whether each task in the schedule finishes before its deadline
satisfying its requirements.
Fig. 1: Parallel execution of scheduler and processors (tasks enter the task queue at the scheduler; the current feasible schedule feeds the per-processor dispatch queues, each kept at a minimum length)
2.2.1 Resource Reclaiming
Resource reclaiming [24] refers to the problem of utilizing resources (processors and other resources) left
unused by a task (version) when: (i) it executes less than its worst case computation time, or (ii) it is
deleted from the current schedule. Deletion of a task version takes place when extra versions are initially
scheduled to account for fault tolerance, i.e., in the PB fault-tolerant approach, when the primary version
of a task completes its execution successfully, there is no need for the temporally redundant backup
version to be executed and hence it can be deleted.
Each processor invokes a resource reclaiming algorithm at the completion of its currently executing
task. If resource reclaiming is not used, processors execute tasks strictly based on the scheduled start
times as per the feasible schedule, which results in making the resources remain unused, thus reducing the
guarantee ratio. The scheduler is informed with the time reclaimed by the reclaiming algorithm so that
the scheduler can schedule the newly arriving tasks correctly and effectively. A protocol for achieving this
is suggested in [24]. Therefore, any dynamic scheduling scheme should have a scheduler with associated
resource reclaiming.
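A minimal sketch of the backup-deallocation part of reclaiming is shown below: when a primary passes its acceptance test, the slot reserved for its backup is released and the scheduler is informed of the reclaimed interval. The data structure and the notification hook are assumptions; reclaiming from resource-constrained tasks additionally requires restriction-vector style checks of the kind referred to later in the paper.

/* Illustrative sketch of deallocating a backup once its primary succeeds;
 * the BackupSlot layout and the scheduler notification are hypothetical. */
typedef struct {
    int    task_id;
    double st, ft;        /* scheduled start/finish time of the backup copy */
    int    proc;          /* processor on which the backup is scheduled     */
    int    deallocated;
} BackupSlot;

extern void notify_scheduler_reclaimed(int proc, double from, double to);

/* Called by the dispatcher when the primary of the task passes its
 * acceptance test: the backup will never run, so its slot is reclaimed. */
void on_primary_success(BackupSlot *b) {
    if (b->deallocated) return;
    b->deallocated = 1;
    notify_scheduler_reclaimed(b->proc, b->st, b->ft);   /* reuse [st, ft)  */
}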
3 Background
In this section, we first discuss the existing work on fault-tolerant scheduling, and then highlight the
limitations of these works which form the motivation for our work.
3.1 Related Work
Many practical instances of scheduling problems have been found to be NP-complete [2], i.e., it is
believed that there is no optimal polynomial-time algorithm for them. It was shown in [1] that there
does not exist an algorithm for optimally scheduling dynamically arriving tasks with or without mutual
exclusion constraints on a multiprocessor system. These negative results motivated the need for heuristic
approaches for solving the scheduling problem.
Recently, many heuristic scheduling algorithms [21, 13] have been proposed to dynamically schedule
a set of tasks whose computation times, deadlines, and resource requirements are known only on arrival.
For multiprocessor systems with resource constrained tasks, a heuristic search algorithm, called myopic
scheduling algorithm, was proposed in [21]. The authors of [21] have shown that the integrated heuristic
used there which is a function of deadline and earliest start time of a task performs better than simple
heuristics such as earliest deadline first, least laxity first, and minimum processing time first.
In [10], a PB scheme has been proposed for preemptively scheduling periodic tasks in a uniprocessor
system. This approach guarantees that (i) a primary copy meets its deadline if there is no failure and
(ii) its backup copy will run by the deadline if there is a failure. To achieve this, it precomputes tree
of schedules (where the tree can be encoded within a table-driven scheduler) by considering all possible
failure scenarios of tasks. This scheme is applicable to simple periodic tasks, where the periods of the
tasks are multiples of the smallest period. The objective of this approach is to increase the number of
primary task executions.
Another PB scheme is proposed in [19] for scheduling periodic tasks in a multiprocessor system. In
this strategy, a backup schedule is created for each set of tasks in the primary schedule. The tasks are
then rotated such that primary and backup schedules are on different processors and they do not overlap.
This approach tolerates up to one failure in the worst case, by using double the number of processors
used in the corresponding non-fault-tolerant schedule.
In [7], processor failures are handled by maintaining contingency or backup schedules. These schedules
are used in the event of a failure. The backup schedules are generated assuming that an optimal
schedule exists and the schedule is enhanced with the addition of "ghost" tasks, which function primarily
as standby tasks. The addition of tasks may not be possible in some schedules.
A PB based algorithm with backup overloading and backup deallocation has been proposed recently
[3] for fault-tolerant dynamic scheduling of real-time tasks in multiprocessor systems, which we call as
backup overloading algorithm. The backup overloading algorithm allocates more than a single backup in
a time interval (where time interval of a task is the interval between scheduled start time and scheduled
finish time of the task) and deallocates the resources unused by the backup copies in case of fault-free
operation. Two or more backups can overlap in the schedule (overloading) of a processor, if the primaries
of these backups are scheduled on different processors. The concept of backup overloading is valid under
the assumption that there can be at most one fault at any instant of time in the entire system. In [3],
it was shown that backup deallocation is more effective than the backup overloading. The paper also
provides a mechanism to determine the number of processors required to provide fault-tolerance in a
dynamic real-time system. Discussion about other related work on fault-tolerant real-time scheduling
can be found in [3].
3.2 Motivations for Our Work
The algorithms discussed in [7, 19] are static algorithms and cannot be applied to dynamic scheduling,
considered in this paper, due to their high complexities. The algorithm discussed in [10] is for scheduling
periodic tasks in uniprocessor systems and cannot be extended to the dynamic scheduling as it expects
the tasks to be periodic. Though the algorithm proposed in [3] is for dynamic scheduling, it does not
consider resource constraints among tasks which is a practical requirement in any complex real-time
system, and assumes at most one failure at any instant of time, which is pessimistic.
The algorithm in [3] is able to deallocate a backup when its primary is successful and uses this
reclaimed (processor) time to schedule other tasks in a greedy manner. The resource reclaiming in such
systems is simple and is said to be work-conserving which means that it never leaves a processor idle
if there is a dispatchable task. But, resource reclaiming on multiprocessor systems with resource constrained
tasks is more complicated. This is due to the potential parallelism provided by a multiprocessor
and potential resource conflicts among tasks. When the actual computation time of a task differs from
its worst case computation time in a non-preemptive multiprocessor schedule with resource constraints,
run-time anomalies may occur [4] if a work-conserving reclaiming scheme is used. These anomalies may
cause some of the already guaranteed tasks to miss their deadlines. In particular, one cannot use a work-conserving
scheme for resource reclaiming from resource constrained tasks. Moreover, the algorithm
proposed in [3] does not reclaim resources when the actual computation times of tasks are less than their
worst case computation times, which is true for many tasks. But, resource reclaiming in such cases is
very effective in improving the guarantee ratio [24].
The Spring scheduling approach [27] schedules dynamically arriving tasks with resource requirements
and reclaims resources from early completing tasks and does not address the fault-tolerant requirements
explicitly.
Our algorithm works within the Spring scheduling approach and builds fault-tolerant solutions around
it to support PB based fault-tolerance. To the best of our knowledge, ours is the first work which addresses
the fault-tolerant scheduling problem in a more practical model, which means that our algorithm
handles resource constraints among tasks and reclaims resources both from early completing tasks and
deallocated backups. The performance of our algorithm is compared with the backup overloading algorithm
in Section 5.5.
4 The Fault-tolerant Scheduling Algorithm
In this section, we first define some terms and then present our fault-tolerant scheduling algorithm which
uses these terms.
4.1 Terminology
Definition 1: The scheduler fixes a feasible schedule S. The feasible schedule uses the worst case
computation time of a task for scheduling it and ensures that the timing, resource, and fault-tolerant
constraints of all the tasks in S are met. A partial schedule is one which does not contain all the tasks.
Definition 2: st(T_i) is the scheduled start time of task T_i, which satisfies r_i <= st(T_i); ft(T_i) is the scheduled finish time of task T_i, which satisfies r_i < ft(T_i) <= d_i.
Definition 3: EAT_k^s (EAT_k^e) is the earliest time at which the resource R_k becomes available for shared (exclusive) usage [21].
Definition 4: Let P be the set of processors, and R_i be the set of resources requested by task T_i. The earliest start time of a task T_i, denoted as EST(T_i), is the earliest time at which its execution can be started, and is defined as EST(T_i) = max { r_i, min_{P_j in P} avail_time(P_j), max_{R_k in R_i} EAT_k^u }, where the second term denotes the earliest time at which some processor P_j is available for executing a task, and the third term denotes the maximum among the available times of the resources requested by task T_i, with u = s or e depending on whether T_i requests R_k in shared mode or in exclusive mode. (A code sketch of this computation is given at the end of this subsection.)
Definition 5: proc(T i ) is the processor to which task T i is scheduled. The processor to which task T i
should not get scheduled is denoted as exclude proc(T i ).
Definition 6: st(Pr_i) is the scheduled start time and ft(Pr_i) is the scheduled finish time of the primary copy
of a task T i . Similarly, st(Bk i ) and f t(Bk i ) denote the same for the backup copy of T i .
Definition 7: The primary and backup copies of a task T_i are said to be mutually exclusive in time, denoted as time_exclusion(T_i), if ft(Pr_i) <= st(Bk_i).
Definition 8: The primary and backup copies of a task T_i are said to be mutually exclusive in space, denoted as space_exclusion(T_i), if proc(Pr_i) != proc(Bk_i).
A task is said to be feasible in a fault-tolerant schedule if it satisfies the following conditions:
- The primary and backup copies of a task should satisfy r_i <= st(Pr_i), ft(Pr_i) <= st(Bk_i), and ft(Bk_i) <= d_i.
This is because both the copies of a task must satisfy the timing constraints and it is assumed
that the backup is executed after the failure in its primary is detected (time exclusion). Failure
detection is done through acceptance test or some other means only at the completion of every
primary copy. The time exclusion between primary and backup copies of a task can be relaxed
if the backup is allowed to execute in parallel [5, 30] (or overlap) with its primary. This is not
preferable in dynamic scheduling as discussed below.
- Primary and backup copies of a task should mutually exclude in space in the schedule. This is
necessary to tolerate permanent processor failures.
Mutual exclusion in time is very useful from the resource reclaiming point of view. If the primary
is successful, the backup need not be executed and the time interval allocated to the backup can be
reclaimed fully, if primary and backup satisfy time exclusion, thereby improving the schedulability [15].
In other words, if primary and backup of a task overlap in execution, the backup unnecessarily executes
(in part or full) even when its primary is successful. This would result in poor resource utilization,
thereby reducing the schedulability. Moreover, overlapping of primary and backup of a task introduces
resource conflicts (if the access is exclusive) among them since they have the same resource requirements
and forces them to exclude in time if only one instance of the requested resource is available at that time.
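The definitions and feasibility conditions above can be combined into a small sketch: one routine computes EST(T_i) from the ready time, processor availability, and resource EATs, and another checks the fault-tolerant feasibility of a scheduled primary/backup pair. The array layout, field names, and size limits are assumptions made for illustration only.

/* Illustrative sketch of EST (Definition 4), the strong feasibility test,
 * and the fault-tolerant feasibility conditions. */
#define MAX_P 16
#define MAX_R 32

typedef struct {
    double r, d, wc;             /* ready time, deadline, worst case time   */
    int    nres;                 /* number of resources requested           */
    int    res[MAX_R];           /* resource indices                        */
    int    excl[MAX_R];          /* 1 = exclusive access, 0 = shared        */
} TaskCopy;

double avail[MAX_P];             /* earliest time each processor is free    */
double eat_s[MAX_R], eat_e[MAX_R];  /* EAT_k^s and EAT_k^e per resource     */
int    nproc;

double est(const TaskCopy *t) {
    double p = avail[0];
    for (int j = 1; j < nproc; j++)          /* earliest available processor */
        if (avail[j] < p) p = avail[j];
    double res = 0.0;
    for (int k = 0; k < t->nres; k++) {      /* latest needed resource       */
        double e = t->excl[k] ? eat_e[t->res[k]] : eat_s[t->res[k]];
        if (e > res) res = e;
    }
    double v = t->r;                         /* max of the three terms       */
    if (p > v) v = p;
    if (res > v) v = res;
    return v;
}

int strongly_feasible_copy(const TaskCopy *t) {   /* used in step 3 below    */
    return est(t) + t->wc <= t->d;
}

/* Fault-tolerant feasibility of a scheduled primary/backup pair
 * (timing, time exclusion, and space exclusion). */
int ft_feasible(const TaskCopy *t,
                double st_pr, double ft_pr, int proc_pr,
                double st_bk, double ft_bk, int proc_bk) {
    if (st_pr < t->r || ft_bk > t->d) return 0;   /* timing constraints      */
    if (ft_pr > st_bk)                return 0;   /* time exclusion          */
    if (proc_pr == proc_bk)           return 0;   /* space exclusion         */
    return 1;
}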
4.2 The Distance Myopic Algorithm
The Spring scheduling [27] approach uses a heuristic search algorithm (called myopic algorithm [21])
for non-fault-tolerant scheduling of resource constrained real-time tasks in a multiprocessor system, and
uses Basic or Early start algorithms for resource reclaiming. One of the objectives of our work here
is to propose fault-tolerant enhancements to the Spring scheduling approach. We make the following
enhancements to the Spring scheduling to support PB based fault-tolerance:
- a notion of distance is introduced, which decides the relative difference in position between primary and backup copies of a task in the task queue.
- flexible level of backup overloading; this introduces a tradeoff between number of faults in the system and the system performance.
- use of restriction vector (RV) [15] based algorithm to reclaim resources from both deallocated backups and early completing tasks.
4.2.1 Notion of Distance
Since in our task model, every task, T i , has two copies, we place both of them in the task queue with
relative difference of Distance(Pr_i, Bk_i) positions. The primary copy of any task always precedes
its backup copy in the task queue. Let n be the number of currently active tasks whose characteristics
are known. The algorithm does not know the characteristics of new tasks, which may arrive, while
scheduling the currently active tasks. The distance is an input parameter to the scheduling algorithm
which determines the relative positions of the copies of a task in the task queue in the following way:
Distance(Pr_i, Bk_i) = distance for the first (n - (n mod distance)) tasks, and
Distance(Pr_i, Bk_i) = n mod distance for the last (n mod distance) tasks.
An example task queue can be constructed in this way for a given value of distance, assuming that the deadlines of tasks are in non-decreasing order (a small construction sketch is given at the end of this subsection).
The positioning of backup copies in the task queue relative to their primaries can easily be achieved
with minimal cost: (i) by having two queues, one for primary copies (n entries) and the other for backup
copies (n entries), and (ii) merging these queues, before invoking the scheduler, based on the distance
value to get a task queue of 2n entries. The cost involved due to merging is 2n.
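One way to realize the distance formula with a linear-cost merge is to emit the primaries in blocks of distance tasks (deadline order preserved) and to follow each block by the corresponding backups, so that every backup sits distance positions behind its primary, with the smaller separation n mod distance in the final partial block. For example, with n = 5 and distance = 3 this produces Pr1 Pr2 Pr3 Bk1 Bk2 Bk3 Pr4 Pr5 Bk4 Bk5. The sketch below is one such construction under these assumptions; the entry layout and index conventions are illustrative.

/* Illustrative sketch of building the 2n-entry task queue from the primary
 * and backup queues using the distance parameter. */
typedef struct { int task_id; int is_backup; } Entry;

/* pr[] and bk[] each hold n entries in non-decreasing deadline order;
 * out[] must have room for 2n entries.  The cost is O(2n). */
void build_task_queue(const Entry *pr, const Entry *bk, int n, int distance,
                      Entry *out) {
    int w = 0;
    for (int g = 0; g < n; g += distance) {
        int gsz = (n - g < distance) ? (n - g) : distance;
        for (int i = 0; i < gsz; i++) out[w++] = pr[g + i];  /* block of primaries */
        for (int i = 0; i < gsz; i++) out[w++] = bk[g + i];  /* their backups      */
    }
}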
4.2.2 Myopic Scheduling Algorithm
The myopic algorithm [21] is a heuristic search algorithm that schedules dynamically arriving real-time
tasks with resource constraints. It works as follows for scheduling a set of tasks. A vertex in the search
tree represents a partial schedule. The schedule from a vertex is extended only if the vertex is strongly
feasible. A vertex is strongly feasible if a feasible schedule can be generated by extending the current
partial schedule with each task of the feasibility check window. Feasibility check window is a subset of
first K unscheduled tasks. The larger the size of the feasibility check window, the higher the scheduling cost and
the greater the degree of look-ahead. If the current vertex is strongly feasible, the algorithm computes a heuristic
function, for each task within the feasibility check window, based on deadline and earliest start time of
the task. It then extends the schedule by the task having the best (smallest) heuristic value. Otherwise,
it backtracks to the previous vertex and then the schedule is extended from there using a task which has
the next best heuristic value.
4.2.3 The Distance Based Fault-tolerant Myopic Algorithm
We make fault-tolerant extensions to the original myopic algorithm using the distance concept for scheduling
a set of tasks. Here, we assume that each task is a plan. The algorithm attempts to generate a feasible
schedule for the task set with minimum number of rejections.
Distance Myopic()
1. Order the tasks (primary copies) in non-decreasing order of deadlines in the task queue and insert
the backup copies at appropriate distance from their primary copies.
2. Compute Earliest Start Time EST (T i ) for the first K tasks, where K is the size of the feasibility
check window.
3. Check for strong feasibility: check whether EST(T_i) plus the worst case computation time of T_i is at most
its deadline, for all the K tasks.
4. If strongly feasible or no more backtracking is possible
(a) Compute the heuristic function H(T_i) = D(T_i) + W * EST(T_i) for the first K tasks, where D(T_i) is the
deadline of T_i and W is an input parameter.
ffl When Bk_i of task T_i is considered for H function evaluation, if Pr_i is not yet scheduled,
set EST(Bk_i) to a value large enough that Bk_i is not selected before Pr_i is scheduled.
(b) Choose the task with the best (smallest) H value to extend the schedule.
(c) If the best task meets its deadline, extend the schedule by the best task (best task is accepted
in the schedule).
ffl If the best task is the primary copy (Pr_i) of task T_i, then (i) constrain Bk_i to start only after the
scheduled finish time of Pr_i; this is to achieve time exclusion for task T_i; and (ii) disallow Bk_i from
being scheduled on the processor assigned to Pr_i; this is to achieve space exclusion for task T_i.
(d) else reject the best task and move the feasibility check window by one task to the right.
(e) If the rejected task is a backup copy, delete its primary copy from the schedule.
5. else Backtrack to the previous search level and try extending the schedule with a task having the
next best H value.
6. Repeat steps (2-5) until termination condition is met.
The termination condition is either (i) all the tasks are scheduled or (ii) all the tasks are considered
for scheduling and no more backtracking is possible. The complexity of the algorithm is the same as that of the
original myopic algorithm, which is O(Kn).
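The selection step (steps 2-4 above) can be sketched as follows (illustrative Python; est() is a stand-in for the resource- and processor-aware earliest-start-time computation, and H(T) = deadline + W * EST(T) is the standard myopic heuristic assumed here):

def select_best_task(window, est, W):
    # window: the first K unscheduled task copies, each with .wcet and .deadline
    strongly_feasible = all(est(t) + t.wcet <= t.deadline for t in window)
    if not strongly_feasible:
        return None                       # caller backtracks to the previous vertex
    # extend the schedule with the task having the smallest heuristic value
    return min(window, key=lambda t: t.deadline + W * est(t))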
It is to be noted that the distance myopic algorithm can tolerate more than one fault at any point
of time, and the number of faults is limited by the assumption that at most one of the copies of a task
can fail. Once a processor fault is detected, the recovery is inherent in the schedule meaning that the
backups, of the primaries scheduled on the failed processors, will always succeed. In addition, whether
the failed processors will be considered or not for further scheduling depends on the type of fault. If it is
a transient processor fault, the processor on which the failure has occurred will be considered for further
scheduling. On the other hand, if it is a permanent processor fault, the processor on which the failure
has occurred will not be considered for further scheduling till it gets repaired. If the failure is due to
task error (software fault), it is treated like a transient processor fault.
4.2.4 Flexible Backup Overloading in Distance Myopic
Here, we discuss how to incorporate a flexible level of backup overloading into the distance myopic
algorithm. This introduces a tradeoff between the number of faults in the system and the guarantee
ratio. Before defining flexible backup overloading, we state from [3] the condition under which
backups can be overloaded.
If Pr i and Pr j are scheduled on two different processors, then their backups Bk i and Bk j can overlap
in execution on a processor.
Backup overloading is depicted in Fig.2. In Fig.2, Bk_1 and Bk_3, which are scheduled on processor
P_2, overlap in execution; their primaries Pr_1 and Pr_3 are scheduled on different processors P_1 and P_3,
respectively. This backup overloading is valid under the assumption that there is at most one failure in
the system (at any instant of time). This is too pessimistic an assumption, especially when the number of
processors in the system is large.
Fig.2 Backup overloading (primaries 1-4 scheduled on processors 1-3 over time; backups 1 and 3 overlap on processor 2)
We introduce flexibility in overloading (and hence the number of faults) by forming the processors
into different groups. Let group(P i ) denote the group in which processor P i is a member, and m be the
number of processors in the system. The rules for flexible backup overloading are:
ffl Every processor is a member of exactly one group.
ffl Each group should have at least three processors for backup overloading to take place in that
group.
ffl The size of each group (gsize) is the same, except for one group when m/gsize is not an integer.
ffl Backup overloading can take place only among the processors of the same group.
ffl Both primary and backup copies of a task are to be scheduled on to the processors of the same
group.
The flexible overloading scheme permits at most ⌈m/gsize⌉ faults at any instant of time,
with the restriction that there is at most one fault in each group. In the flexible overloading scheme, when
the number of faults permitted is increased, the flexibility in backup overloading is reduced and hence
the guarantee ratio may drop. This mechanism gives the system designer the flexibility to
choose the desired degree of fault-tolerance. In Section 5.2.5, we study the tradeoff between the number
of faults and the performance of the system.
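The overloading test implied by these rules can be sketched as follows (illustrative Python; group() and the primary-processor lookup are assumed helpers, not part of the original formulation):

def may_overload(bk_i, bk_j, proc, group, primary_proc):
    # Backups Bk_i and Bk_j may share a time interval on processor 'proc' only if
    # their primaries run on two different processors, every processor involved
    # belongs to the same group, and neither backup sits on its primary's processor.
    p_i, p_j = primary_proc(bk_i), primary_proc(bk_j)
    return (p_i != p_j
            and p_i != proc and p_j != proc
            and group(p_i) == group(p_j) == group(proc))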
4.2.5 Restriction Vector Based Resource Reclaiming
In our dynamic fault-tolerant scheduling approach, we have used restriction vector (RV) algorithm for
resource reclaiming. RV algorithm uses a data structure called restriction vector which captures resource,
precedence, and fault-tolerant constraints among tasks in a unified way. Each task T i has an associated
m-component vector, RV i [1::m], called Restriction Vector, where m is the number of processors in the
system. RV_i[j] for a task T_i contains the last task in T_<i(j), where T_<i(j) is the set of tasks on processor j which
are scheduled to finish before T_i begins. Before updating the dispatch queues, the scheduler computes
the restriction vector for each of the tasks in the feasible schedule. For computing RV of a task T i , the
scheduler checks at most k tasks (in the order of latest finish time first) which are scheduled to finish on
other processors before T i starts execution. The latest task on processor j which has resource conflict or
precedence relation with the task T i becomes RV i [j]. If no such task exists, then the k-th task is RV i [j].
The RV algorithm [15] says: start executing a task T i only if processor on which T i is scheduled is
idle and all the tasks in its restriction vector have successfully finished their execution.
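In code, the dispatch rule reads roughly as follows (illustrative Python; 'finished' is the set of identifiers of tasks that have completed):

def can_start(task, processor_is_idle, finished):
    # RV rule: start T_i only if its processor is idle and every task recorded in
    # its restriction vector (one entry per processor, possibly None) has finished.
    return processor_is_idle and all(t is None or t in finished
                                     for t in task.restriction_vector)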
5 Performance Studies
In this section, we first present the simulation studies on various algorithms, and then present an
analytical study based on Markov chains which highlights the importance of distance parameter in
fault-tolerant dynamic scheduling. The simulation experiments were conducted in two parts. The first
part highlights the importance of distance parameter and the second part highlights the importance of
each of the guarantee ratio improving techniques, namely, distance concept, backup deallocation, and
backup overloading. For each point in the performance curves (Figs.4-15), the total number of tasks
arrived in the system is 20,000. The parameters used in the simulation studies are given in Fig.3. The
tasks for the simulation are generated as follows:
1. The worst case computation times of primary copies are chosen uniformly between MIN C and
MAX C.
2. The deadline of a task T_i (primary copy) is uniformly chosen within a window determined by its ready time,
its worst case computation time, and the laxity parameter R.
3. The inter-arrival time of tasks (primary copies) is exponentially distributed with mean
(1/(λ m)) * (MIN C + MAX C)/2, where λ is the task arrival rate and m is the number of processors.
4. The actual computation time of a primary copy is chosen uniformly between MIN C and its worst
case computation time, if aw-ratio is random (rand). Otherwise, it is aw-ratio times the worst case
computation time.
5. The backup copies are assumed to have identical characteristics of their primary copies.
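A sketch of this task generator, using the parameter names of Fig.3 below, is given here (illustrative Python; the deadline window in step 2 is an assumed form based on the laxity parameter R, since the exact expression is not legible in the source):

import random

def generate_task(now, MIN_C, MAX_C, R, lam, m, aw_ratio="rand"):
    wcet = random.uniform(MIN_C, MAX_C)                       # step 1
    # step 2 (assumed form): deadline window scales with 2*wcet and the laxity R
    deadline = now + random.uniform(1.0, R) * 2.0 * wcet
    # step 3: mean inter-arrival time = (1/(lam*m)) * (MIN_C + MAX_C)/2
    next_arrival = now + random.expovariate(2.0 * lam * m / (MIN_C + MAX_C))
    # step 4
    actual = random.uniform(MIN_C, wcet) if aw_ratio == "rand" else aw_ratio * wcet
    primary = {"wcet": wcet, "actual": actual, "deadline": deadline}
    backup = dict(primary)                                     # step 5
    return primary, backup, next_arrival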
parameter   explanation                                      value taken when
                                                             (varied)              (fixed)
MIN C       minimum computation time of tasks                (-)                   (40)
MAX C       maximum computation time of tasks                (-)                   (80)
λ           task arrival rate                                (0.2, 0.3, ..., 0.7)  (0.5 or 0.4)
R           laxity parameter                                 (2, 3, ..., 7)        (4)
UseP        probability that a task uses a resource          (0.1, 0.2, ..., 0.5)  (0.4)
ShareP      probability that a task accesses                 (-)                   (0.4)
            a resource in shared mode
K           size of feasibility check window                 (1, 3, ..., 11)       (3)
W           weight of EST(T_i) in the H function             (-)                   (1)
aw-ratio    ratio of actual to worst case                    (0.5, 0.6, ..., 1.0)  (rand)
            computation times
FaultP      probability that a primary fails                 (0.1, 0.2, ..., 0.5)  (0.2)
distance    relative difference in positions of primary      (1, 5, 9, 13)         (5)
            and backup copies in the task queue
m           number of processors                             (5, 6, ..., 10)       (8)
NumRes      number of resources                              (-)                   (2)
NumInst     number of instances per resource                 (-)                   (2)
Fig.3 Simulation parameters
5.1 Experiments Highlighting Distance Parameter
In this section, we present the simulation results obtained for different values of distance parameter
by varying the K, λ, UseP, and FaultP parameters. For this study, the λ value is taken as 0.5 when
fixed. The algorithms studied here reclaim resources both from early completing tasks and deallocated
backups. The actual computation time of a task is chosen uniformly between MIN C and its worst case
computation time.
5.1.1 Effect of Feasibility Check Window
Fig.4 shows the effect of varying distance for different values of K. Note that for larger values of K, the
number of H function evaluations and EST() computations are also more, which means higher scheduling
cost. The interplay between the distance and size of the feasibility check window is described below.
ffl When distance is small, the position of backup copies in the task queue is close to their respective
primary copies and hence the possibility of scheduling these backup copies may get postponed
(we call this, backup postponement) due to time and space exclusions. This makes more and more
unscheduled backup copies getting accumulated. When this number exceeds K, the scheduler is
forced to choose the best task (say T b ) among these backup copies, which results in creation of a
hole (i.e., processor time that is unusable for scheduling) in the schedule, since EST(T_b) is greater than the available time (avail_time) of idle processors.
This hole creation can be avoided by moving the feasibility check window till a primary task falls
into it. However, we do not consider this approach since it increases the scheduling cost.
ffl When distance is large, the position of the backup copies in the task queue is far apart from
their respective primary copies, i.e., tasks (backup copies) having lower deadlines may be placed
after some tasks (primary copies) having higher deadlines. This may lead to backtracks (and hence
rejection, if no backtrack is possible) when the feasibility check window reaches these backup copies
(we call this, forced backtrack).
The guarantee ratio increases with increasing K for a given distance for some time (growing phase)
and then starts decreasing for higher values of K (shrinking phase). From Fig.4, the shrinking phase
starts at K values 7,5,5, and 7, for distance values 1,5,9, and 13, respectively. The reason for this is
that the backup postponement is very high at the beginning of the growing phase, decreases along with
it and reaches the lowest value at the end of it (equivalently, beginning of the shrinking phase), and the
number of forced rejections is very low at the beginning of the shrinking phase and increases along with
it. This reveals two facts: (a) increased value of K (increased look ahead nature) does not necessarily
increase the guarantee ratio and (b) the optimal K for each distance is different. The right combination
of K and distance offers the best guarantee ratio. From Fig.4, the best guarantee ratio is obtained when
distance is 9. If two distance values give the same (best) guarantee ratio, the one
with the lower K is preferable because of its lower scheduling cost.
5.1.2 Effect of Resource Usage, Task Load, and Fault Probability
In Fig.5, the probability that a task uses a resource (UseP ) is varied. For a fixed value of ShareP
(= 0.4), higher UseP means more resource conflicts among tasks. From Fig.5, it can be seen that the
guarantee ratio decreases as UseP increases. This is applicable for all values of distance. From Fig.5,
for most of the values of UseP , better guarantee ratio is obtained when distance is 9.
The task arrival rate has been varied in Fig.6. Higher λ means lower inter-arrival time and hence
higher task load. From Fig.6, it can be seen that increasing λ decreases the guarantee ratio for all values
of distance. From Fig.6, for most of the values of λ, a better guarantee ratio is obtained when distance is
5 and 9 compared to other values.
In Fig.7, the probability that a primary copy encounters a failure is varied. As FaultP increases,
the guarantee ratio decreases. This is applicable for all values of distance. From Fig.7, when distance
is 5 and 9, a better guarantee ratio is obtained compared to the other values of distance.
Fig.4 Effect of feasibility check window (guarantee ratio vs. size of feasibility check window)
Fig.5 Effect of resource usage probability (guarantee ratio vs. resource usage probability)
Fig.6 Effect of task load (guarantee ratio vs. task arrival rate)
Fig.7 Effect of primary fault probability (guarantee ratio vs. primary fault probability)
5.1.3 Choice of Distance
Based on the observations from the simulation studies, a simple heuristic for selecting good K and distance values
is based on the number of processors, and on the number of resources and their usage. If there are few
resources with high UseP and low ShareP, then there are more resource conflicts among tasks. In
such cases, the EST() of a task is mostly decided by the resource available time rather than by the processor
available time or the task ready time. Therefore, a large value of K might help in such situations. The value
of distance may be approximately equal to a value in the range [m/2, m] since at most m consecutive
primaries or backups can be scheduled in the worst case. The value of K may be less than the distance
since larger K means higher scheduling cost, which might nullify or reduce the gain obtained.
5.2 Experiments Highlighting GR Improving Techniques
In this section, we show through simulation the importance of each of the guarantee ratio (GR) improving
techniques, namely, the distance concept, backup deallocation, and backup overloading. For these
experiments, we have taken the λ value to be 0.4 when fixed. The actual computation time of a task is
chosen uniformly between MIN C and its worst case computation time. The plots in Figs.8-13 correspond
to the following algorithms:
ffl A0: Myopic. This is a fault-tolerant version of the myopic algorithm with distance = 1. This
algorithm reclaims resources only from early completing tasks.
ffl A1: A0 + distance concept. This is the same as algorithm A0 except that distance = 5 (this
value for distance is chosen based on the previous experiments and discussions).
ffl A2: A1 + backup deallocation. This is algorithm A1 together with resource
reclaiming from deallocated backups as well.
ffl A3: A2 + backup overloading. This is algorithm A2
together with backup overloading. For this, full overloading is considered, i.e., gsize = m. This
algorithm permits at most one failure, whereas the other algorithms can tolerate more than one
failure.
The difference in guarantee ratio between algorithms: (i) A0 and A1 is due to distance concept,
(ii) A1 and A2 is due to backup deallocation, and (iii) A2 and A3 is due to backup overloading. From
Figs.8-13, it can be seen that each of the guarantee ratio improving techniques improves the guarantee
ratio of the system, with very minimal increase in scheduling cost. That is, algorithms A0, A1, A2,
and A3 offer non-decreasing order of guarantee ratio. The distance concept and backup deallocation are
more effective compared to backup overloading.
5.2.1 Effect of Task Laxity, Resource Usage, and Task Load
The effect of task laxity (R) is studied in Fig.8. As the laxity increases, the guarantee ratio also
increases. For lower laxities, the difference in guarantee ratio between algorithms is less and increases
with increasing laxity. This is because, for lower values of laxity, the deadlines of tasks are very tight
and due to which the guarantee improving techniques have less flexibility to be more effective.
In Fig.9, the probability that a task uses a resource (UseP ) is varied. The increase in UseP , for
a fixed ShareP , increases the resource conflicts among tasks and hence the guarantee ratio decreases.
This is true for all the algorithms.
The effect of task load is studied in Fig.10. As load increases, the guarantee ratio decreases for all
the algorithms. For lower values of task load (when λ is 0.2 to 0.3), the guarantee ratio of all the four
algorithms is more or less the same. This is because, at such low load, the system has enough processors
and resources to feasibly schedule the tasks. When the load increases, the difference in guarantee between
algorithms also increases, which means that the proposed techniques are effective at higher loads.
5.2.2 Effect of Number of Processors
The effect of varying the number of processors (m) is studied in Fig.11. For this, the task load is fixed
to be the load of 8 processors. The increase in number of processors increases the guarantee ratio for all
the four algorithms. The difference in guarantee ratio for two successive values of m (i.e., m and m+ 1)
is very high when m is small, and decreases as m increases. This is because of limited availability of
resources, i.e., the bottleneck is the resources and not the processors. This means that if m is increased
beyond 10, there cannot be much improvement in the guarantee ratio.
Fig.8 Effect of task laxity (guarantee ratio vs. task laxity, algorithms A0-A3)
Fig.9 Effect of resource usage probability (guarantee ratio vs. resource usage probability, algorithms A0-A3)
Fig.10 Effect of task load (guarantee ratio vs. task arrival rate, algorithms A0-A3)
Fig.11 Effect of number of processors (guarantee ratio vs. number of processors, algorithms A0-A3)
Fig.12 Effect of actual to worst case computation ratio (guarantee ratio vs. aw-ratio, algorithms A0-A3)
Fig.13 Effect of primary fault probability (guarantee ratio vs. primary fault probability, algorithms A0-A2)
5.2.3 Effect of Actual to Worst Case Computation Time Ratio
The ratio between actual to worst case computation time (aw-ratio) of tasks is varied in Fig.12. In
this experiment, the actual computation time of a task is taken to be aw-ratio times the worst case
computation of the task. From Fig.12, an increase in aw-ratio decreases the guarantee ratio for all the
algorithms. When aw-ratio=1.0, the reclaiming is only due to backup deallocation (wherever applicable).
For example, for algorithms A0 and A1, when aw-ratio=1.0, no resource reclaiming is possible. When
aw-ratio=1.0, the difference in guarantee ratio between A0 and A1 is purely due to distance concept,
between A1 and A2 is purely due to backup deallocation, and between A2 and A3 is purely due to
backup overloading.
5.2.4 Effect of Fault Probability
The probability that a primary encounters a fault (FaultP ) is varied in Fig.13. Here, only three algorithms
(A0, A1, and A2) are plotted because the number of faults (for a given FaultP ) generated
while studying A3 is different (much smaller), because A3 permits at most one fault at a time, compared to the other
algorithms. When FaultP = 0, there is no fault in the system, which means that every backup is
deallocated. The guarantee ratio of algorithms A0 and A1 is flat for all values of FaultP since they do
not deallocate backups. For A2, an increase in FaultP decreases the guarantee ratio.
5.2.5 Performance of Flexible Overloading
The performance of flexible backup overloading has been studied for various parameters. Here, we
present only some sample results. For these experiments, m is taken as 8, and the different algorithms
studied are: (i) no overloading (algorithm A2), (ii) half overloading with gsize = m/2 (say A4), and (iii) full
overloading with gsize = m, which is the same as algorithm A3. The tradeoff between performance and
fault-tolerance is studied through this experiment. At any point of time, Algorithm A2 can tolerate
more than one fault, algorithm A4 can tolerate two faults with a restriction that there is at most one
fault within a group, and algorithm A3 can tolerate at most one fault.
The task load and laxity are varied in Fig.14 and 15, respectively. From these figures, the guarantee
ratio offered by full overloading is better than the other two, and half overloading is better than no over-
loading. The gain in guarantee ratio obtained by trading (reducing) the number of faults in full overloading
is around 2% to 3% in both the experiments. For lower task loads, the gain is less than 1% and is more
at higher task loads. This reveals that backup overloading is less effective in improving the guarantee
ratio compared to the other techniques such as distance concept, backup deallocation, and reclaiming
from early completing tasks. Thus, the flexible overloading provides a tradeoff between performance and
the degree of fault-tolerance.
Fig.14 Effect of task load (guarantee ratio vs. task arrival rate)
Fig.15 Effect of task laxity (guarantee ratio vs. task laxity)
5.3 Analytical Study
In this section, we show analytically using Markov chains that the distance is an important parameter
in fault-tolerant dynamic scheduling of real-time tasks. Using Markov chains, the possible states of the
system and the probabilities of transitions among them can be determined, which can then be used to
evaluate different dependability metrics for the system. The analysis presented here is similar to the one
given in [3, 17] except for the backup preallocation strategy. To make the analysis tractable, we make
the following assumptions:
1. All tasks have unit worst case computation time, i.e., c_i = 1 for all tasks T_i.
2. Backup slots are preallocated in the schedule based on the distance parameter.
3. FIFO scheduling strategy is used.
4. Size of the feasibility check window (K) is 1.
5. Task deadlines are uniformly distributed in the interval [W min ,W max ] relative to their ready times,
which we call deadline window.
6. Task arrivals are uniformly distributed with mean A av .
7. Backup overloading and resource reclaiming are not considered.
5.3.1 Backup Preallocation Strategy
Since the tasks are of unit length, we reserve slots in the schedule for the backup copies based on the
distance parameter. Let m be the number of processors and d be the distance. Let s_1 = d
and s_2 = m - d. In our backup preallocation strategy, at any time t, the available number of
primary slots is s_1 if t is odd, s_2 if t is even. Similarly, the available number of backup slots is s_2 if t
is odd, s_1 if t is even. In other words, backups are reserved at time slot t + 1 for the primaries of time
slot t. Fig.16 shows the backup preallocation with d = 2. Note that the backup preallocation
for m processors with distance (m - d) is the same as for m processors with distance d. In our backup
preallocation strategy, d should not be equal to (m - 1) because if a primary is scheduled on slot t (even),
its backup slot is already reserved on the same processor at time slot t + 1, which is a violation of space
exclusion. Also, d ? m does not have any meaning in the preallocation.
Fig.16 Distance based backup preallocation (processors vs. time slots t, with backup slots reserved according to the distance)
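A sketch of the slot bookkeeping follows (illustrative Python; the assignment s_1 = d and s_2 = m - d is our reading of the text above):

def slots(t, m, d):
    # Number of primary and backup slots available at time slot t.
    s1, s2 = d, m - d
    primaries = s1 if t % 2 == 1 else s2
    return primaries, m - primaries    # backups occupy the remaining slots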
5.4 Analysis
If P_ar(k) is the probability of k tasks arriving at a given time, then P_ar(k) = 1/(2 A_av + 1) for 0 <= k <= 2 A_av. (1)
If P_win(w) is the probability that an arriving task has a relative deadline w, then P_win(w) = 1/(W_max - W_min + 1) for W_min <= w <= W_max. (2)
The arriving tasks (primary copies) are appended to the task queue (Q) and they are scheduled in
FIFO order. Given that s 1 or s 2 tasks can be scheduled on a given time slot t depending on whether t
is odd or even, respectively, then the position of the tasks in the Q indicates their scheduled start times.
If at the beginning of time slot t, a task T_i is the k-th task in Q, then T_i is scheduled to execute at time
t + g_k, where g_k is the time, from now, at which a task whose position in Q is k will execute; g_k is defined
by equation (3) in terms of k, s_1, and s_2, since s_1 or s_2 tasks are scheduled in alternate time slots.
If a task T_i arrives at time t, its schedulability
depends on the length of Q and on the relative deadline w_i of the task. If T_i is appended at position q
of Q and w_i >= g_q, then the primary copy, Pr_i, is guaranteed to execute before time t + w_i; otherwise
the task is not schedulable since it will miss its deadline. Moreover, if w_i >= g_q + 1, the backup copy Bk_i is also
guaranteed to execute before t + w_i. Note that in our backup preallocation strategy, the backup of a task
is scheduled in the immediate next slot of its primary. The dynamics of the system can be modelled using
Markov chain in which each state represents the number of tasks in Q and each transition represents
the change in the length of the Q in one unit of time. The probabilities of different transitions may be
calculated from the rate of task arrival. For simplicity, the average number of tasks executed at any time
t is taken to be (s_1 + s_2)/2, which is m/2.
If S_u represents the state in which Q contains u tasks and u >= m/2, then the probability of a transition
from S_u to S_{u - m/2 + k} is P_ar(k), since at any time t, k tasks can arrive and m/2 tasks get executed. If
u < m/2, only u tasks are executed, and there is a state transition from S_u to S_k with probability
P_ar(k).
When the k arriving tasks have finite deadlines, some of these tasks may be rejected. Let P q;k be
the probability that one of the k tasks is rejected when the queue size is q. The value of P q;k is the
probability that the relative deadline of the task is smaller than g_q + 1, where the extra
one time unit is needed to schedule the backup. Then,
P_{q,k} = sum over W_min <= w < g_q + 1 of P_win(w). (4)
Hence, when the queue size is q, the probability, P_rej(r; k; q), that r out of the k tasks
are rejected is
P_rej(r; k; q) = C(k, r) (P_{q,k})^r (1 - P_{q,k})^{k-r}, (5)
where C(k, r) is the number of possible ways to select r out of k elements.
Our objective is to find the guarantee ratio (rejection ratio) for different values of distance. To do
that, we need to compute the number of tasks rejected in each state. This is done by splitting each
state S_u in the one-dimensional Markov chain into 2A_av + 1 states, where 2A_av is the maximum
number of task arrivals, and hence of possible rejections, in unit time. In the two-dimensional Markov chain, the
state S_{u,r} represents a queue of size u reached by a transition in which r tasks were rejected.
The two-dimensional Markov chain thus has rows indexed by the queue size (at most (m/2) W_max) and
2A_av + 1 columns (the number of arrivals in unit time plus one), and the transition probabilities are obtained by
combining the arrival probabilities P_ar(k) of equation (1) with the rejection probabilities P_rej(r; k; u) of
equation (5), for the two cases u >= m/2 and u < m/2.
By computing the steady state probabilities of being in the rejection states, it is possible to compute
the expected value of the number of rejected tasks Rej per unit time. If P ss (u; v) is the steady state
probability of being in state S_{u,v}, then
Rej = sum over u, and over v >= 1, of v P_ss(u, v). (6)
Then, the rate of task rejection is given by Rej / A_av. Note that P_ss(u, 0) is not included in equation (6)
since these are the states corresponding to no rejection.
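The rejection rate of equations (1)-(6) can be computed numerically along the following lines (illustrative Python; g must implement equation (3) for the chosen m and distance, and the iteration limits are arbitrary defaults):

import math

def rejection_rate(A_av, W_min, W_max, m, g, max_q=200, iters=500):
    n_arr = 2 * A_av + 1                    # arrivals are uniform on 0..2*A_av, eq. (1)
    p_ar = 1.0 / n_arr
    n_win = W_max - W_min + 1               # deadlines uniform on W_min..W_max, eq. (2)

    def p_rej_one(q):                       # one arrival is rejected if w < g_q + 1, eq. (4)
        return sum(1 for w in range(W_min, W_max + 1) if w < g(q) + 1) / n_win

    dist = [0.0] * (max_q + 1)              # distribution over queue lengths
    dist[0] = 1.0
    rej = 0.0
    for _ in range(iters):                  # power iteration toward the steady state
        new, rej = [0.0] * (max_q + 1), 0.0
        for q, pq in enumerate(dist):
            if pq == 0.0:
                continue
            served = min(q, m // 2)         # m/2 tasks start per unit time on average
            p1 = p_rej_one(q)
            for k in range(n_arr):
                for r in range(k + 1):      # r of the k arrivals rejected, eq. (5)
                    pr = math.comb(k, r) * p1 ** r * (1 - p1) ** (k - r)
                    new[min(max_q, q - served + k - r)] += pq * p_ar * pr
                    rej += pq * p_ar * pr * r
        dist = new
    return rej / A_av                       # expected rejections per arriving task, eq. (6)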
5.4.1 Results
Figs.17 and 18 show the rejection ratio obtained by varying the distance for different values of A_av and W_max,
respectively. The values of the other fixed parameters are also given in the figures. Since the preallocation
of backups for distance d and (m - d) is identical, their corresponding rejection ratios are also the same.
From the plots, it can be observed that the rejection ratio varies with varying distance. For lower values
of distance, the rejection ratio is more and the same is true for higher values of distance. The lowest
rejection ratio (best guarantee ratio) corresponds to some medium value of distance. From the Figs.17-
19, the optimal value of distance is m=2. Therefore, the distance parameter plays a crucial role on the
effectiveness of dynamic fault-tolerant scheduling algorithms.
Fig.17 Effect of task load (rejection ratio vs. distance)
Fig.18 Effect of laxity (rejection ratio vs. distance)
Fig.19 Effect of distance (rejection ratio vs. distance)
5.5 Comparison with an Existing Algorithm
In this section, we compare our distance myopic algorithm with a recently proposed algorithm by Ghosh,
Melhem, and Mossé (which we call the GMM algorithm) in [3] for fault-tolerant scheduling of dynamic real-time
tasks. The GMM algorithm uses full backup overloading (gsize = m) and backup deallocation,
and permits at most one failure at any point of time. The GMM algorithm does not address resource
constraints among tasks and reclaims resources only due to backup deallocation. The limitations of this
algorithm have been discussed in Section 3.2.
In the GMM algorithm, the primary and backup copies of a task are scheduled in succession. In
other words, the distance is always 1. The algorithm is informally stated below:
GMM Algorithm():
begin
1. Order the tasks in non-decreasing order of deadline in the task queue.
2. Choose the first (primary) and second (backup) tasks for scheduling:
ffl Schedule the primary copy as early as possible by End Fitting() or Middle Fitting() or Middle
Adjusting().
ffl Schedule the backup copy as late as possible by Backup Overloading() or End Fitting() or
Middle Fitting() or Middle Adjusting().
3. If both primary and backup copies meet their deadline, accept them in the schedule.
4. else reject them.
(a) End Fitting(): Schedule the current task as the last task in the schedule of a processor.
(b) Middle Fitting(): Schedule the current task somewhere in the middle of the schedule of a
processor.
(c) Middle Adjusting(): Schedule the current task somewhere in the middle of the schedule
of a processor by changing start and finish times of adjacent tasks.
(d) Backup Overloading(): Schedule the current task on a backup time interval if the primary
copies corresponding to these backup copies are scheduled on two different processors.
In each of steps (b)-(d), the search for fitting, adjusting, and overlapping begins at the end of the
schedule and proceeds towards the start of the schedule of every processor. The depth of the search is
limited to an input parameter K. Since each of steps (b)-(d) takes time Km, the worst case time taken
to schedule a primary copy is 2Km, whereas it is 3Km for a backup copy.
The performance of distance myopic algorithms is compared with the GMM algorithm. For the sake of
comparison with the GMM algorithm, no resource constraints among tasks are considered. To make the
comparison fair, resource reclaiming only due to backup deallocation is considered, since GMM does not
reclaim resources from early completing tasks. The plots in Figs.20 and 21 correspond to four algorithms:
(i) distance myopic (DM), (ii) distance myopic with full backup overloading (DM + overloading), (iii) GMM
algorithm without backup overloading (GMM - overloading), and (iv) the GMM algorithm.
The scheduling cost of both the algorithms is made equal by appropriately setting K(= 4) and K(= 1)
parameters in distance myopic and GMM algorithms, respectively. For these experiments, the values
of R, UseP, FaultP, aw-ratio, and distance are taken as 5, 0, 0.2, 1, and 5, respectively. We
present here only sample results.
The task load is varied in Fig.20. In this figure, the different algorithms are ordered in decreasing
order of the guarantee ratio offered, with DM + overloading offering the highest guarantee ratio. In Fig.21,
the number of processors is varied by fixing the task load equal to the load of 8 processors. For lower
number of processors, even DM algorithm is better than GMM. From these simulation experiments, we
have shown that our proposed algorithm (DM + overloading) is better than the GMM algorithm even
for the (restricted) task model for which it was proposed.
Fig.20 Effect of task load (guarantee ratio vs. task arrival rate, GMM and DM variants)
Fig.21 Effect of number of processors (guarantee ratio vs. number of processors, GMM and DM variants)
6 Conclusions
In this paper, we have proposed an algorithm for scheduling dynamically arriving real-time tasks with resource
and primary-backup based fault-tolerant requirements in a multiprocessor system. Our algorithm
can tolerate more than one fault at a time, and employs techniques such as distance concept, flexible
backup overloading, and resource reclaiming to improve the guarantee ratio of the system.
Through simulation studies and also analytically, we have shown that the distance is a crucial parameter
which decides the performance of any fault-tolerant dynamic scheduling in real-time multiprocessor
systems. Our simulation studies on distance parameter show that increasing the size of feasibility check
window (and hence the look ahead nature) does not necessarily increase the guarantee ratio. The right
combination of K and distance offers the best guarantee ratio. We have also discussed how to choose
this combination.
We have quantified the effectiveness of each of the proposed guarantee ratio improving techniques
through simulation studies for a wide range of task and system parameters. Our simulation studies show
that the distance concept and resource reclaiming, due to both backup deallocation and early completion
of tasks, are more effective in improving the guarantee ratio compared to backup overloading. The
flexible backup overloading introduces a tradeoff between the number of faults and the guarantee ratio.
From the studies of flexible backup overloading, the gain (in guarantee ratio) obtained by favouring
performance (i.e., reducing the number of faults) is not very significant. This indicates that the backup
overloading is less effective, compared to the other techniques.
We have also compared our algorithm with a recently proposed [3] fault-tolerant dynamic scheduling
algorithm. Although our algorithm takes into account resource constraints among tasks and tolerates
more than one fault at a time, for the sake of comparison, we restricted the studies to independent
tasks with at most one failure. The simulation results show that our algorithm, when it is used with
backup overloading, offers better guarantee ratio than that of the other algorithm for all task and system
parameters. Currently, we are investigating how to integrate different fault-tolerant approaches,
namely triple modular redundancy, the primary-backup approach, and imprecise computation, into a single
scheduling framework.
--R
"Multiprocessor on-line scheduling of hard real-time tasks,"
"Computers and intractability, a guide to the theory of NP- completeness,"
"Fault-Tolerance through scheduling of aperiodic tasks in hard real-time multiprocessor systems,"
"Bounds on multiprocessing timing anomalies,"
"Approaches to implementation of reparable distributed recovery block scheme,"
"Distributed fault-tolerant real-time systems,"
"On scheduling tasks with quick recovery from failure,"
"Real-time Systems,"
"Architectural principles for safety-critical real-time applications,"
"A fault tolerant scheduling problem,"
"Imprecise computations,"
"Modular redundancy in a message passing system,"
"An efficient dynamic scheduling algorithm for multiprocessor real-time systems,"
"A new study for fault-tolerant real-time dynamic scheduling algorithms,"
"New algorithms for resource reclaiming from precedence constrained tasks in multiprocessor real-time systems,"
"Real-time System Scenarios,"
"Analysis of a fault-tolerant multiprocessor scheduling al- gorithm,"
"Adaptive software fault tolerance policies with dynamic real-time guarantees,"
"Multiprocessor support for real-time fault-tolerant scheduling,"
"An environment for developing fault-tolerant software,"
"Efficient scheduling algorithms for real-time multiprocessor systems,"
"Scheduling algorithms and operating systems support for real-time systems,"
"Graceful degradation in real-time control applications using (m,k)-firm guarantee,"
"Resource reclaiming in multiprocessor real-time systems,"
"Real-time computing: A new discipline of computer science and engineering,"
"Understanding fault-tolerance and reliability,"
"The Spring Kernel: A new paradigm for real-time operating systems,"
"TaskPair-Scheduling: An approach for dynamic real-time systems,"
"Low overhead multiprocessor allocation strategies exploiting system spare capacity for fault detection and location,"
"Fault-tolerant scheduling algorithm for distributed real-time systems,"
"Determining redundancy levels for fault tolerant real-time systems,"
"Multiprocessor scheduling of processes with release times, deadlines, precedence and exclusion constraints,"
"Scheduling tasks with resource requirements in hard real-time systems,"
--TR
--CTR
Wei Sun , Chen Yu , Xavier Defago , Yuanyuan Zhang , Yasushi Inoguchi, Real-time Task Scheduling Using Extended Overloading Technique for Multiprocessor Systems, Proceedings of the 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, p.95-102, October 22-26, 2007
R. Al-Omari , A. K. Somani , G. Manimaran, An adaptive scheme for fault-tolerant scheduling of soft real-time tasks in multiprocessor systems, Journal of Parallel and Distributed Computing, v.65 n.5, p.595-608, May 2005
R. Al-Omari , Arun K. Somani , G. Manimaran, Efficient overloading techniques for primary-backup scheduling in real-time systems, Journal of Parallel and Distributed Computing, v.64 n.5, p.629-648, May 2004
Xiao Qin , Hong Jiang, A novel fault-tolerant scheduling algorithm for precedence constrained tasks in real-time heterogeneous systems, Parallel Computing, v.32 n.5, p.331-356, June 2006
Wenjing Rao , Alex Orailoglu , Ramesh Karri, Towards Nanoelectronics Processor Architectures, Journal of Electronic Testing: Theory and Applications, v.23 n.2-3, p.235-254, June 2007 | run-time anomaly;dynamic scheduling;fault tolerance;safety critical application;resource reclaiming;real-time system |
293004 | New Perspectives in Turbulence. | Intermittency, a basic property of fully developed turbulent flow, decreases with growing viscosity; therefore classical relationships obtained in the limit of vanishing viscosity must be corrected when the Reynolds number is finite but large. These corrections are the main subject of the present paper. They lead to a new scaling law for wall-bounded turbulence, which is of key importance in engineering, and to a reinterpretation of the Kolmogorov--Obukhov scaling for the local structure of turbulence, which has been of paramount interest in both theory and applications. The background of these results is reviewed, in similarity methods, in the statistical theory of vortex motion, and in intermediate asymptotics, and relevant experimental data are summarized. | Introduction
In February 1996 I had the privilege of meeting Prof. G.I. Barenblatt, who had just
arrived in Berkeley. In our first extended conversation we discovered that we had been
working on similar problems with different but complementary tools, which, when wielded
in unison, led to unexpected results. We have been working together ever since, and it is
a pleasure to be able to present some of the results of our joint work at this distinguished
occasion.
The present talk will consist of three parts: (i) An application of advanced similarity
methods and vanishing-viscosity asymptotics to the analysis of wall-bounded turbulence,
(ii) a discussion of the local structure of turbulence with particular attention to the higher-order
structure functions, and (iii) a discussion of a near-equilibrium statistical theory of
1 Supported in part by the Applied Mathematical Sciences subprogram of the Office of Energy Research,
U.S. Department of Energy, under contract DE-AC03-76-SF00098, and in part by the National Science
Foundation under grants DMS94-14631 and DMS89-19074.
turbulence, which motivates and complements our reading of the numerical and experimental
data. The basic premise is that, as the viscosity tends to zero and the solutions of
the Navier-Stokes equations acquire poorly understood temporal and spatial fluctuations,
certain mean properties of the flow can be seen to take on well-defined limits, which
can be found by expansion in a small parameter that tends to zero, albeit slowly, as the
viscosity tends to zero.
In the case of wall-bounded turbulence, our argument and the data show that the
classical von K'arm'an-Prandtl law should be replaced, when the viscosity is small but
finite, by a Reynolds-number-dependent power law. In the case of local structure, an
analogous argument shows that the Kolmogorov scaling of the second and third order
structure functions is exact in the limit of vanishing viscosity, when the turbulence is most
intermittent and least organized. When the viscosity is non-zero (Reynolds number large
but finite), Reynolds-number-dependent corrections to the Kolmogorov-Obukhov scaling
of the structure functions appear, due to a viscosity-induced reduction in intermittency.
For higher-order structure functions the vanishing viscosity limit ceases to exist because
of intermittency, and the Kolmogorov-Obukhov scaling fails.
The near-equilibrium statistical theory we shall present is the basis of vanishing-viscosity
asymptotics and relates the behavior of the higher-order structure functions to the
presence of intermittency. All parts of our analysis are heterodox in the context of the
current state of turbulence research, but not in the broader context of the statistical
mechanics of irreversible phenomena.
2. The intermediate region in wall-bounded turbulence
Consider wall-bounded turbulence, in particular fully developed turbulence in the working
section, i.e., far from the inlet and outlet, of a long cylindrical pipe with circular
cross-section. It is customary to represent u, the time-averaged or ensemble-averaged
longitudinal velocity in a pipe, in the dimensionless form
\phi = u / u_* ,    (2.1)
where u_* is the "friction" velocity that defines the velocity scale:
u_* = (\tau / \rho)^{1/2} ,    (2.2)
where \rho is the density of the fluid and \tau is the shear stress at the pipe's wall,
\tau = \Delta p \, d / (4 L) .    (2.3)
Here \Delta p is the pressure drop over the working section of the pipe, L is the length of the
working section, and d is the pipe's diameter. The dimensionless distance from the pipe
wall is
\eta = u_* y / \nu ,    (2.4)
where y is the actual distance from the wall and \nu is the fluid's kinematic viscosity. The
length scale \nu / u_* in (2.4) is typically very small - less than tens of microns in some of the
data discussed below. The key dimensionless parameter in the problem is the Reynolds
number
Re = \bar{u} d / \nu ,    (2.5)
where \bar{u} is the velocity averaged over the cross-section. When the Reynolds number Re
is large, one observes that the cross-section is divided into three parts (Figure 1): (1)
the viscous sublayer near the wall, where the velocity gradient is so large that the shear
stress due to molecular viscosity is comparable to the turbulent shear stress, a cylinder
(2) surrounding the pipe's axis where the velocity gradient is small and the average
velocity is close to its maximum, and the intermediate region (3) which occupies most of
the cross-section and on which we shall focus.
The velocity gradient \partial_y u (\partial_y \equiv \partial / \partial y) in the intermediate region (3) of Figure 1 depends
on the following variables: the coordinate y, the shear stress at the wall \tau, the pipe diameter
d, the fluid's kinematic viscosity \nu and its density \rho. We consider the velocity gradient
\partial_y u rather than u itself because the values of u depend on the flow in the viscous sublayer
where the assumptions we shall make are not valid. Thus
\partial_y u = f(y, \tau, d, \nu, \rho) .    (2.6)
Dimensional analysis gives
\partial_y u = (u_* / y) \, \Phi(u_* y / \nu, \bar{u} d / \nu) = (u_* / y) \, \Phi(\eta, Re) ,    (2.7)
where \Phi is a dimensionless function, or, equivalently,
\partial \phi / \partial \eta = (1/\eta) \, \Phi(\eta, Re) .    (2.8)
Outside the viscous sublayer \eta is large, of the order of several tens and more; in the kind
of turbulent flow we consider the Reynolds number Re is also large, at least 10^4. If one
assumes that for such large values of \eta and Re the function \Phi no longer varies with its
arguments and can be replaced by its constant limiting value \Phi(\infty, \infty) = 1/\kappa (this is an assumption
of "complete similarity" in both arguments, see [1]), then equation (2.8) yields
\partial \phi / \partial \eta = 1 / (\kappa \eta) ,    (2.9)
and then an integration yields the von K'arm'an-Prandtl "universal" logarithmic law
\phi = (1/\kappa) \ln \eta + B ,    (2.10)
where \kappa ("von K'arm'an's constant") and B are assumed to be "universal", i.e. Re-independent
constants. The assumption that B is Reynolds-number-independent is an additional
assumption. The resulting law is widely used to describe the mean velocity in the intermediate
region; the values of \kappa in the literature range between 0.36 and 0.44 and the values
of B between 5 and 6.3 - an uncomfortably wide spread if one believes in the "universality"
of (2.10).
However, there is no overwhelming reason to assume that the function \Phi has a constant
non-zero limit as its arguments tend to infinity, nor that the integration constant remains
bounded as Re tends to infinity. When either assumption fails other conclusions must
be reached. Rather than list alternative assumptions we present a model problem that
exhibits in a simple manner what goes wrong as well as the cure.
Consider the equation
du/dy = u / (y \ln(1/\delta))    (2.11)
for y positive, where u is subject to the boundary condition u(\delta) = 1, and \delta is a small
positive parameter. One can view \delta as a dimensionless viscosity, and thus \delta^{-1} is analogous
to a Reynolds number.
One could reason as follows: For small \delta, du/dy is approximately zero, and thus u is a
constant, which can only be the constant 1. We can derive the same result for small y
and \delta by an assumption of complete similarity: Equation (2.11) is homogeneous in the
dimensions of u and y, and thus one can view both of these variables as dimensionless. By
construction, \delta must be dimensionless. The dimensionless relation between these variables
takes the form
u = \Phi(y, \delta) ,    (2.12)
and if one makes an assumption of complete similarity, i.e., assumes that for \delta, y small \Phi
is constant, one finds again that u is the constant 1.
This conclusion is false. Equation (2.11) has the following solution that satisfies the
boundary condition:
u = (y/\delta)^{1/\ln(1/\delta)} .    (2.13)
Note that for any positive value of \delta this solution is a power law and not a constant.
We can obtain this solution for small y and \delta by assuming that the solution is a
power of the variable y while the form of its dependence on \delta is unknown; this leads to
u = A(\delta) \, y^{\alpha(\delta)}. (This is an assumption of "incomplete similarity" in y and no similarity in
\delta.) A substitution into equation (2.11) and use of the boundary condition yield A(\delta) and \alpha(\delta).
Further, consider the solution (2.13) and, for any non-zero value of y, its limit as \delta \to 0.
Clearly,
\ln u = (\ln y + \ln(1/\delta)) / \ln(1/\delta) = 1 + \ln y / \ln(1/\delta) ,    (2.14)
and thus, as \delta \to 0, u \to e, i.e., the limit of (2.13) for y > 0 is the constant e. As
deduced from the false assumption of complete similarity, the limit of u is a constant, but
it is not the obvious constant 1. Furthermore, for a finite value of \delta, however small, u is
not uniformly constant; it is not equal to e either for y close to \delta or for y large enough. The
approximate equality u \approx e holds, when \delta is small but finite, only in an intermediate range
of y; this constitutes an example of "intermediate asymptotics" [1].
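A quick numerical check of the model problem, assuming the reconstructed form of (2.11) and its solution (2.13) given above (illustrative Python):

import math

def u(y, delta):
    # Solution (2.13): u = (y/delta)**(1/ln(1/delta)), with u(delta) = 1.
    return (y / delta) ** (1.0 / math.log(1.0 / delta))

for delta in (1e-2, 1e-4, 1e-8, 1e-16):
    # For fixed y > 0 both values drift toward e = 2.718..., but only slowly,
    # since the correction is of order 1/ln(1/delta).
    print(delta, u(1.0, delta), u(10.0, delta))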
Now consider subjecting \Phi in (2.8) to an analogous assumption, of incomplete similarity
in \eta and no similarity in Re when both arguments are large [2]:
\Phi(\eta, Re) = A(Re) \, \eta^{\alpha(Re)} .    (2.15)
Note a clear difference between complete and incomplete similarity. In the first case
the experimental data should cluster in the (\ln \eta, \phi) plane on the
single straight line of the logarithmic law; in the second case the experimental points would
occupy an area in this plane. Both similarity assumptions are very specific. The possibility
that \Phi has no non-zero limit yet cannot be represented asymptotically as a power of \eta has
not been excluded. Both assumptions must be subjected to careful scrutiny. In the absence
of reliable, high-Re numerical solutions of the Navier-Stokes equation and of an appropriate
rigorous theory, this scrutiny must be based on comparison with experimental data.
We now specify the conditions under which (2.15) may hold and narrow down the choices
of A(Re) and ff(Re) (see [4,5,6,7,9,10]). Fully developed turbulence is not a single, well-defined
state with properties independent of Re; there may be such a single state in the
limit of infinite Reynolds number, but experiment, even in the largest facilities, shows that
fully developed turbulence still exhibits a perceptible dependence on Re. Fully developed
turbulence has mean properties (for example, parameters such as A and \alpha in (2.15)) that
vary with Reynolds number like K_0 + K_1 \epsilon + ..., where K_0, K_1 are constants and \epsilon is a small
parameter that tends to zero as Re tends to infinity, small enough so that its higher powers
are negligible, yet not so small that its effects are imperceptible in situations of practical
interest; the latter condition rules out choices such as \epsilon = 1/Re. We expect A(Re) and
\alpha(Re) in (2.15) to have the form
A(Re) = A_0 + A_1 \epsilon + o(\epsilon), \quad \alpha(Re) = \alpha_0 + \alpha_1 \epsilon + o(\epsilon) ,    (2.16)
where A_0, A_1, \alpha_0, \alpha_1 are universal constants; we have implicitly used a principle derived
from the statistical theory of section 4, according to which the average gradient of the
velocity profile has a well-defined limit as the viscosity \nu tends to zero [5,6,10]. This is
the vanishing-viscosity principle. We expand A(Re) and \alpha(Re) in powers of \epsilon and keep
the first two terms; the result is
A(Re) = A_0 + A_1 \epsilon, \quad \alpha(Re) = \alpha_0 + \alpha_1 \epsilon .    (2.17)
Substitution of (2.17) into (2.15) yields
\partial \phi / \partial \eta = (A_0 + A_1 \epsilon) \, \eta^{\alpha_0 + \alpha_1 \epsilon - 1} .    (2.18)
For this expression to have a finite limit as \nu tends to zero, one needs \alpha_0 = 0, and \alpha must
tend to zero as Re tends to infinity like (1/\ln Re) or faster. The assumption of incomplete similarity, experiment, and the vanishing-viscosity principle show that the threshold value
\epsilon = 1/\ln Re, i.e. \alpha = \alpha_1 / \ln Re, is the proper choice. Use of this choice in equation (2.18) and an integration
yield
\phi = (C_0 \ln Re + C_1) \, \eta^{3/(2 \ln Re)} ,    (2.19)
where the additional condition \alpha_1 = 3/2, motivated by the experimental data, has been
used.
used.
According to this derivation, the coefficients C are universal constants, the same
in all experiments of sufficiently high quality performed in pipe flows at large Reynolds
numbers [2,3]. In [12] the proposed law for smooth walls (2.19) was compared with the
data of Nikuradze [26] from Prandtl's institute in G-ottingen. The comparison yielded the
coefficients ff
with an error of less than 1%. The
result is
We now wish to use the law (2.20) to understand what happens at larger Reynolds
numbers and for a broader range of values of j than were present in the experiments
reported by Nikuradze. If this extrapolation agrees with experiment, we can conclude that
the law has predictive powers and provides a faithful representation of the intermediate
region. We have already stated that the limit that must exist for descriptions of the mean
gradient in turbulent flow is the vanishing-viscosity limit, and thus one should be able
to extrapolate the law (2.20) to ever smaller viscosities -. This is different from simply
increasing the Reynolds number, as - affects j and - u as well as Re. The decrease in the
viscosity corresponds also to what is done in the experiments: If one stands at a fixed
distance from the wall, in a specific pipe with a given pressure gradient, one is not free to
vary \eta = u_* y / \nu and Re = \bar{u} d / \nu independently, because the viscosity \nu appears in both, and
if \nu is decreased by the experimenter, the two quantities will increase in a self-consistent
way and \bar{u} will vary as well. As one decreases the viscosity, one considers flows at ever
larger \eta at ever larger Re; the ratio 3 \ln \eta / (2 \ln Re) tends to 3/2 because \nu appears in the same way
in both numerator and denominator. Consider the combination 3 \ln \eta / (2 \ln Re) in the form
3 \ln \eta / (2 \ln Re) = (3/2) \, \ln(u_* y/\nu) / [\ln(u_* y/\nu) + \ln(d/y) + \ln(\bar{u}/u_*)] .    (2.21)
According to [3], at small \nu, i.e. large Re, \bar{u}/u_* \sim \ln Re, so that the term \ln(\bar{u}/u_*) in
the denominator of the right-hand side of (2.21) is asymptotically small, of the order of
\ln \ln Re, and can be neglected at large Re; because the viscosity \nu is small, the first term
\ln(u_* y/\nu) in both the numerator and denominator of (2.21) is dominant, as long as the
ratio y/d remains bounded from below, for example by a predetermined fraction. Thus,
away from a neighborhood of the wall, the ratio 3 \ln \eta / (2 \ln Re) is close to 3/2 (y is obviously
bounded by d/2). Therefore the quantity (3/2) [\ln(d/y) + \ln(\bar{u}/u_*)] / \ln Re
is a small parameter as long as y > \Delta, where \Delta is an appropriate fraction of d. The
quantity exp(3 \ln \eta / (2 \ln Re)) is approximately equal to
\exp(3 \ln \eta / (2 \ln Re)) \approx e^{3/2} [ 1 - (3/2) (\ln(d/y) + \ln(\bar{u}/u_*)) / \ln Re ] ,    (2.22)
up to terms of order (\ln \ln Re / \ln Re)^2.
According to (2.20) we have also
\phi = ((1/\sqrt{3}) \ln Re + 5/2) \, \exp(3 \ln \eta / (2 \ln Re)) ,    (2.23)
and the approximation (2.22) can also be used in (2.23). Thus in the range of y where
y > \Delta, but y < d/2, we find, up to terms that vanish as the viscosity tends to zero,
\phi \approx e^{3/2} [ (\sqrt{3}/2) \ln \eta + 5/2 - (1/(2\sqrt{3})) \ln Re ]    (2.24)
and
\partial \phi / \partial \ln \eta \to (\sqrt{3}/2) \, e^{3/2} .    (2.25)
Equation (2.25) is the asymptotic slope condition: As \nu \to 0 the slope of the power
law tends to a finite limit, the limiting slope, which is the right hand side of (2.25). The
von K'arm'an-Prandtl law also subsumes an asymptotic slope condition, with a limiting
slope 1/\kappa; the limiting slope in equation (2.25), (\sqrt{3}/2) e^{3/2} \approx 3.88, is
roughly \sqrt{e} times larger than a generally accepted value for \kappa^{-1}. One can view equation (2.24) as an
asymptotic version of the classical logarithmic law, but with an additive term that diverges
as the Reynolds number tends to infinity, and of course a different slope.
The family of curves parametrized by Re has an envelope whose equation
tends to
\phi = (\sqrt{3} \, e / 2) \ln \eta + const ,    (2.26)
with \kappa = 2/(\sqrt{3} \, e) \approx 0.425 close to the values of \kappa found in the literature. The corresponding
value of 1/\kappa is exactly \sqrt{e} times smaller than the value on the right-hand side of (2.25). It is
clear that the logarithmic law usually found in the literature corresponds to this envelope;
indeed, if one plots points that correspond to many values of Re on a single graph (as is
natural if one happens to believe the von K'arm'an-Prandtl law (2.10)), then one becomes
aware of the envelope. The visual impact of the envelope is magnified by the fact that the
small y part of the graph, where the envelope touches the individual curves, is stretched
out by the effect of \nu on the values of \ln \eta. Also, measurements at very small values of
y where the difference between the power law and the envelope could also be noticeable
are missing because of experimental difficulties very near the wall. Thus, if our proposed
power law is valid, the conventional logarithmic law is an illusion which substitutes the
envelope of the family of curves for the curves themselves. The discrepancy of \sqrt{e} between
the slope of the curves and the slope of the envelope is the signature of the power law, and
helps to decide whether the power law is valid. The situation is summarized in Figure 2,
which shows schematically the individual curves of the power law, their envelope, and the
asymptotic slope.
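The geometry of Figure 2 can be reproduced numerically from the power law (illustrative Python using numpy; the Barenblatt-Chorin form \phi = ((1/\sqrt{3}) ln Re + 5/2) \eta^{3/(2 ln Re)} of (2.20) is assumed, and the asymptotic values quoted in the comments follow the discussion above):

import numpy as np

def phi_power(eta, Re):
    # Power law: phi = (ln Re / sqrt(3) + 5/2) * eta**(3/(2 ln Re))
    alpha = 3.0 / (2.0 * np.log(Re))
    return (np.log(Re) / np.sqrt(3.0) + 2.5) * eta ** alpha

def slope(eta, Re, h=1e-4):
    # d(phi)/d(ln eta) by a finite difference in ln eta
    return (phi_power(eta * np.exp(h), Re) - phi_power(eta, Re)) / h

for Re in (1e4, 1e6, 1e8, 1e12):
    # Near eta ~ Re the curve's slope tends (slowly) to (sqrt(3)/2)*e**1.5 ~ 3.88;
    # at the tangency point eta ~ Re**(2/3) it tends to (sqrt(3)/2)*e ~ 2.35,
    # i.e. the envelope slope, a factor sqrt(e) smaller.
    print(Re, slope(Re, Re), slope(Re ** (2.0 / 3.0), Re))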
Historically, the understanding of the flow in the intermediate region of wall-bounded
turbulence has been influenced by the "overlap" argument of Izakson, Millikan and von
Mises (IMM) (see e.g [25]). This argument in its original form contains an implicit assumption
of complete similarity, and once freed from it yields yet again the asymptotic
slope condition (2.25). For details, see [5,7,10].
Detailed comparisons of the power law and the von K'arm'an-Prandtl laws with
experimental data are available in refs. [9,10]. For completeness, we exhibit in Figure
3 a set of experimental curves from the Princeton superpipe experiment [34]. Note its
qualitative similarity to Figure 2. In particular, despite a flaw discussed in detail in [10],
these experiments do indeed exhibit a separate curve for each Reynolds number, and a
well-defined angle between the curves and their envelope.
The applicability of our analysis of the intermediate region of pipe flow to the
velocity profile in a zero-pressure-gradient boundary layer is discussed in [11].
3. Local structure in turbulence
The analogy between the inertial range in the local structure of developed turbulence
and the intermediate range in turbulent shear flow near a wall has been noted long ago (see
e.g. [14,32]), and it motivates the extension of the scaling analysis above to the case of local
structure, where the experimental data are much poorer. In the problem of local structure
the quantities of interest are the moments of the relative velocity field, in particular the
second order tensor with components
D_{ij}(r) = \langle \delta u_i \, \delta u_j \rangle, \quad \delta u = u(x + r) - u(x) ,
where \delta u is the velocity difference between x and x + r. In locally
isotropic incompressible flow all the components of this tensor are determined if one knows
D_{LL}(r) = \langle (\delta u_L)^2 \rangle, where u_L is the velocity component along the vector r.
To derive an expression for D_{LL} assume, following Kolmogorov, that for r in an appropriate range of scales
it depends on \langle \epsilon \rangle, the mean rate of energy dissipation per unit volume, r, the distance
between the points at which the velocity is measured, a length scale, for example the
Taylor macroscale T, and the kinematic viscosity \nu:
D_{LL} = f(\langle \epsilon \rangle, r, T, \nu) ,    (3.1)
where the function f should be the same for all developed turbulent flows. If r is large,
other variables may appear, as a consequence of external forces or of boundary conditions.
The most interesting and the most important argument in this list is the rate of energy
dissipation ".
Introduce the Kolmogorov scale η_K = (ν³/⟨ε⟩)^{1/4}, which marks the lower bound of the "inertial" range of scales in which energy dissipation is negligible. Clearly, the velocity scale appropriate to the inertial range is (⟨ε⟩Λ)^{1/3}, and this yields a Reynolds number Re = ⟨ε⟩^{1/3}Λ^{4/3}/ν.
The inertial range of scales is intermediate between the scales on which the fluid is stirred
and the scales where viscosity dissipates energy, and is the analog of the intermediate region
in wall-bounded flow. In this range the scaling law that corresponds to (2.15) is:
D_LL = (⟨ε⟩r)^{2/3} Φ(r/η_K, Re),    (3.6)
where, as before, the function Φ is a dimensionless function of its arguments, which have been chosen so that under the circumstances of interest here they are both large.
If one now subjects (3.6) to an assumption of complete similarity in both its arguments, one obtains the classical Kolmogorov 2/3 law [21],
D_LL = C(⟨ε⟩r)^{2/3},
from which the Kolmogorov-Obukhov "5/3" spectrum [27] can be obtained via Fourier transform. If one makes the assumption of incomplete similarity in r/η_K and no similarity in Re, as in the case of wall-bounded flow, the result is
D_LL = C(⟨ε⟩r)^{2/3}(r/η_K)^{α},
where C, α are functions of Re only. As before, expand C and α in powers of 1/ln Re and keep the two leading terms; this yields
D_LL = (A0 + A1/ln Re)(⟨ε⟩r)^{2/3}(r/η_K)^{α1/ln Re}    (3.9)
(α0 has been set equal to zero so that D_LL has a finite limit as ν → 0).
In real measurements for finite but accessibly large Re, α is small in comparison with 2/3, and the deviation in the power of r in (3.9) could be unnoticeable. On the other hand, the variations in the "Kolmogorov constant" have been repeatedly noticed (see [25,29,31]). Complete similarity is possible only if A0 ≠ 0, when one has a well-defined turbulent state with a 2/3 law in the limit of vanishing viscosity, and finite Re effects can
be obtained by expansion about that limiting state. In the limit of vanishing viscosity
there are no corrections to the "K-41" scaling if equation (3.9) holds; this conclusion was
reached in [15] by the statistical mechanics argument summarized in section 4 below.
Kolmogorov [21] proposed similarity relations also for the higher order structure functions D_{LL...L}(r) = ⟨(δu_L)^p⟩, with the index L repeated p times; the scaling gives D_{LL...L} ∝ (⟨ε⟩r)^{p/3}. Experiments in a small wind-tunnel by Benzi et al. [13] show some self-similarity in these higher-order functions, obviously incomplete, so that D_{LL...L} is proportional to r^{ζ_p}, with exponents ζ_p always smaller than p/3 for p ≥ 3, so that the measured ζ_5 falls below 5/3 ≈ 1.67, ζ_7 below 7/3 ≈ 2.33, and ζ_8 below 8/3 ≈ 2.67. It is tempting to try for an explanation of the same kind as for p = 2:
ζ_p = p/3 + α_p/ln Re + · · · ,
in other words, to assume that at Re = ∞ the classic "K41" theory is valid, but the experiments are performed at Reynolds numbers too small to reveal the approach to complete similarity. If this explanation were correct, the coefficients α_p would be negative starting with p = 4, where there would be a reversal in the effect of the Kolmogorov scale (or whatever scale is used to scale the first argument in Φ).
As is well-known, for p = 3 the Kolmogorov scaling is valid with no corrections. For larger p one must proceed with caution. We would like to present a simple argument that casts doubt on the good behavior of the structure functions for p ≥ 4 in the vanishing-viscosity limit. As Re → ∞ the intense vorticity and the large velocities in the fluid become concentrated in an ever smaller volume [15]. This is what we call "intermittency". If γ is the fraction of the volume of a unit mass of fluid where the kinetic energy ½u² is large, then ⟨u⁴⟩ is of order ⟨u²⟩²/γ; one can see that fourth moments such as ⟨u⁴⟩ diverge as γ → 0. This casts a strong doubt on the good behavior of the fourth-order structure functions as the viscosity tends to zero; in the absence of such good behavior our expansions in powers of 1/ln Re cannot be justified and the explanation of the experimental data must proceed along different lines. Note that p = 4 is the power where the sign of the power of r in an expansion in powers of 1/ln Re would change.
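One way to make the divergence estimate explicit, under the stated assumption that the kinetic energy per unit mass stays bounded while the active volume fraction γ(Re) shrinks, is the following back-of-the-envelope calculation (our notation):

```latex
% Sketch of the intermittency estimate: gamma(Re) is the active volume
% fraction and E the bounded mean kinetic energy per unit mass.
\[
\langle u^2\rangle \;\sim\; \gamma\,U^2 \;=\; 2E
\quad\Longrightarrow\quad
U^2 \;\sim\; \frac{2E}{\gamma},
\]
\[
\langle u^4\rangle \;\sim\; \gamma\,U^4 \;\sim\; \frac{(2E)^2}{\gamma}
\;\longrightarrow\; \infty
\qquad\text{as } \gamma \to 0 ,
\]
so the fourth (and higher) moments blow up even though the energy itself
remains finite.
```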
The analysis just given of the second-order structure function contradicts the conclusions of Benzi et al. [13], according to whom the asymptotic exponent in (3.9) is independent of Re and different from 2/3. We wish to point out, however, that, as we understand the discussion in [13], the exponent was found to be different from 2/3 only once it was assumed that it was not dependent on Re; to the contrary, even a cursory view of Fig. 3 in ref. [13] shows a marked dependence on Re. We are looking forward to an opportunity to reexamine these data in the light of our hypotheses.
There is a key difference between a derivation of the Kolmogorov-Obukhov exponent
from an assumption of complete similarity and its derivation as the vanishing-viscosity limit
of an expression derived from an assumption of incomplete similarity. Complete similarity
typically holds in statistical problems that are well-described by mean-field theories, while
incomplete similarity typically applies to problems where fluctuations are significant. This
remark is consistent with our conclusion, presented in [7,10], that the Kolmogorov scaling
already allows for intermittency, and that its application to higher-order structure functions
is limited by this very intermittency.
4. A near-equilibrium theory of turbulence
At large Reynolds numbers Re the solutions of the Navier-Stokes equations are chaotic,
and the slightest perturbation alters them greatly. The proper object of a theory of turbulence
is the study of ensembles of solutions, i.e.,of collections of solutions with probability
distributions that describe the frequency of their occurrence. We now outline a near-equilibrium
theory of ensembles of flows on those small scales where the scaling theory of
the previous section applies. This theory justifies the use of vanishing-viscosity asymptotics
for appropriate moments of the velocity field and of its derivatives and supports the
conclusions of the previous section regarding the behavior of the higher-order moments
and structure functions. It is equivalent to earlier near-equilibrium theories [15], but the
specific approach and the presentation are new; fuller detail can be found in [8].
We describe turbulence in terms of a suitable statistical equilibrium. In statistical
mechanics, statistical equilibrium is what one finds in an isolated system if one waits long
enough. One way of characterizing this equilibrium is by assuming that all states of the
system compatible with the system's given energy can occur with equal probabilities; this
is the "microcanonical ensemble". In turbulence the appropriate energy is the kinetic
energy of the flow. An equivalent characterization is in terms of the "canonical ensemble", in which the probability of a state is proportional to exp(−βH), where H is the energy of the state and β is a parameter. In the canonical ensemble the energy is not fixed, and one can view the ensemble as describing a portion of an isolated system at equilibrium as it interacts with the rest of the system. The two ensembles are equivalent in the sense that, with a proper choice of the parameter β and in a system with enough degrees of freedom, averages calculated in either ensemble are close to each other.
The parameter β is generally called the "inverse temperature" of the system. In many physical systems 1/β is indeed proportional to what one intuitively perceives to be the temperature, as it can be gauged by touching a system with one's finger. However, the parameter β can be viewed more abstractly, as the parameter that makes the two ensembles equivalent; in incompressible turbulence, in which there is no interaction between the macroscopic flow and the microscopic motion of the molecules of the fluid, the β that one obtains cannot be gauged by the sense of touch. In a given system, β is a function of the energy E and of whatever other variables are needed to describe the system. Note that in other realms of physics, for example in the kinetic theory of gases, one is well used to relating temperature to a suitable kinetic energy.
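For readers who want the standard bookkeeping behind this statement (a textbook statistical-mechanics reminder, not specific to the turbulence construction that follows), β is fixed by matching the canonical mean energy to the prescribed microcanonical energy:

```latex
% Standard relation fixing beta in terms of the energy; Z is the canonical
% partition function over the discrete states of the system.
\[
P(\text{state}) \;=\; \frac{e^{-\beta H(\text{state})}}{Z(\beta)},
\qquad
Z(\beta) \;=\; \sum_{\text{states}} e^{-\beta H},
\qquad
\langle H\rangle_\beta \;=\; -\,\frac{\partial \ln Z}{\partial \beta}.
\]
Choosing $\beta$ so that $\langle H\rangle_\beta = E$ makes the canonical and
microcanonical averages agree for systems with many degrees of freedom; this is
the sense in which $\beta$ is a function of the energy $E$.
```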
Turbulence as a whole is generally not an equilibrium phenomenon: For example, if
one stirs a box full of fluid and then isolates the resulting flow, the outcome after a long
time is not turbulence in a statistical equilibrium but a state of rest; an isolated turbulent
flow is one without outside forces or an imposed shear to keep it flowing. However, on the
small scales in which we are interested, the relevant question is whether the motion has
enough time to settle to an approximation of a statistical equilibrium in which one can
assume that all the states with a given kinetic energy are equally likely to appear. The
small scales have enough time when their characteristic time (length/velocity) is short
enough compared to the characteristic time of the large-scale motion. An inspection of
the Kolmogorov scaling given in the previous section shows that the characteristic time of
an eddy of size r is proportional to r^{2/3}, and small enough eddies (i.e., vortices) do have
enough time to settle down to an equilibrium distribution. The task at hand is to construct
the statistical equilibrium appropriate for turbulence, in particular specify its states. The
question of how then to perturb it so as to take into account the irreversible aspects of
turbulence has been treated elsewhere [16,17] and will not concern us here. Note that in
most of the turbulence literature one speaks of the small scales reaching "equilibrium"
when the energy distribution among them approximates the Kolmogorov-Obukhov form;
here, for the moment, we mean by "equilibrium" a statistical equilibrium, in which all
states have equal probability; we shall shortly claim that these two meanings are in fact
identical. This is of course possible only if at the statistical equilibrium there are more
states with much of the energy in the larger scales than states with much of the energy in
the smaller scales.
To agree with observation, a hydrodynamical statistical equilibrium must have a finite
energy density in physical space. To construct an equilibrium with this property, we start
as in the construction employed in ref. [24] for vorticity fields; for simplicity we describe
it in two space dimensions. Consider a unit box with periodic boundary conditions. Let u be the velocity field and ψ a stream function; divide the box into N² squares of side h, and in each square define a value ψ_ij of a discrete stream function, where the indices i, j describe the location of the square; then define a discrete velocity field u_ij by one-sided difference quotients of ψ, so that the velocity is divergence-free.
(In three dimensions there is one more index and the stream-function is replaced by a
vector potential). The parameter h is an artificial cut-off, and we now present a procedure
for letting this cut-off tend to zero while producing sensible fluid mechanics in the limit.
Replace the energy (1/2)∫ u² dxdy over the unit square by its discrete counterpart E_h = (1/2)h² Σ_ij |u_ij|². For a fixed value of h, pick a value of E_h, and, as a first step, assume that the values of u are equidistributed among all states with this energy, i.e., use a microcanonical ensemble. One can check that on the average each one of the boxes has the same energy (h²/2)⟨|u_ij|²⟩ = E_h/N². One may think that if one lets h → 0 while keeping E_h constant, the limit is an ensemble
with a finite energy per unit volume; this is not so. The sequence of ensembles one obtains has no reasonable limit: as h → 0 the number of degrees of freedom tends to infinity, and there is no sensible way to divide a finite energy equally among an infinite number of degrees of freedom; indeed, if the energy per degree of freedom is zero the limiting ensemble has zero energy and no motion, and if the energy per degree of freedom is positive the limiting ensemble has an infinite energy (for a more thorough mathematical discussion, see [6,8]). One can also see that the limit of these microcanonical ensembles is meaningless by considering the corresponding canonical ensembles: One can check that as h → 0 the parameter β in the sequence of canonical ensembles tends to infinity; one can show that the only ensembles with infinite β have either no energy or an infinite energy.
To find a way out of this dilemma one must modify these ensembles as h → 0 so as to ensure that the limit exists. We do so by looking at what happens to the parameter β and keeping it bounded; furthermore, we do so on the computer. This is the key point: To obtain a sensible continuum limit, we keep β bounded by keeping the energy from becoming
equally distributed among the degrees of freedom, and this produces an average energy
distribution among scales that agrees with the Kolmogorov-Obukhov law and produces the
Kolmogorov scaling of the low-order structure functions. One can also show (see [8]) that
this very same procedure is needed to produce ensembles whose members, the individual
velocity fields, do not violate what is known about the solutions of the Navier-Stokes or
Euler equations.
To proceed, we have to be able to calculate β given h and E_h. Averages with respect to microcanonical ensembles can be calculated numerically by an algorithm known as "microcanonical sampling" [18]: Introduce an additional variable, a "demon", which interacts
with all the degrees of freedom in some random order. In each interaction, the demon
either absorbs an energy packet of some predetermined magnitude s, s ≪ E_h, or gives away an energy packet of the same size. If the demon takes in an energy s, it reduces the energy in the velocity field by modifying ψ_ij so that the energy integral (1/2)∫ u² dxdy over the unit square is reduced by s; the effect of this reduction modifies the values of u in the neighborhood of the point (i, j). If the demon gives out energy, it modifies ψ so as to
increase the energy integral. The demon is constrained so that it cannot give out energy
unless it had acquired energy in its previous history; no "loans" are allowed. The sequence
of states wrought by the demon's actions ranges over even-handedly the configurations of
the system. If one wishes to conserve an additional quantity, as we shall below, one can
do so by allowing the demon to exchange doses of the conserved quantity as it wanders
along, subject to the condition that it never give out what it does not have.
The parameter β in the equivalent canonical ensemble can be determined in the course of calculating averages: As the demon interacts with the ensemble it typically has some energy stored away; the system consisting of the physical system plus the demon is isolated, and by the equivalence of the canonical and microcanonical ensembles, the probability of an energy E_d being stored by the demon is canonical, i.e., proportional to exp(−βE_d); this observation allows one to estimate β after the demon has had a sufficient number of interactions. In addition to its dynamical role in moving the system from state to state for the purpose of calculating averages, the demon reveals the value of β; if h and E_h are given, there is a well-defined numerical procedure for finding the corresponding β.
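The demon procedure is easy to prototype. The following Python sketch is a schematic caricature only: it replaces the stream-function construction described above by direct exchanges of energy packets with randomly chosen grid values, and all names, sizes and parameter values are illustrative assumptions rather than the actual computation of [8,16]. It does, however, show how β is read off from the demon: for small s the demon's energy is essentially exponentially distributed, so its mean estimates 1/β.

```python
import numpy as np

rng = np.random.default_rng(0)

# Schematic Creutz-style demon ("microcanonical sampling"): the state is a
# discrete velocity field u on an N x N grid with energy E_h = 0.5*h^2*sum(u^2);
# the demon trades fixed packets of size s with randomly chosen sites and is
# never allowed to go into debt ("no loans").
N, s, sweeps = 32, 0.01, 200
h = 1.0 / N
u = rng.normal(size=(N, N))
u *= np.sqrt(1.0 / (0.5 * h**2 * np.sum(u**2)))   # normalize so that E_h = 1
demon, demon_history = 0.0, []

def site_energy(val):
    return 0.5 * h**2 * val**2

for _ in range(sweeps * N * N):
    i, j = rng.integers(N), rng.integers(N)
    old = site_energy(u[i, j])
    if rng.random() < 0.5:                 # demon tries to give a packet away
        if demon < s:
            continue                       # "no loans": it must own the packet
        new = old + s
    else:                                  # demon tries to absorb a packet
        new = old - s
        if new < 0.0:
            continue                       # the site cannot supply that much
    sign = 1.0 if u[i, j] >= 0.0 else -1.0
    u[i, j] = sign * np.sqrt(2.0 * new / h**2)
    demon += old - new                     # field + demon energy is conserved
    demon_history.append(demon)

# P(E_d) ~ exp(-beta*E_d), so for small s the mean demon energy estimates 1/beta.
print("estimated beta:", 1.0 / np.mean(demon_history))
```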
Rather than keep β merely bounded, we keep it constant. To do this, one needs a variable that can be altered and whose variation controls β. Experience and mathematics show that one can use as control variable the integral I = ∫ |ξ| dxdy, where ξ is the vorticity calculated by finite differences and the integral is approximated by the appropriate sum. Thus the plan is to determine an I for each h so as to keep β at a fixed value β_goal common to all the h. For simplicity and without loss of generality we set E_h = 1.
For a given h, pick a starting guess for I, say I_0, and then produce a sequence of better values I_n by an update formula of the form
I_{n+1} = I_n + K(β − β_goal),
where β without a subscript is the latest estimate of β available from the demon and K is a numerical parameter (of appropriate sign and magnitude) chosen so as to ensure that the I_n converge to a limit. Before
calculating a new value I_{n+1} of I the demon must be allowed at least one energy exchange with the ensemble, during which the variable I is maintained at its last value I_n. Once β reaches the desired value β_goal the quantity I remains constant. The resulting ensemble gives non-zero, equal probabilities to all states compatible with both the given value of E_h and the calculated value of I; when both constraints are satisfied, the energy per degree of freedom is no longer the same for all the degrees of freedom.
The changes in I needed to keep β fixed as h is changing are displayed in Figure 4 for several values of β_goal. The statistical error is throughout of the order of 2%. The values of I needed to keep β fixed increase with N. As shown in [8, 16], for small enough h the curves are independent of the value of β, and this fact is reflected in the confluence of the several curves in Figure 4. Note that I is calculated on the grid
by taking differences of the values of the velocity u at points separated by h, which by
the Kolmogorov-Obukhov scaling should be proportional to h^{1/3}; then one divides by h, and takes an average; one expects I to grow with N like N^{2/3}. In Figure 5 we plot the logarithm of I vs. the logarithm of N; the relation is well approximated by a straight line whose slope is 0.65 with an error of ±0.05. Within the limitations of the Monte-Carlo
sampling, the Kolmogorov-Obukhov scaling is seen to be applicable in this equilibrium
model. As the Kolmogorov-Obukhov scaling applies to the low-order structure functions in a flow with a finite but small viscosity, Figure 5 shows that the low-order structure functions have a limit as the viscosity tends to zero. One can perform a similar analysis of the small-scale structure of flow near a wall and conclude that the first-order moments of the derivatives of the velocity field near walls have well-behaved limits, a fact used above in the discussion of the scaling of the wall-region in the pipe. Figure 4 defines the limiting process in which h → 0 with a limit that provides a meaningful equilibrium ensemble for
the small scales of the flow.
The fact that the construction above is numerical enhances its value rather than detracts
from it, as we expect to use similar constructions in the numerical modeling of turbulence.
An elegant argument, suggested by the work of Kailasnath et al. [19] and presented in detail
in [8], shows that the third-order structure function is calculated in the equilibrium theory
exactly; of greater interest here is what happens to moments of the velocity field of order
four and more. We have argued in the preceding section that the vanishing viscosity limit
is not well-behaved for these higher moments, and thus the good behavior of the structure
functions is unlikely and the expansion in powers of 1/ln Re is invalid. In the equilibrium
theory the fourth-order moments fail to converge to a finite limit as h → 0. In Figure 6 we display the fourth-order moment ∫ |u|⁴ dxdy as a function of N at the parameter value β_goal = 5. Higher-order moments diverge even faster. The divergence of the
higher moments corresponds to the formation of concentrated vortical structures, like the
ones explicitly constructed in [16]. We have thus produced Kolmogorov scaling for the
low-order moments in a system that is highly intermittent in the sense that the vorticity
is concentrated on a small fraction of the available volume. The results of the equilibrium
theory are therefore consistent with the scaling analysis of the previous section, according
to which the Kolmogorov scaling of the second-order structure function is exact in the
limit of vanishing viscosity not despite intermittency but because of intermittency, while
its failure for the higher-order moments can be ascribed to the absence of a well-behaved
vanishing-viscosity limit, as a result of which the expansion in the inverse powers of ln Re
is not legitimate.
Note the small number of assumptions made in the equilibrium theory; all that was
assumed was that the fluid was near statistical equilibrium on the small scales, the fluid
was incompressible, the energy density in physical space was finite, and a probability
measure on the ensemble of flows was well-defined. The Navier-Stokes equations did not
enter the argument in the present paper (but see ref. [8]).
Finally, it is worth noting that an analysis of simplified near-equilibrium vortex models
[22,33] has provided an example where an expansion in powers of a parameter analogous to 1/ln Re can be fully justified without recourse to experimental data.
5. Conclusions
We have reached the following conclusions:
(1) The von Kármán-Prandtl law of the wall must be jettisoned, and replaced by a power law with a Reynolds-number-dependent coefficient and exponent, as suggested by an assumption of incomplete similarity.
(2) The Kolmogorov-Obukhov scaling of low-order structure functions in the local
structure of turbulence admits only viscosity-dependent corrections, which vanish
as the Reynolds number tends to infinity. There are no "intermittency corrections"
to this scaling in the limit of vanishing viscosity. The Kolmogorov scaling of the
higher-order structure functions fails because of intermittency.
(3) These conclusions are consistent with and support the near-equilibrium theory of
turbulence.
Acknowledgement
Prof. Barenblatt and I would like to thank the following persons for
helpful discussions and comments and/or for permission to use their data: Prof. N. Gold-
enfeld, Prof. O. Hald, Dr. M. Hites, Prof. F. Hussain, Dr. A. Kast, Dr. R. Kupferman,
Prof. H. Nagib, Prof. C. Wark and Dr. M. Zagarola.
--R
Cambridge University Press
Reynolds number) in the developed turbulent flow in pipes
Basic hypotheses and analysis
of local structure and for the wall region of wall-bounded turbulence
turbulence theory
asymptotics and intermittency
in preparation
Discussion of experimental data
Sciences USA
the zero-pressure-gradient turbulent boundary layer
Part 2.
Physica D 80
Applications to Physical Systems
law and odd moments of the velocity difference in turbulence
USSR
Series 4
exponents at very high Reynolds numbers
for the two-dimensional XY model
Reynolds number
Figure 1.
Figure 2: Schematic of the power law curves: the individual curves of the power law, the envelope of the family of power law curves (often mistaken for a logarithmic law), and the asymptotic slope of the power law curves.
Figure 3: Separation of the experimental data according to their Reynolds numbers and the rise of the curves above their envelope in the (ln η, φ) plane at the center of the pipe; these features are incompatible with the von Kármán-Prandtl universal logarithmic law.
Figure 4.
Figure 5: Logarithm of I vs. logarithm of N; the slope is close to the value given by the Kolmogorov-Obukhov scaling.
Figure 6.
--TR | wall-bounded turbulence;local structure;intermittency;statistical theory;turbulence;scaling |
293148 | Boundary representation deformation in parametric solid modeling. | One of the major unsolved problems in parametric solid modeling is a robust update (regeneration) of the solid's boundary representation, given a specified change in the solid's parameter values. The fundamental difficulty lies in determining the mapping between boundary representations for solids in the same parametric family. Several heuristic approaches have been proposed for dealing with this problem, but the formal properties of such mappings are not well understood. We propose a formal definition for boundary representation. (BR-)deformation for solids in the same parametric family, based on the assumption of continuity: small changes in solid parameter values should result in small changes in the solid's boundary reprentation, which may include local collapses of cells in the boundary representation. The necessary conditions that must be satisfied by any BR-deforming mappings between boundary representations are powerful enough to identify invalid updates in many (but not all) practical situations, and the algorithms to check them are simple. Our formulation provides a formal criterion for the recently proposed heuristic approaches to persistent naming, and explains the difficulties in devising sufficient tests for BR-deformation encountered in practice. Finally our methods are also applicable to more general cellular models of pointsets and should be useful in developing universal standards in parametric modeling. | consumer applications, probably has to do more with development and successful marketing of new parametric
("feature-based" and "constraint-based") user interfaces than with the mathematical soundness of solid
modeling systems. These parametric interfaces allow the user to define and modify solid models in terms of
high-level parametric definitions that are constructed to have intuitive and appealing meaning to the user
and/or application (see [16] for examples and references). The success of parametric solid modeling came at
a high price: the new solid modeling systems no longer guarantee that the parametric models are valid or
unambiguous, and the results of modeling operations are not always predictable.
Unpredictable behavior of the new systems is well documented in the literature [11, 13, 15, 34], and we
will consider some simple illustrative examples in the next section. The basic technical problem is that a
parametric solid model corresponds to a class of solids, but there is no formal definition or standard for what
Complete address: 1513 University Avenue, Madison, Wisconsin 53706, USA. Email: [email protected],
Figure 1: The role of persistent naming for a parametric family of solids.
this class is [34]. As a result, different modeling systems employ incompatible, ad hoc, and often internally
inconsistent semantics for processing parametric models.
The lack of formal semantics manifests itself clearly through the so called "persistent naming" problem
[15, 34]. In modern systems, every edit of a parametric definition is followed by a regeneration (boundary
evaluation) of the boundary representation (b-rep) for the resulting solid. Since parametric definitions and
constraints explicitly refer to entities (faces, edges, vertices) in the boundary representation, every such
regeneration must also establish the correspondence between the pre-edit and post-edit boundary represen-
tations. The correspondence must persist over all valid edits, hence the term "persistent naming". But since
the semantics of a parametric family is not well-defined, neither is the required correspondence. All systems
seek to establish the correspondence that is consistent with the user's intuition, but no reliable methods for
achieving this goal are known today. The problem is illustrated in Figure 1. Every two instances of a parametric definition are related through a parametric edit of one or more parameters t; each instance has its
own b-rep(t), and the problem of persistent naming is to construct the map g between the two corresponding
b-reps.
Recently, several schemes for persistent naming have been proposed [10, 18, 25, 31] that appear to work
well in a variety of situations. But the proposed techniques are not always consistent between themselves
and none come with guarantees or with clearly stated limitations; they may be able to alleviate the observed
problems in today's systems, but not solve them - because the problems have not even been formulated.
1.2 Main contribution
The diagram in Figure 1 places no constraints on properties of the naming map g. A commonly cited
obstacle to such formal properties stems from the belief that in the most general case parametric edits do
not seem to preserve the basic structure of the boundary representation. For example, parametric edits
may lead to collapsed and expanded entities, (dis)appearance of holes, etc. Yet the naming mechanisms
proposed in [10, 18] clearly rely on the generic topological structure of the boundary representation as a
primary naming mechanism. Certainly we would expect that the naming problem should have a unique
solution when the parametric changes are small and the resulting boundary representation does not change
very much. Accordingly, we propose that any notion of a parametric family should be based on the following
principle of continuity :
ffl Throughout every valid parameter range, small changes in a solid's parameter values result in small
changes in the solid's representation.
This statement will be made precise in Section 2 of the paper. We argue that this essential requirement of continuity
eliminates much of the unpredictable and erratic behavior of modern systems as described in [15, 34].
In this paper, we use this principle of continuity to develop a formal definition for Boundary Representation
(BR-)deformation for solids in the same parametric family. Importantly, "small changes" may include limited
local collapsing of entities in boundary representations; in this case the resulting BR-deformation can actually
change the topology of the boundary representation. A special case of the topological BR-deformation
is a geometric BR-deformation that may modify the shape of the boundary cells, but the topology and the
combinatorial structure of the b-rep remains unchanged.
Our formulation does not solve the difficult problem of constructing the naming map g in Figure 1, but it
leads to necessary conditions that must be satisfied by any naming map g between boundary representations
in the same parametric family. Loosely, the oriented boundary of each cell in the pre-edit b-rep must be
mapped into the closure of the corresponding cell in the post-edit b-rep with the same orientation. These
conditions are powerful enough to identify invalid updates in many (but not all) practical situations, and
the algorithms to check them are simple. In particular, the identified conditions give formal justification to
the naming mechanisms proposed in [10, 18], but also identify techniques that are not acceptable, because
they do not satisfy the assumption of continuity. Finally, our definitions explain the difficulties of devising
sufficient conditions for BR-deformation and the associated naming map g.
A definition for BR-deformation is a prerequisite for extending informational completeness to parametric
solid modeling and for creating an industry-wide standard for data exchange of parametric models. To
our knowledge, there are no competing proposals. A notion of variational class is prominent in mechanical
tolerancing and robustness [29, 36]. Notably, Stewart proposed a definition of variational class in [36], which
was later used in study of polyhedral perturbations that preserve topological form in [3]. The assumptions of
continuity and continuous deformation are also mentioned informally in the naming techniques and discussion
of [11]. Our definitions are consistent with the earlier definitions, but are stronger and stated in algebraic
topological terms, which are more appropriate for dealing with boundary representations and the mappings
between them.
The proposed notion of parametric family and the principle of continuity are stated in terms of a particular
representation scheme (boundary representation in our case). But our formulation and results are applicable
to more general cellular representation of pointsets. In Section 5 we discuss briefly how our approach to
parametric modeling may be extended to deal with arbitrary parametric edits and other representations.
Applying similar principles of continuity to other representations would lead to different definitions of parametric
families, e.g. those based on small changes in CSG representations [30, 31, 34]. We do not consider
such families in this paper because most extant commercial systems are based on b-reps.
1.3 Examples
The following examples illustrate some of the erratic and inconsistent behavior that occurs during parametric
updates of a solid model. These examples were created in several versions of two leading commercial solid
modeling systems. 1
The simple solid S shown in Figure 2(a) is sufficient to illustrate many of the relevant issues. The shown
boundary representation was produced from a parametric definition of S. The parametric definition uses
a number of parameters, including parameters t and d that constrain the locations of features (slot and
hole) with respect to the solid's edges, as shown in the figure. The value of t determines the location of edge e1, which in turn determines the location of the hole constrained by parameter d. Consider what happens during a parametric edit of t. Every time t is changed, the system attempts to regenerate the boundary representation of the solid and establish which of the edges should be called e1. The result of this
computation is easy to determine by simply observing what happens to the hole constrained by d, since it
explicitly references edge e 1
If the change in t is small enough not to produce any changes in the combinatorial structure of the
boundary representation, we would expect to see no abrupt changes resulting from the edit. In other words,
the hole and the constraint d should move continuously with changes in the value of t. Instead, Figure 2(b)
illustrates an example of situations that have been observed in earlier versions of some systems. In some cases the hole jumps onto the adjacent face f' and in other situations the hole surprisingly jumps to the other side of the edge e' onto a different face. In either case, the observed behavior would indicate that e1 was mapped to e'; in the latter case the placement of the hole was also affected by the orientation of the mapped edge e' (which is opposite to that of e1 with respect to its adjacent face). In both cases the assumption of continuity is violated,
even though the correct continuous mapping clearly exists. We will formally characterize such a mapping in
section 2.3 as geometrically deforming. Similar problems have been identified and illustrated with "jumping
blends" by Hoffmann [10]. The naming techniques proposed in [10, 11, 18], as well as the latest versions of
1 We do not identify the specific systems, because similar technical problems are easily identified in all parametric modeling
systems that rely on b-rep for internal representation of solids.
Figure 2: Parametric edits of a simple solid: (a) parameter t controls the b-rep of the solid; (b) erratic behaviour observed in earlier systems; (c) continuous change in t collapses face f1; (d) b-rep is not in the parametric family of (a).
many commercial systems, are now able to establish the appropriate mapping in this simple case, but not
in many more general situations.
Figure
2(c) shows another parametric update of the same solid, where the value of t is equal to the radius
of the cylindrical slot, and the edge e 1 becomes coincident with e 3 . Few (if any) of the latest commercial
systems can handle such an update in a consistent and uniform fashion. A typical response in one system is
shown in Figure 2(d), which indicates again that e 1
was mapped to e 0; another system simply signaled an
error indicating that it could not locate the proper edge (the naming map could not be found). When the
slot was constructed as a "hole," the latter system signals no error and simply maps e 1
to e 0, with the smaller
hole reappearing on the other side of e 0(outside the solid). Yet another system might delete the constraint
(and the hole) with an appropriate warning message. Again, intuitively it is clear that a continuous change in t should lead to the collapse of face f1 and identification of edges e1 and e3 to e'. We will characterize such collapsing maps in section 2.3 as topologically deforming.
These simple examples clearly show that the naming map is usually developed in ad hoc fashion, is
dependent on the way the solid is created, and may be internally inconsistent across different features
and representation schemes. The matching methods proposed in [10, 18] will not always properly handle
collapsing maps, although it appears that the techniques in [10] could be further adapted.
One may also start with the solid shown in Figure 2(d), i.e., create the hole and constrain it with respect to e', and then increase the value of parameter t. One would then expect the face f' and the hole to gradually move to the right; we will call the corresponding mapping between pre-edit and post-edit boundary representations expanding. Unfortunately, a more typical response from a system is the one shown in Figure
2(a), and the result will certainly differ from version to version and from system to system.
Figure 3: Discontinuous update allowed by the parametric modeling system: (a) original solid's b-rep K; (b) modified solid's b-rep L.
Our final example is shown in Figure 3. Parameter t controls the location of the cylindrical feature that
is unioned to the cubical body, and one of the intersection edges is blended. 2 The change in the value of t
moves the cylinder from one side of the block to the other. The system matches the blended edge e1 with the edge e' in the post-edit b-rep as shown. Is there a way to determine if this update is correct? Notice that there exists a continuous map between pre-edit and post-edit boundary representations: it is a simple rigid body motion that flips S upside-down. But this map is not consistent with the parametric change in t, which requires that face f1 is mapped into face f1'. While some of the proposed naming techniques will
allow such edits, we will see in section 3.4 that no change continuous in t could produce such a boundary
representation.
All boundary representations are cell complexes, and their formal properties, including validity, are rooted
in algebraic topology [14, 27]. Algebraic topology is also the proper setting for formulating BR-deformation.
Below we use a number of standard concepts (complexes, chains, continuous maps, etc.) that can be found
in many textbooks [1, 5, 17, 22]. For brevity, we only state few critical definitions and results that are
necessary for formulating and understanding the concept of BR-deformation.
Blends provide a convenient way to track the names of the edges [10]. Throughout the paper we will ignore the geometric
changes in b-rep resulting from blended edges.
2.1 Boundary representation as a cell complex
The boundary of a 3-dimensional solid is an orientable, homogeneously 2-dimensional cell complex. Internally,
every b-rep consists of an abstract (unembedded) cell complex, that captures all combinatorial relationships
between cells (vertices, edges, and faces), and geometric information that specifies the embedding for every
cell in E 3 . We do not deal with specific data structures in this paper, and therefore it is not necessary to
distinguish between the abstract cell complex and its embedding. Thus, we will assume that the boundary
of a solid S is represented by an embedded complex K (the b-rep of S).
Many different cell complexes have been used for boundary representations, including simplicial, polyhe-
dral, and CW (with cells homeomorphic to disks). Our discussion, definitions, and results are not affected
by any particular choice of a cell complex. As the example in Figure 2 illustrates, b-reps of many commercial
systems may contain more general cells, such as faces with holes, circular edges, and so on; such cell
complexes are also natural from the user's point of view. The corresponding formal definition, which we also
adopt in this work, is given by Rossignac and O'Connor in [23]: a Selective Geometric Complex (SGC) is
composed of cells that are relatively open connected submanifolds of various dimension and are assembled
together to satisfy the usual conditions of a cell complex:
1. The cells in K are all disjoint; and
2. The relative³ boundary of each p-cell σ is a finite union of cells in K.
Both SGCs and our definitions are valid in n dimensions, but for the purposes of solid modeling, we will assume that each p-cell σ of K is a p-dimensional subset of E³ with 0 ≤ p ≤ 3. The union of all embedded cells is the underlying space |K| of the complex K. In particular, the set of points b(S) of every solid S is the underlying space of some boundary representation K.
The boundary of a homogeneously p-dimensional complex K can also be characterized algebraically, as an algebraic topological chain ∂C(K). The precise difference and the significance of this distinction will become apparent in section 3.
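For readers who prefer a computational view, the combinatorial part of these conditions is easy to prototype. The following Python sketch is an illustration only (the class and field names are ours, not those of any particular modeler, and the geometric embedding is deliberately omitted): it stores each cell's dimension and relative boundary, checks that every boundary cell belongs to the complex and has strictly lower dimension, and computes cell closures, which are needed for the cell-map conditions of section 2.2.

```python
# Illustrative combinatorial skeleton of an SGC-style boundary representation.
# Geometry (the embedding of cells in E^3) is omitted; only the dimension of
# each cell and its relative boundary are stored.

class CellComplex:
    def __init__(self):
        self.dim = {}        # cell name -> dimension (0, 1, or 2 for a b-rep)
        self.bnd = {}        # cell name -> set of cells in its relative boundary

    def add_cell(self, name, dim, boundary=()):
        self.dim[name] = dim
        self.bnd[name] = set(boundary)

    def is_valid(self):
        """Combinatorial part of the cell-complex conditions: every boundary
        cell is in the complex and has strictly lower dimension."""
        return all(
            b in self.dim and self.dim[b] < self.dim[c]
            for c, cells in self.bnd.items() for b in cells
        )

    def closure(self, cell):
        """The cell together with all cells reachable through boundaries."""
        result, stack = {cell}, [cell]
        while stack:
            for b in self.bnd[stack.pop()]:
                if b not in result:
                    result.add(b)
                    stack.append(b)
        return result

# A tiny example: a single triangular face f with edges e1, e2, e3 and
# vertices v1, v2, v3 (the 2D complex of Figure 6, up to renaming).
K = CellComplex()
for v in ("v1", "v2", "v3"):
    K.add_cell(v, 0)
K.add_cell("e1", 1, ("v1", "v2"))
K.add_cell("e2", 1, ("v2", "v3"))
K.add_cell("e3", 1, ("v1", "v3"))
K.add_cell("f", 2, ("e1", "e2", "e3", "v1", "v2", "v3"))
assert K.is_valid()
assert K.closure("e1") == {"e1", "v1", "v2"}
```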
2.2 Cell maps and cell homeomorphisms
The cellular structure of b-reps implies the proper mechanism for defining the notion of a variational class:
we will consider the effect of a parametric change on the b-rep cell by cell. Informally, if K is the pre-edit b-rep(t0) and L is the post-edit b-rep(t1), then the naming map g seeks to establish a correspondence between cells in K and cells in L. So for example, when t1 = t0, we have K = L and it is natural that g should be a cell-by-cell identity mapping. When the difference t1 − t0 is so small that none of the cells collapse, the correspondence between the cells does not change, even though some cells may continuously change their shapes. For example, in Figure 2(a), small changes in t correspond to a small movement of the hole on face f2 and therefore to a change in the shape of f2 in the post-edit boundary representation. As the difference t1 − t0 grows larger, so does the difference between the shapes of the corresponding cells; and eventually some
cells may actually continuously collapse into some other cells. This is exactly the case in Figure 2(c) where
face f 1 and all of its bounding edges collapsed into the edge e 3 and its bounding vertices.
In other words, the cells in the post-edit b-rep L are images of continuous variations of the corresponding
cells in the pre-edit b-rep K. This cell-by-cell continuous variation is captured by the notion of a continuous
cell map (modified from [17]).
Definition 1 (Cell map) Let K and L be two b-reps. A map g : K → L is called a continuous cell map from K to L if:
1. for each cell σ ∈ K, g(σ) is a cell in L;
2. dimension(g(σ)) ≤ dimension(σ);
3. for every σ ∈ K, g restricted to the closure of σ is continuous.
3 The relative boundary of a cell σ is defined as the set difference between the closure of σ and σ itself [23].
The first two conditions capture the notion of a cell-by-cell map. The third condition implies that g is a continuous map on every cell and that it can also be extended to a continuous map on the whole boundary of the solid [1, 22, 24]. By definition, continuity of g requires that g(closure(σ)) ⊆ closure(g(σ)) for every cell σ [21]. But since K and L are finite, and |K| and |L| are closed and bounded, the stronger condition
g(closure(σ)) = closure(g(σ))    (1)
can be shown to hold [22, p.215].
In keeping with the common practice in the topology literature, a cell map g is used to denote both a map between two b-reps and a map between their underlying spaces, depending on the context in which it is used. In the same spirit, we will at times abuse the notation and use σ to denote both the cell and its underlying space, again depending on the context. Thus, the cell map g plays a dual role: on the one hand, it acts as a "naming" map and establishes the correspondence between the cells of two boundary representations; on the other hand it guarantees the continuity of variation in the underlying space (the solid's boundary).
Before we go any further, it may be instructive to check whether naming maps can be constructed for
the examples of the previous section that are also cell maps. Consider the naming map g between the two
boundary representations of the solids in Figure 2(a) and 2(c) defined by:
g(f1) = e',  g(e1) = e',  g(e3) = e',
and so on, i.e. the rest of the cells in Figure 2(a) map to their implied images in Figure 2(c). It is easy to verify that g is a cell map, because it satisfies all three conditions of the definition given above: every cell in (a) is continuously mapped to a cell in L of the same or lower dimension (note that face f1 is mapped to edge e') and the continuity condition is satisfied for every cell. In particular, note that the boundary set of the collapsed face f1 is mapped into the closure of e'.
Now consider the mapping of the same boundary representation in (a) to the one shown in Figure 2(d), defined by g(e1) = e', with the rest of the cells in (a) mapped to their appropriate images in (d). It is easy to check that g is not a cell map; for example, the closure of face f1 contains the edge e1, while e' = g(e1) is not in the closure of g(f1).
Referring to Figure 3, consider the mapping g given by:
g(fi) = fi',  i = 1, ..., n,   g(ej) = ej',  j = 1, ..., m,
where n is the number of faces and m is the number of edges, respectively, in the boundary representation. This map g trivially satisfies all three required conditions and is therefore a cell map. However, there is also another cell map h between the two boundary representations. This cell map h maps the loop of edges around face f1 into the loop of edges around f2 and vice versa,
and corresponds to a rigid body rotation of 180 degrees about the shown x-axis. Thus, given two boundary
representations K and L there may be more than one valid cell map between them.
For sufficiently small parameter changes, we expect that none of the cells in the boundary representation
collapse and that there is also a one-to-one correspondence between cells in K and L. In this case, we can
strengthen the definition of a cell map:
Definition 2 (Cell homeomorphism) A bijective cell map g : K → L is called a cell homeomorphism between |K| and |L| [22].
Notice that when g is a cell homeomorphism, the second condition in the definition of the cell map can
also be strengthened: the dimension of g(oe) must be equal to that of oe for every cell in K. For example,
both cell maps g and h, constructed above for the transformation in Figure 3, are cell homeomorphisms.
2.3 Topological and geometric deformations
In order for boundary representation L to be in the parametric family of boundary representation K, there
must exist a (not necessarily unique) cell map g from K to L. Furthermore, the principle of continuity
postulated in section 1.2 implies that when two boundary representations are in the same parametric range,
one must be continuously deformable into the other, as illustrated in Figure 4. Unfortunately, the existence
of a cell map from K to L alone is not sufficient to assure that K is deformable into L.
Figure 4: Continuous deformation from K to L.
For example, even though the two boundary representations in Figure 3 are related by a cell homeomorphism
g, K cannot be deformed into L without leaving E 3 , because L is a reflection of K. We will see in
section 4 that such cell maps can be characterized and detected in a straightforward fashion. A more difficult
situation is illustrated in Figure 5. The boundary representation of two linked tori is related to the boundary
representation of the two unlinked tori by a cell homeomorphism; yet it is not possible to deform one into
another without leaving E 3 , because the continuity principle is violated for some intermediate values of the
shown parameter t. If K is a b-rep(t0) and L is a b-rep(t1), then the continuity principle requires that, as t changes from t0 to t1, K continuously deforms into L. These observations motivate the following definition.
Figure 5: K is homeomorphic to L, but K cannot be deformed into L without leaving E³. (a) Original cell complex K; (b) modified cell complex L.
Definition 3 (BR-deformation) Let K, L be two boundary representations. A map F : |K| × I → E³ is called a Boundary Representation deformation or BR-deformation of K to L if:
1. F(|K|, 0) = |K|;
2. F(|K|, 1) = |L|;
3. F(|K|, t) is a cell map that is also continuous in t ∈ [0, 1].
Without loss of generality, we use symbol I in the above definition to denote the closed interval [0; 1], which
corresponds to the normalized range of valid values for the parameter t. In standard algebraic topological
terms, BR-deformation F is a homotopy [1, 17] and can be also viewed as a continuous family of cell maps
F t , for every value t 2 I. In particular, F 0 is the identity cell map and F 1 is the cell (naming) map g from
K to L.
BR-deformation captures the spirit of the postulated requirement of continuity and appears to be the
weakest possible condition. The obvious BR-deformation can be found by examination in Figure 4 for the
parametric updates shown in Figure 2. Let us consider again the typical situation illustrated in Figure 1, with pre-edit b-rep(t0) denoted by K and post-edit b-rep(t1) denoted by L. Only the map g between the two boundary representations is known, and the problem is to determine whether an appropriate BR-deformation F exists or not. If a BR-deformation does exist, then we say that g is a BR-deforming map.
Definition 4 (BR-deforming map) A map g from boundary representation K to boundary representation L is called BR-deforming if g = F(·, 1) for some BR-deformation F.
If one accepts the principle of continuity and the implied definitions proposed above, then it may be
reasonable to require that a naming map g for solids in the same parametric family must be BR-deforming.
We constructed the appropriate BR-deformation in Figure 4 by examination, and it is easy to see that
no BR-deformation exists for the example in Figure 5. But our definition of BR-deformation involves
conditions not only on K and L but also on all the b-reps in the parametric range
which are typically
not known. Furthermore, such a BR-deformation need not be (and usually is not) unique. Is there a
general method for checking if a given map g is BR-deforming? For example, it may not be clear why no
appropriate BR-deformation exists for the parametric update shown in Figure 3. We will delay discussing
the sufficient conditions for existence of BR-deformation until section 5. In the next section, we discuss
necessary conditions for the existence of BR-deformation and show through examples how they can be used
in practice to distinguish b-reps which belong to the same parametric family from b-reps which do not.
Observe that boundary representations in Figures 4(a), (b) and (c) are related by a continuous family
F t of cell homeomorphisms. This corresponds to a special but common and important case of geometric
BR-deformation F . The combinatorial structure of the boundary representation K, including the dimension
of all cells, is preserved under geometric BR-deformation. On the other hand, the further deformation of the
solid in Figure 4(d) is not geometric BR-deformation, because one of the faces is continuously collapsed into
the adjacent edge and the combinatorial structure of the resulting b-rep has changed. It should be clear that
geometric BR-deformation is a special case of the more general topological BR-deformation. We shall see in
the next section that the necessary conditions for geometric BR-deformation are simpler and are easier to
test.
Necessary Conditions for BR-Deformation
3.1 Combinatorial closure and star conditions
The third condition in definition 1 of cell map g requires that g restricted to the closure of every cell is
continuous. We already used this condition to test whether the maps for the examples of section 1 are cell
maps. The closure of a cell oe in a complex K consists of oe itself and all lower dimensional cells incident on
oe. Thus, the closure of a two-dimensional face includes its bounding edges (which are often represented by
intersection of surfaces), and the closure of a vertex is the vertex itself.
To facilitate development of more convenient algorithmic tests, we now restate this condition in alternative
combinatorial terms, using the Cauchy's definition of continuity in terms of open neighborhoods [2]. The
combinatorial equivalent of a cell's open neighborhood is the star of the cell. The following definition is
modified from [9, 22] and is also consistent with definitions in [23].
Definition 5 (Star of a cell) The star of a cell σ in a complex K, denoted St(σ), is the union of σ and all cells in K that contain σ in their boundary.
In other words, the star of a cell is the union of the cell with its neighboring higher-dimensional cells.
For example, the star of the edge e1 of the solid shown in Figure 2(a) is the union of cells {e1, f1, f2}. The
star of a vertex is a union of the vertex with all its adjacent faces and edges. The star of a face is just the
face itself, as there are no three-dimensional cells in boundary representations.
We can now restate the requirement that g restricted to the closure of σ is continuous, originally expressed by the condition (1), to mean that every neighboring cell of σ is mapped to a neighboring cell of the image cell g(σ) or to g(σ) itself. Remembering that some cells σ may collapse into lower-dimensional cells g(σ), this requirement translates into a simple condition that must be satisfied by every cell [2, 22]:
g(St(σ)) ⊆ St(g(σ)).    (2)
The strict inclusion applies when the cell σ is mapped to a lower dimensional cell, while condition (2) becomes an equality when the dimension of σ remains the same under the map g. In the latter case, this condition simply means that adjacency of all cells is also preserved under the map g.
Let us consider the examples of the previous section to see how the star condition can be verified. The
naming map between solids in Figure 2(a) and Figure 2(c) takes the star of e1 into the union of the respective image cells {e', f'}. The star of e' is the union of cells {e', f'} and the other (hidden) adjacent face of e'. This clearly shows that g(St(e1)) ⊆ St(g(e1)). The star condition (2) is easily verified to hold for other cells as well.
Now consider the mapping of the same boundary representation in Figure 2(a) to the one shown in Figure 2(d). We already concluded earlier that this map is not a cell map because the closure condition is violated, and this can be easily verified by the star condition. The star of e1 is the union of e1 with its two adjacent faces; their images under g are not all contained in the star of g(e1) = e', so g(St(e1)) ⊄ St(g(e1)), and g cannot be a cell map.
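The star condition (2) is straightforward to check mechanically. The sketch below is again illustrative: the dictionary-based encoding and the small collapse example are ours, not the data structures of any particular modeling system. It computes stars from the cell adjacency and tests g(St(σ)) ⊆ St(g(σ)) for every cell of the pre-edit complex.

```python
# Illustrative check of the star condition g(St(s)) ⊆ St(g(s)) for a candidate
# naming map g between two combinatorially encoded b-reps.  Each complex is a
# dict: cell name -> set of cells in its relative boundary.

def closure(cx, cell):
    result, stack = {cell}, [cell]
    while stack:
        for b in cx[stack.pop()]:
            if b not in result:
                result.add(b)
                stack.append(b)
    return result

def star(cx, cell):
    """Definition 5: the cell plus every cell whose closure contains it."""
    return {c for c in cx if cell in closure(cx, c)}

def satisfies_star_condition(K, L, g):
    return all({g[c] for c in star(K, s)} <= star(L, g[s]) for s in K)

# Toy collapse in the spirit of Figure 2(a) -> 2(c): face f1, bounded by edges
# e1 and e3 (and vertices a, b), collapses onto the single edge e' of L.
K = {"a": set(), "b": set(), "e1": {"a", "b"}, "e3": {"a", "b"},
     "f1": {"e1", "e3", "a", "b"}}
L = {"a'": set(), "b'": set(), "e'": {"a'", "b'"}}
g = {"a": "a'", "b": "b'", "e1": "e'", "e3": "e'", "f1": "e'"}
print(satisfies_star_condition(K, L, g))   # True: the collapse is admissible
```

Replacing g(e1) in this toy example by a cell whose star does not contain the images of f1 makes the check fail, mirroring the invalid update of Figure 2(d).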
3.2 Oriented cell maps
As we have seen above, the requirement on g to be a cell map is a necessary but very weak condition for
BR-deformation. The difficulty with the example in Figure 3 is that the proposed cell map from K to L does
not preserve the relative orientation between certain cells. A simpler two-dimensional example is shown in
Figure 6. A cell map defined by g(σ) = σ' for every cell σ takes the cell complex (triangle) K into its reflected copy L; however it is not possible to continuously deform K into L without leaving E².
Figure 6: Cell map between K and L does not preserve orientation. (a) Original cell complex K; (b) modified cell complex L.
In order to take advantage of the orientation information, we need to slightly modify the definitions of
a cell complex and a cell map by requiring that all cells are oriented, which is usually the case in most
boundary representations. Orientation of a cell oe 2 K can be visualized as a (positive or negative) sense
of direction. More specifically, every 0-cell (vertex) can be oriented positively or negatively, orientation of
every 1-cell (edge) bounded by two vertices s and t is determined by the order of s and t, and 2-cells (faces)
can be oriented either counterclockwise (positive) or clockwise (negative). Cyclic edges that do not have
vertices can be also directed in one of two ways. Oriented cell complexes are shown in Figures 3 and 6. We
can now modify the definition of the cell map to account for orientation.
Definition 6 (Oriented Cell Map) A map g : K → L is an oriented cell map if it is a cell map and for every oriented cell σ ∈ K, g(σ) is an oriented cell in L.
It can be shown that orientation is a homotopy invariant and is automatically preserved through continuity
[5], which means that every BR-deforming map is orientation preserving . In other words, the sense of
direction must be preserved at every point of the solid's boundary, and we need to assure this condition in
a cell-by-cell fashion.
Since boundary representation of every solid is orientable [14, 27], the above condition is easily enforced
on 2-cells by requiring that all faces are oriented positively (counterclockwise); we will also postulate that
all 0-cells (vertices) have positive orientation. It is more difficult to state what it means to preserve the
orientation on 1-cells; note however that orientation of a face induces orientation in its bounding edges, and
an edge orientation induces orientation in its bounding vertices. For example, in Figure 6(a) the assumed
orientation of edge e 2 is consistent with the orientation of e 2 induced by the face f . By contrast, in Figure
6(b) the orientation induced in e 0by f is opposite to the assumed orientation of e 0. A similar situation can
be observed in Figure 3 for the edges bounding face f 3 . Additional discussion and examples of assumed and
induced orientation can be found in most texts on algebraic topology, e.g. in [1].
3.3 Induced orientation condition
Informally, orientation of a cell complex K is preserved under a cell map g if g maps the oriented boundary
of every non-collapsed cell oe 2 K into the oriented boundary of the image cell g(oe) 2 L. The sole purpose of
this section is to express this statement precisely and in a computationally convenient form, using algebraic
topological chains. Given a cell complex K, a p-dimensional chain, or simply p-chain, is a formal expression
a1 σ1 + a2 σ2 + ... + an σn,
where σi are p-dimensional cells of K and ai are integer coefficients. Two p-chains on the same cell complex can be added together by collecting and adding coefficients on the same cells. The collection of all p-chains on K is a group denoted by Cp(K) for each dimension p. Using chains we can replace incidence, adjacency, and orientation computations with a simple algebra. In particular, we define the (algebraic) oriented boundary operation in terms of chains using only three coefficients from the set {−1, 0, +1}: (Such chains are often called elementary chains [1, 12].)
Definition 7 (Boundary of a cell) The boundary of a p-cell σ is the (p − 1)-chain consisting of all (p − 1)-cells that are faces of σ, with coefficient +1 if the orientation of σ is consistent with the orientation of the face and −1 otherwise. 4
The coefficients are the simple algebraic means to compare the assumed orientation of a cell σ with the orientation induced in σ by an adjacent higher-dimensional cell (in the star of σ). For example, the oriented boundary of the 1-cell e1 in Figure 6 is a 0-chain ∂e1 = v2 − v1, which implies that edge e1 starts at vertex v1 and ends at vertex v2. The boundary of an oriented 2-cell f in the same figure is a 1-chain ∂f = e1 + e2 − e3, which tells us that the directions of edges e1 and e2 are consistent with the counter-clockwise direction of the face f, while the direction of the edge e3 is not. The face f2 in Figure 7 has a hole, and its boundary is also a 1-chain. By convention, the boundary of a 0-cell is defined to be 0.
Figure 7: Top view of oriented boundary representation of the solid in Figure 2. (a) Top view of K; (b) Top view of L.
The definition of boundary for an individual cell σ extends linearly to the boundary of any p-chain, i.e. if c = a1 σ1 + ... + an σn is a p-chain, then its boundary is the (p − 1)-chain ∂c = a1 ∂σ1 + ... + an ∂σn, with the usual rules of chain addition. In other words, the boundary operator is a linear function ∂ : Cp(K) → Cp−1(K).
4 Here we rely on standard terminology in algebraic topology: 'face of σ' refers to any lower-dimensional cell incident on cell σ.
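The chain algebra above is straightforward to mechanize. The following sketch (Python; not from the paper — the cell names and the boundary table are illustrative assumptions based on Figure 6) represents a p-chain as a dictionary from cell names to integer coefficients and extends the per-cell boundary linearly to arbitrary chains.

from collections import defaultdict

def add_chains(c1, c2):
    # Chain addition: collect and add coefficients on the same cells.
    out = defaultdict(int, c1)
    for cell, coeff in c2.items():
        out[cell] += coeff
    return {cell: k for cell, k in out.items() if k != 0}

def scale_chain(a, c):
    # Multiply every coefficient of chain c by the integer a.
    return {cell: a * k for cell, k in c.items() if a * k != 0}

def boundary(chain, cell_boundary):
    # Linear extension of the per-cell boundary table to an arbitrary chain.
    out = {}
    for cell, coeff in chain.items():
        out = add_chains(out, scale_chain(coeff, cell_boundary.get(cell, {})))
    return out

# Illustrative boundary table for the oriented triangle of Figure 6 (an assumption):
# edge e1 runs v1 -> v2, e2 runs v2 -> v3, e3 runs v1 -> v3, face f is oriented ccw.
cell_boundary = {
    "e1": {"v2": 1, "v1": -1},
    "e2": {"v3": 1, "v2": -1},
    "e3": {"v3": 1, "v1": -1},
    "f":  {"e1": 1, "e2": 1, "e3": -1},
}

print(boundary({"f": 1}, cell_boundary))                           # e1 + e2 - e3
print(boundary(boundary({"f": 1}, cell_boundary), cell_boundary))  # {}: the boundary of a boundary is 0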
The above definitions give precise characterization to the informal concept of "oriented boundary of every
cell" as a chain. But the naming map g is a cell map and technically cannot be applied to chains; to see
what g does to boundaries of cells, we need to extend g to maps on chains.
Intuitively, a chain map g p takes individual p-cells (which are elementary p-chains) in K into individual
p-cells in L - just as the cell map g does when the dimension p of the cells stays the same. But g may also
collapse some p-cell σ ∈ K onto a lower dimensional cell in L; since these lower dimensional cells do not belong to any p-chains on L, the chain map is instructed to simply ignore them by setting gp(σ) = 0. Once again, we require that the action of a chain map on individual cells must extend linearly to arbitrary
p-chains. 5 Together these three conditions give the usual definition of a chain map [17]:
Definition (Chain Map) Let g be an orientation preserving cell map. For each dimension p, define a chain map gp : Cp(K) → Cp(L) as follows:
1. if σ is a p-cell in K and g(σ) is a p-cell in L, then gp(σ) = g(σ);
2. if g(σ) is a cell of dimension lower than p, then gp(σ) = 0;
3. gp(a1 σ1 + ... + an σn) = a1 gp(σ1) + ... + an gp(σn).
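The induced chain map gp can be mechanized directly from the three conditions of the definition. The sketch below (Python; the dimension table and the map g are illustrative assumptions, not taken from the paper's figures) applies a cell map to a chain, dropping the cells that the map collapses onto lower-dimensional cells.

def chain_map(chain, g, dim):
    # g   : dict mapping each cell of K to its image cell in L
    # dim : dict giving the dimension of every cell of K and L
    # Condition 1: dimension-preserving cells are carried over.
    # Condition 2: collapsed cells contribute 0 and are simply skipped.
    # Condition 3: the action extends linearly over the chain's coefficients.
    out = {}
    for cell, coeff in chain.items():
        image = g[cell]
        if dim[image] == dim[cell]:
            out[image] = out.get(image, 0) + coeff
    return {c: k for c, k in out.items() if k != 0}

# Illustrative example (an assumption): face f1 of K collapses onto edge e_prime of L.
dim = {"f1": 2, "e1": 1, "e_prime": 1}
g   = {"f1": "e_prime", "e1": "e_prime"}
print(chain_map({"f1": 1}, g, dim))   # {} : the collapsed face contributes nothing
print(chain_map({"e1": 1}, g, dim))   # {'e_prime': 1}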
Thus, any BR-deforming naming cell map g induces a family of chain maps gp in every dimension p ≤ 2. It is also known [1, 5, 17] that every such family of induced chain maps satisfies the following commutative diagram (equivalently, ∂ ∘ gp = gp−1 ∘ ∂):
Cp(K)   --gp-->   Cp(L)
  |∂                |∂
Cp−1(K) --gp−1--> Cp−1(L)
Since an oriented cell oe can be viewed as a chain with 0 coefficients attached to all other cells in the cell
complex, the above commutative diagram can be enforced in a cell-by-cell fashion. Specifically, given an
orientation preserving cell map g, the following simple condition is satisfied for each oriented p-cell σ ∈ K:
∂(gp(σ)) = gp−1(∂σ)   (3)
We will refer to this condition as the orientation condition, which is a precise restatement of the requirement that g maps the oriented boundary of every non-collapsed cell σ ∈ K into the oriented boundary of the image cell g(σ) ∈ L.
3.4 Examples revisited
Consider once again the examples from section 1. Technically, every naming map g consists of the vertex,
edge, and face maps. For brevity, we will only specify the action of g on those cells that are necessary for
the presentation purposes. For example, below we assume that vertices are mapped to their implied images
with a positive orientation and omit explicit description of the vertex map, even though the (chain of the)
oriented boundary of every mapped edge is determined by the vertex map.
Consider the naming map between the solids in Figure 2(a) and (c). The cells in the top view of the two
solids are shown in Figure 7. The semantics of the update collapses the face f1 and its boundary into the single edge e′ and its vertices. This deformation is reflected in the topological BR-deforming naming map g
as follows:
Notice that g is many-to-one and can be characterized as a collapsing map. We already know from section
2.2 that g satisfies the star condition (2). Let's check if it also satisfies the necessary condition (3) in terms
of the induced chain maps g1 and g2.
5 In other words, chain map gp is a homomorphism [17].
A simple check shows that the necessary condition (3) holds for the collapsed face and its boundary.
The rest of the cells remain topologically invariant under the naming map, and it is easy to check that the
necessary condition (3) holds for all other cells in the boundary representation.
Observe that the inverse map g−1 from the solid in Figure 2(c) to (a) is not BR-deforming; in fact it is not a valid cell map because it maps a one-dimensional edge e′ to a two-dimensional face f1. Nevertheless, such maps appear useful and are allowed in many systems. Therefore we will say that a one-to-many relation g is an expanding map if its inverse is a collapsing BR-deforming map.
Now consider the apparent naming map between the solids in Figure 2(a) and (d). We already know that this map g is not BR-deforming (because it is not a cell map), and it is easy to verify that the necessary condition (3) does not hold either: the left and the right hand sides of equation (3) evaluate to different chains for the induced chain maps. Clearly, g is not a collapsing map and the inverse map g−1 cannot be an expanding map.
Next consider the example in Figure 3. We described two distinct naming cell maps g and h in section
2.2 that satisfy the star condition (2). We also argued that neither of them is appropriate - but for different
reasons. We claimed that the naming map g computed by a commercial system is not BR-deforming because
it is not orientation preserving. Indeed, gk(σ) = g(σ) for all cells in the boundary representation, because no
cells have been collapsed, and in fact there is a one-to-one correspondence between the pre-edit and post-edit
boundary representations. Yet, checking the necessary condition (3) for face f3, we see that ∂(g2(f3)) ≠ g1(∂f3). The same relation can be verified for all other faces in the boundary representation; this shows that g is orientation reversing and is not BR-deforming.
But now consider the necessary condition (3) for the second naming map h for the same solids. Notice that h reverses the assumed orientation of all edges connecting f1. Checking the necessary condition (3) on this oriented cell map shows that it is satisfied for these cells.
Similarly, one can check that map h does satisfy the necessary condition (3) for all other cells in the boundary
representation and is a BR-deforming map associated with the 180 degree rotation of the solid about the
X-axis. Furthermore, h is a cell homeomorphism and is geometrically BR-deforming because it preserves
the combinatorial structure of the boundary representation, even though it may alter the geometry of some
cells. Unfortunately, this particular BR-deformation is not consistent with the intended semantics of the
parametric edit. In fact in this example, due to symmetry, there are many other geometric BR-deforming
maps (e.g. rotation about Z-axis by 90 degrees). The guarantee of BR-deformation should not be confused
with the guarantee of the correct semantics of the parametric update.
Persistent naming
4.1 Sufficient conditions for an orientation preserving cell map
In the typical scenario described in section 1, a pre-edit b-rep K and a post-edit b-rep L are given, and the
system attempts to match the cells in K with the cells in L, i.e. to construct the naming map g.
The matching process may or may not succeed. In terms of our definitions, there are at least three reasons
why matching may fail:
1. A BR-deforming map g may not exist, because L does not belong to the same parametric family defined
by K;
2. A BR-deforming map g does exist, but the system is not able to find it;
3. A BR-deforming map is not unique, and the system is not able to determine which map captures the
semantics of the edit. 6
Whatever the outcome of the matching may be, the correctness of the result depends critically on the
system's ability to determine if a constructed naming map is in fact BR-deforming. The formal machinery
developed above is sufficient to determine if a given map g is an orientation preserving cell map, which
is in turn a necessary condition for g to be BR-deforming. As our examples above have illustrated, these
conditions are powerful enough to detect many (but not all) invalid edits in parametric solid modeling. To
summarize, a map g is an orientation preserving cell map if and only if:
1. K and L are oriented consistently;
2. g is an oriented cell map;
3. g maps the oriented boundary of every cell σ ∈ K into the oriented boundary of the image cell g(σ) ∈ L.
The first requirement is enforced by assuming positive orientation for all vertices and faces (oriented
counter-clockwise). The second requirement reduces to either the combinatorial closure condition (1) or
the star condition (2). The third requirement is expressed by the orientation condition (3) which enforces
the relative orientation between edges, vertices, and faces. All required conditions can be easily checked
for a given map g; but it is important to understand the applicability and the limitations of the proposed
conditions.
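As a concrete illustration of how such a check can be carried out, the following sketch (Python; the data-structure layout is our own assumption, not a prescription from the paper, and it presumes that K and L are already consistently oriented, i.e. requirement 1 is built into the tables) verifies the star condition (2) and the orientation condition (3) for every mapped cell, given the stars, the oriented boundaries, the cell dimensions, and the naming map g.

def boundary_of_chain(chain, bnd):
    # Linear extension of a per-cell oriented boundary table to a chain.
    out = {}
    for cell, coeff in chain.items():
        for c, k in bnd.get(cell, {}).items():
            out[c] = out.get(c, 0) + coeff * k
    return {c: k for c, k in out.items() if k != 0}

def chain_image(chain, g, dim):
    # Induced chain map: collapsed cells (lower-dimensional images) contribute 0.
    out = {}
    for cell, coeff in chain.items():
        image = g[cell]
        if dim[image] == dim[cell]:
            out[image] = out.get(image, 0) + coeff
    return {c: k for c, k in out.items() if k != 0}

def is_orientation_preserving_cell_map(g, dim, star_K, star_L, bnd_K, bnd_L):
    # g maps every cell of K to a cell of L; star_* give the star of each cell,
    # bnd_* give the oriented boundary of each cell as a chain.
    for cell, image in g.items():
        # Star condition (2): g(St(cell)) must be contained in St(g(cell)).
        if not {g[s] for s in star_K[cell]}.issubset(star_L[image]):
            return False
        # Orientation condition (3): boundary(g_p(cell)) == g_{p-1}(boundary(cell)).
        lhs = boundary_of_chain(chain_image({cell: 1}, g, dim), bnd_L)
        rhs = chain_image(bnd_K.get(cell, {}), g, dim)
        if lhs != rhs:
            return False
    return True
# Usage: populate the tables from the two boundary representations and call the
# function for a candidate naming map g.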
Examples of Figures 3 and 6 may suggest that the orientation condition (3) is unnecessarily complicated
because it detects only "global" orientation reversals as shown above. But imagine the solid in Figure 3 to
be attached to a planar face of another solid (base), and now we can easily construct a local orientation
reversing map which reflects a portion of the solid's boundary. Our experience suggests that such local
reversals in orientation occur frequently in commercial systems, apparently due to partial and incremental
updates in boundary representations. Enforcing the orientation condition for all cells would eliminate this
type of invalid edit.
The orientation condition (3) may appear to imply the star condition (2), since the closure of every cell
includes its boundary points. To see why this is false, let us assume that g is an arbitrary (not necessarily
cell) map that satisfies the first two conditions of the cell map (Definition 1). We can still use g to induce
chain maps g p and use the boundary operator as before. Suppose we determine that the orientation condition
(3) holds. Does this mean that g is an orientation preserving cell map? More specifically, does this mean
that the star condition (2), or equivalently the combinatorial closure condition (1), is satisfied as well? Not
necessarily. Consider the case when g collapses a p-cell σ ∈ K to a lower dimensional cell g(σ) ∈ L. By definition of a chain map, gp(σ) = 0 irrespective of what g(σ) may be. Therefore,
it is easy to construct a map g such that condition (3) is satisfied and yet g is not a cell map. Thus, it should
be clear that the star condition and the orientation condition are not redundant for BR-deforming maps.
On the other hand, suppose that dim(g(σ)) = dim(σ) for all cells in K. Since g preserves the dimension of every cell σ ∈ K, the orientation condition (3) implies that 1-cells bounding every 2-cell σ2 are in 1-to-1
6 In some cases, this is in fact the correct answer, because semantics of some edits has not been defined unambiguously.
correspondence with 1-cells bounding g(σ2), and 0-cells bounding every 1-cell σ1 are in 1-to-1 correspondence with 0-cells bounding g(σ1). In other words, g takes the closure of every cell in K to the closure of the
corresponding cell in L, which implies the closure condition in the definition of cell map and the equivalent
star condition (2).
Furthermore, recall that when none of the cells are collapsed, the (geometrically) BR-deforming map g
is a cell homeomorphism. It follows that the star condition (2) is redundant for geometrically BR-deforming
maps. Thus, to check if a map is geometrically BR-deforming, it is only necessary that none of the cells are
mapped to the cells of lower dimension and that the orientation condition (3) is satisfied for every cell.
4.2 Comparison with previously proposed naming methods
Several heuristic methods have been proposed for constructing the naming maps g [10, 11, 18, 25, 31] in a
variety of situations, including those where BR-deforming maps may not exist. A thorough analysis of all
methods would not be practical in this paper, but it is instructive to check if the proposed techniques are
consistent with our definitions when they do apply.
The naming techniques proposed by Kripac[18] and Hoffmann et al [10, 11] rely on several common
methods for naming cells in boundary representations. In particular, both start with the names of the
primitive surfaces appearing in the parametric definition and construct the names for individual cells using:
• the list of adjacent cells in the boundary representation;
• the relative orientation of the cell with respect to the adjacent cells.
In general, these two techniques are not sufficient to uniquely match cells in the two b-reps, and a number
of additional techniques must be employed. For example, [18] also uses the history of editing to distinguish
between cells, while [10] uses extended adjacency information and directional information associated with
the particular solid construction method. The two naming techniques used by both [18] and [10] intuitively
correspond respectively to the two types of conditions that ensure an orientation preserving cell map. The use
of adjacent cells is a means for extending the continuity of the naming map to the whole boundary of the
solid, which relies on the combinatorial closure condition (1) and/or star condition (2). The matching of
relative orientation of cells is basically equivalent to our orientation condition (3).
On closer examination, it appears that [18] may map a cell oe 2 K to g(oe) 2 L when all cells adjacent
to oe are mapped into cells adjacent to g(oe), without considering the orientation information. But as is
clearly shown by example in Figure 3, such a naming map does not have to preserve orientation and may
not be BR-deforming. By contrast, [11] matches oe to g(oe) only if the adjacent cells also preserve orientation.
This method of naming will never result in orientation-reversing naming map. Both methods will find the
geometrically BR-deforming map, if it exists and every cell has a unique name. It is not clear whether
geometric BR-deformation is always enforced when heuristic methods are used to resolve multiple matches
between cells with the same names.
Neither of the above methods are able to handle collapsing maps in general as defined in this paper,
because both [18] and [11] require exact match of the adjacent cells, which in turn corresponds to the
special case of exact equality in the star condition (2). For example, edge e1 in Figure 2(a) is adjacent to faces f1 and f3, while face f1 is no longer present in the post-edit boundary representation of Figure 2(c). Thus, the proposed naming methods will never match e1 to e′, as they don't allow the strict inclusion relation implied by condition (2). More generally, all of the proposed methods assume that the dimension of
the mapped cells will always be preserved, which is not the case for collapsing and expanding maps. Based
on our results, it seems to make sense to redefine the adjacency conditions to correspond to the star condition
(2).
Rossignac observed that when every cell in a b-rep can be represented by a Boolean (set-theoretic)
expression on primitive surfaces and halfspaces, the expression can be used as a persistent name for the
cells [31]. In fact, it appears that some commercial systems use this method of naming in simple cases with
limited success. 7 Boolean set-theoretic representations place no restrictions on the topology of the resulting
7 Constructing Boolean expressions for every cell in boundary representation is not trivial and may require a difficult construction
known as separation [33].
boundary representations, imply a very different parametric family [34], and do not provide a mechanism
for enforcing or even checking BR-deformation.
A common method of matching cells in pre-edit and post-edit b-reps, which can be observed in some
commercial systems, combines Boolean naming with the sequential ordering of the entities with respect to
some external direction. For example, a system may attempt to distinguish between edges e1 and e2 in Figure 2(a) by indexing them along the length of the face (x-axis in the Figure). This method is most likely responsible for producing the solid in Figure 2(d) instead of (c): the identification of e1 with e3 changes the adjacency information so that e′ appears to be the "first" edge to fit the new name of e1.
5 Conclusion
5.1 Significance and limitations
We have argued that the proposed principle of continuity and the implied notion of BR-deformation should
be accepted as the basis for formally defining the semantics of a parametric family. Notice, however, that
BR-deformation is not an equivalence relation: if boundary representation K can be deformed into L, it does not follow that L can be deformed into K. Thus, a parametric family of boundary representations can be defined in more
than one way. For example, we could define a parametric family to include all those boundary representations
that can be obtained by deforming one special 'master' boundary representation K. Alternatively, we could
define two boundary representations to be in the same parametric family whenever one of them can be
deformed into another. The latter definition may be more appealing because it does specify an equivalence
class of solids, but deciding membership in such a class algorithmically appears to be nontrivial.
The proposed formalism allows precise formulation and partial solution for a number of problems in
parametric modeling. For example, the problem of "persistent naming" amounts to deciding whether two
boundary representations K and L can be related by a BR-deforming map g that is consistent with the
semantics of the parametric edit, and constructing such a map.
A key computational utility for enforcing BR-deformation is the ability to decide if a given naming map
between two b-reps K and L is BR-deforming. We have argued that any such g must satisfy the following
two easily computable necessary conditions for every named (mapped) cell oe:
• the combinatorial adjacency (star or closure) conditions expressed by (1) and (2); and
• the orientation condition gp−1(∂σ) = ∂(gp(σ)) expressed by (3).
Our formulations and the implied conditions do not completely solve the above problem, but they are
the strongest possible, in the sense that the formulated problem could be solved only if solid modeling
representations are enhanced with additional information that is not being used today. The main difficulty
lies in the need to know what happens to the boundary representation not only for parameter values t0 and t1, but also for all the values of t in the interval [t0, t1]. For example, it is impossible to decide that the
two boundary representations in Figure 5 do not belong to the same parametric range without considering
the deformation process in E³. This is the very reason why we defined BR-deformation as a family of maps Ft from E³ into E³, even though the naming map g takes |K| into |L|. To guarantee that a given naming map g is BR-deforming, we must show that g is an orientation preserving cell map and that it can be extended to a continuous map on the whole of E³. To be more precise, g will satisfy all the requirements of a BR-deforming map if there exists a continuous orientation preserving map h from E³ to E³, such that h restricted to |K| is the cell map g from |K| to |L| [1].
This sufficient condition for BR-deformation cannot be verified by considering only the structure of
the two-dimensional boundary representations. Possible approaches to constructing BR-deforming maps
include constructing the corresponding map h as a cell map on the three-dimensional decomposition of E 3
as suggested in [32, 34], or ensuring the properties of h by further restricting how g modifies individual cells
[4, 36].
Other criteria for membership in a parametric family are possible. For example, one could require only
that two boundary representations are related by an orientation preserving cell map. This would significantly
enlarge the parametric family and would immediately establish computable sufficient conditions for persistent
naming. These advantages would come at a considerable sacrifice: the loss of the physical principle of
continuous deformation between solids in the same family. Nevertheless, this may be a reasonable and
pragmatic compromise, given the difficulties of establishing BR-deformation as defined in section 2.
5.2 Semantics of more general edits
It may appear that accepting the principle of continuity and BR-deformation as a basis for parametric
modeling is too limiting because a number of practically important parametric edits cannot be described by
BR-deformation. These include merging and splitting entities in boundary representations, (dis)appearance
of holes, and common edits such as shown in Figure 8: as the slot moves to the left, the edge e1 collapses into vertex v1. But this edit cannot be described by a cell map unless the edge bounded by vertices v1 and v3 is split into two edges, e2 and e3, as shown in Figure 8(a). This would allow constructing a proper cell map that takes v5 into v′, v4 into v′, and so on. But of course, vertex v2 would not be present in most boundary representations that rely on maximal faces [35].
formally define arbitrary parametric edits.
Figure 8: A parametric edit is defined by a splitting followed by a collapse. (a) Original solid's b-rep K; (b) Modified solid's b-rep L.
While the notion of BR-deformation is not applicable to splitting and merging of faces and edges (such
as needed in the example of Figure 8), recall that the main purpose of an orientation-preserving cell map
was to enforce the continuity of the map in a cell-by-cell fashion; a natural extension of this
principle would allow structural changes in the cells of K and L. In particular, subdividing or merging cells
in a solid's boundary representation, e.g. as proposed in [23], has no effect on the boundary set itself (the
underlying space). Thus, it seems reasonable that we can define a more general edit that preserves the spirit
of continuous deformation by a sequence of splits, merges, and BR-deforming maps. Importantly, such edits
include significant topological changes, including elimination of holes. Figure 9 illustrates a possible two-step
procedure. First the collapsing map g takes the boundary of the hole c to a vertex c′; the resulting cell complex has a 2-cell that contains an isolated 0-cell in its interior. Then the subsequent merging step
eliminates the 0-cell, without changing the underlying space.
Finally, recall that the expanding map is not BR-deforming (only its inverse is), because it is one-to-
many and is not a continuous cell map. Yet it appears that useful parametric edits require using such a
Figure 9: Holes can be eliminated as a two-step procedure: collapse (taking the hole boundary c to the vertex c′) followed by a merge.
map. It may be feasible to define such edits by a sequence of maps such that each map in the sequence is
collapsing, expanding, splitting, or merging. This approach may provide great flexibility in formally defining
the semantics of parametric edits in a piecewise continuous fashion. For example, the reverse sequence of
events shown in Figure 9 would allow the introduction of holes as a two-step procedure consisting of a split
followed by an expanding map. This may also allow formally defining the semantics of feature attachment
operations without using boolean operations.
5.3 Deformations of other representations
The assumptions of continuity and continuous deformations make sense from an engineering point of view
and can serve as a starting point for developing universal standards in parametric modeling. The proposed
definition of BR-deformation and the implied necessary conditions can be employed in numerous other
applications, including shape optimization, tolerancing, and constraint solving. For example, some heuristic
methods proposed in [6] for identifying the correct solutions in constraint solving resemble the necessary
conditions implied by our definitions, and the use of homotopy was also proposed in [19].
The principle of continuity is stated in terms of a particular representation scheme for solids. The
same principle of continuity could be used to develop notions of parametric families with respect to other
representation schemes. For example, it may be feasible to define a parametric family with respect to CSG
representations [34], in which case the Boolean naming [31] may indeed correspond to a class of appropriately
defined continuous deformations.
The obvious and useful extension of our work is to general n-dimensional cellular models of pointsets,
such as those represented by Selective Geometric Complexes [23] and more recently advocated by others
[8, 26]. It should be intuitively clear that such cellular structures can be constructed and transformed
into each other by combinations of expanding, collapsing, splitting, and merging maps. The cellular data
structures (including boundary representations) are commonly supported by a number of so-called Euler
operators [7, 20] that create, modify, and eliminate cells or collections of cells (loops, shells, rings, wedges, etc.). Though the Euler operators are assumed to operate continuously on the underlying space of the cell
complex, the continuity conditions are rarely enforced. The concepts of continuity, orientation preserving
cell map, continuous deformation, and chain map apply to all cellular models without any modifications.
Furthermore, the identified necessary conditions for deformation, including the star condition (2) and the
orientation condition (3), hold as well.
Acknowledgements
This research is supported in part by the National Science Foundation CAREER award DMI-9502728 and
grant DMI-9522806. The authors are grateful to Tom Peters, Malcolm Sabin, Neil Stewart, and Victor
Zandy for reading the earlier drafts of this paper and suggesting numerous improvements.
References
Algebraic Topology.
Combinatorial Topology
Polyhedral perturbations that preserve topological form.
Basic Topology.
A geometric constraint solver.
Stepwise construction of polyhedra in geometric modeling.
A geometric interface for solid modeling.
Introductory Topology.
Generic naming in generative
On editability of feature-based design
Dover Publications
A road map to solid modeling.
Morgan Kaufman
On semantics of generative geometry representations.
Parametric and Variational Design.
Topology of Surfaces.
A mechanism for persistently naming topological entities in history-based parametric solid models
Solving geometric constraints by homotopy.
An Introduction to Solid Modeling.
Topology A First Course.
Elements of Algebraic Topology.
SGC: A dimension independent model for pointsets with internal structures and incomplete boundaries.
Lectures in Algebraic Topology.
Breps as displayable-selectable models in interactive design of families of geometric objects
The generic geometric complex (GGC): a modeling scheme for families of decomposed pointsets.
Mathematical models of rigid solid objects.
Representations for rigid solids: Theory
Representation of tolerances in solid modeling: Issues and alternative approaches.
Issues in feature-based editing and interrogation of solid models
CSG formulations for identifying and for trimming faces of CSG models.
Maintenance of geometric representations through space decompositions.
Separation for boundary to CSG conversion.
What is a parametric family of solids?
Alternative definitions of faces in boundary representations of solid objects.
Sufficient condition for correct topological form in tolerance specification.
293633
The Edge-Based Design Rule Model Revisited
Abstract: A model for integrated circuit design rules based on rectangle edge constraints has been proposed by Jeppson, Christensson, and Hedenstierna. This model appears to be the most rigorous proposed to date for the description of such edge-based design rules. However, in certain rare circumstances their model is unable to express the correct design rule when the constrained edges are not adjacent in the layout. We introduce a new notation, called an edge path, which allows us to extend their model to allow for constraints between edges separated by an arbitrary number of intervening edges. Using this notation we enumerate all edge paths that are required to correctly model the original design rule macros of the JCH model, and prove that these macros are sufficient to model the most common rules. We also show how this notation allows us to directly specify many kinds of conditional design rules that required ad hoc specification under the JCH model.
Introduction
The technology used to manufacture integrated circuits [8] imposes certain limits on the sizes and relative positioning
of features on the wafer. The resolution of optical lithography equipment, the amount of undercutting during a
wet-etch processing step, the control over lateral diffusion and junction depth in dopant implant steps-all will set
physical limits on how small device features can be, and how closely they can be separated, before the electrical
behavior of the circuit changes. In addition, the interaction of successive processing steps-step-coverage problems
in non-planar processes, or the ability to fill stacked vias, for example-also set limits on the manufacturability of the
integrated circuits.
Process and device engineers carefully characterize a semiconductor manufacturing process to understand these
interactions, seeing them as complex statistical distributions of sizes and feature proximities and their effect on circuit
reliability. Given only this raw data it would be very difficult and time-consuming to assess the overall reliability of a
complete integrated circuit, or for a layout designer to draw a layout with a known level of reliability. Therefore, this
statistical data is commonly codified into a set of much simpler design rules which can be given to the mask designers
to guide them in producing a manufacturable layout [7].
Integrated circuit design rules abstract away the details of the process technology, and instead impose rules on the
sizes and shapes of mask features and the separations and orientations of mask features in relation to one another. For
example, a poly-separation rule might specify that the edges of unconnected rectangles on the polysilicon mask layer
must be at least a specified number of microns apart. One can also think of the design rules as imposing a set of constraints on the mask
data-if all of the features in a given mask set conform to the design rules, the layout is considered to be design rule
correct, and the designer can have some confidence that it can be manufactured with high yield.
Proof of design rule correctness is required before manufacturing can begin, so an important class of CAD tools,
called design rule checkers, have been developed to check the mask data against the design rules in order to supply
such a proof. In addition, layout generation tools, such as placement and routing tools, layout compactors, and leaf-
less synthesis tools [3], require detailed information about the design rules so that correct layout can be produced.
Therefore, in addition to making sure that the design rules are simple enough for mask designers to follow easily, it is
important that they be specified in a formal and unambiguous manner so that they can be codified for use by algorithm
designers.
Two competing methods have evolved for the formal expression and modeling of VLSI design rules. The first
employs a sequence of operations acting on the layout rectangles, such as the expansion and shrinkage of particular
rectangle edges, and the union/intersection/inversion of overlapping rectangles. Design rules are defined by the presence
of an empty or non-empty intersection between rectangles of a specified mask layer, or combination of mask
layers, suitably expanded or shrunk by a given amount. Since these mask operations are global in nature, acting on all
rectangles of a particular mask layer in the layout, we refer to these as mask-based design rule models. Mask-based
modeling forms the basis of the commercial design rule checking software package Dracula [1], and has been formalized
by Modarres and Lomax [5] through the use of set theoretic methods. We refer to the model of Modarres and
Lomax as the ML model for design rules.
The second method developed for the modeling of design rules treats the rules as constraints on the distance
between individual edges of rectangles in the layout. There can potentially be a design rule constraint between any
pair of non-perpendicular edges in the layout. The presence of a constraint, and if one exists the distance at which the
edges are constrained to lie, depends on the layers making up the rectangles involved. Since this notion of design
rules is local in extent, with its definition based on individual rectangle edges, we refer to these approaches as edge-based
models. This second method has been formalized by K. Jeppson, S. Christensson, and N. Hedenstierna [2],
using a modeling approach which we refer to as the JCH model for design rules.
The two applications for design rules mentioned above, design rule checking and layout generation, expose two
facets to the problem of design rule modeling. Mask-based methods seem dominant in the application of design rule
checking, while edge-based models dominate in layout generation systems. In the former, the layout is static and
unchanging, and we only seek to know if any design rule violations exist, and if so, where they are located. The global
nature of the mask-based model suits this application well, and appears to lead the most naturally to elegant and
efficient implementations based on well studied methods on computational geometry [6]. However, in the latter application
one often requires the capability of making local queries in the layout as a new rectangle is added, to ensure
that the rectangle is legally placed with respect to its neighbors. In addition, many layout generation applications,
such as compaction, attempt to optimize the layout by finding optimal positions for each of the layout rectangles, subject
to the design rule constraints. Because of its ability to represent the design rules as constraints between individual
edges in the design, the edge-based model appears ideally suited to this application.
In the course of our research into optimization based approaches to VLSI layout generation, we have studied the
details of the JCH model, and have encountered several situations in which that model appears incomplete. In this
paper we revisit the JCH model and provide some extensions which we feel contribute toward its universal application.
The basic notion of the edge-based design rule model, i.e. the definition of a design rule as a constraint on the relative
positions of two non-perpendicular rectangle edges, is fairly simple. The complexity arises in attempting to
characterize the specific rule, if any, which will apply to a given pair of edges. In the JCH model, each rectangle edge
is assigned a type based on the layers which are present on either side. Given all of the different edge types, they
exhaustively enumerate all pairs of edges which can appear adjacent to one another, and show how each possibility
maps into a particular design rule. The strength of the JCH model is its ability to support this exhaustive analysis,
which provides a proof of its correctness and completeness. This type of analysis has never been attempted for mask-
based models, and it would probably be very difficult given the fact that all design rules are defined as global operations
on the entire layout.
In our study of the JCH model we have observed a shortcoming in the model as it was presented in [2]. The root of
the problem is that, in their exhaustive enumeration of all possible design rules, the authors only examined the case of
rules between adjacent edges in the layout. They show one example in which two edges, which should be constrained,
are separated by an intermediate edge. Because of this intervening edge their model is unable to detect the proper
constraint. Under several different interpretations of their definition for a design rule, they solve this problem by placing
constraints on the values of the design rule parameters. When these constraints are satisfied the design rules that
are missed will always be covered by other design rules, so there is no danger of missing a violation. However, we
show one example in which their constraints are not strong enough, and we feel that constraining the model in this
way is an inelegant solution to the problem. Instead, we feel it should be possible to correctly express these design
rules so that they can always be detected.
In this paper we extend the JCH model to cover, without ambiguity, situations under which design rule constraints
can exist between edges with an arbitrary number of intermediate edges. To support the specification of design rules
under this model we develop a simple grammar, called the edge-path grammar, which can be used to describe a more
complete set of design rules. Using this grammar we describe the design rule macros of the JCH model and extend
these macros to cover design rules which were un-representable in the notation used in [2].
2. Previous Work-The JCH Model for Design Rules
In the VLSI layout environment as defined by Mead and Conway [4], an integrated circuit is represented symbolically
by a collection of geometric primitives on several different layout layers. The masks used to manufacture the
circuit can later be derived from this collection of layout layers 1 . Formally, a layout layer represents a set of geometric
primitives. All geometric primitives in a given set are said to be "on" that layout layer. One can also form derivative
layers by performing various logical set operations on the layout layers. We will use the general term layer to
refer either to a primary layout layer or a derivative layer.
In the JCH model for design rules, the basic layout primitive is the rectangle. Each rectangle has four edges marking
its boundaries. Where rectangles overlap, their edges are partitioned into disjoint intervals such that all points in a
given interval have the same combination of layers on both sides. The rectangle partitioning operator
defined by Modarres and Lomax [5] can be used for this purpose.
Each edge interval (usually just called an edge) in a rectangle can be assigned a type based on the different layers
that it separates. Furthermore, since each edge is normally part of only one layout rectangle (except in the case of
"touch-edges", which we will address later), the two sides of the edge can be unambiguously defined as the inside
layer and the outside layer. The type of an edge can therefore be expressed with the following notation.
Edge-type: inside-layer/outside-layer (2-1)
Figure
2-1 shows an example of two overlapping rectangles, one on layer A and one on layer B. The right-hand
edge of the rectangle on layer A has been partitioned into three edge intervals with different edge types because of its
overlap with the rectangle on layer B. To label the edges, the JCH model introduces the notation shown in the figure.
We consider A to be a boolean variable indicating whether one side of an edge is on layer A or not. If A appears uncomplemented
it indicates that side is on layer A. Conversely, if A appears complemented (written Ā) it indicates that side is not on layer A. Therefore the edge-interval labeled AB/ĀB, marked with a bold line, indicates that its
inside-layer consists of both layer A and layer B, while its outside-layer consists only of layer B.
1. In the interest of reducing the number of layout layers, it may not be necessary to maintain a direct one-to-one correspondence between
the layout layers and the masks. For instance, it is common to use only a single layout layer to represent interconnect vias, as the specific
oxide-etch mask that a via will appear on can be inferred from the surrounding layers.
Figure 2-1: Two overlapping rectangles and the resulting edge partitions and types.
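To make the edge-type notation of (2-1) concrete, the small sketch below (Python; the encoding of layers as sets and the use of an apostrophe in place of the complement bar are our own assumptions) derives an edge type string from the sets of layers present on the inside and outside of an edge interval.

def layer_term(layers_present, all_layers):
    # Render one side of an edge: uncomplemented if the layer is present,
    # marked with an apostrophe (standing in for the bar) if absent.
    return "".join(l if l in layers_present else l + "'" for l in sorted(all_layers))

def edge_type(inside, outside, all_layers):
    # Edge-type notation of (2-1): inside-layer/outside-layer.
    return layer_term(inside, all_layers) + "/" + layer_term(outside, all_layers)

# The bold edge interval of Figure 2-1: A and B inside, only B outside.
print(edge_type({"A", "B"}, {"B"}, {"A", "B"}))   # AB/A'B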
Edges and edge intervals are represented geometrically by a line in the plane. In manhattan layouts, edges are
either vertical or horizontal. Vertical edges have a single x-axis coordinate, labelled x, and a pair of y-axis coordinates, labelled y1 and y2, where by convention y1 ≤ y2. Similarly, horizontal edges have a single y coordinate and a pair of x coordinates. For simplicity, all of our examples will assume vertical edges, but they are
trivially extended to horizontal edges as well.
In the JCH model, there are two classes of design rules, both involving rectangle edges. Forbidden-edge rules simply
forbid a particular edge-type from occurring. Restricted-separation rules specify a separation value to be enforced
between two parallel edges 2 .
Restricted-separation rules can be viewed as a constraint enforcing either a minimum, maximum, or exact separation
between two edges, edge1 and edge2, and can be written as follows in the case of vertical edges.
Here we are specifying that the distance between the two edges, in the direction perpendicular to the edges, is constrained
by some value d. The variable d is the design rule parameter, and is in general a conditional quantity whose
value may depend on any properties of the two edges that can be extracted from the layout. Common examples of
these properties are:
. the edge-types of edge1 and edge2
. the presence of rectangles overlapping edge1 or edge2
. the lengths of edge1 and edge2
. the widths of the rectangles formed by edge1 or edge2
. the edge-types of edges that lie between edge1 and edge2
. the degree of overlap of the two edges in the direction perpendicular to the edges
This could imply a semantically complex set of rules, but most common cases tend to be simple. The JCH model
advocates the creation of macros to express the common cases, and the treatment of more complex cases as further
conditions that can be placed on the design rules when required.
The JCH macros involve only the properties in the first bullet, the edge types of the two edges involved. Rectangles
overlapping one or both of the edges can conditionally form a special class of rules called "conjunctive" design rules,
and properties of the edges such as their length form classes of so-called "conditional" design rules in the JCH model.
We will review these in Section 2.3. In this paper, we will show several reasons to include the property of the fifth
bullet, the edge types of all edges lying between the two edges in the layout.
The property mentioned in the last bullet is a common implementation detail worth mentioning. Often, it is only
necessary to enforce a design rule between two edges that overlap somewhere along their length (i.e. vertical edges
must overlap in the y direction) as shown in Figure 2-2(a). However, in some design rules (typically the spacing rules)
the notion of overlap must be implemented carefully in order to ensure that the rule is enforced correctly at rectangle
corners. In these cases, for the purpose of detecting the edge overlap, the ends of the edges must be extended by the
design rule distance d, as shown in Figure 2-2(b). In the JCH model this edge extension is controlled by the "con-
cave" modifier to their "edge function". At these extended rectangle corners, some foundry design rules require the
design rule constraints to enforce the manhattan distance between the edges, and in some cases they allow the shorter
euclidean distance, as shown in Figure 2-2(c). The latter case can be supported by making the design rule parameter d
conditional on the y coordinates of vertical edges. The parameter d can be reduced as the edges cease to overlap in the
y direction, and the constraint is eliminated altogether when the edge-extensions cease to overlap.
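The overlap and corner-extension behaviour just described can be captured in a few lines. The sketch below (Python; the edge representation and the conservative manhattan corner treatment are illustrative assumptions, not the JCH "edge function" itself) tests whether two vertical edges violate a minimum-separation rule, extending both edges' ends by the rule distance d so that the rule is also enforced near rectangle corners.

from dataclasses import dataclass

@dataclass
class VEdge:
    x: float      # single x coordinate of a vertical edge
    y1: float     # lower y coordinate (y1 <= y2 by convention)
    y2: float     # upper y coordinate

def violates_min_separation(e1: VEdge, e2: VEdge, d: float) -> bool:
    # True if the two vertical edges are closer than d in x while their
    # y-extents, each extended by d as in Figure 2-2(b), still overlap.
    if abs(e1.x - e2.x) >= d:
        return False                      # far enough apart in x
    return min(e1.y2, e2.y2) + d > max(e1.y1, e2.y1) - d

# Two edges 2 units apart in x: the first pair meets near a corner, the second does not.
print(violates_min_separation(VEdge(0.0, 0.0, 5.0), VEdge(2.0, 6.0, 10.0), 3.0))   # True
print(violates_min_separation(VEdge(0.0, 0.0, 5.0), VEdge(2.0, 12.0, 15.0), 3.0))  # False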
In order to describe a design rule, the value of the design rule parameter must be given along with the conditions
under which it applies. As we mentioned above, in the JCH model the value of the design rule parameter in restricted-
2. Here we are assuming a manhattan model for layout in which all edges must be parallel to one euclidean axis. The model can easily be
extended to non-manhattan layouts by allowing constraints between all edges that are not perpendicular.
| x_edge1 − x_edge2 | ≥ d   (minimum separation rule)
| x_edge1 − x_edge2 | ≤ d   (maximum separation rule)
| x_edge1 − x_edge2 | = d   (exact separation rule)   (2-2)
separation constraints is commonly conditional only on the edge-types of the pair of edges involved. This condition is
given a notational representation which we refer to as an edge-pair, written in the following way:
Since the two edges in the edge-pair notation share a common outside layer, this implies that constraints can only
be placed between adjacent edges. At the end of Section 2. we will show that restricting the edges to be adjacent
proves to be a serious limitation. We will expand the definition of the design rule to include not only the edge-types of
both edges, but also the types of edges that may appear between these two edges. For brevity in our expanded defini-
tion, and in order to emphasize that the two edges must share a common outside-layer, we introduce the following
notation to represent the same edge-pair shown in (2-3).
We call this expression an edge-path, as it represents a sequence of edges between which the constraint lies. We
will develop the grammar for edge-paths more fully in Section 3.
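One possible machine encoding of edge-pairs and edge-paths (our own choice of representation, not the paper's concrete syntax) is as an ordered tuple of region types, where each region type is the set of layers present and the middle entry of an edge-pair is the common layer configuration shared by the two edges. A minimal sketch in Python:

def edge_pair(type1, type2, type3):
    # Edge-pair in the spirit of (2-3): a type1/type2 edge and a type3/type2 edge
    # sharing the common region type2 between them.
    return (frozenset(type1), frozenset(type2), frozenset(type3))

def is_valid_path(path):
    # Adjacent region types along a path must differ; otherwise there is no edge there.
    return all(path[i] != path[i + 1] for i in range(len(path) - 1))

spacing_like = edge_pair({"A"}, set(), {"A"})   # material A on both sides of empty space
print(is_valid_path(spacing_like))              # True
print(is_valid_path((frozenset({"A"}), frozenset({"A"}), frozenset())))  # False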
In the remainder of this section we will describe the design rules of the JCH model using the edge-path notation.
These rules can be split up into two categories, one which contains rules involving only a single layer, and one which
contains rules involving two layers. The former are referred to as intra-layer rules, the latter inter-layer rules. These
will be reviewed in Sections 2.1. and 2.2. Section 2.3. will discuss the authors' method for addressing design rules
which involve more than two layers, and Section 2.4. will address the limitations of the JCH model.
2.1. Intra-Layer Design Rules
Some design rules involve only a single layer. These are referred to as the intra-layer rules. When checking intra-layer rules, one layer is considered at a time and the presence of all other layers is ignored. If we refer to this layer as
layer A, the layer type on either side of an edge can take on only the values A and Ā. There are thus four possible edges that can occur in this situation: A/A, A/Ā, Ā/A, and Ā/Ā. Of these, the patterns A/A and Ā/Ā do not represent actual edges, as the layer is the same on both sides. Therefore
Figure 2-2: Conditions placed on the design rule constraint by the degree of vertical overlap of two vertical edges. In (a) a constraint is required because the rectangles overlap in the y direction. In (b) a constraint is still required to prevent design rule violation at the rectangle corners until the rectangles are separated in y by more than the distance d. In (c) is demonstrated two possible interpretations of design rule constraints at rectangle corners.
only the remaining two edges will appear in the design rule constraints.
Since the design rule constraints must share a common outside-layer, there are only two intra-layer constraints:
In (2-6) each row represents one design rule. The expressions WIDTH and SPACING represent two different
design rule parameters which correspond to the parameter d in (2-2), and the associated edge-path indicates the conditions
under which this design rule is applied. These two intra-layer rules are diagrammed in Figure 2-3.
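As an illustration of the intra-layer enumeration, the following sketch (Python, using the tuple-of-region-types encoding introduced above, which is our own representation) generates the candidate single-layer edge paths and shows that exactly two survive; under the usual reading of Figure 2-3, the path whose middle region contains the material corresponds to WIDTH and the other to SPACING.

from itertools import product

LAYER = "A"
REGIONS = [frozenset(), frozenset({LAYER})]       # layer A absent or present

def is_edge(r1, r2):
    return r1 != r2                               # an edge exists only where the layer changes

candidates = [(t1, t2, t3) for t1, t2, t3 in product(REGIONS, repeat=3)
              if is_edge(t1, t2) and is_edge(t2, t3)]

for t1, t2, t3 in candidates:
    rule = "WIDTH" if LAYER in t2 else "SPACING"  # material between the edges -> WIDTH
    print(rule, sorted(t1), sorted(t2), sorted(t3))
# Exactly two constraints remain, matching the two intra-layer design rule
# parameters diagrammed in Figure 2-3.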
2.2. Inter-Layer Design Rules
The case when two different layers can be involved in a design rule is more complex than the single-layer case. In
the JCH model these are referred to as the inter-layer design rules. For any pair of layers present in the design, if we
refer to these layers as layer A and layer B, the layer types on either side of an edge can take on the values ĀB̄ (neither layer), AB̄ (A only), ĀB (B only), or AB (both layers). There are therefore 16 possible edges that can be formed, as shown below.
As in the intra-layer case, edges with the same layer type on both sides are not considered to be true edges, and we
can therefore eliminate four of these "non-edges" from the list shown in (2-7). In addition, there are four edges in
which both layers undergo a transition, which can also be eliminated. These edges are called "touch edges" in the
JCH model, and they will not normally be encountered during inter-layer design rule checks. We elaborate on this
issue in Section 4.3. Only the eight edge types shown in bold in (2-7) will be retained for the following analysis.
There are four possible layer types that can represent type1 in a design rule constraint. In order to form a valid pair
of edges there are only two possible values for the type2 layer given type1, and only two possible values for type3
given a value for type2. This results in 16 different inter-layer constraints, as shown below.
Figure 2-3: The two intra-layer design rule parameters of the JCH model, WIDTH and SPACING.
In (2-8) notice that in the SPACING and WIDTH rules the variables for either layer A or layer B appear with the
same inversion in all three of the type1, type2 and type3 layer fields. Thus, this layer is either absent on both sides of
both edges, or always present on both sides. These edge paths will also be matched by the intra-layer WIDTH and
SPACING rules, and the JCH model assumes that they therefore need not be checked here.
With the WIDTH and SPACING rules eliminated, only the remaining four rules require inter-layer design rule
parameters. These four rules are illustrated in Figure 2-4. Note that in the JCH model the A-EXTENSION-OF-B rule
is given the name "extension" while the B-EXTENSION-OF-A rule is given the name "margin". We feel that these
names are ambiguous and have chosen more descriptive names.
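The reduction from the 16 candidate inter-layer constraints to the four rules of Figure 2-4 can be reproduced mechanically. The sketch below (Python; the encoding and the assignment of rule names to the middle region follow our reading of Figure 2-4 and are assumptions rather than the paper's own tables) enumerates the candidates, discards those in which one layer keeps the same value in all three fields, and groups the remainder by the shared region between the edges.

from itertools import product

LAYERS = ("A", "B")
REGIONS = [frozenset(s) for s in (set(), {"A"}, {"B"}, {"A", "B"})]

def single_layer_edge(r1, r2):
    # True if exactly one layer changes across the edge (touch edges excluded).
    return len(r1 ^ r2) == 1

paths = [(t1, t2, t3) for t1, t2, t3 in product(REGIONS, repeat=3)
         if single_layer_edge(t1, t2) and single_layer_edge(t2, t3)]
print(len(paths))                                  # 16 candidate inter-layer constraints

def reduces_to_intra(path):
    # One layer has the same value in all three fields -> matched by WIDTH/SPACING.
    return any(all((l in t) == (l in path[0]) for t in path) for l in LAYERS)

inter = [p for p in paths if not reduces_to_intra(p)]

names = {frozenset(): "CLEARANCE", frozenset({"A", "B"}): "OVERLAP",
         frozenset({"A"}): "A-EXTENSION-OF-B", frozenset({"B"}): "B-EXTENSION-OF-A"}
print(sorted({names[t2] for (t1, t2, t3) in inter}))   # the four rules of Figure 2-4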
Here we would like to point out that neglecting the SPACING and WIDTH constraints in the inter-layer macros
makes sense in the case of the first column in which the unchanging layer is always absent. However the case of the
second column, in which the unchanging layer is present and overlaps both edges, is an example of a "conjunctive"
rule, as discussed in Section 2.3. Conjunctive SPACING and WIDTH rules may be different in some processes than
the non-conjunctive rules, in which case a user may wish to retain these edge-paths as legitimate design rules.
2.3. Conditional and Conjunctive Design Rules
Two classes of design rules which cannot be directly represented in the JCH model are examined by its authors.
They refer to these two classes as conditional and conjunctive design rules.
Conditional design rules are ordinary intra-layer or inter-layer design rules in which the constraint parameter is
conditional on some outside factor. An example of this is the common metal-halation rule which requires a larger separation
between metal wires when one of the wires is wider than some threshold. This rule is a result of processing
effects due to the high reflectivity of interconnect metal and its effect on photoresist exposure times. The specification
of this rule requires that the SPACING rule for a particular layer be made conditional on the width of the rectangle to
which one or both of the edges belongs. This width would have to be extracted from the layout prior to design rule
constraint generation.
In conjunctive design rules, the constraint parameter value depends on the presence or absence of an otherwise
unrelated layer. As an example, step coverage problems in non-planar metallization processes may require the SPACING
between two metal-2 wires to be increased if either wire overlaps metal1. This is an example of the conjunctive
Figure 2-4: The four inter-layer design rule parameters of the JCH model: CLEARANCE, A-EXTENSION-OF-B, B-EXTENSION-OF-A, and OVERLAP.
inter-layer SPACING rule discussed in the previous section. More complex conjunctive design rules may involve a
third layer, as in a conjunctive CLEARANCE rule between metal-1 and metal-2 wires which is increased if either
wire overlaps a polysilicon wire. This latter case, in which an inter-layer design rule between two layers A and B is
conditional on a third layer C is the only situation in which a design rule may involve edges between more than two
layers.
The authors suggest that some conditional and conjunctive design rule checks can be accomplished by forming
carefully chosen derivative layers. When this fails they suggest an ad-hoc method in which the design rules are
extended with special conditional constraints tailored to the specific problem.
Conditional design rules are difficult to characterize, as there can be an almost endless variety, and it may be the
case that one will always have to resort to ad-hoc methods. However, the extensions to the JCH model that we propose
in Section 3. are capable of expressing any conceivable conjunctive design rule.
2.4. Limitations of the JCH model
Recall that, in the JCH model, a design rule is represented by a constraint between a type1/type2 edge and a type3/type2 edge. This constraint can be interpreted and implemented in a number of ways. The authors choose to interpret the constraint as a region of width d extending from the type1/type2 edge from which the type3/type2 edge is forbidden from appearing, and vice versa.
For reasons of efficiency, the authors have chosen to implement their system with a simplified version of this inter-
pretation. Instead of searching for the type3/type2 edge within the constraint region, they search for any instances of
the layer type3. This would initially appear to be equivalent, but they discuss a case, shown in Figure 2-5(a), in which
these interpretations are not consistent. In this case, their simplified model would search for the type3 layer inside the constraint region of the type1/type2 edge, and would identify the clearance constraint shown. This should not be flagged
as an error since it does not comply with the original definition of clearance. They state that, in the case of the clearance
constraint, their simplified interpretation is only consistent with their original definition if the following constraint
holds.
(2-9)
Figure 2-5: Examples of a clearance rule as identified by two different constraint implementations. Figure (a) shows the result of searching for the type3 layer inside the constraint region of the type1/type2 edge. Figure (b) shows the result of searching only for the layer that undergoes a transition across the rule definition's type3/type2 edge. In both figures we show the "correct" clearance rule that we identify by eye.
When saying that the simplified interpretation is "consistent" under these conditions, they mean that edges identified
as constraints by the simplified interpretation, but not the original design rule definition, will never be flagged as
errors unless they violate other design rules as well. Similar constraints can be derived for the other three inter-layer
parameters.
However, we would like to point out that under their simplified interpretation, what most designers would consider
the correct clearance rule, marked in Figure 2-5(a), is never checked. This could lead to a design rule check passing
when in fact an error exists. Furthermore, and potentially more serious, the correct clearance rule is not identified by the authors' original definition of design rules either. The reason for this failure is the presence of an intermediate edge between the edges that should be checked, which causes the outside layers of the two edges to be different. The following constraint, more stringent than that of (2-9), is required to guarantee that clearance violations are not missed in this example. Again, similar constraints are required for the other three inter-layer parameters.
(2-10)
Interestingly, the authors present a second simplified interpretation of the design rule constraints which actually
detects the correct edges in this case. An example of this interpretation in the context of the same clearance rule used
in Figure 2-5(a) is shown in Figure 2-5(b). Instead of searching the constraint region of the type1/type2 edge for
instances of the layer type3, they search only for the layer that undergoes a transition across the type3/type2 edge
(recall that, in the absence of touch edges, only one layer will transition across the edge.) In this example it is equivalent
to searching for instances of layer B inside the constraint region of the edge. In this situation, the check
does in fact identify the correct clearance constraint, as shown. The authors indicate that this second simplified interpretation
for the design rules is only consistent with their original definition for design rules given the same constraint
written in (2-10). What they failed to observe is that when this constraint is not met, it is the simplified interpretation
that is correct and not the original interpretation. The adoption of this second simplified interpretation may seem to
solve the inaccuracy problem, but in Section 3.3. (Figure 3-4) we will show an example of a clearance rule that would
not be located by this method either.
As a final comment on this second simplified interpretation of the design rule constraints, we note that, unlike the
check in Figure 2-5(a), the check in Figure 2-5(b) is not reflexive. It is not possible to identify the same constraint by
searching for layer A from the edge, as the only inter-layer constraint that begins on such edges is the A-EXTENSION-OF-B constraint. Until this point, all constraints have had duals which allowed the constraint to be
identified starting from either edge involved. When all rules are reflexive, design rule checking can be simplified
because the layout only has to be scanned in one direction along each manhattan axis.
In the following section we develop our edge-path grammar and show how it provides an unambiguous definition
for the design rules under all circumstances. We also show how the edge-path grammar can be used to extend the JCH
design rule model to include a straightforward and consistent model for conjunctive design rules.
3. Extending the JCH model
The root of the limitations demonstrated in the JCH model is its inability to model situations in which one or more intermediate edges lie between the two edges that must be constrained. When this occurs, the
two edges no longer share a common outside layer, and the rule cannot be written as in (2-3). What is required is a
more general view of the design rule constraints that spans multiple edges. In order to express these, we will extend
the notation which we presented in (2-4) of Section 2., which we have called the edge-path grammar.
3.1. The Edge-Path Grammar
When defined in the most general way possible, a design rule constraint can exist between any two non-perpendicular
edges in a circuit layout. There can be an arbitrary number of intermediate edges between the two constrained
edges, and the value of the constraint can depend on the exact pattern of layer boundaries that make up these edges. If we choose a point on each of the pair of edges we wish to check, and connect them with a line as shown in Figure 3-1, this line will originate at one edge, cross a specific sequence of layers as we move from one edge to the other, and terminate on the second edge. We call this sequence an edge-path. In an edge-path, type1, type2, ..., and typen represent boolean products of layer variables which indicate the presence or absence of each layer in that particular region. We will now present this in a more formal way.
Definition 3-1: l is a layer in an integrated circuit layout. The layer l can represent either a primary layout layer, or
a derivative layer formed by a sequence of logical set operations on the layout layers.
Definition 3-2: L is the set of all layers present in a particular integrated circuit layout.
Definition 3-3: p is a boolean variable representing a layer l ∈ L. The range of p is the set {0, 1}, in which the value 0 indicates the absence of layer l, and the value 1 indicates the presence of layer l, in a particular region of the layout. Alternatively, the range of p can be written {p̄, p}, where p̄ corresponds to 0 and p to 1. If a variable for a particular layer does not appear in an expression, it can be considered a don't-care condition.
Definition 3-4: P is an expression representing the conjunction of one or more boolean variables p. P represents the combination of layers from a subset of L which are present in a particular region of a layout. Since the variables p have a binary range, P can be thought of as a cube in an n-dimensional boolean lattice.
Definition 3-5: An edge-path E is a totally-ordered set {P1, P2, ..., Pn}, which represents a sequence of layers encountered along a line drawn in the plane of the layout. Each pair of adjacent members in the set, (Pi, Pi+1) for all 1 ≤ i < n, represents an edge at which one or more layers undergoes a transition.
Definition 3-6: The length of an edge-path E is defined as n - 1, the number of edges encountered when following a particular edge-path.
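To make Definitions 3-1 through 3-6 concrete, the following sketch (our own illustration, not part of any existing checker) models a region as a mapping from layer names to presence flags and an edge-path as the ordered list of such regions; the intra-layer WIDTH rule for a layer A then appears as an edge-path of length two:

from typing import Dict, List

# Our illustration of Definitions 3-1 to 3-6. A region P maps layer names to
# presence flags (layers that do not appear are don't-cares); an edge-path E
# is the ordered sequence of regions crossed by a line drawn between the two
# edges being checked.

Region = Dict[str, bool]
EdgePath = List[Region]

def path_length(path: EdgePath) -> int:
    # Definition 3-6: the number of edges, i.e. adjacent region pairs
    return len(path) - 1

# The WIDTH rule for layer A as an edge-path of length two:
width_A: EdgePath = [{"A": False}, {"A": True}, {"A": False}]
assert path_length(width_A) == 2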
Using the edge-path grammar we have a more powerful notation for representing VLSI design rules than that of
the JCH model's edge-pair. We have already demonstrated that we can represent all of the JCH model intra-layer and
inter-layer rules as edge paths of length two. However, we have also demonstrated that this set of edge-paths is inadequate
to describe the design rules under all circumstances. We will show that a complete set of design rules, including
conjunctive rules and rules involving touch edges, can be modeled under the edge-path notation if the edge paths are
allowed to become arbitrarily long.
Fundamentally, the design rules for a semiconductor process can be described as a large set of edge-path/parame-
ter pairs. However, in order to simplify design rule entry, it is advantageous to follow the methodology of the JCH
model and classify these edge-paths into sets which effectively represent the same design rule. This facilitates the use
of design rule macros that free the user from an intimate knowledge of the full model. Many such classifications are
possible. We will demonstrate that any edge-path can either be classified as one of the six parameters of the JCH
model (SPACING, WIDTH, CLEARANCE, A-EXTENSION-OF-B, B-EXTENSION-OF-A, and OVERLAP), or it
corresponds to a configuration of geometry that does not normally require a design rule.
In some situations this simple classification will be inadequate. It neglects the conjunctive design rules and rules
involving touch edges. In some cases it may actually place a design rule on a geometrical pattern that should be forbidden
from occurring. Obviously the edge-path classification that we present should only be considered a typical
default. The user should be allowed to override the macros and forbid particular configurations, or add new edge-
paths for situations that are not covered, and we will discuss several situations in which it will be common to do so.
Figure 3-1: An edge-path between two edges with an arbitrary number of intervening edges.
3.2. Edge-Paths of Length Three
We begin by studying all edge-paths of length three. For simplicity, we have decided to forbid touch-edges for the
same reasons as stated earlier, so there are 32 edge paths of length three. It is easy to show that all non-conjunctive
intra-layer design rules can be captured in sequences of length two, so we will only concern ourselves with the inter-layer
design rules.
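The counts used here and in Section 3.3 (16 edge-paths of length two, 32 of length three, 64 of length four, 128 of length five) can be reproduced with a small enumeration over two layers in which every edge changes exactly one layer; the sketch below is ours and serves only to verify the counting:

from itertools import product

# Sketch (ours) of the enumeration behind Sections 3.2 and 3.3: all edge-paths
# over two layers A and B in which every edge is a transition of exactly one
# layer (touch edges forbidden).

def edge_paths(length):
    regions = list(product((0, 1), repeat=2))    # (A, B) presence flags
    paths = [[r] for r in regions]
    for _ in range(length):                      # append one edge at a time
        paths = [p + [r] for p in paths for r in regions
                 if sum(x != y for x, y in zip(p[-1], r)) == 1]
    return paths

print([len(edge_paths(n)) for n in (2, 3, 4, 5)])   # [16, 32, 64, 128]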
Recall that our methodology is to examine all possible edge-paths and show that each one either matches a specific
JCH model parameter, or it corresponds to a situation that is not a traditional design rule. Half of the 32 edge-paths of
length three fall under this latter case and do not match any of the six JCH model parameters. These are shown below,
and several examples are diagrammed in Figure 3-2.
One thing that each of the 16 edge-paths in (3-2) has in common is that, like the examples in Figure 3-2, they begin on an edge of a rectangle, cross the near side of a second rectangle, and terminate on the far edge of the second rectangle, or vice versa. These two edges are usually constrained by the sum of a width and a separation/clearance/extension rule, and we know of no design rules in use that specify a more stringent rule in these circumstances. For
this reason, we choose to ignore these patterns when they are matched in a layout, essentially making them non-con-
straints. However, we emphasize that a designer could override this decision and specify design rule parameters for
each of these patterns if desired.
Each of the remaining 16 edge-paths can be matched to a specific intra-layer or inter-layer design rule parameter
as defined in the JCH model. These are shown below and diagrammed in Figure 3-3.
Figure 3-2: Examples of edge-paths of length three that do not correspond to any inter-layer rule.
(3-3)
Upon examining the 16 edge-paths in (3-3), we note that, just as in the case of the inter-layer edge-paths of length
two, the SPACING and WIDTH rules would be matched by the corresponding intra-layer design rules, and we can
eliminate them from the default inter-layer design rule specification. However, as before, these patterns represent conjunctive
design rules and the user may wish to specify these edge-paths to the system manually.
The remaining eight edge-paths should be generated by the inter-layer design rule macro in addition to the edge
paths in (2-8). They represent layout patterns which may exist in the layout but which are not matched by any of the
design rules in the JCH model. This will eliminate the constraints under which the JCH model will either indicate a
false positive or false negative during a design rule check as a result of layout situations like those addressed in [2],
and repeated in Figure 2-5. Of course some of these patterns, especially the new EXTENSION edge-paths, may correspond
to geometry which should not occur in the layout. If this is the case, the designer should specify these edge-
paths as forbidden edge types instead of allowing them to be matched to the corresponding inter-layer design rule.
3.3. Edge Paths of Length Four and Above
There are 64 edge paths of length four, 128 edge paths of length five, with the number continuing to double as the
length increases. For brevity we will only summarize our results in this section. It turns out that we can identify conjunctive
WIDTH and SPACING constraints with edge-paths of arbitrary length. We show the four layer A WIDTH
and SPACING design rules of length four in Figure 3-4. It is clear from these examples that we can insert an arbitrary
number of layer B edges between the layer A edges whose width or spacing is to be constrained, and vice versa. How-
ever, all of these cases can probably be covered by a single conjunctive intra-layer macro, if not by the original non-
conjunctive intra-layer macro.
Also shown in Figure 3-4 are the only inter-layer design rules of length four or larger, two CLEARANCE rules.
These two edge-paths, shown below in (3-4), should also be generated by the design rule compiler when an inter-layer
design rule macro is specified by the user, or specified as forbidden edge-paths by the user if they correspond to
an illegal configuration.
It is interesting to note that the CLEARANCE rules shown here would not be detected by the JCH model's second
simplified interpretation for the design rule constraints, which was discussed in Section 2.4. Under this interpretation
the CLEARANCE rule is detected by searching for instances of layer A within the constraint region of
edges and instances of layer B within the constraint regions of edges. However, there are no such regions along this edge-path.
Figure 3-3: The 8 inter-layer design rules for edge-paths of length three.
Just as was done in Equation 2-9, we can write the following constraint on the design rule parameters under which
the original JCH model will still be valid for these two CLEARANCE rules. This constraint is more strict than the
constraints described by the authors, and it corresponds to a situation in which their model could potentially have
generated false positive design rule checking results.
As a final result, we state that, with the exception of the conjunctive intra-layer design rules mentioned above,
which continue to appear in edge-paths of any length, there are no inter-layer design rules of length greater than four.
It should be obvious that OVERLAP rules cannot exist with path lengths greater than two (i.e. there can be no intermediate
edges.) In the case of the CLEARANCE and EXTENSION rules, except in the cases we have already dem-
onstrated, long edge-paths always contain shorter sub-strings that also match the same rule. This shorter sub-path
therefore covers the longer edge-path, guaranteeing that it will never be in violation. Examples of this effect are presented
in Figure 3-5.
Figure 3-4: The SPACING and WIDTH design rules for layer A (the rules for B are the same but with A and B reversed), plus the two CLEARANCE rules, for edge-paths of length four.
Figure 3-5: Example of cases in which a design rule with a path-length of four is blocked by an identical rule which is a sub-expression of the same edge-path.
4. Implementation Suggestions
In this section we discuss several implementation details which are important if this model is to be incorporated
into a practical design system. In Section 4.1. we present a simple state machine representation of the design rule
edge paths which neatly summarizes the results of Section 3. As mentioned previously, there are no conjunctive rules
in the default specification, and by default touch edges are forbidden between layers that are to be checked for viola-
tions. In Sections 4.2. and 4.3. we show how this system can be extended to specify conjunctive inter-layer design
rules, and how touch-edges can be handled in an elegant way.
4.1. State Machine Representation of Design Rules
Section 3. presented a large collection of edge-paths and the design rule parameter that is associated with each.
These can be neatly summarized by the state diagram shown in Figure 4-1. We can view an edge-path as a straight-line
drawn in the layout plane, beginning at a point on a rectangle edge (the source edge), and projecting in a direction
perpendicular to that edge. As this line is traversed from the source-edge outward, each new edge that is encountered
will cause a state transition that maintains a record of the previous edges that have been encountered, and indicates
when a design rule check is needed. The line traversal can end when a terminal vertex in this state diagram is reached.
Design rule constraints are required when certain states are reached, as shown in the state diagram.
This conceptual view, in addition to providing a compact summary of the edge-path checks, leads to a direct
implementation of the common shadow algorithm used for constraint-generation in one-dimensional compaction, a
good summary of which is presented in [9].
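The following skeleton (ours) illustrates the traversal just described; it does not reproduce the full transition table of Figure 4-1, but uses a toy table that recognizes only the two intra-layer rules for a single layer A, with an edge represented as the pair of layer-presence flags on its two sides:

# Skeleton (ours) of the scan described above. The table below is a toy
# fragment, not the transition table of Figure 4-1.

RULES = {
    ((0, 1), (1, 0)): "A-WIDTH",      # rising then falling edge of A
    ((1, 0), (0, 1)): "A-SPACING",    # falling then rising edge of A
}

def scan(edges):
    # Walk the edges met along a perpendicular scan line and report which
    # design-rule checks are required against the source edge.
    checks, history = [], []
    for edge in edges:
        history.append(edge)
        rule = RULES.get(tuple(history[-2:]))    # look at the last two edges
        if rule:
            checks.append((rule, len(history) - 1))
        # a full implementation would also stop at the terminal states
    return checks

print(scan([(0, 1), (1, 0), (0, 1)]))   # [('A-WIDTH', 1), ('A-SPACING', 2)]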
Figure 4-1: State machine that summarizes the edge-paths resulting in design rule checks.
4.2. Conjunctive Design Rule Specification
We have already demonstrated how four of the inter-layer edge-paths of length two correspond to rules that can be
considered conjunctive versions of the intra-layer width and spacing rules. These edge-paths are ignored in the JCH
model because they are covered by the intra-layer rules. However, if conjunctive versions of these rules are required,
some method must be present to allow them to be checked.
Conjunctive versions of the inter-layer rules are also common. These can be specified simply by allowing edge-
path expressions to contain three or more boolean layer variables. As an example, Figure 4-2 shows a conjunctive
CLEARANCE rule in which the separation between layers A and B changes because of their overlap of layer C.
If conjunctive rules are required, the JCH model outlines a strategy which makes use of derived-layers obtained
through logical set operations on the existing layout layers. However, in the design rule system that we have outlined,
these rules can be specified directly by adding their edge-paths as additional checks. Of course, when three or more
layers are involved in the edge path expressions, this will result in a large number of possible edge-paths that a user
would be required to enter into the system, and that must be added to the state machine in Figure 4-1.
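Using the edge-path representation sketched in Section 3.1, such a conjunctive rule could be entered directly as an edge-path whose regions mention the third layer. The entry below is purely illustrative and its parameter value is invented:

# Illustrative entry only; the parameter value is invented. A conjunctive
# CLEARANCE rule between A and B that applies where both wires, and the gap
# between them, lie on layer C, written as an edge-path whose regions
# mention all three layers.

conjunctive_clearance = {
    "path": [{"A": True,  "B": False, "C": True},    # inside the A wire, on C
             {"A": False, "B": False, "C": True},    # gap between the wires, on C
             {"A": False, "B": True,  "C": True}],   # inside the B wire, on C
    "parameter": "CLEARANCE",
    "value": 5.0,
}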
4.3. Touch Edges
In the JCH model, "touch-edges" are edges at which more than one layer undergoes a transition. Figure 4-3 shows the two types of touch-edges which can occur during inter-layer checks. Outside-touch edges are formed when rectangles touch but do not overlap, and inside-touch edges are formed when rectangles overlap and touch on one edge.
The authors of the JCH model chose not to include touch edges in their definitions of the inter-layer rules. Instead
they introduce two flags into the inter-layer macro that, if set, simply forbid them from appearing for each pair of lay-
ers. Their reasoning is not given but is straightforward. It is usually the case that if any design rules exist between two
layers, touch edges will be forbidden from occurring. If the two layers interact during processing so as to require
design rules between them, the overlap, extension, and clearance rules ensure that rectangle edges are separated. It is
inconsistent under these circumstances to allow two rectangles to touch without overlapping. Conversely, if there are
no design rules between two layers, implying that they don't interact with each other during processing, there is no
reason for touch edges to be forbidden.
Figure 4-2: A conjunctive CLEARANCE rule involving layers A, B, and C.
Figure 4-3: Examples of OUTSIDE_TOUCH and INSIDE_TOUCH edges.
It is unclear exactly how the design rules should be defined when touch edges are allowed on layers between which design rules are defined. One valid option is that the designer could enter edge-path expressions to specify the
design rules between touch-edges and other edges. However, this could result in a large number of new rules.
The JCH model deals with touch edges by considering the inter-layer rules to be "minimum distance or touch
rules", though they fail to elaborate precisely what this means. We interpret this as meaning that when a touch-edge is
encountered, constraints are generated to the touch-edge exactly as if it were two separate edges of each layer individ-
ually. Thus, there can potentially be two constraints generated between a normal edge and a touch-edge, and there
may be up to four constraints generated between two touch edges.
If we make use of the state-machine description of the design rules shown in Figure 4-1, there is a particularly elegant
way of viewing constraint generation for touch edges under this "minimum distance or touch rule" interpreta-
tion. If a touch edge is encountered as an edge-path is being traced through the layout, the state machine simply
explores both branches out of the current state.
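One way to realize this branching, under the representation used in the earlier sketches, is to split a touch edge into the two single-layer edges it is composed of and hand each resulting ordering back to the state machine as a separate branch; the helper below is our own illustration:

# Our illustration of the branching: a touch edge, at which both A and B
# change, is split into the two single-layer edges it is composed of.
# Regions are (A, B) presence flags as in the earlier sketches.

def touch_branches(before, after):
    changed = [i for i in range(2) if before[i] != after[i]]
    if len(changed) == 1:                 # ordinary edge: a single branch
        return [[(before, after)]]
    branches = []
    for first in changed:                 # which layer transitions first
        mid = tuple(after[i] if i == first else before[i] for i in range(2))
        branches.append([(before, mid), (mid, after)])
    return branches

# An INSIDE_TOUCH edge from "A and B present" to "neither present":
print(touch_branches((1, 1), (0, 0)))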
5. Conclusions
In Section 2.4. we discussed several limitations of the JCH model in the context of several examples provided by
its authors. We showed how the original definition of the design rules, a constraint between a type1/type2 and a type3/
type2 edge, can fail to match some edge patterns to the correct design rule. Two alternative interpretations for the
design rule definitions are presented by the authors. The first, which they chose to implement, searches for any
regions of type3 material inside the constraint region of the type1/type2 edge, and vice versa. The second interpretation
presented by the authors searches the type1/type2 constraint region for the presence of the layer in type3 that
makes a transition across the type3/type2 edge. As the authors point out, all three interpretations can yield different results under certain conditions. All three methods can report false design rule violations under the conditions given by the authors and, more importantly, can fail to report design rule violations under the conditions we have shown.
The authors of the JCH model have presented some constraints under which their various implementations of the
design rules are correct, in the sense that a design rule violation will not be reported between edges which are incorrectly
flagged with constraints by their model. However, we have demonstrated that their constraints are insufficient,
causing legitimate design rules to be missed. With tighter constraints their model can easily be corrected, but we do
not think that creating conditions under which the model does not apply is an elegant solution.
The root of the limitations of the JCH model is its inability to recognize design rules between rectangle edges which are not adjacent. We propose an extension to the JCH model in the form of a new syntax for expressing the layout patterns that define the different design rules, which we call an edge-path.
Edge-paths can express a constraint between two edges that are separated by a sequence of edges of arbitrary
length. With this syntax, we can perform an exhaustive examination of all possible edge-paths and check each pattern
to see if it corresponds to a design rule that should be enforced. We have demonstrated that we can characterize all
edge paths through the use of the six original JCH model design rule parameters. The two intra-layer parameters can
be expressed with only four edge-paths, all of length two. The four inter-layer parameters can be expressed using
edge paths with a length of at most four. We have also shown how this new syntax can be used to elegantly express
design rules involving "touch-edges" as well as the conjunctive design rules mentioned by the authors of the JCH
model.
Acknowledgments
The authors would like to thank Ronald Lomax and Jeff Bell for reading early drafts of this paper and for their
insightful comments.
References
Cadence Design Systems.
Formal definitions of edge-based geometric design rules.
Combinatorial Algorithms for Integrated Circuit Layout.
Introduction to VLSI Systems.
A Formal Approach to Design-Rule Checking.
Computational Geometry: An Introduction.
A statistical design rule developer.
Silicon Processing.
Symbolic layout and compaction.
293906 | Discrete Lotsizing and Scheduling by Batch Sequencing. | The discrete lotsizing and scheduling problem for one machine with sequence-dependent setup times and setup costs is solved as a single machine scheduling problem, which we term the batch sequencing problem. The relationship between the lotsizing problem and the batch sequencing problem is analyzed. The batch sequencing problem is solved with a branch & bound algorithm which is accelerated by bounding and dominance rules. The algorithm is compared with recently published procedures for solving variants of the DLSP and is found to be more efficient if the number of items is not large. | Introduction
In certain manufacturing systems a significant amount of setup is required to change production from
one type of products to another, such as in the scheduling of production lines or in chemical engineering.
Productivity can then be increased by batching in order to avoid setups. However, demand for different
products arises at different points in time within the planning horizon. To satisfy dynamic demand, either
large inventories must be kept if production is run with large batches or frequent setups are required if
inventory levels are kept low. Significant setup times, which consume scarce production capacity, tend
to further complicate the scheduling problem. The discrete lotsizing and scheduling problem (DLSP) is a
well-known model for this situation.
In the DLSP, demand for each item is dynamic and back-logging is not allowed. Prior to each production
run a setup is required. Setup costs and setup times depend on either the next item only (sequence
independent), or on the sequence of items (sequence dependent). Production has to meet the present or
future demand, and the latter case also incurs holding costs. The planning horizon is divided into a finite
number of (short) periods. In each period at most one item can be produced, or a setup is made ("all
or nothing production"). An optimal production schedule for the DLSP minimizes the sum of setup and
holding costs.
The relationship between the DLSP and scheduling models in general motivated us to solve the DLSP as a
batch sequencing problem (BSP). We derive BSP instances from DLSP instances and solve the DLSP as a
BSP. Demand for an item is interpreted as a job with a deadline and a processing time. Jobs corresponding
to demand for the same item are grouped into one family. Items in the DLSP are families in the BSP.
All jobs must be processed on a single machine between time zero and their respective deadlines, while
switching from a job in one family to a job in another family incurs (sequence dependent) setup times and
setup costs. Early completion of jobs is penalized by earliness costs which correspond to holding costs.
As for the DLSP, an optimal schedule for the BSP minimizes the sum of setup costs and earliness costs.
The DLSP was first introduced by Lasdon and Terjung [10] with an application to production scheduling
in a tire company. Complexity results for the DLSP and its extensions are examined in Salomon et al. [14],
where the close relationship of the DLSP to job (class) scheduling problems is emphasized. A broader view
on lotsizing and scheduling problems is given in Potts and Van Wassenhove [13]. An approach based on
lagrangean relaxation is proposed by Fleischmann [6] for the DLSP without setup times. Fleischmann [7]
utilizes ideas from solution procedures for vehicle routing problems to solve the DLSP with sequence
dependent setup costs. The DLSP with sequence independent setup times and setup costs is examined
by Cattrysse et al. [4]. In a recent work, Salomon et al. [15] propose a dynamic programming based
approach for solving the DLSP with sequence dependent setup times and setup costs to optimality. The
results of [4], [7] and [15] will serve as a benchmark for our approach for solving the BSP.
The complexity of scheduling problems with batch setup times is investigated by Bruno and Downey [2]
and Monma and Potts [12]. Bruno and Downey show the feasibility problem to be NP-hard if setup times
are nonzero. Solution procedures for scheduling problems with batch setup times are studied in Unal and
Kiran [17], Ahn and Hyun [1] and Mason and Anderson [11]. In [17] the feasibility problem of the BSP is
addressed and an effective heuristic is proposed. In [1] and [11], algorithms to minimize mean flow time
are proposed. Webster and Baker [20] survey recent results and derive properties of optimal schedules for
various batching problems.
The contribution of the paper is twofold. First, we solve the DLSP as a BSP and state the equivalence
between both models such that we can solve either the DLSP or the BSP. Second, we present an algorithm
that solves the BSP faster than known procedures solving the DLSP.
The paper is organized as follows: we present the DLSP and the BSP in Section 2 and provide a numerical
example in Section 3. The relationship of both models is analyzed in Section 4. Section 5 presents a
timetabling procedure to convert a sequence into a minimum cost schedule, and in Section 6 we describe a
branch & bound algorithm for solving the BSP. A comparison of our algorithm with solution procedures
solving variants of the DLSP is found in Section 7. Summary and conclusions follow in Section 8.
Model Formulations
The DLSP is presented with sequence dependent setup times and setup costs, we refer to this problem as
SDSTSC. SDSTSC includes the DLSP with sequence independent setups (SISTSC), sequence dependent
setup costs but zero setup times (SDSC), and the generic DLSP with sequence independent setup costs
and zero setup times (cf. Fleischmann [6]) as special cases.
Table
1: Parameters of the DLSP
i index of item (=family), denotes the idle machine
t index of periods,
q i;t demand of item i in period t
holding costs per unit and period of item i
st g;i setup time from item g to item i,
g;i setup costs per setup period from item g to item i,
sc g;i setup costs from item g to item i,
st g;i g
Table
2: Decision Variables of the DLSP
Y i;t 1, if item i is produced in period t, and 0 otherwise. Y
time in period t
if the machine is setup for item i in period t, while the previous item
was item g, and 0 otherwise
I i;t inventory of item i at the end of period t
The DLSP parameters are given in Table 1. Items (families) in the DLSP (BSP) are indexed by i, and
holding costs per unit of item i and period. Production has to fulfill the demand q i;t for item i
in period t. Setup costs sc g;i are "distributed" over maxf1; st g;i g setup periods by defining per-period
setup costs sc p
g;i . The decision variables are given in Table 2: we set Y production takes place
for item i in period t. V a setup from item g to item i in period t, and I i;t denotes the
inventory of item i at the end of period t.
In the mixed binary formulation of Table 3, the objective (1) minimizes the sum of setup costs sc p
(per
setup period st g;i ) and inventory holding costs. Constraints (2) express the inventory balance. The "all or
nothing production" is enforced by constraints (3): in each period, the machine either produces at full unit
capacity, undergoes setup for an item, or is idle, i.e. Y for an idle period. For st
instantiate V g;i;t appropriately. Constraints (5) couple setup and production whenever st g;i ? 0: if item i is
produced in period t and item g in period t\Gamma- \Gamma1 then the decision variable V st g;i .
Constraints (6) enforce the correct length of the string of setup variables V g;i;t\Gamma- for st g;i ? 1. However,
for st g;i ? 0 we have to exclude the case Y setting any V this is done
by constraints (7). Constraints (8) prevent any back-logging. Finally, the variables I i;- , V g;i;- , and Y i;-
are initialized for - 0, by constraints (11). Due to the "all or nothing production", we can write down
a DLSP schedule in terms of a period-item assignment in a string - specifies the
action in each period, i.e. - st g;i ? 0).
The BSP is a family scheduling problem (cf. e.g. Webster and Baker [20]). Parameters (cf. Table related
to the N families are the index i, the number of jobs n i in each family, and the total number of jobs J .
Table
3: Model of the DLSP
Min
(1)
subject to
I
st
st
st g;i
st g;i ?
st g;i
st g;i ? 0;
I
I
Table
4: Parameters of the BSP
number of jobs of family i,
number of jobs
denotes the j-th job of family
processing time for the j-th job of family i
d (i;j) deadline for the j-th job of family i
w (i;j) earliness weight per unit time for the j-th job of family i
number
Table
5: Decision variables of the BSP
- sequence of all jobs,
denotes the job at position k
C (i;j) completion time of job (i;
there is idle time between the jobs (i
and 0, otherwise
Table
Model of the BSP
Min ZBSP
subject to
st i [k\Gamma1] ;i [k]
(st 0;i [k]
st i [k\Gamma1] ;i [k]
st i [k\Gamma1] ;i [k]
one unit of family i in inventory for one period of time. Setup times st g;i and setup costs sc g;i are given
for each pair of families g and i. The set of jobs is partitioned into families i, the j-th job of family i is
indexed by the tuple (i; j). Associated with each job (i; are a processing time p (i;j) , a deadline d (i;j) ,
and a weight w (i;j) . Job weights w (i;j) are proportional to the quantity (=processing time) of the job
(proportional weights), they are derived from h i and p (i;j) . We put the tuple in brackets to index the job
attributes because the tuple denotes a job as one entity.
The decision variables are given in Table 5. The sequence - denotes the processing order of the jobs,
denotes the job at position k; together with completion times C (i;j) of each job we obtain the
schedule oe. A conceptual model formulation for the BSP is presented in Table 6. ZBSP (oe) denotes the
sum of earliness and setup costs for a schedule oe, which is minimized by the objective (12). The earliness
weighted by w (i;j) , and setup costs sc i [k\Gamma1] ;i [k]
are incurred between jobs of different
families. Each job is to be scheduled between time zero and its deadline, while respecting the sequence
on the machine as well as the setup times. This is done by constraints (13). Constraints
to one if there is idle time between two consecutive jobs. We then have a setup time st 0;i [k]
from the idle
machine rather than a sequence dependent setup time st i [k\Gamma1] ;i [k]
. Initializations of beginning and end of
the schedule are given in (16) and (17), respectively.
Remark 1 For the BSP and DLSP parameters we assume that:
1. setup times and setup costs satisfy the triangle inequality, i.e. st g;i - st g;l st l;i and sc g;i -
2. there are no setups within a family, i.e. st no tear-down times and costs, i.e.
st
3. there is binary demand in the DLSP, i.e. q i;t 2 f0; 1g.
4. jobs of one family are labeled in order of increasing deadlines, and deadlines do not interfere, i.e.
Remark 1 states in (1.) that it is not beneficial to perform two setups in order to accomplish one. Mason
and Anderson [11] show that problems with nonzero tear-downs can easily be converted into problems
with sequence dependent setups and zero tear-downs, which motivates (2. With (1.) and (2.) we have
st 0;i - st g;i analogously for setup costs. Thus, the third term in the
objective (12) is always nonnegative. Assumption (3.) anticipates the "all or nothing production" for
each item i (cf. also Salomon et al. [14]) and is basically the same assumption as (4.): if only jobs of one
family i are considered, they can be scheduled with C
The main observation that motivated us to consider the DLSP as a special case of the BSP is that the
(q i;t )-matrix is sparse, especially if setup times are significant. The basic idea is to interpret items in
the DLSP as families in the BSP and to regard nonzero demand in the DLSP as jobs with a deadline
and a processing time in the BSP. In order to solve the DLSP as a special case of the BSP we derive
BSP instances from DLSP instances in the following way: setup times and setup costs in the BSP and
Comparison
Equivalence
BSP solution procedure
DLSP instances
DLSP solution procedures
BSP(DLSP) or BSPUT(DLSP)
Solution
Transformation
Figure
1: Comparison of DLSP and BSP
DLSP are identical, and the job attributes of the BSP instances are derived from the (q i;t )-matrix by
Definitions 1 and 2.
defined as a BSP instance with unit time jobs derived from a DLSP
instance. For each family i there are n
jobs. An entry q
denotes a job (i;
defined as a BSP instance derived from a DLSP instance. A sequence of
consecutive "ones" in the (q i;t )-matrix, i.e. q
. The number of times that a sequence of
consecutive ones appears for an item i defines n i .
Figure
1 provides the framework for the BSP-DLSP comparison: after transforming DLSP instances
into BSP instances, we compare the performance of solution procedures and the quality of the solutions.
The difference between the approaches is as follows: in the DLSP, decisions are made anew in each
individual period t, represented by decision variables Y i;t and V g;i;t (cf. Table 2). In the BSP, we decide
how to schedule jobs, i.e. we decide about the completion times of the jobs. BSP and DLSP address
the same underlying planning problem, but use different decision variables. Br-uggemann and Jahnke [3]
make another observation which concerns the transformation of instances: a DLSP instance may be not
polynomially bounded in size while the size of the BSP(DLSP) instance is polynomially bounded. On that
account, in [3] it is argued, that the (q i;t )-matrix is not a "reasonable" encoding for a DLSP instance in
the sense of Garey and Johnson [9] because BSP(DLSP) describes a problem instance in a more concise
way.
3 Numerical Example
In this section, we provide an example illustrating the generation of BSPUT(DLSP) and BSP(DLSP). We
will also refer to this example to demonstrate certain properties of the BSP.
In
Figure
2 we illustrate the equivalence between both models. The corresponding parameters setup times,
setup costs and holding costs are given in Table 7. Figure 2 shows the demand matrix (q i;t ) of DLSP and
the jobs at their respective deadlines of BSPUT(DLSP) and BSP(DLSP).
Table
7: Numerical Example: Setup and Holding Costs
st
a 3 a 3 3 3 a 2
1 a
a
a 1 a 2
BSPUT(DLSP), oe ut
a
BSPUT(DLSP), oe ut
BSP(DLSP), oe d
Figure
2: DLSP, BSPUT(DLSP) and BSP(DLSP)
Table
8: BSPUT(DLSP) Instance and Solution
a
For BSPUT(DLSP), we interpret each entry of "one" as a job (i; j) with a deadline d (i;j) . Processing
times p (i;j) are equal to one for all jobs. We summarize the BSPUT(DLSP) parameters in Table 8. An
optimal DLSP schedule with h is the string - a in Figure 2 (with entries f0; a; 1; 2; 3g for idle or
time or for production of the different items, respectively). This schedule is represented by oe ut
a for
BSPUT(DLSP), and is displayed in Table 8. Both schedules have an optimal objective function value
of ZBSP (oe ut
a
In BSP(DLSP), consecutive "ones" in the demand matrix (q i;t ) are linked to one job. The number of jobs is
thus smaller in BSP(DLSP) than in BSPUT(DLSP). For instance, jobs (1; 2) and (1; 3) in BSPUT(DLSP)
are linked to one job (1; 2) in BSP(DLSP), compare oe ut
b and oe b . However, a BSP(DLSP) schedule cannot
a in
Figure
2 since there we need unit time jobs. For BSP(DLSP) we now let the cost
parameters costs for all families) and sc is the
optimal DLSP schedule and oe b the optimal BSP(DLSP) schedule. Again, the optimal objective function
value is ZBSP (oe b
The example shows that the same schedules can be obtained from different models. In the next section
we formally analyze the equivalence between the DLSP and the BSP.
4 Relationship Between BSPUT(DLSP), BSP(DLSP) and DLSP
In the BSP we distinguish between sequence and schedule. A BSP schedule may have inserted idle
time so that the processing order does not (fully) describe a schedule. In the following we will say that
consecutively sequenced before job (i is sequenced at the next position. If
we consecutively schedule job (i there is no idle time between both jobs, i.e. the term
in brackets in constraints (14) equals zero. A sequence - for the BSP consists of groups, where a group is
an (ordered) set of consecutively sequenced jobs which belong to the same family. On the other hand, a
schedule consists of (one or several) blocks. Jobs in one block are consecutively scheduled, different blocks
are separated by idle time (to distinguish from setup time). Jobs in one block may belong to different
families, and both block and group may consist of a single job. As an example refer to Figure 2 where
both oe c and oe d consist of five groups, oe c forms two blocks, and oe d is only one block.
For a given sequence -, a BSP schedule - oe is called semiactive if C (i;j) is constrained by either d (i;j) or
the start of the next job; no job can be scheduled later or rightshifted in a semiactive schedule - oe. We can
derive - oe from a sequence - if constraints (13) are equalities and P k is set to zero. The costs ZBSP (-oe) are
a lower bound for costs ZBSP (oe) of a BSP schedule oe because -
oe is the optimal schedule of the relaxed
BSP in which constraints (14) are omitted. However, in the semiactive schedule there may be idle time
and it may be beneficial to schedule some jobs earlier, i.e. to leftshift some jobs to save setups (which will
be our concern in the timetabling procedure in Section 5).
In both models we save setups by batching jobs. In the DLSP, a batch is a non-interrupted sequence of
periods where production takes place for the same item i 6= 0, i.e. Y . In the BSP, jobs
of one group which are consecutively scheduled without a setup are in the same batch. A batch must not
be preempted by idle time. In Figure 2, the group of family 3 forms two batches in schedule oe c whereas
this group is one batch in oe b .
We will call a sequence - (schedule oe) an EDDWF sequence (schedule) if jobs of one family are sequenced
(scheduled) in nondecreasing order of their deadlines (where EDDWF abbreviates earliest deadline
within families). Ordering the jobs in EDDWF is called ordered batch scheduling problem in Monma
and Potts [12]. By considering only EDDWF sequences, we reduce the search space for the branch & bound
algorithm described in Section 6.
We first consider BSPUT(DLSP) instances. The following theorem states, that for BSPUT(DLSP) we
can restrict ourselves to EDDWF sequences.
Theorem 1 Any BSPUT(DLSP) schedule oe can be converted into an EDDWF schedule ~ oe with the same
cost.
Proof: Recall that jobs of one family all have the same weights and processing times. In a schedule
oe, let A; B; C represent parts of oe (consisting of several jobs), and
(processing) times of the parts. Consider a schedule where jobs are not ordered in EDDWF,
i.e. . The schedule ~
has the same objective function value
because w (i;j 1 . The completion times of the parts A; B; C do not change because
Interchanging jobs can be repeated until oe is an EDDWF schedule, completing
the proof. 2
A DLSP schedule - and a BSPUT(DLSP) schedule oe are called corresponding solutions if they define the
same decision. A schedule schedule oe are corresponding solutions if for each
point in time the following holds: (i) - in oe the job being processed at t belongs to
family a and a setup is performed in oe, and (iii) - and the machine is idle in oe.
Figure
2 gives an example for corresponding solutions: - a corresponds to oe ut
a , and - b corresponds to oe ut
b .
We can always derive entries in - from oe, and completion times in oe can always be derived from - if oe
is an EDDWF schedule.
Theorem 2 A schedule oe is feasible for BSPUT(DLSP) if and only if the corresponding solution - is
feasible for DLSP, and - and oe have the same objective function value.
Proof: We first prove that the constraints of DLSP and BSPUT(DLSP) define the same solution space.
In the DLSP, constraints (2) and (8) stipulate that
. For each q i;t ? 0
(2) and (8) enforce a Y t. The sequence on the machine - the sequence dependent
setup times taken into account - is described by constraints (3) to (7). In the BSP this is achieved by
constraints (13). We schedule each job between time zero and its deadline. All jobs are processed on a
single machine, taking into account sequence dependent setup times.
Second, we prove that the objective functions (1) and (12) assign the same objective function value to
corresponding solutions - and oe: the cumulated inventory for an item i (over the planning horizon
equals the cumulated earliness of family i, and job weights equal the holding costs, i.e. h
for BSPUT(DLSP). Thus, the terms
I i;t and
are equal for corresponding
solutions - and oe.
Some more explanation is necessary to show that corresponding solutions - and oe have the same setup
costs. Consider a setup from family g to i (g; i 6= 0) without idle time in oe: we then have sc
st g;i g and we have st g;i consecutive "ones" in V g;i;t , which is enforced by (6). On the other
hand, in the case of inserted idle time we have a setup from the idle machine (enforced by the decision
variable P k of the BSP) and there are st 0;i consecutive "ones" in V 0;i;t . Thus, the terms
and
are equal for corresponding solutions - and oe. Therefore,
corresponding solutions - and oe incur the same holding and the same setup costs, which proves the
theorem. 2
As a consequence of Theorem 2, a schedule oe is optimal for BSPUT(DLSP) if and only if the corresponding
solution - is optimal for DLSP, which constitutes the equivalence between DLSP and BSP for BSP-
UT(DLSP) instances. We can thus solve DLSP by solving BSPUT(DLSP).
In general, however, the more attractive option will be to solve BSP(DLSP) because the number of jobs
is smaller.
Definition 3 In a schedule oe, let a production start of family i be the start time of the first job in a
batch. Let inventory for family i build between C (i;j) and d (i;j) . The schedule oe is called regenerative if
there is no production start for a family i as long as there is still inventory for family i.
The term "regenerative" stems from the regeneration property found by Wagner and Whitin [19] (for
similar ideas cf. e.g. Vickson et al. [18]). Each regenerative schedule is also an EDDWF schedule, but
the reverse is not true. If a schedule oe is regenerative, jobs (i; are in the same batch if
holds. Furthermore, in a regenerative BSPUT(DLSP) schedule oe, jobs from
consecutive "ones" in (q i;t ) are scheduled consecutively (recall for instance oe ut
b and oe b in Figure 2); hence a
regenerative BSPUT(DLSP) schedule represents a BSP(DLSP) schedule as well. In Figure 2, schedule oe d
is not regenerative: a batch for family is started at there is still inventory for
We first show that we do not lose feasibility when restricting ourselves to regenerative schedules only.
Theorem 3 If oe is a feasible BSPUT(DLSP) or BSP(DLSP) schedule then there is also a feasible regenerative
schedule ~ oe.
A
oe
~
oe
A
Figure
3: Regenerative Schedule
Proof: In a schedule oe, let i B (i A ) be the family to which the first (last) job in part B (A) belongs.
Consider a non-regenerative schedule oe, i.e. are not
in one batch though C
Consider schedule ~
where (i; are interchanged and (i; are in one batch. ~ oe is feasible because
~
leftshifting B we do not violate feasibility. Furthermore, due to the triangle inequality
we have st i A ;i B - st i A ;i st i;i B . Thus, B can be leftshifted by p (i;j) time units without affecting CA .
Interchanging jobs can be repeated until oe is regenerative which proves the theorem. 2
An illustration for the construction of regenerative schedules is depicted in Figure 3. Interchanging (i;
and B, we obtain from oe the regenerative schedule ~
oe.
Unit processing times are not needed for the proof of Theorem 3, so we have in fact two results: first, to
find a feasible schedule we may consider BSP(DLSP) instead of BSPUT(DLSP). Second, for BSP(DLSP)
we only need to search over regenerative schedules to find a feasible schedule. Theorem 3 is a stronger
result than the one found by Salomon et al. [14] and Unal and Kiran [17] who only state the first result.
Moreover, if holding costs are equal, the next theorem extends this result to optimal schedules.
Theorem 4 If oe is an optimal BSPUT(DLSP) or BSP(DLSP) schedule and h i is constant 8i, then there
is also an optimal regenerative schedule ~ oe.
Proof: Analogous to the proof of Theorem 3 we now must consider the change of the objective function
value if (i; are interchanged. Without loss of generality, let h
ZBSP (oe) (ZBSP (~oe)) denote the objective function value of oe (~oe).
For part B, which is leftshifted, we have wB - pB because processing time in part B is at most pB , but B
may contain setups as well. Interchanging B and (i; j), the objective changes as follows:
Due to the triangle inequality, setup costs and setup times in oe are not larger than in ~
oe, i.e. \Gammasc i A ;i \Gamma
explains (i). We leftshift B by p (i;j) and rightshift (i; j) by pB with wB - pB ,
which explains (ii). Thus ZBSP (~oe) - ZBSP (oe), which proves the theorem. 2
Considering regenerative schedules, we again achieve a considerable reduction of the search space. To
summarize we have so far obtained the following results:
1. DLSP and BSP are equivalent for BSPUT(DLSP).
2. Feasibility of BSP(DLSP) implies feasibility of DLSP.
3. For equal holding costs an optimal BSP(DLSP) schedule is optimal for DLSP.
When instances with unequal holding costs are solved, the theoretical difference between BSP(DLSP) and
DLSP in 3. has only a small effect: computational results in Section 7.3 will show that there is almost
always an optimal regenerative BSPUT(DLSP) schedule to be found by solving BSP(DLSP).
5 A Timetabling Procedure for a Given Sequence
For a given sequence - the following timetabling procedure decides how to partition - into blocks, or
equivalently, which consecutively sequenced jobs should be consecutively scheduled. In the BSP model
formulation of Table 6, we have the job at position k starts a new block, or P
blocked with the preceding job. By starting a new block at position k, we save earliness costs at the
expense of additional setup costs. In Figure 2, earliness costs of oe b are higher than for oe c but we save
one setup in oe b .
In the timetabling procedure, we start with the semiactive schedule and leftshift some of the jobs to find a
minimum cost schedule. Consider the example in Figure 2: for the sequence
(3; 3); (2; 2); (1; 2)) the semiactive schedule -
oe is given in Figure 4. We first consider two special cases.
If we omit constraints (14) of the BSP (so that each group is a batch and idle time may preempt the
batch) timetabling is trivial: the semiactive schedule is optimal for a given sequence because no job can
be rightshifted to decrease earliness costs (and because setup costs are determined by - and not by oe).
Timetabling is also trivial if earliness weights are zero (i.e. h
case, we can leftshift each job (without increasing earliness costs) until the resulting schedule is one block
(e.g. schedule oe d in Figure 2 is one block and no job can be leftshifted). We have sc g;i - sc i;0 +sc
setup costs are minimized if jobs are scheduled in a block, and there is an optimal schedule which consists
of one block.
In the general case, we need some definitions: block costs bc k1;k2 are the cost contribution of a block from
position k 1 to k 2 , i.e.
bc k1
The block size bs k is the number of jobs which are consecutively scheduled after job (i
included). For instance, in Figure 4 we have bs
Let denote f k (b) the costs of a schedule from position k to J if bs
k the costs of the
minimum cost schedule and bs
k the corresponding block size at position k. The recurrence equation for
determining f
k and bs
f
b=1;:::;bs
oe
Position k
Figure
4: Semiactive Schedule
Table
9: Computations of Equation (18) for the Example in Figure 4
f
In equation (18) we take the minimum cost for bs
is the maximum
block size at position k (and a new block starts at k+bs
1). For a given block size b, f k (b) is the sum
of block costs from position k to position k
to the next block and the minimum
cost f
k+b . Basically, equation (18) must be computed for every sequence. However, some simplifications
are possible:
if two jobs can be consecutively scheduled in the semiactive schedule, it is optimal to increment bs
bs
so that equation (18) needs not to be evaluated. Consequently, if the
semiactive schedule is one block, timetabling is again trivial: each group in - oe equals one batch, the whole
schedule forms a block, and -
If setups are sequence independent, a minimum cost schedule can be derived with less effort as follows: let
the group size gs k at position k denote the number of consecutively sequenced jobs at positions r ? k that
belong to the same family as job (i [k] ; j [k] ). Then, for sequence independent setups, equation (18) must
be evaluated only for gs k . The reasoning is as follows: jobs of different groups are leftshifted
to be blocked only if we can save setup cost. Then, for consecutive groups of families g and i we would
need sc which does not hold for sequence independent setups; therefore, we only
need to decide about the leftshift within a group.
An example of the computations of equation (18) is given in Table 9 for the semiactive schedule in Figure 4 (see the cost parameters in Table 7). The semiactive schedule contains idle time, and we determine f*_k and bs*_k for each position k, k ≤ J.
Consider jobs (2,2), (3,3) and (3,2) at positions 6, 5 and 4 in Figure 4. Up to position 4 the semiactive schedule is one block, and we simply increment bs*_k, which is denoted by the entries (-) in Table 9. After job (3,1), the semiactive schedule contains inserted idle time between positions 3 and 4, so different block sizes must be considered to find the minimum cost schedule. In Table 9, the minimum cost is attained by starting a new block after position 3, i.e. the group is split into two batches, as done for σ_c in Figure 2. For the objective function value, we add sc_{0,2} to f*_1 and obtain the cost Z_BSP(σ_c).
6 Sequencing Algorithm
In this section we present a branch & bound algorithm for solving the BSP to optimality, denoted SABSP. Jobs are sequenced backwards, i.e. at stage 1 a job is assigned to position J, at stage 2 to position J − 1, and in general at stage s to position J − s + 1. An s-partial sequence assigns s jobs to the last s positions of the sequence; in addition, an s-partial schedule σ_s also assigns completion times to each job of the partial sequence. A partial schedule ω_s is called a completion of σ_s if ω_s extends σ_s to a schedule σ which schedules all jobs, and we write σ = (ω_s, σ_s).
We only examine EDDWF sequences, as if there were precedence constraints between the jobs. The precedence graph for the example in Figure 2 is shown in Figure 5. Using the EDDWF ordering, we in fact decide at each stage s which family to schedule. A job is eligible at stage s if all its (precedence related) predecessors are scheduled. An s-partial schedule (corresponding to a node in the search tree) is extended by scheduling an eligible job at stage s + 1. We apply depth-first search in our enumeration and use the bounding, branching, and dominance rules described in Sections 6.1 and 6.2 to prune the search tree.

Figure 5: EDDWF Precedence Graph for Backward Sequencing

Table 10: Attributes of Partial Schedules
(i_s, j_s)    job under consideration at stage s
t(σ_s)        start time of the s-partial schedule σ_s
c(σ_s)        cost of σ_s without the setup for (i_s, j_s)
UB            upper bound, objective function value of the current best schedule
AS_s (US_s)   set of jobs already scheduled (unscheduled) in the s-partial schedule σ_s
UI_s          set of families to which jobs in US_s belong
G_1(σ_s)      set of jobs which form the first block of σ_s
w_1(σ_s)      sum of earliness weights of jobs in G_1(σ_s)
Each (s-partial) sequence uniquely defines a minimum cost (s-partial) schedule σ_s by the timetabling procedure. Enumeration is done over all sequences and stops after all sequences have been (implicitly) examined; the best solution found is optimal. The implementation of SABSP takes advantage of the fact that equation (18) need not be recalculated for every σ_s and that, in the case of backtracking, the computation of equation (18) has already been accomplished for the partial schedule to which we backtrack.
Table 10 lists the attributes of s-partial schedules. For each scheduling stage s we identify the job (i_s, j_s) under consideration, and the start time t(σ_s) and costs c(σ_s) of the s-partial schedule. The set of currently scheduled (unscheduled) jobs is denoted by AS_s (US_s). UI_s denotes the families to which the jobs in US_s belong; UB is the current upper bound.
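Purely as a structural illustration of this backward depth-first enumeration, the following Python sketch shows how the pieces fit together. Every argument is an assumed callback standing in for a component described in the text; this is not the paper's implementation.

```python
def sabsp_search(jobs, eligible, extend, bound_ok, dominated, evaluate):
    """Depth-first backward enumeration over EDDWF sequences (structural sketch).

    eligible(scheduled) -- yields jobs whose EDDWF predecessors are already scheduled
    extend(partial, j)  -- builds the (s+1)-partial schedule (timetabling included)
    bound_ok(p, ub)     -- applies the feasibility and cost bounds of Section 6.1
    dominated(p)        -- applies the dominance rules of Section 6.2
    evaluate(p)         -- objective value of a complete schedule
    """
    best = {"cost": float("inf"), "schedule": None}

    def branch(partial, scheduled):
        if len(scheduled) == len(jobs):          # all J positions filled
            cost = evaluate(partial)
            if cost < best["cost"]:
                best["cost"], best["schedule"] = cost, partial
            return
        for job in eligible(scheduled):          # stage s+1 fills the next position from the back
            child = extend(partial, job)
            if not bound_ok(child, best["cost"]):
                continue                          # pruned by the bounds T_s or C_s
            if dominated(child):
                continue                          # pruned by a stored partial schedule
            branch(child, scheduled | {job})

    branch(None, frozenset())
    return best["schedule"], best["cost"]
```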
6.1 Bounding and Branching Rules
The feasibility bound states that for a given σ_s, all currently unscheduled jobs in US_s must be scheduled between time zero and t(σ_s) and, furthermore, that we need a setup time for each family in UI_s. More formally, define

    T_s = Σ_{(i,j)∈US_s} p_{ij} + Σ_{i∈UI_s} min_g {st_{g,i}},

where p_{ij} denotes the processing time of job (i,j). Then, σ_s has no feasible completion ω_s if t(σ_s) < T_s.
The cost bound states that the costs c(σ_s) of an s-partial schedule are a lower bound for all extensions of σ_s, and that for any completion ω_s at least one setup for each family in UI_s must be performed. We define

    C_s = c(σ_s) + Σ_{i∈UI_s} min_g {sc_{g,i}}.

Then, σ_s cannot be extended to a schedule that improves UB if C_s ≥ UB.
Both bounds are checked for each s-partial schedule σ_s. Clearly, T_s and C_s can easily be updated during the search. We also tested a more sophisticated lower bound in which all unscheduled jobs were scheduled in EDD order without setups. In this way we were able to derive a lower bound on the earliness costs as well and to check feasibility more carefully, but computation times did not decrease.
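As an illustration, the two bounds could be computed as in the sketch below; the dictionary-based data layout and the symbol p_ij for processing times are assumptions, not the paper's implementation.

```python
def bounds(t_sigma, c_sigma, unscheduled, unscheduled_families,
           proc_time, setup_time, setup_cost, upper_bound):
    """Feasibility bound T_s and cost bound C_s of Section 6.1 (sketch only).

    unscheduled           -- jobs (i, j) in US_s
    unscheduled_families  -- families in UI_s
    proc_time[(i, j)]     -- processing time p_ij of job (i, j)
    setup_time[(g, i)]    -- st_{g,i};  setup_cost[(g, i)] -- sc_{g,i}
    """
    def min_st(i):
        return min(st for (g, f), st in setup_time.items() if f == i)

    def min_sc(i):
        return min(sc for (g, f), sc in setup_cost.items() if f == i)

    T_s = sum(proc_time[job] for job in unscheduled) + \
          sum(min_st(i) for i in unscheduled_families)
    C_s = c_sigma + sum(min_sc(i) for i in unscheduled_families)

    infeasible = t_sigma < T_s            # sigma_s has no feasible completion
    cannot_improve = C_s >= upper_bound   # no completion can beat the incumbent
    return infeasible, cannot_improve
```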
If only regenerative schedules need to be considered to find the optimal schedule (cf. Theorem 4), we employ a branching rule as follows: scheduling (i_s, j_s) makes the next job of family i_s eligible in the EDDWF precedence graph. If this job can be batched directly with (i_s, j_s) at time t(σ_s), we batch it and do not consider any other job as an extension of σ_s. We need not enumerate partial schedules where σ_s is extended by a job of a different family in this situation, because then the resulting schedule would be non-regenerative.
6.2 Dominance Rules
The most remarkable reduction of computation times comes as a result of the dominance rules. The dominance rules of SABSP compare two s-partial schedules σ_s and σ'_s which schedule the same set of jobs AS_s. In this notation, σ_s denotes the s-partial schedule currently under consideration, while σ'_s denotes a previously enumerated schedule which may dominate σ_s. A partial schedule σ'_s dominates σ_s if it is more efficient in terms of time and cost: σ'_s starts later to schedule the job-set, i.e. t(σ'_s) ≥ t(σ_s), and incurs less cost, i.e. c(σ'_s) ≤ c(σ_s). If the family i_s of the job scheduled at stage s differs in σ_s and σ'_s, we make both partial schedules "comparable" with a setup between the two families: we compare time and cost but subtract setup times and setup costs appropriately.
If a schedule σ_s is not dominated, we store for the job set AS_s and family i_s the pair t(σ_s) and c(σ_s) which is "most likely" to dominate other s-partial schedules. Note that the number of partial schedules is exponential in the number of items N, so that storage requirements for the dominance rules grow rapidly as N increases.
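The storage scheme can be pictured as a table keyed by the scheduled job set and the family at stage s. The sketch below illustrates only the simple case in which the families at stage s coincide and ignores the block-cost corrections of Theorems 5 and 6; the class and its interface are illustrative assumptions.

```python
class DominanceTable:
    """Stores, for each pair (scheduled job set, family i_s), the (t, c) pair most
    likely to dominate later s-partial schedules (simplified sketch)."""

    def __init__(self):
        self.table = {}

    def is_dominated(self, job_set, family, t, c):
        key = (frozenset(job_set), family)
        if key not in self.table:
            return False
        t_stored, c_stored = self.table[key]
        # a stored schedule dominates if it starts later and costs no more
        return t_stored >= t and c_stored <= c

    def store(self, job_set, family, t, c):
        key = (frozenset(job_set), family)
        stored = self.table.get(key)
        if stored is None or (t >= stored[0] and c <= stored[1]):
            self.table[key] = (t, c)
```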
For a formal description of the dominance rules we need several definitions (cf. Table 10): all jobs which form a block with (i_s, j_s) belong to the set G_1(σ_s), and the sum of the earliness weights in G_1(σ_s) is denoted as w_1(σ_s). The dominance rules take into account the block costs for all extensions of σ_s and σ'_s: we consider for σ'_s the maximum and for σ_s the minimum costs incurred by blocking; σ'_s then dominates σ_s if c(σ'_s) plus an upper bound on its block costs is less than or equal to c(σ_s) plus a lower bound on its block costs.
An upper bound on the block costs for σ'_s is given by sc_{0,i'_s} (recall that sc_{0,i} ≥ sc_{g,i}); in this case σ'_s starts a new block. But a tighter upper bound can be found for start times close to t(σ'_s): in order to save costs we can leftshift all the jobs in G_1(σ'_s) (but only these), because after G_1(σ'_s) we perform a new setup from the idle machine. G_1(σ'_s) is the largest block which may be leftshifted. Let pbt(σ'_s) denote the time where the cost increase due to a leftshift of G_1(σ'_s) exceeds sc_{0,i'_s}. We then have w_1(σ'_s)(t(σ'_s) − pbt(σ'_s)) = sc_{0,i'_s} and define the pull-back-time pbt(σ'_s) of an s-partial schedule σ'_s as follows:

    pbt(σ'_s) = t(σ'_s) − sc_{0,i'_s} / w_1(σ'_s).

Consequently, for times t ≥ pbt(σ'_s) an upper bound on the block costs is given by the cost of leftshifting G_1(σ'_s); for t < pbt(σ'_s) the block costs are bounded by sc_{0,i'_s}.
A lower bound on the block costs for σ_s is given in the same way as for σ'_s, but now we consider the smallest block that can be leftshifted, which is simply the job (i_s, j_s) itself.
We can now state the dominance rules: we differentiate between the cases i_s = i'_s (Theorem 5) and i_s ≠ i'_s (Theorem 6).
Theorem 5 Consider two s-partial schedules σ_s and σ'_s which schedule the same job set and satisfy i_s = i'_s. If (i) t(σ'_s) ≥ t(σ_s), (ii) at time t(σ_s) the costs of σ'_s plus the upper bound on its block costs do not exceed the costs of σ_s plus the lower bound on its block costs, and (iii) the same comparison holds at time pbt(σ'_s), then σ'_s dominates σ_s.
Proof: Any completion ω_s of σ_s is also a feasible completion of σ'_s because of (i); if (ω_s, σ_s) is feasible, (ω_s, σ'_s) is feasible, too. Due to (ii), for any ω_s, the schedule (ω_s, σ'_s) has lower costs than (ω_s, σ_s).
In the following we consider the cost contributions of σ_s and σ'_s due to leftshifting when we extend σ_s and σ'_s. Consider Figure 6 for an illustration of the situation in a time-cost diagram. We have t(σ'_s) ≥ t(σ_s) and, due to EDDWF, also (i_s, j_s) = (i'_s, j'_s). In the diagram, the solid line represents the upper bound on the block costs for σ'_s: for t ≥ pbt(σ'_s) it is less expensive to leftshift G_1(σ'_s), while for t < pbt(σ'_s) a setup from the idle machine to i'_s is performed. The broken line represents the lower bound on the block costs for σ_s; the smallest block that can be leftshifted is the job (i_s, j_s).
In order to prove that σ'_s will never have higher costs than σ_s due to blocking, we check the costs at the points (ii) and (iii): at (ii) we compare the costs at t(σ_s), while at (iii) we compare them at pbt(σ'_s). Between (ii) and (iii) the costs increase linearly, and for t < pbt(σ'_s) we know that there is a monotonous cost increase for σ_s, while the costs of σ'_s no longer increase. Thus, if (ii) and (iii) are fulfilled, the cost contributions of σ'_s are less than those of σ_s, i.e. there is no completion ω_s such that Z_BSP(ω_s, σ_s) < Z_BSP(ω_s, σ'_s), completing the proof. □
For the example in Figure 2, Figures 7 and 8 illustrate Theorem 5 with 3-partial schedules σ_3 and σ'_3.

Figure 6: Illustration of Theorem 5 (time-cost diagram for σ_s and σ'_s)
Figure 7: Theorem 5: σ'_3 dominates σ_3

In Figure 7, checking conditions (ii) and (iii) shows that both are fulfilled, so that σ_3 is dominated. Figure 8 illustrates the effect of the block costs with modified cost data: here condition (ii) of Theorem 5 is not fulfilled. Thus, σ'_3 does not dominate σ_3, even though c(σ'_3) ≤ c(σ_3), because the cost relation can change once the first block is leftshifted in an extension.
In the second dominance rule, for the case i_s ≠ i'_s, we must consider sc_{0,i} instead of sc_{g,i} to take block costs into account.
Theorem 6 Given two s-partial schedules σ_s and σ'_s which schedule the same job set and have i_s ≠ i'_s, σ'_s dominates σ_s if the analogous conditions (i) and (ii) hold, with the comparison of times and costs corrected by the setup time and setup cost between i_s and i'_s, and with sc_{0,i'_s} taken as the upper bound on the block costs.
Proof: Let g denote the family of the (last) job in a completion ω_s of σ_s. Then st_{g,i'_s} ≤ st_{g,i_s} + st_{i_s,i'_s}, and analogously for the setup costs, due to the triangle inequality. Thus any completion ω_s of σ_s is also a feasible completion of σ'_s because of (i); if (ω_s, σ_s) is feasible, (ω_s, σ'_s) is feasible, too. Due to (ii), for any ω_s, the schedule (ω_s, σ'_s) has lower costs than (ω_s, σ_s).
Figure 8: Theorem 5: σ'_3 does not dominate σ_3
The difference is that now block costs are also taken into account in (ii): when leftshifting G_1(σ'_s) in an extension of σ'_s, we have c(σ'_s) + sc_{0,i'_s} as an upper bound for the cost contribution. A trivial lower bound for the cost contribution of σ_s is c(σ_s). Thus σ'_s dominates σ_s, as any ω_s completes σ'_s at lower costs, completing the proof. □
Finally, an alternative way to solve the BSP is a dynamic programming approach. We define the job-sets
as states and apply the dominance rules in the same way. An implementation of this approach was less
efficient and is described in Jordan [8].
7 Comparison with Procedures to solve Variants of the DLSP
From the analysis in Section 4 we know that we address the same planning problem in BSP and DLSP,
and that we find corresponding solutions. Consequently, in this section we compare the performance of
algorithms solving the BSP with procedures for solving variants of the DLSP. The comparison is made
on the DLSP instances used to test the DLSP procedures; we take the instances provided by the cited
authors and solve them as BSP(DLSP) or BSPUT(DLSP) instances (cf. Figure 1). An exception is made
for reference [7] where we use randomly generated instances.
The different DLSP variants are summarized in Table 11. The first column gives the reference and the second the DLSP variant. The fourth column names the proposed algorithm, while the third column shows whether computational results for the proposed algorithm are reported for equal or unequal holding costs. Depending on the holding costs, the different DLSP variants are solved as BSP(DLSP) or BSPUT(DLSP) instances. With the exception of reference [15], the DLSP procedures are tested with equal holding costs, so that regenerative schedules are optimal in [4] and [7].
7.1 Sequence Independent Setup Times and Setup Costs (SISTSC)
In Cattrysse et al. [4], a mathematical programming based procedure to solve SISTSC is proposed.
Cattrysse et al. [4] refer to their procedure as dual ascent and column generation procedure (DACGP).
The DLSP is first formulated as a set partitioning problem (SPP) where the columns represent the
production schedule for one item i; the costs of each column can be calculated separately because setups
are sequence independent. DACGP then computes a lower bound for the SPP by column generation, new
Table 11: Solving Different DLSP Variants as a BSP

Author                 Variant   Holding Costs     Algorithm   Instances   Properties of Schedules
Cattrysse et al. [4]   SISTSC    equal             DACGP       -           regenerative
Fleischmann [7]        SDSC      equal             TSPOROPT    -           regenerative
Salomon et al. [15]    SDSTSC    zero or unequal   TSPTWA      -           one block (for zero holding costs)
columns can be generated solving a single item subproblem by a (polynomial) DP recursion. In DACGP
a feasible schedule, i.e. an upper bound, may be found in the column generation step, or is calculated
by an enumerative algorithm with the columns generated so far. If in neither case a feasible schedule is
found, an attempt is made with a simplex based procedure.
The (heuristic) DACGP generates an upper and a lower bound, whereas SABSP solves BSP(DLSP) to optimality. DACGP is coded in FORTRAN, SABSP in C. DACGP was run on an IBM PS/2 Model 80 PC (80386 processor) with an 80387 mathematical coprocessor; we ran SABSP on the same machine to make computation times comparable.
Computational results for the DACGP are reported only for identical holding costs for all items. Consequently, we solve the DLSP as BSP(DLSP) and only need to consider regenerative schedules, cf. Theorem 4. Furthermore, the timetabling procedure requires fewer computations in equation (18) because setups are sequence independent.
The DLSP instances with nonzero setup times are provided by the authors of [4]. They generated instances for several item-period combinations (N, T) with up to T = 60 periods. We refer only to the instances with T = 60; smaller instances are solved much faster by SABSP than by DACGP.
The DLSP instances have setup times st_{g,i} of either 0, 1 or 2 periods. The average setup time per item (over all instances) is approximately 0.5, making setup times not very significant. For each item-period combination, instances with different (approximate) capacity utilizations ρ were generated: low (L) capacitated (ρ < 0.55), medium (M) (0.55 ≤ ρ < 0.75) and high (H) capacitated instances (ρ ≥ 0.75). The approximate capacity utilization is defined as ρ = 1/T times the total processing time of all jobs. Several instances were generated for each combination.
In Table 12, we use #J to denote the average number of jobs in BSP(DLSP) for the instance size (N, T) of the DLSP. For DACGP we use Δ_avg to denote the average gap (in percent) between upper and lower bound. #inf is the number of instances found infeasible by the different procedures, and R_avg denotes the average time (in seconds) needed for the instances in each class. For DACGP, all values in Table 12 are taken from [4].
Table 12: Comparison of DLSP and BSP Algorithms for SISTSC (Δ_avg, #inf and R_avg per instance class for DACGP and SABSP; 386 PC with coprocessor)
In the comparison between DACGP and SABSP, the B&B algorithm solves the problems with a small number of items much faster; the number of sequences to examine is relatively small. For the larger instances, the computation times of SABSP are of the same order of magnitude as for DACGP. In (6,60,M) the simplex based procedure in DACGP finds a feasible integer solution for one of the 10 instances claimed infeasible by DACGP. Thus, in (6,60,M), 9 instances remain unsolved by DACGP, whereas SABSP finds only 7 infeasible instances. DACGP also fails to find existing feasible schedules for (N,T,ρ) = (2,60,H) and (4,60,M). Recall that SABSP takes advantage of a small solution space, keeping the enumeration tree small and thus detecting infeasibility or a feasible schedule quite quickly. DACGP tries to improve the lower and upper bound, which is difficult without an initial feasible schedule. Therefore, the heuristic solution procedure DACGP may fail to detect feasible schedules if the solution space is small.
For the same problem size (N; T ) in DLSP, the number of jobs J in BSP(DLSP) may be very different.
Therefore, solution times differ considerably for SABSP. Table 13 presents the frequency distribution of
solution times. In every problem class the majority of instances is solved in less than the average time for
DACGP.
7.2 Sequence Dependent Setup Costs (SDSC)
An algorithm for solving SDSC is proposed by Fleischmann [7]. Fleischmann transforms the DLSP into
a traveling salesman problem with time windows (TSPTW), where a tour corresponds to a production
schedule in SDSC. Fleischmann calculates a lower bound by lagrangean relaxation; the condition that
each node is to be visited exactly once, is relaxed. An upper bound is calculated by a heuristic, that first
constructs a tour for the TSPTW and then tries to improve the schedule using an Or-opt operation. In
Or-opt, pieces of the initial tour are exchanged to obtain an improved schedule. Or-opt is repeated until
no more improvements are found. We refer to Fleischmann's algorithm as TSPOROPT. TSPOROPT was coded in Fortran; the experiments were performed on a 486DX2/66 PC with the original code provided by Fleischmann.

Table 13: Frequency Distribution of Solution Times of SABSP (number of instances solved faster than given time thresholds; 386 PC with coprocessor)
Fleischmann divides the time axis into micro and macro periods. Holding costs arise only between macro periods, and demand occurs only at the end of macro periods. Thus a direct comparison of TSPOROPT and SABSP using Fleischmann's instances is not viable; instead, we use randomly generated BSP instances which are then transformed into DLSP instances. We generated instances with N = 5 items and low (L) (ρ ≈ 0.75) or high (H) (ρ ≈ 0.97) capacity utilization. Note that for zero setup times, ρ does not depend on the schedule; the feasibility problem is then polynomially solvable. In the BSP instances, jobs have processing times drawn from the interval [1, 4]; the resulting DLSP planning horizons differ for high (H) and low (L) capacitated instances. Holding costs are identical, and we solve BSP(DLSP). From [7] we select the two setup cost matrices S4 and S6 which satisfy the triangle inequality: in S4 costs equal 100 for g < i and 500 for g > i. For S6 we have only two kinds of setups: items {1, 2, 3} and {4, 5} form two setup-groups, with minor setup costs of 100 within the setup-groups and major setup costs of 500 from one setup-group to the other.
In Table 14 the results are aggregated over the instances in each class. We use Δ_avg to denote the average gap between lower and upper bound in % for TSPOROPT, and R_avg (~R_avg) to denote the average time for TSPOROPT (SABSP) in seconds. We denote by ΔZ_best the average deviation in % of the objective function value of the heuristic TSPOROPT from the optimal one found by SABSP. Table 14 shows that Δ_avg can be quite large for TSPOROPT. Solution times of SABSP are short for high capacitated instances and long for low capacitated ones. For S4, TSPOROPT generates a very good lower bound; here the deviation ΔZ_best from the optimal objective is due to the poor heuristic upper bound. On the other hand, for S6 both the lower and the upper bound are not very close to the optimum. It is well to note that SABSP does not solve large instances of SDSC with 8 or 10 items, whereas Fleischmann reports computational experience for instances of this size as well. The feasibility bound is much weaker for zero setup times, or, equivalently, the solution space is much larger, making SABSP less effective. For
the instances in Table 14, however, SABSP yields a better performance.

Table 14: Comparison of DLSP and BSP Algorithms for SDSC (per setup cost matrix: Δ_avg and ΔZ_best for TSPOROPT, and the average times R_avg and ~R_avg)
7.3 Sequence Dependent Setup Times and Setup Costs (SDSTSC)
In Salomon et al. [15], Fleischmann's transformation of the DLSP into a TSP with time windows (TSPTW) is extended for nonzero setup times in order to solve SDSTSC. Nodes in the TSP network represent positive demands, and all nodes must be visited within a certain time window. The transformed DLSP is solved by a dynamic programming approach designed for TSPTW problems (cf. Dumas et al. [5]); we refer to the procedure in [15] as TSPTWA. Paths in the TSP network correspond to partial schedules. Similar to the dominance rule for SABSP, in TSPTWA paths may dominate other paths via a cost dominance, or they may be eliminated because they cannot be extended, which corresponds to the feasibility bound. TSPTWA is coded in C and run on an HP9000/730 workstation (76 MIPS, 22 MFLOPS). SABSP runs on a 486DX2/66 PC.
In order to test TSPTWA, Salomon et al. [15] use randomly generated instances in which, similar to [4], setup times st_{g,i} ∈ {0, 1, 2}. Unfortunately, the setup times do not satisfy the triangle inequality. A "triangularization" (e.g. with the Floyd/Warshall algorithm) often results in setup times equal to zero. So we adjusted the setup times "upwards" (which is possible in this case because st_{g,i} ∈ {0, 1, 2}) and, as a result, setup times are rarely zero. We added 4 (8) units to the planning horizon, depending on the instance class, in order to obtain the same (medium) capacity utilization as in [15]. In this way, the instances are supposed to have the same degree of difficulty for TSPTWA and SABSP: the smaller solution space due to correcting st_{g,i} upwards is compensated by a longer planning horizon.
In [15] instances are generated for several item-period combinations; we take the (largest) instances with T = 60. The instances have a medium (M) capacity utilization 0.5 ≤ ρ ≤ 0.75 because setup times are nonzero. For each (N, T) combination, instances with and without holding costs are generated. Holding costs differ among the items. Consequently, we solve BSPUT(DLSP) if holding costs are nonzero and BSP(DLSP) otherwise; furthermore, we need not apply the timetabling procedure in the latter case because the optimal schedule is one block. In Table 15, #F (#~F) denotes the number of problems solved by TSPTWA (SABSP) within a time limit of 1200 sec (1200 sec) and a memory limit of 20 MB (10 MB). #J denotes the average number of jobs for the BSP. ~R_avg denotes the average time SABSP requires to solve the instances; the value in parentheses refers to the run considering only regenerative schedules. The average time is calculated over all instances which are solved within the time limit; ~R_avg is put in brackets if not all instances are solved. The last column shows the results if we consider only regenerative schedules during the enumeration for unequal holding costs: it provides the maximal deviation in % from the optimal schedule (which may be non-regenerative).

Table 15: Comparison of DLSP and BSP Algorithms for SDSTSC (#F, #~F, #J, ~R_avg and the maximal deviation for the regenerative-only runs)
Table 15 demonstrates that SABSP succeeds in solving some of the problems which remained unsolved by TSPTWA. Solution times of SABSP are relatively short compared with TSPTWA for instances with up to 5 items. Solution times increase for the larger instances, which can only be solved if the number of jobs is relatively small. Instances become difficult for nonzero holding costs, especially for unequal holding costs. If we only enumerate over regenerative schedules, solution times for SABSP decrease. Moreover, only one instance is then not solved to optimality. Thus, even for unequal holding costs, optimal schedules are regenerative in most cases. Furthermore, for the largest item-period combinations more instances would have been solved within the time limit of 1200 sec if only regenerative schedules had been considered.
8 Summary and Conclusions
In this paper, we examined both the discrete lotsizing and scheduling problem (DLSP) and the batch
sequencing problem (BSP).
We presented model formulations for the DLSP and for the BSP. In the DLSP, decisions regarding what
is to be done are made in each individual period, while in the BSP, we decide how to schedule jobs. The
DLSP can be solved as a BSP if the DLSP instances are transformed. For each schedule of one model there
is a corresponding solution for the other model. We proved the equivalence of both models, meaning that
for an optimal schedule of the BSP the corresponding solution of the DLSP is also an optimal schedule,
and vice versa.
In order to solve the BSP effectively, we tried to restrict the search to only a subset of all possible schedules.
We found out that jobs of one family can be preordered according to their deadlines. Furthermore, for
equal holding costs, it is optimal to start production for a family only if there is no inventory of this
family.
When solving the BSP to optimality with a branch & bound algorithm, we face the difficulty that already the feasibility problem is hard: we must maintain feasibility and minimize costs at the same time. Compared with other scheduling models, the objective function of the BSP is rather intricate, so a tight lower bound could not be developed. We therefore used dominance rules to prune the search tree. Again, the intricate objective function complicates the dominance rules and forces us to distinguish different cases.
In order to evaluate our approach, we tested it against (specialized) procedures for solving variants of the
DLSP. Despite the fact that we have no effective lower bound, our approach proved to be more efficient if
(i) the number of items is small, and (ii) instances are hard to solve, i.e. capacity utilization is high and
setup times are significant. It is then "more appropriate" to schedule jobs than to decide what to do in
each individual period.
In the DLSP, the time horizon is divided into small periods and all parameters are based on the period
length. In the BSP, all parameters can also be real numbers: setup times, in particular, are not restricted
to being multiples of a period length. The different models also result in different problem sizes for DLSP
and BSP: the problem size for DLSP is essentially the number of items N and periods T while the problem
size for the BSP depends on the number of families and jobs.
We conjecture that our approach is advantageous for instances with few items and a small solution space (i.e. long setup times and high capacity utilization), where the job sequence is the main characteristic of a solution. In such cases we managed to solve instances with up to 10 (5) families and the corresponding numbers of jobs on a PC. DLSP solution procedures are thought to be better suited for lower capacitated instances with many items, setup times that are not very significant, and parameters which differ among the periods. It is then appropriate to decide anew for each individual period.
In the future we will extend the BSP to multilevel structures and multiple machines.
Acknowledgments
We are indebted to Dirk Cattrysse and Marc Salomon, who made available their instances, and to
Bernhard Fleischmann who made available his code. Furthermore, we would like to thank three anonymous
referees for their valuable comments on earlier versions of this paper.
--R
Single facility multi-class job scheduling
Complexity of task sequencing with deadlines
"Some extensions of the discrete lotsizing and scheduling problem"
A dual ascent and column generation heuristic for the discrete lotsizing and scheduling problem with setup-times
Technical Note: An optimal algorithm for the traveling salesman problem with time windows.
The discrete lot-sizing and scheduling problem
The discrete lot-sizing and scheduling problem with sequence-dependent setup-costs
Batching and Scheduling - Models and Methods for Several Problem Classes
Computers and intractability - a guide to the theory of NP-completeness
Minimizing flow time on a single machine with job classes and setup times.
On the complexity of scheduling with batch setup-times
Integrating scheduling with batching and lot-sizing: a review of algorithms and complexity
Some extensions of the discrete lotsizing and scheduling problem.
Discrete lotsizing and scheduling with sequence dependent setup times and costs.
Batching in single operation manufacturing systems.
Batch sequencing.
Batching and sequencing of components at a single facility.
Dynamic version of the economic lot size model.
Scheduling groups of jobs on a single machine.
--TR
--CTR
C. K. Y. Lin , C. L. Wong , Y. C. Yeung, Heuristic Approaches for a Scheduling Problem in the Plastic Molding Department of an Audio Company, Journal of Heuristics, v.8 n.5, p.515-540, September 2002
Satyaki Ghosh Dastidar , Rakesh Nagi, Scheduling injection molding operations with multiple resource constraints and sequence dependent setup times and costs, Computers and Operations Research, v.32 n.11, p.2987-3005, November 2005 | batch sequencing;Sequence-Dependent Setup Times and Setup Costs;Bounding/Dominance Rule;Discrete Lotsizing and Scheduling;Branch-and-Bound Algorithm |
293922 | Synthesis of Novel Views from a Single Face Image. | Images formed by a human face change with viewpoint. A new technique is described for synthesizing images of faces from new viewpoints, when only a single 2D image is available. A novel 2D image of a face can be computed without explicitly computing the 3D structure of the head. The technique draws on a single generic 3D model of a human head and on prior knowledge of faces based on example images of other faces seen in different poses. The example images are used to learn a pose-invariant shape and texture description of a new face. The 3D model is used to solve the correspondence problem between images showing faces in different poses. The proposed method is interesting for view independent face recognition tasks as well as for image synthesis problems in areas like teleconferencing and virtualized reality. | Introduction
Given only a driver's license photograph of a
person's face, can one infer how the face might
look like from a different viewpoint? The three-dimensional
structure of an object determines how
the image of the object changes with a change in
viewpoint. With viewpoint changes, some previously
visible regions of the object become oc-
cluded, while other previously invisible regions become
visible. Additionally, the arrangement or
configuration of object regions that are visible in
both views may change. Accordingly, to synthesize
a novel view of an object, two problems must
be addressed and resolved. First, the visible regions
that the new view shares with the previous
view must be redrawn at their new positions. Sec-
ond, regions not previously visible from the view
of the example image must be generated or syn-
thesized. It is obvious that this latter problem is
unsolvable without prior assumptions. For human
which share a common structure, such prior
knowledge can be obtained through extensive experience
with other faces.
The most direct and general solution for the synthesis
of novel views of a face from a single example
image is the recovery of the three-dimensional structure of the face. This three-dimensional model can be rotated artificially and would give the correct image for all points visible in the example
image (i.e. the one from which the model was
obtained). However, without additional assump-
tions, the minimal number of images necessary to
reconstruct a face using localized features is three
(Huang and Lee, 1989), and even the assumption
that a face is bilaterally symmetric reduces this
number only to two (Rothwell et al., 1993; Vetter
and Poggio, 1994). While shape from shading
algorithms have been applied in previous work
to recover the surface structure of a face (Horn,
1987), the inhomogeneous reflectance properties
of faces make surface integration over the whole
face imprecise and questionable. Additionally, the
fact that the face regions visible from a single image
are insufficient to obtain the three-dimensional
structure makes clear, that the task of synthesizing
new views to a given single image of a face,
cannot be solved without prior assumptions about
the structure and appearance of faces in general.
Models that have been proposed previously to
generalize faces from images can be subdivided
into two groups: those drawing on the three-dimensional
head structure and those considering
only view- or image-dependent face models.
In general, the knowledge about faces, which has
been incorporated into flexible three-dimensional
head models, consists of hand-constructed representations
of the physical properties of the muscles
and the skin of a face (Terzopoulos and Waters,
1993; Thalmann and Thalmann, 1995). To adjust
such a model to a particular face, two or more
images were used (Akimoto et al., 1993; Aizawa
et al., 1989). For present purposes, it is difficult
to assess the usefulness of this approach, since generalization
performance to new views from a single
image only has never been reported.
In recent years, two-dimensional image-based face
models have been applied for the synthesis of rigid
and nonrigid face transitions (Craw and Cameron,
1991; Poggio and Brunelli, 1992; Beymer et al.,
1993; Cootes et al., 1995). These models exploit
prior knowledge from example images of
prototypical faces and work by building flexible
image-based representations (active shape models)
of known objects by a linear combination of labeled
examples. These representations are applied
for the task of image search and recognition
(Cootes et al., 1995) or synthesis (Craw and
Cameron, 1991). The underlying coding of an
image of a new object or face is based on linear
combinations of the two-dimensional shape of examples
of prototypical images. A similar method
has been used to synthesize new images of a face
with a different expression or a changed viewpoint
(Beymer et al., 1993) making use of only a single
given image. The power of this technique is
that it uses an automated labeling algorithm that
computes the correspondence between every pixel
in the two images, rather than for only a hand-selected
subset of feature points. The same technique
has been applied recently to the problem of
face recognition across viewpoint change with the
aim of generating additional new views given an
example face image (Beymer and Poggio, 1995).
In spite of the power of this technique, its most
serious limitation is its reliance on the solution of
the correspondence problem across view changes.
Over large changes in viewpoint, this is still highly
problematic due to the frequency with which occlusions
and occluding contours occur. To overcome
this difficulty in the present work, we draw
on the concept of linear object classes, which we
have introduced recently in the context of object
representations (Vetter and Poggio, 1996). The
application of the linear object class approach to
this problem mediates the requirement of image
correspondence across large view changes for success
in novel view synthesis.
Figure 1 (mapping process): Two examples of face images (top row) mapped onto a reference face (center) using pixelwise correspondence established through an optical flow algorithm (lower row). This separates the 2D-shape information captured in the correspondence field from the texture information captured in the texture mapped onto the reference face (lower row).
Overview of the Approach
In the present paper, the linear object class approach
is improved and combined with a single
three-dimensional model of a human head for generating
new views of a face. By using these techniques
in tandem, the limitations inherent in each
approach (used alone) can be overcome. Specif-
ically, the present technique is based on the linear
object class method described in (Vetter and
Poggio, 1996), but is more powerful because the
addition of the 3D model allows a much better
utilization of the example images. The 3D-model
also allows the transfer of features particular to an
individual face from the given example view into
new synthetic views. This latter point is an important
addition to the linear class approach, because
it now allows for individual identifying features
like moles and blemishes that are present in "non-
standard" locations on a given individual face, to
be transferred onto synthesized novel views of the
face. This is true even when these blemishes, etc.,
are unrepresented in the "general experience" that
the linear class model has acquired from example
faces. On the other hand, the primary limitation
of a single 3D head model is the well-known difficulty
of representing the variability of head shapes
in general, a problem that the linear class model,
with its exemplar-based knowledge of faces will allow
us to solve.
Another way of looking at the combination of
these approaches returns us to the two-fold problem
we described at the beginning of this paper.
The synthesis of novel views from a single exemplar
image requires the ability to redraw the regions
shared by the two views, and also the ability
to generate the regions of the novel face that
are invisible in the exemplar view. The 3D head
model allows us to solve the former, and linear
object class approach the allows us to solve the
latter.
Linear Object Classes
A linear object class is defined as a 3D object class
for which the 3D shape can be represented as a linear
combination of a sufficiently small number of
prototypical objects. Objects that meet this criterion
have the following important property. New
orthographic views according to uniform affine 3D
transformation can be generated for any object
of the class. Specifically, rigid transformations in
3D, can be generated exactly if the corresponding
transformed views are known for the set of proto-
types. Thus, if the training set consists of frontal
and rotated views of a set of prototype faces, any
rotated view of a new face can be generated from a
single frontal view - provided that the linear class
assumption holds.
The key to this approach is a representation of
an object or face view in terms of a shape vector
and a texture vector (see also (Cootes et al.,
1995; Jones and Poggio, 1995; Beymer and Pog-
gio, 1995)). The separation of 2D-shape and texture
information in images of human faces requires
correspondence to be established for all feature
points. At its extreme, correspondence must be
established for every pixel, between the given face
image and a reference image. As noted previ-
ously, while this is an extremely difficult problem
when large view changes are involved, the linear
object class assumption requires correspondence
only within a given viewpoint - specifically, the
correspondence between a single view of an individual
face and a single reference face imaged
from the same view. Separately for each orien-
tation, all example face images have to be set in
correspondence to the reference face in the same
pose, correspondence between different poses is
not needed. This can be done off-line manually
(Craw and Cameron, 1991; Cootes et al., 1995)
or automatically (Beymer et al., 1993; Jones and
Poggio, 1995; Beymer and Poggio, 1995; Vetter
and Poggio, 1996). Once the correspondence problem
within views is solved, the resultant data can
be separated into a shape and texture vector. The
shape vector codes the 2D-shape of a face image
as deformation or correspondence field to a reference
face (Beymer et al., 1993; Jones and Poggio,
1995; Beymer and Poggio, 1995; Vetter and Pog-
gio, 1996), which later also serves as the origin of
a linear vector space. Likewise the texture of the
exemplar face is coded in a vector of image intensities
being mapped onto corresponding positions
in the reference face image (see also figure 1 lower
row).
The Three-dimensional Head Model
The linear class approach works well for features
shared by all faces (e.g. eyebrows, nose, mouth
or the ears). But, it has limited representational
possibilities for features particular to a individual
face (e.g. a mole on the cheek). For this reason, a
single 3D model of a human head is added to the
linear class approach. Face textures mapped onto
the 3D model can be transformed into any image
showing the model in a new pose. The final "ro-
tated" version of a given face image (i.e. including
moles, etc.) can be generated by applying to this
new image of the 3D model the shape transformation
given through the linear object class ap-
proach. This is described in more detail shortly.
The paper is organized as follows. First, the algorithm
for generating new images of a face from
a single example image is described. The technical
details of the implementation used to realize
the algorithm on grey level images of human
faces are described in the Appendix. Under Results
a comparison of different implementations
of the generalization algorithm are shown. Two
variations of the combined approach are compared
with a method based purely on the linear object
class as described previously (Vetter and Poggio,
1996). First, the linear class approach is applied
to the parts of a face separately. The individual
parts in the two reference face images were separated
using the 3D-model. Second, the 3D-model
was used additionally to establish pixelwise correspondence
between the two reference faces images
in the two different orientations. This correspondence
field allows texture mapping across the view
point change. Finally, the main features and possible
future extensions of the technique are discussed
Approach and Algorithm
In this section an algorithm is developed that allows
for the synthesis of novel views of a face from
from a single example view of the face. For brevity,
in the present paper we describe the application of
the algorithm to the synthesis of a "frontal" view
(i.e., defined in this paper as the novel view) from
an example "rotated" view (i.e., defined in this
paper as the view 24 ffi from frontal). It should be
noted, however, that the algorithm is not at all
restricted to a particular orientation of faces.
The algorithm can be subdivided into three
parts (for an overview see figure 3).
• First, the texture and shape information in an image of a face are separated.
• Second, two separate modules, one for texture and one for shape, compute the texture and shape representations of a given "rotated" view of a face (in terms of the appropriate view of the reference face). These modules are then used to compute the shape and texture estimates for the new "frontal" view of that face.
• Finally, the new texture and shape for a "frontal" view are combined and warped to the "frontal" image of the face.
Separation of texture and shape in images of faces:
The central part of the approach is a representation
of face images that consists of a separate texture
vector and 2D-shape vector, each one with
components referring to the same feature points
- in this case pixels. Assuming pixelwise correspondence
to a reference face in the same pose,
a given example image can be represented as fol-
lows: its 2D-shape will be coded as the deformation
field of n selected feature points - in the limit
of each pixel - to the reference image. So the
shape of a face image is represented by a vector S = (dx_1, dy_1, …, dx_n, dy_n)^T, that is, by the x- and y-displacement of each feature with respect to the corresponding feature in the reference
face. The texture is coded as a difference map
between the image intensities of the exemplar face
and its corresponding intensities in the reference
face. Thus, the mapping is defined by the correspondence
field. Such a normalized texture can be written as a vector T = (i_1, …, i_n)^T that contains the image intensity differences i_k of the n pixels of the image. All images of the training
set are mapped onto the reference face of the corresponding
orientation. This is done separately
for each rotated orientation. For real images of faces the pixelwise correspondences necessary for these mappings were computed automatically using a gradient-based optical flow technique which was
already used successfully previously on face images
(Beymer et al., 1993; Vetter and Poggio, 1996).
The technical details for this technique can be
found in appendix B.
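For concreteness, the decomposition into a shape and a texture vector can be sketched as below. This is a minimal illustration only, not the optical flow computation of appendix B; it assumes the correspondence field is already given as per-pixel displacements from the reference to the example image.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def shape_and_texture(example_img, reference_img, flow):
    """Split a grey-level image into a 2D-shape vector and a texture vector relative
    to a reference face (sketch). flow[0] = dx, flow[1] = dy per reference pixel."""
    h, w = reference_img.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)

    # shape vector: the displacement of every reference pixel
    shape_vec = np.stack([flow[0].ravel(), flow[1].ravel()])

    # texture vector: example intensities sampled at the corresponding positions,
    # stored as differences to the reference intensities
    sampled = map_coordinates(example_img.astype(float),
                              [ys + flow[1], xs + flow[0]], order=1)
    texture_vec = sampled.ravel() - reference_img.astype(float).ravel()
    return shape_vec, texture_vec
```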
Linear shape model of faces: The shape model of
human faces used in the algorithm is based on the
linear object class idea (the necessary and sufficient
conditions are given in (Vetter and Poggio, 1996)) and is built on a training set of pairs of images of human faces. From each pair of images, each consisting of a "rotated" and a "frontal" view of a face, the 2D-shape vectors s^r for the "rotated" shape and s^f for the "frontal" shape are computed. Consider the three-dimensional shape of a human head defined in terms of pointwise features. The 3D-shape of the head can be represented by a vector S = (x_1, y_1, z_1, …, x_n, y_n, z_n)^T that contains the x, y, z-coordinates of its n feature points. Assume that S ∈ R^{3n} is the linear combination of q 3D shapes S_i of other heads, such that

    S = Σ_{i=1}^{q} β_i S_i.

It is quite obvious that for any linear transformation R (e.g. a rotation in 3D)

    R S = Σ_{i=1}^{q} β_i R S_i.

Thus, if a 3D head shape can be represented as the weighted sum of the shapes of other heads, its rotated shape is a linear combination of the rotated shapes of the other heads with the same weights β_i.
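This linearity is easy to verify numerically; the following small script (an illustration only, with arbitrary random example shapes) checks the identity for a 24° rotation about the vertical axis.

```python
import numpy as np

# If a head shape S is a weighted sum of example shapes S_i, then the rotated shape
# R @ S is the same weighted sum of the rotated examples R @ S_i.
rng = np.random.default_rng(0)
n = 5                                        # five 3D feature points
S_i = [rng.normal(size=(3, n)) for _ in range(3)]
beta = np.array([0.5, 0.2, 0.3])
S = sum(b * Si for b, Si in zip(beta, S_i))

theta = np.radians(24)                       # 24 degree rotation about the y-axis
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])

assert np.allclose(R @ S, sum(b * (R @ Si) for b, Si in zip(beta, S_i)))
```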
To apply this to the 2D face shapes computed from images, we have to consider the following. Under a projection P from 3D to 2D with s = PS that does not change the minimal number q of shape vectors necessary to represent the S_i, the coefficients β_i can be evaluated correctly from the images. In other words, the dimension of a three-dimensional linear shape class is not allowed to change under the projection P. Assuming such a projection, and that s^r, the 2D shape of a given "rotated" view, can be represented by the "rotated" shapes s^r_i of the example set as

    s^r = Σ_{i=1}^{q} β_i s^r_i,        (1)

then the "frontal" 2D-shape s^f to a given s^r can be computed without knowing S, using the β_i of equation (1) and the s^f_i given through the images in the training set, with the following equation:

    s^f = Σ_{i=1}^{q} β_i s^f_i.        (2)
In other words, a new 2D face shape can be
computed without knowing its three-dimensional
structure. It should be noted that no knowledge of
correspondence between equation (1) and equation
(2) is necessary (rows in a linear equation system
can be exchanged freely).
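In practice the coefficients β_i can be obtained, for example, by a least-squares fit, as in the following sketch. The matrix layout and the use of a plain least-squares solver are assumptions; the paper does not prescribe a particular solver.

```python
import numpy as np

def synthesize_frontal_shape(s_r, S_r_examples, S_f_examples):
    """Sketch of equations (1) and (2): fit the coefficients beta on the rotated
    example shapes and re-use them with the frontal example shapes.
    Columns of the example matrices are shape vectors of the training faces."""
    # equation (1): s_r ~ S_r_examples @ beta   (solved in the least-squares sense)
    beta, *_ = np.linalg.lstsq(S_r_examples, s_r, rcond=None)
    # equation (2): the frontal shape uses the same coefficients
    s_f = S_f_examples @ beta
    return s_f, beta
```

The texture model of equations (3) and (4) below has the same algebraic form, so the same fitting step can be reused with texture vectors in place of shape vectors.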
Texture model of faces: In contrast to the shape
model, two different possibilities for generating a
"frontal" texture given a "rotated" texture are de-
scribed. The first method is again based on the linear
object class approach and the second method
uses a single three-dimensional head model to map
the texture from the "rotated" texture onto the
"frontal" texture. The linear object class approach
for the texture vectors is equivalent to the method
described earlier for the 2D-shape vectors. It is
assumed that a "rotated" texture T r can be represented
by the q "rotated" textures T r
computed
from the given example set as follows:
It is assumed further that the new texture T f
can be computed using ff i of equation (3) and the
other
given through the "frontal" images in
the training set by the following equation:
The three-dimensional head model: Whereas the
linear texture approach is satisfactory for generating
new "frontal" textures for regions not visible
Figure 2: A three-dimensional model of a human head was used to render the reference images (column A) for the linear shape and texture model. The model defines corresponding parts in the two images (column B) and also establishes pixelwise correspondence between the two views (column C). Such a correspondence allows texture mapping from one view (C1) to the other (C2). (Column labels: A, reference faces; B, correspondence of parts; C, correspondence of pixels.)
in the "rotated" texture, it is not satisfactory for
the regions visible in both views. The linear texture
approach is hardly able to capture or represent
features which are particular to an individual
face (e.g. freckles, moles or any similar distinct
aspect of facial texture). Such features ask for a
direct mapping from the given "rotated" texture
onto the new "frontal" texture. However, this requires
pixelwise correspondence between the two
views (see (Beymer et al., 1993)) .
Since all textures are mapped onto the reference
face, it is sufficient to solve the correspondence
problem across the viewpoint change for the
reference face only. A three-dimensional model
of an object intrinsically allows the exact computation
of a correspondence field between images
of the object from different viewpoints, because
the three-dimensional coordinates of the whole object
are given, occlusions are not problematic and
hence the pixels visible in both images can be separated
from the pixels which are only visible from
one viewpoint.
A single three-dimensional model of a human
head is incorporated into the algorithm for three
different processing steps.
1. The reference face images used for the formation
of the linear texture and 2D-shape
representations were rendered from the 3D-
model under ambient illumination conditions
(see figure 2A).
2. The 3D-model was manually divided into separate
parts, the nose, the eye and mouth region
and the rest of the model. Using the projections
of these parts, the reference images
for different orientations could be segmented
into corresponding parts for which the linear
texture and 2D-shape representation could
be applied separately (see next paragraph on
"The shape and texture models applied to
parts" and also figure 2B).
3. The correspondence field across the two different
orientations was computed for the two
reference face images based on the given 3D-
model. So the visible part of any texture,
mapped onto the reference face in one orien-
tation, can now be mapped onto the reference
face in the second orientation (see figure 2C
and 3).
To synthesize a complete texture map on the
"frontal" reference face for a new view, (i.e., the
regions invisible in the exemplar view are lacking),
the texture of the region visible in both views,
which has been obtained through direct texture
mapping across the viewpoint change, is merged
with the texture obtained through the linear class
approach (see figure 3). The blending technique
used to merge the regions is described in detail in
the appendix D.
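A generic version of such a merge might look like the sketch below; the Gaussian-smoothed visibility mask is only a stand-in for the actual blending technique of appendix D.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def merge_textures(mapped_texture, linear_texture, visibility_mask, sigma=3.0):
    """Merge the directly mapped texture (regions visible in both views) with the
    texture predicted by the linear class model (regions only visible frontally).
    A smoothed mask is used to avoid visible seams; this is only a generic sketch."""
    weights = gaussian_filter(visibility_mask.astype(float), sigma)
    weights = np.clip(weights, 0.0, 1.0)
    return weights * mapped_texture + (1.0 - weights) * linear_texture
```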
The shape and texture models applied to parts.
The linear object class approach for 2D-shape and
texture, as proposed in (Vetter and Poggio, 1996),
can be improved through the 3D-model of the reference
face. Since the linear object class approach
Figure
3: Overview of the algorithm for synthesizing a new view from a single input image. After mapping the
input image onto a reference face in the same orientation, texture and 2D-shape can be processed separately. The
example based linear face model allows the computation of 2D-shape and texture of a new "frontal" view. Warping
the new texture along the new deformation field (coding the shape) results in the new "frontal" view as output. In the lower row, the result based purely on the linear class approach applied to parts is shown on the right, and the result with texture mapping from the "rotated" to the "frontal" view using a single generic 3D model of a human head is shown in the center. On the bottom left the real frontal view of the face is shown.
did not assume correspondence between equations
(1) and (2) or (3) and (4), shape and texture vectors
had to be constructed for the complete face
as a whole. On the other hand, modeling parts
of a face (e.g. nose, mouth or eye region ) in independent
separate linear classes is highly prefer-
able, because it allows a much better utilization of
the example image set and therefore gives a much
more detailed representation of a face. A full set of
coefficients for shape and texture representation is
evaluated separately for each part instead of just
one set for the entire face.
To apply equations (1 - 4) to individual parts of
a face, it is necessary to isolate the corresponding areas in the "rotated" and "frontal" reference images. Such a separation requires the correspondence between the "rotated" and "frontal" reference image or, equivalently, between equations (1)
and (2) of the shape representation and also between
equations (3) and (4) for the texture. The
3D-model, however, used for generating the reference
face images determines such a correspondence
immediately (for example see figure 2B) and
allows the separate application of the linear class
approach to parts. To generate the final shape and
texture vector for the whole face, this separation
adds only a few complexities to the computational
process . Shape and texture vectors obtained for
the different parts must be merged, which requires
the use of blending techniques to suppress visible
border effects. The blending technique used
to merge the regions is described in detail in appendix
D.
The algorithm was tested on 100 human faces. For
each face, images were given in two orientations
(24° and 0°) with a resolution of 256-by-256 pixels
and 8 bit (more details are given in appendix A).
In a leave-one-out procedure, a new "frontal"
view of a face was synthesized for a given "rotated" view (24°). In each case the remaining 99 pairs
of face images were used to build the linear 2D-
shape and texture model of faces. Figure 4 shows
the results for six faces for three different implementations
of the algorithm (center rows A,B,C).
The left column shows the test image given to the
algorithm. The true "frontal" view to each test
face from the data base is shown in the right col-
umn. The implementation used for generating the
images in column A was identical to the method
already described in (Vetter and Poggio, 1996),
the linear object class approach was applied to the
shape and texture vector as a whole, no partitioning
of the reference face or texture mapping across
the viewpoints was applied. The method used in B was identical to A, except that the linear object
class approach was applied separately to the
different parts of a face. The three-dimensional
head model was divided into four parts (see figure
2B) the eye, nose, mouth region, and the remaining
part of the face. To segment the two reference
images correctly, it was clearly necessary to render
both of them from the same three-dimensional
model of a head. Based on this segmentation,
the texture and 2D-shape vectors for the different
parts were separated and for each part a separate
linear texture and 2D-shape model was ap-
plied. The final image was rendered after merging
the new shape and texture vectors of the parts.
The images shown in column C are the result of a
combination of the technique described in B and
texture mapping across the viewpoint change. After
mapping a given "rotated" face image onto the
"rotated" reference image, this normalized texture
can be mapped onto the "frontal" reference face
since the correspondence between the two images
of the reference face is given through the three-dimensional
model. The part of the "frontal" texture
not visible in the "rotated" view is substituted
by the texture obtained by the linear texture
model as described under B.
The quality of the synthesized "frontal" views was
tested in a simple simulated recognition experi-
ment. For each synthetic image, the most similar
frontal face image in the data base of 130 faces was
computed. For the image comparison, two common
similarity measures were used: a) the correlation
coefficient, also known as direction cosine;
and b) the Euclidean distance (L 2 ). Both measures
were applied to the images in pixel representation
without further processing.
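A minimal sketch of this comparison, assuming the images are stored as flattened grey-level vectors:

```python
import numpy as np

def nearest_neighbour(query, gallery):
    """Nearest-neighbour matching in pixel space with the two similarity measures
    used in the text. `gallery` holds one flattened frontal image per row,
    `query` is a flattened synthetic frontal view."""
    diffs = gallery - query
    l2 = np.linalg.norm(diffs, axis=1)                        # Euclidean distance
    cos = gallery @ query / (np.linalg.norm(gallery, axis=1)
                             * np.linalg.norm(query))         # direction cosine
    return int(np.argmin(l2)), int(np.argmax(cos))
```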
The recognition rate of the synthesized images
(type A,B,C) was 100 % correct, both similarity
measures independently evaluated the true
"frontal" view to a given "rotated" view of a face
as the most similar image. This result holds for all
three different methods applied for the image syn-
thesis. The similarity of the synthetic images to
the real face image improved by applying the linear
object class approach separately to the parts
and improved again adding the correspondence between
the two reference images to the method.
This improvement is indicated in figure 5 where
decreases where as the correlation coefficients
increase for the different techniques.
Figure 4: Synthetic new frontal views ("synthesized images", center columns) to a single given rotated (24°) input image of a face (left column)
are shown. The prior knowledge about faces was given through a training set of 99 pairs of images of different
faces (not shown) in the two orientations. Column A shows the result based purely on the linear object class
approach. Adding a single 3D-head model, the linear object class approach can be applied separately to the nose,
mouth and eye region in a face (column B). The same 3D-model allows the texture mapping across the viewpoint
change (column C). The frontal image of the real face is shown in the right column.
Average Image Distance to Nearest Neighbor    L2-Norm    Direction Cosine
Real Face Images                              4780.3     0.9589
Synthetic Images Type A                       3131.9     0.9811
Synthetic Images Type B                       3039.3     0.9822
Synthetic Images Type C                       2995.0     0.9827

Figure 5: Comparing the different image synthesis techniques using Direction Cosines and the L2-Norm as distance measures. First, for each real frontal face image the average distance to its nearest neighbor (an image of a different face) was computed over a test set of 130 frontal face images. Second, for all synthetic images (type A, B, C) the average value to the nearest neighbor was computed for both distance measures. For all synthetic images the real face image was found as nearest neighbor. Switching from technique A to B and from B to C, the average values of the Direction Cosines increase whereas the values of the L2-Norm decrease, indicating an improved image similarity.
A crucial test for the synthesis of images is a direct
comparison of real and synthetic images by human
observers. In a two-alternative forced-choice task, subjects were asked to decide which of the
two frontal face images matches a given rotated
image (24°) best. One image was the "real" face, the other a synthetic image generated by applying the
linear class method to the parts of the faces separately
(method B). The first five images of the
data set were used to familiarize the subjects with
the task, whereas the performance was evaluated
on the remaining 95 faces. Although there was no
time limit for a response and all three images were
shown simultaneously, there were only 6 faces classified
correctly by all 10 subjects (see figure 6). In
all other cases the synthetic image was classified as the true image by at least one subject, and in one case the synthetic image was found to match the rotated image better than the real frontal image. On average each observer was 74% correct, whereas the chance level was at 50%. The subjects responded on average after 12 seconds.
The results demonstrate clearly an improvement
in generating new synthetic images of a human
face from only a single given example view, over
techniques proposed previously (Beymer and Pog-
gio, 1995; Vetter and Poggio, 1996). Here a single
three-dimensional model of a human head was
added to the linear class approach. Using this
model the reference images could be segmented
into corresponding parts and additionally any texture
on the reference image could be mapped precisely
across the view point change. The information
used from the three-dimensional model is
equivalent to the addition of a single correspondence
field across the viewpoint change. This addition
increased the similarity of the synthesized
image to the image of the real face for the shape
as well as for the texture. The improvement could
be demonstrated in automated image comparison
as well as in perceptual experiments with human
observers.
The results of the automated image comparison indicate
the importance of the proposed face model
for viewpoint independent face recognition sys-
tems. Here the synthetic rotated images were compared
with the real frontal face image. It should
also be noted, that coefficients, which result from
the decomposition of shape and texture into example
shapes and textures, already give us a representation
which is invariant under any 3D affine
transformation, supposing of course the linear face
model holds a good approximation of the target face.
The difficulties experienced by human observers in
distinguishing between the synthetic images and
the real face images indicate that a linear face model of 99 faces segmented into parts gives a good approximation of a new face; it also indicates possible applications of this method in computer graphics. Clearly, the linear model depends on the given example set, so in order to represent faces from a different race or a different age group, the
model would clearly need examples of these, an
effect well known in human perception (cf. e.g.
(O'Toole et al., 1994)).
The key step in the proposed technique is a dense
correspondence field between images of faces seen
from the same view point. The optical flow technique
used for the examples shown worked well,
however, for images obtained under less controlled
conditions a more sophisticated method for finding
the correspondence might be necessary.
[Figure 6 table: "Classification of Synthetic Versus Real Face Images" -- number of faces per observer error rate; only the values 6, 17 and 22 are recoverable from the extraction.]
Figure 6: For 95 different faces a rotated image (24°) and two frontal images were shown to human observers simultaneously. They had to decide which of the frontal images was the synthesized image (type B) and which one was the real image. The table shows the error rate for the 10 observers and the related number of faces. On average each observer was correct in 74% of the trials (chance level was 50%) and the average response time was 12 seconds.
New correspondence techniques based on active shape
models (Cootes et al., 1995; Jones and Poggio) are more robust against local occlusions and larger distortions when applied to a known object class. In these methods, shape parameters are optimized actively
to model the target image.
Several open questions remain for a fully automated
implementation. The separation of parts
of an object to form separated subspaces could
be done by computing the covariance between the
pixels of the example images. However, for images
at high resolution, this may need thousands of example
images. The linear object class approach
assumes that the orientation of an object in an
image is known. The orientation of faces can be
approximated computing the correlation of a new
image to templates of faces in various orientations
(Beymer, 1993). It is not clear yet how precisely
the orientation should be estimated to yield satisfactory
results.
Appendix
A Face Images.
Pairs of images of caucasian faces, showing a frontal view and a view taken 24° from the frontal view, were available. The images were originally rendered
for psychophysical experiments under ambient
illumination conditions from a data base of
three-dimensional human head models recorded
with laser scanner (Cyberware TM ). All faces were
without makeup, accessories, and facial hair. Ad-
ditionally, the head hair was removed digitally
(but with manual editing), via a vertical cut behind
the ears. The resolution of the grey-level images
was 256-by-256 pixels and 8 bit.
Preprocessing: First the faces were segmented
from the background and aligned roughly by automatically
adjusting them to their two-dimensional
centroid. The centroid was computed by evaluating
separately the average of all x, y coordinates
of the image pixels related to the face independent
of their intensity value.
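The rough alignment step can be sketched as follows; this is a minimal illustration assuming a boolean face mask from the segmentation is available (names are illustrative):

import numpy as np

def align_to_centroid(image, face_mask):
    ys, xs = np.nonzero(face_mask)            # coordinates of face pixels
    cy, cx = ys.mean(), xs.mean()             # centroid, independent of intensity
    h, w = image.shape
    dy, dx = int(round(h / 2 - cy)), int(round(w / 2 - cx))
    # shift the image so that the centroid moves to the image centre
    return np.roll(np.roll(image, dy, axis=0), dx, axis=1)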
A single three-dimensional model of a human
head, recorded with a laser scanner
(Cyberware TM ), was used to render the two reference
images.
B Computation of the Correspondence.
To compute the 2D-shape vectors s used
in equations (1) and (2), which are the vectors of
the spatial distances between corresponding points
in the face images, the correspondence of these
points has to be established first. That means we
have to find for every pixel location in an image,
e.g. a pixel located on the nose, the corresponding
pixel location on the nose in the other image.
This is in general a hard problem. However, since
all face images compared are in the same orienta-
tion, one can assume that the images are quite
similar and occlusions are negligible. The simplified
condition of a single view makes it feasible
to compare the images of the different faces
with automatic techniques. Such algorithms are
known from optical flow computation, in which
points have to be tracked from one image to the
other. We use a coarse-to-fine gradient-based method (Bergen et al., 1992) and follow an
implementation described in (Bergen and Hingo-
rani, 1990). For every point x, y in an image I, the error term (I_x δx + I_y δy + δI)² is minimised for δx and δy, with I_x, I_y being the spatial image derivatives and δI the difference of intensity of the two compared images. The coarse-to-fine strategy refines the computed displacements when finer levels are processed. The final result of this computation, (δx, δy), is used as an approximation of the spatial displacement vector s in equations (1) and (2). The correspondence is computed towards
the reference image from the example and
test images. As a consequence, all vector fields
have a common origin at the pixel locations of the
reference image.
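For illustration, a single-level, windowed version of such a gradient-based estimate can be written as below. The paper uses the hierarchical implementation of Bergen et al.; the window size, regularisation and function names here are assumptions.

import numpy as np

def local_flow(I1, I2, half_window=5, eps=1e-6):
    # least-squares estimate of (dx, dy) minimising (Ix*dx + Iy*dy + dI)^2 per window
    Iy, Ix = np.gradient(I1.astype(float))      # spatial derivatives
    dI = I2.astype(float) - I1.astype(float)    # inter-image intensity difference
    h, w = I1.shape
    flow = np.zeros((h, w, 2))
    for r in range(half_window, h - half_window):
        for c in range(half_window, w - half_window):
            sl = (slice(r - half_window, r + half_window + 1),
                  slice(c - half_window, c + half_window + 1))
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -dI[sl].ravel()
            ATA = A.T @ A + eps * np.eye(2)     # small regularisation for stability
            flow[r, c] = np.linalg.solve(ATA, A.T @ b)
    return flow    # flow[r, c] = (dx, dy) from image I1 towards I2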
C Linear shape and texture
synthesis.
First the optimal linear decomposition of a given
shape vector in equation (1) and a given texture
vector in equation (3) was computed. To compute
the coefficients α_i (or similarly β_i), the "initial" vector of the new image is decomposed (in the sense of least squares) into the q training image vectors, by minimising ‖ v − Σ_{i=1}^{q} α_i v_i ‖², where v denotes the given vector and v_i the corresponding training vectors. The numerical solution for α_i and β_i was obtained by a standard SVD algorithm (Press and Flannery, 1992). The new shape and texture vectors for the "frontal" view were obtained through simple summation of the weighted "frontal" vectors (equations (2) and (4)).
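A minimal sketch of this decomposition and synthesis step, assuming the example vectors are stacked row-wise in NumPy arrays (names are illustrative, not the authors' code):

import numpy as np

def decompose_and_synthesise(v_rotated, examples_rotated, examples_frontal):
    # examples_*: arrays of shape (q, n) holding the q example vectors row-wise
    A = np.asarray(examples_rotated).T                       # (n, q)
    coeffs, *_ = np.linalg.lstsq(A, v_rotated, rcond=None)   # alpha_i (or beta_i), via SVD
    v_frontal = np.asarray(examples_frontal).T @ coeffs      # weighted sum of frontal vectors
    return coeffs, v_frontal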
D Blending of patches.
Blending of patches is used at different steps in
the proposed algorithm. It is applied for merging
different regions of texture as well as for merging
regions of correspondence fields which were computed
separately for different parts of the face.
Such a patch work might have little discontinuities
at the borders between the different patches.
It is known that human observers are very sensitive
to such effects and the overall perception of
the image might be dominated by these.
For images Burt and Adelson (Burt and Adel-
son, 1983; Burt and Adelson, 1985) proposed a
multiresolution approach for merging images or
components of images. First, each image patch is
decomposed into bandpass filtered component im-
ages. Secondly, these component images are merged
separately for each band to form mosaic images by
weighted averaging in the transition zone. Finally,
these bandpass mosaic images are summed to obtain
the desired composite image. This method
was applied to merge the different patches for the
texture construction as well as to combine the
texture mapped across the viewpoint change with
the missing part taken from the constructed one.
Originally this merging method was only described
for an application to images, however, the application
to patches of correspondence fields eliminates
visible discontinuities in the warped images.
Taking a correspondence field as an image with a
vector valued intensity, the merging technique was
applied to the x and y components of the correspondence
vectors separately.
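The following sketch approximates this merging scheme with same-size difference-of-Gaussian bands instead of an explicit pyramid (SciPy assumed); it illustrates the idea rather than reproducing the original Burt-Adelson implementation:

import numpy as np
from scipy.ndimage import gaussian_filter

def bandpass_blend(img_a, img_b, mask, n_bands=5):
    # mask: values in [0, 1]; 1 selects img_a, 0 selects img_b
    a, b, m = (np.asarray(x, dtype=float) for x in (img_a, img_b, mask))
    out = np.zeros_like(a)
    prev_a, prev_b = a, b
    for k in range(n_bands):
        sigma = 2.0 ** (k + 1)
        low_a, low_b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
        band_a, band_b = prev_a - low_a, prev_b - low_b   # bandpass components
        w = gaussian_filter(m, sigma)                     # wider transition zone per band
        out += w * band_a + (1.0 - w) * band_b
        prev_a, prev_b = low_a, low_b
    w = gaussian_filter(m, 2.0 ** n_bands)
    out += w * prev_a + (1.0 - w) * prev_b                # residual lowpass
    return out

The same routine can be applied to the x and y components of a correspondence field by treating each component as an image.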
E Synthesis of the New Image.
The final step is image rendering. The new image
can be generated combining the texture and shape
vector generated in the previous steps. Since both
are given in the coordinates of the reference image,
for every pixel in the reference image the pixel intensity
and coordinates to the new location are
given. The new location generally does not coincide
with the equally spaced grid of pixels of the
destination image. The final pixel intensities of
the new image are computed by linear interpola-
tion, a commonly used solution of this problem
known as forward warping (Wolberg, 1990).
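A simplified forward-warping sketch is given below; it splats each reference pixel to its rounded destination and averages collisions, whereas the paper uses linear interpolation, so this is only an approximation (names are illustrative):

import numpy as np

def forward_warp(texture, dest_x, dest_y, out_shape):
    # texture, dest_x, dest_y: 2D arrays defined on the reference image grid
    h, w = out_shape
    acc = np.zeros(out_shape)
    cnt = np.zeros(out_shape)
    xs = np.clip(np.round(dest_x).astype(int), 0, w - 1)
    ys = np.clip(np.round(dest_y).astype(int), 0, h - 1)
    np.add.at(acc, (ys, xs), texture)      # accumulate intensities at destinations
    np.add.at(cnt, (ys, xs), 1.0)
    out = np.zeros(out_shape)
    hit = cnt > 0
    out[hit] = acc[hit] / cnt[hit]
    return out    # unfilled pixels (holes) would still need interpolation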
Acknowledgments
I am grateful to H.H. Bülthoff and T. Poggio for
useful discussions and suggestions. Special thanks
to Alice O'Toole for editing the manuscript and for
her endurance in discussing the paper. I would like
to thank Nikolaus Troje for providing the images
and the 3D-model.
--R
Automatic creation of 3D facial models
Hierarchical motion-based frame rate conversion
Face recognition under varying pose.
Face recognition from one model view.
Merging images through pattern decomposition.
Active shape models - their training and application
Parameterizing images for recognition and reconstruction
Robot vision.
Motion and structure from orthographic projections.
Synthetizing a color algorithm from examples.
A novel approach to graphics.
Extracting projective structure from single perspective views of 3D point sets.
Analysis and synthesis of facial image sequences using physical and anatomical models.
Digital actors for interactive television.
Recognition by linear combinations of models.
Symmetric 3D objects are an easy case for 2D object recognition
Image synthesis from a single example image.
The importance of symmetry and virtual views in three-dimensional object recognition
Image Warping.
--TR
--CTR
Xiaoyang Tan , Songcan Chen , Zhi-Hua Zhou , Fuyan Zhang, Face recognition from a single image per person: A survey, Pattern Recognition, v.39 n.9, p.1725-1745, September, 2006
Philip L. Worthington, Reillumination-driven shape from shading, Computer Vision and Image Understanding, v.98 n.2, p.326-344, May 2005
A. Criminisi , A. Blake , C. Rother , J. Shotton , P. H. Torr, Efficient Dense Stereo with Occlusions for New View-Synthesis by Four-State Dynamic Programming, International Journal of Computer Vision, v.71 n.1, p.89-110, January 2007
Martin A. Giese , Tomaso Poggio, Morphable Models for the Analysis and Synthesis of Complex Motion Patterns, International Journal of Computer Vision, v.38 n.1, p.59-73, June 2000
Bernd Heisele , Thomas Serre , T. Poggio, A Component-based Framework for Face Detection and Identification, International Journal of Computer Vision, v.74 n.2, p.167-181, August 2007
Yongmin Li , Shaogang Gong , Heather Liddell, Constructing Facial Identity Surfaces for Recognition, International Journal of Computer Vision, v.53 n.1, p.71-92, June
Athinodoros S. Georghiades , Peter N. Belhumeur , David J. Kriegman, From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.6, p.643-660, June 2001 | face recognition;flexible templates;image synthesis;rotation invariance |
293923 | Quasi-Invariant Parameterisations and Matching of Curves in Images. | In this paper, we investigate quasi-invariance on a smooth manifold, and show that there exist quasi-invariant parameterisations which are not exactly invariant but approximately invariant under group transformations and do not require high order derivatives. The affine quasi-invariant parameterisation is investigated in more detail and exploited for defining general affine semi-local invariants from second order derivatives only. The new invariants are implemented and used for matching curve segments under general affine motions and extracting symmetry axes of objects with 3D bilateral symmetry. | Introduction
The distortions of an image curve caused by the
relative motion between the observer and the
scene can be described by specific transformation
groups (Mundy and Zisserman, 1992). For exam-
ple, the corresponding pair of contour curves of
a surface of revolution projected on to an image
center can be described by a transformation of
the Euclidean group as shown in Fig. 1 (a). If a
planar object has bilateral symmetry viewed under
perspective, the corresponding contour
curves of the object in an image can be described
by the special affine group (Kanade and Kender,
1983; Van Gool et al., 1995a) (see Fig. 1 (b)).
The corresponding contour curves of a 3D bilateral
symmetry such as a butterfly are related by
a transformation of the general affine group (see
Fig. 1 (c)). Since these corresponding curves are
equivalent objects, they have the same invariants
under the specific transformation group. Thus,
invariants under these transformation groups are
very important for object recognition and identification
(Moons et al., 1995; Mundy and Zisserman,
1992; Pauwels et al., 1995; Rothwell et al., 1995;
Gool et al., 1992; Weiss, 1993).
Although geometric invariants have been studied
extensively, existing invariants suffer from occlusion
(Abu-Mostafa and Psaltis, 1984; Hu, 1962;
Reiss, 1993; Taubin and Cooper, 1992), image
noise (Cyganski et al., 1987; Weiss, 1988) and the
requirement of point or line correspondences (Bar-
rett and Payton, 1991; Rothwell et al., 1995;
Zisserman et al., 1992). To cope with these
problems, semi-local integral invariants were proposed
(Bruckstein et al., 1993; Sato and Cipolla,
1996a; Sato and Cipolla, 1996b) recently. They
showed that it is possible to define invariants semi-
locally, by which the order of derivatives in invariants
can be reduced from that of group curvatures
to that of group arc-length, and hence the invariants
are less sensitive to noise. As we have seen in
these works, the invariant parameterisation guar-
antees unique identification of corresponding intervals
on image curves and enables us to define
semi-local integral invariants even under partial
occlusions.
Although semi-local integral invariants reduce
the order of derivatives required, it is known that
the order of derivatives in group arc-length is still
high in the general affine and projective cases (see
table 1). In this paper, we introduce a quasi-invariant
parameterisation and show how it enables
us to use second order derivatives instead
of fourth and fifth. The idea of quasi-invariant
parameterisation is to approximate the group invariant
arc-length by lower order derivatives. The
new parameterisations are therefore less sensitive
to noise, and are approximately invariant under a
slightly restricted range of image distortions.
The concept of quasi-invariants was originally
proposed by Binford (Binford and Levitt, 1993),
who showed that quasi-invariants enable a reduction
in the number of corresponding points required
for computing algebraic invariants. For example
quasi-invariants require only four points for
computing planar projective invariants (Binford
and Levitt, 1993), while exact planar projective
invariants require five points (Mundy and Zisser-
man, 1992). It has also been shown that quasi-
invariants exist even under the situation where
the exact invariant does not exist (Binford and
Levitt, 1993). In spite of its potential, the quasi-invariant
has not previously been studied in de-
tail. One reason for this is that the concept of
quasiness is rather ambiguous and is difficult to
formalise. Furthermore, the existing method is
limited to the quasi-invariants based on point cor-
[Figure 1 diagram: hierarchy of transformation groups -- Euclidean (3 DOF), similarity (4 DOF), special affine, general affine (6 DOF), projective (8 DOF) -- with example image pairs (a)-(e).]
Fig. 1. Image distortion and transformation groups. A symmetric pair of contour curves (white curves) of (a) a surface
of revolution, (b) planar bilateral symmetry and (c) 3D bilateral symmetry can be described by Euclidean, special affine
(equi-affine) and general affine (proper affine) transformations under the weak perspective assumption. The image distortion
caused by the relative motion between the observer and the scene can also be described by group transformations as shown
in (d) and (e).
Fig. 2. Identifying the interval of integration semi-locally. (a) and (b) are images of a Japanese character extracted from the first and the second viewpoints. The interval of integration in these two images can be identified uniquely from the invariant arc-length, w. For example, if w_1 and w̃_1 are a corresponding pair of points, then the interval [w_1 − Δw, w_1 + Δw] with respect to w corresponds to the interval [w̃_1 − Δw, w̃_1 + Δw] with respect to w̃ in the second image. Even though the curve is occluded partially (second image), the semi-local integral invariants can be defined on the remaining parts of the curve.
respondences (Binford and Levitt, 1993), or the
quasi-invariants under specific models (Zerroug
and Nevatia, 1993; Zerroug and Nevatia, 1996).
In this paper, we investigate quasi-invariance on
smooth manifolds, and show that there exists a
quasi-invariant parameterisation, that is a parameterisation
approximately invariant under group
transformations. Although the approximated values
are no longer exact invariants, their changes
are negligible for a restricted range of transforma-
tions. Hence, the aim here is to find in parameterisations
the best tradeoff between the error caused
by the approximation and the error caused by image
noise.
Following the motivation, we investigate a measure
of invariance which describes the difference
from the exact invariant under group transforma-
tions. To formalise a measure of invariance in
differential formulae, we introduce the so called
prolongation (Olver, 1986) of vector fields. We
next define a quasi-invariant parameter as a function
which minimises the difference from the exact
invariant. A quasi invariant parameter under
general affine transformations is then proposed.
The proposed parameter is applied to semi-local
integral invariants and exploited successfully for
matching curves under general affine transformations
in real image sequences.
2. Semi-Local Integral Invariants
In this section, we review semi-local integral in-
variants, and motivate the new parameterisation,
quasi-invariant parameterisation.
If the invariants are too local such as differential
invariants (Cyganski et al., 1987; Weiss, 1988),
they suffer from noise. If the invariants are too
global such as moment (integral) invariants (Abu-
Mostafa and Psaltis, 1984; Hu, 1962; Lie, 1927;
Reiss, 1993; Taubin and Cooper, 1992), they suffer
from occlusion and the requirement of corre-
spondences. It has been shown recently (Sato and
Cipolla, 1996a; Sato and Cipolla, 1996b) that it is
possible to define integral invariants semi-locally,
so that they do not suffer from occlusion, image
noise and the requirement of correspondences.
Consider a curve, C ∈ R², to be parameterised
by t. It is also possible to parameterise the curve
by invariant parameters, w, under specific transformation
groups. These are called arc-length of
the group. The important property of group arc-length
in integral formulae is that it enables us
to identify the corresponding interval of integration
automatically. Consider a point, C(w 1 ), on
a curve C to be transformed to a point, C̃(w̃_1), on a curve C̃ by a group transformation as shown in Fig. 2. Since Δw̃ = Δw, it is clear that if we take the same interval [−Δw, Δw] around C(w_1) and C̃(w̃_1), these two intervals correspond
to each other (see Fig. 2). That is, by integrating
with respect to the group arc-length, w, the
corresponding interval of integration of the original
and the transformed curves can be uniquely
identified.
We now define semi-local integral invariants at
a point w_1 by

I(w_1) = ∫_{w_1−Δw}^{w_1+Δw} F dw    (1)
where, F is any invariant function under the
group. The choice of F provides various kinds of
semi-local integral invariants (Sato and Cipolla,
1996b). If we choose the function F carefully,
the integral formula (1) can be solved analytically,
and the resulting invariants have simpler forms.
For example, in the affine case, for a suitable choice of F the integral formula can be solved analytically, and the integral invariants can be described by:

I(w_1) = |[ C(w_1 + Δw) − C(w_1), C(w_1 − Δw) − C(w_1) ]|    (2)

where |[a, b]| denotes the determinant of a matrix which consists of the two column vectors a and b. The right hand side of (2) is actually the area made by the two vectors C(w_1 + Δw) − C(w_1) and C(w_1 − Δw) − C(w_1). Similar
results have been proposed by Bruckstein (Bruck-
stein et al., 1993) by a different approach. The
important properties of semi-local integral invariants
are as follows:
1. The limits of integration [01w;1w] in semi-local
integral invariants are identified uniquely
in original and transformed images from invariant
parameterisations. Thus, we do not
need to worry about the correspondence problem
caused by a heuristic search of image features
2. Even though the curve is occluded partially as
shown in Fig 2 (b), the semi-local integral invariants
can be defined on the remaining parts
of the curve. Thus, they do not suffer from the
occlusion problem unlike classical moment invariants
3. In general, the lowest order differential invariant
of a transformation group is a group
curvature, and requires second, fourth, fifth
and seventh order derivatives in Euclidean,
special affine, general affine and projective
cases (Guggenheimer, 1977; Olver et al.,
1994). The semi-local integral invariants enable
us to reduce the order of derivatives required
from that of group curvatures to that of
Table 1. Order of derivatives required for the group arc-length and curvature. In general, derivatives of more than second order are sensitive to noise and are not available from images. Thus, the general affine and projective arc-lengths, as well as the curvatures, are not practical.

group            arc-length   curvature
Euclidean        1st          2nd
special affine   2nd          4th
general affine   4th          5th
projective       5th          7th
group arc-length. Since as shown in table 1,
the order of derivatives of group arc-length
is lower than that of group curvature, the
semi-local integral invariants are more practical
than differential invariants.
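For a curve that has already been resampled at equal steps of the group arc-length w, the invariant of equation (2) can be evaluated directly; the following sketch (NumPy, illustrative names) is one possible discretisation, not the authors' implementation:

import numpy as np

def semi_local_invariant(C, dw, delta_w):
    # C: array of shape (N, 2), points sampled at equal arc-length spacing dw
    k = int(round(delta_w / dw))               # number of samples covering Delta w
    vals = np.full(len(C), np.nan)
    for i in range(k, len(C) - k):
        a = C[i + k] - C[i]
        b = C[i - k] - C[i]
        vals[i] = a[0] * b[1] - a[1] * b[0]    # 2x2 determinant (signed area)
    return vals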
From table 1 (Olver et al., 1994), it is clear that
the semi-local integral invariants are useful under
Euclidean and special affine cases, but they still
require high order derivatives in general affine and
projective cases. The distortion caused by a group
transformation is often not so large. For exam-
ple, the distortion caused by the relative motion
between the observer and the scene is restricted
because of the finite speed of the camera or object
motions. In such cases, parameters approximated
by lower order derivatives give us a good approximation
of the exact invariant parameterisation.
We call such a parameterisation a quasi-invariant
parameterisation.
In the following sections, we define the quasi-invariant
parameterisation, and derive an affine
quasi-invariant parameterisation.
3. Infinitesimal Quasi-Invariance
We first derive the concept of infinitesimal quasi-
invariance; that is quasi-invariance under infinitesimal
group transformations.
3.1. Vector Fields of the Group
Let G be a Lie group, that is a group which carries
the structure of a smooth manifold in such
a way that both the group operation (multiplica-
tion) and the inversion are smooth maps (Olver,
1986). Transformation groups such as rotation,
Euclidean, affine and projective groups are Lie
groups. Consider an image point x to be transformed to an image point x̃ by a group transformation h ∈ G:

x̃ = h(x)

so that a function, I(x, y), with respect to the x and y coordinates is transformed to Ĩ(x̃, ỹ) by h.
Infinitesimally we can interpret this phenomenon
by an action of a vector field, v:
v = ξ ∂/∂x + η ∂/∂y    (3)
Fig. 3. The vector field, v, and an integral curve, Γ. The curve C is transformed to C̃ by a group transformation, so that the point P on the curve is transformed to P̃. Locally the orbit of the point caused by a group transformation coincides with the integral curve, Γ, of the vector field at the point, P.
where ξ and η are functions of x and y, and provide
various vector fields. Locally the orbit of the
point, x, caused by the transformation, h, is described
by an integral curve, Γ, of the vector field,
v, passing through the point (see Fig. 3). v is
called an infinitesimal generator of the group ac-
tion. The uniqueness of an ordinary differential
equation guarantees the existence of such a unique
integral curve in the vector field.
Because of its linearity, any infinitesimal generator
can be described by the summation of a
finite number of independent vector fields, v_i (i = 1, ..., m), of the group as follows:

v = Σ_{i=1}^{m} a_i v_i    (4)

where the a_i are real coefficients and v_i is the ith independent vector field:

v_i = ξ_i ∂/∂x + η_i ∂/∂y    (5)

where ξ_i and η_i are basis coefficients of ∂/∂x and ∂/∂y
respectively, and are functions of x and y. These
independent vector fields form a finite dimensional
vector space called a Lie algebra (Olver, 1986).
Locally any transformation of the group can be
described by an integral of a finite number of independent
vector fields, v i . The vector field described
in (3) acts as a differential operator of the
Lie derivative.
3.2. Exact Invariance
We now state the condition of invariance of an
arbitrary function I, which is well known in Lie
Group theory.
Let v be an infinitesimal generator of the group
transformation. A real-valued function I is invariant
under group transformations, if and only
if the Lie derivative of I with respect to any infinitesimal
generator, v, of the group, G, vanishes
as follows (Olver, 1995):

L_v I = 0    (6)

where L_v denotes the Lie derivative with respect to a vector field v. Since I is a scalar function, the Lie derivative is the same as the directional derivative with respect to v. Thus, the condition of invariance (6) can be rewritten as follows:

v(I) = 0    (7)

where v(I) is the directional derivative with respect
to v.
3.3. Infinitesimal Quasi-Invariance
The idea of quasi-invariance is to approximate the
exact invariant by a certain function I(x; y), which
is not exactly invariant but nearly invariant. If the
function, I, is not exactly invariant, equation (7)
no longer holds. We can however measure the
difference from the exact invariant by using (7).
By definition, the change in function I caused by
the infinitesimal group transformation induced by
a vector field, v, is described by the Lie derivative of I as follows:

δI = v(I)    (8)
For measuring the invariance of a function irrespective
of the choice of basis vectors, we consider
an intrinsic vector field of the group, G. It
is known (Sattinger and Weaver, 1986) that if the
group is semi-simple (e.g. rotation group, special
linear group), there exists a non-degenerate symmetric
bilinear form, called the Killing form, K, of the Lie algebra, as follows:

K(v_i, v_j) = tr( ad(v_i) ad(v_j) )    (9)

where ad(v_i) denotes the adjoint representation of v_i, and tr denotes the trace. The Killing form provides the metric tensor, g_ij = K(v_i, v_j), for the algebra, and the Casimir operator, C_a = Σ_{i,j} g^{ij} v_i v_j, defined by the metric tensor, is independent of the choice of the basis vectors; here g^{ij} is the inverse of g_ij. That is, the metric, g_ij, changes according to the choice of basis vectors, v_i, so that C_a is an invariant. Since g_ij is symmetric, there exists a choice of basis vectors, v_i (i = 1, ..., m), by which g_ij is diagonalised as follows:

g_ij = ±1 if i = j, and g_ij = 0 otherwise

Such vector fields, v_i, are unique
in the group, G, and thus intrinsic. By using the
intrinsic vector fields in (8), we can measure the
change in value of a function, I , which is intrinsic
to the group, G.
For measuring the quasi invariance of a function
irrespective of the magnitude of the function, we
consider the change in function, δI, normalised by the original function, I. We thus define a measure of infinitesimal quasi-invariance, Q, of a function I by the squared sum of the normalised changes in function caused by the intrinsic vector fields, v_i:

Q = Σ_{i=1}^{m} ( δ_i I / I )²,    δ_i I = v_i(I)    (11)
This is a measure of how invariant the function, I,
is under the group transformation. If Q is small
enough, we call I a quasi-invariant under infinitesimal
group transformations.
Unfortunately, if the group is not semi-simple
(e.g. general affine group, general linear group),
the Killing form is degenerate and we do not have
such intrinsic vector fields. However, it is known
that a non-semi-simple group is decomposed into a
semi-simple group and a radical (Jacobson, 1962).
Thus, in such cases, we choose a set of vector fields
which correspond to the semi-simple group and
the radical.
4. Quasi-Invariance on Smooth Manifolds
In the last section, we introduced the concept
of infinitesimal quasi-invariance, which is the
quasi-invariance under infinitesimal group trans-
formations, and derived a measure for the invariance
of an approximated function. Unfortunately
(11) is valid only for functions which do
not include derivatives. In this section, we introduce
an important concept known as the prolongation
(Olver, 1986) of vector fields, and investigate
quasi-invariance on smooth manifolds, so that it
enables us to define quasi-invariants with a differential
formula.
4.1. Prolongation of Vector Fields
The prolongation is a method for investigating the
differential world from a geometric point of view.
Let a smooth curve C ∈ R² be described by an independent variable x and a dependent variable y with a smooth function f as follows:

y = f(x)

The curve, C, is transformed to C̃ by a group
transformation, h, induced by a vector field, v, as
shown in Fig. 4. Consider a kth order prolonged
space, whose coordinates are x, y and derivatives
of y with respect to x up to kth order, so that
the prolonged space is (k + 2)-dimensional. The curves, C and C̃, in 2D space are prolonged and described by space curves, C^(k) and C̃^(k), in the (k + 2)-dimensional prolonged space. The prolonged vector field, v^(k), is a vector field in k + 2 dimensions, which carries the prolonged curve, C^(k), to the prolonged curve, C̃^(k), explicitly as shown in Fig. 4. More precisely, the kth order prolongation, v^(k), of a vector field, v, is defined so that it transforms the kth order derivatives, y^(k), of a function y = f(x) into the corresponding kth order derivatives, ỹ^(k), of the transformed function geometrically.
Fig. 4. Prolongation of a vector field. The kth order prolonged vector field, v^(k), transforms kth order derivatives of y into kth order derivatives of ỹ. That is, the prolonged curve, C^(k), is transformed into the prolonged curve, C̃^(k), by the prolonged vector field, v^(k). This enables us to investigate derivatives of functions geometrically. pr^(k) denotes kth order prolongation. This figure illustrates the first order prolongation.
Let v_i (i = 1, ..., m) be m independent vector
fields induced by a group transformation, h. Since
the prolongation is linear, the kth prolongation,
of a general vector field, v, can be described by a sum of the kth prolongations, v_i^(k), of the independent vector fields, v_i, as follows:

v^(k) = Σ_{i=1}^{m} a_i v_i^(k)
Consider a vector field (5) in 2D space again. Its
first and second prolongations, v (1) , v (2) , are computed
as follows (Olver, 1986):

v^(1) = v + [ D_x(η − y_x ξ) + y_xx ξ ] ∂/∂y_x    (12)

v^(2) = v^(1) + [ D_x²(η − y_x ξ) + y_xxx ξ ] ∂/∂y_xx    (13)

where D_x and D_x² denote the first and the second
total derivatives with respect to x, and y x , y xx ,
y xxx denote the first, second and the third derivatives
of y with respect to x. Let F(x, y^(k)) be a function of x, y and the derivatives of y with respect to x up to kth order, which are denoted by y^(k). Since the prolongation describes how the derivatives are going to change under group transformations, we can compute the change in function, δF, caused by the group transformation, h, as follows:

δF = v^(k)(F)
where, v (k) is the kth order prolongation of the
infinitesimal generator, v, of a transformation h.
Note that we require only the same order of prolongation
as that of the function, F . Since the
prolongation describes how derivatives are going
to change, it is important for evaluating the quasi-
invariance of a differential formula as described in
the next section.
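The prolongation formulas (12) and (13) are easy to check symbolically. The sketch below (SymPy assumed) uses the equivalent form φ¹ = D_x(η) − y_x D_x(ξ), φ² = D_x(φ¹) − y_xx D_x(ξ) and evaluates the second-prolongation coefficients for the affine vector fields introduced later in (21); names and structure are illustrative.

import sympy as sp

x, y, yx, yxx, yxxx = sp.symbols('x y y_x y_xx y_xxx')

def Dx(f):
    # total derivative with respect to x on functions of (x, y, y_x, y_xx)
    return (sp.diff(f, x) + yx * sp.diff(f, y)
            + yxx * sp.diff(f, yx) + yxxx * sp.diff(f, yxx))

def prolong2(xi, eta):
    phi1 = Dx(eta) - yx * Dx(xi)       # coefficient of d/dy_x
    phi2 = Dx(phi1) - yxx * Dx(xi)     # coefficient of d/dy_xx
    return sp.simplify(phi1), sp.simplify(phi2)

fields = {'divergence': (x, y), 'curl': (-y, x),
          'deformation1': (x, -y), 'deformation2': (y, x)}
for name, (xi, eta) in fields.items():
    print(name, prolong2(xi, eta))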
4.2. Quasi-Invariance on Smooth Manifolds
Let us consider the curve C in 2D space again.
Suppose I(y (n) ) is a function on the curve containing
the derivatives of y with respect to x up
to the nth order, which we denote by y (n) . Since
the nth order prolongation, v (n) , of the vector
field v transforms nth order derivatives, y (n) , of
the original curve to nth order derivatives, e y (n) ,
of the transformed curve, the change in function, δ_i I, caused by the infinitesimal group transformation induced by the ith independent vector field, v_i, is described by:

δ_i I(y^(n)) = v_i^(n)( I(y^(n)) )
A quasi-invariant is a function whose variation
caused by group transformations is relatively
small compared with its original value. We thus
define a measure of invariance, Q, on a smooth curve, C, by the normalised squared sum of the δ_i I, integrated along the curve, C, as follows:

Q = ∫_C Σ_{i=1}^{m} ( δ_i I(y^(n)) / I(y^(n)) )² dx
If I(y (n) ) is close to the exact invariant, then Q
tends to zero. Thus, Q is a measure of how invariant
the function, I(y (n) ), is under the group
transformation.
5. Quasi-Invariant Parameterisation
In the last section, we have derived quasi-
invariance on smooth manifolds. We now apply
the results and investigate the quasi-invariance of
parameterisation under group transformations.
A group arc-length, w, of a curve, C, is in general
described by a group metric, g, and the independent
variable, x, of the curve as follows:

dw = g dx

where dw and dx are the differentials of w and x respectively. Suppose the metric, g, is described by the derivatives of y with respect to x up to kth order as follows:

g = g(y^(k))

where y^(k) denotes the kth order prolongation of y. The change of the differential, δ_i(dw), caused by the ith independent vector field, v_i, is thus derived by computing the Lie derivative of dw with respect to the kth order prolongation of v_i:

δ_i(dw) = [ v_i^(k)(g) + g D_x ξ_i ] dx

The change in dw normalised by dw itself is described as follows:

δ_i(dw) / dw = [ v_i^(k)(g) + g D_x ξ_i ] / g    (17)

The measure of invariance of the parameter, w, is thus described by integrating the squared sum of δ_i(dw)/dw along the curve, C, as follows:

Q = ∫_C Σ_{i=1}^{m} ( δ_i(dw) / dw )² dw    (18)

where dw = g dx.
If the parameter is close to the exact invariant
parameter, Q tends to zero. Although there is no
exact invariant parameter unless it has enough orders
of derivatives, there still exists a parameter
which minimises Q and requires only lower order
derivatives. We call such a parameter a quasi-invariant
parameter of the group, if it minimises
(18) under the linear sum of the independent vector
fields of the group and keeps Q small enough
in a certain range of the group transformations.
To find a function, g, which minimises (18) is in
fact a variational problem (Gelfand and Fomin,
1963) with the Lagrangian of L, which includes
one independent variable, x, and two dependent
variables, g and y (g is also dependent on y). In
the next section, we derive a metric, g, which minimises
the measure of invariance, Q, under general
affine transformations by solving the variational
problem.
6. Affine Quasi-Invariant Parameterisa-
tion
In this section, we apply quasi-invariance to derive
a quasi-invariant parameterisation under general
affine transformations which requires only second
order derivatives and is thus less sensitive to noise
than the exact invariant parameter which requires
fourth order derivatives.
Suppose the quasi-invariant parameterisation, τ, under general affine transformations is of second order, so that the metric, g̃, of the parameter, τ, is made of derivatives up to the second:

dτ = g̃(y_x, y_xx) dx    (20)
where, y x and y xx are the first and the second
derivatives of y with respect to x. To find a
quasi-invariant parameter is thus the same as finding
a second order differential function, g̃(y_x, y_xx),
which minimises the quasi-invariance, Q, under
general affine transformations. Since the metric,
g̃, is of second order, we require the second order
prolongation of the vector fields to compute the
quasi-invariance of the metric.
6.1. Prolongation of Affine Vector Fields
A two dimensional general affine transformation is
described by a 2×2 invertible matrix, A ∈ GL(2), and a translational component, t ∈ R², and transforms a point x into x̃ = Ax + t.
Since the differential form, dτ, in (20) does not
include x and y components, it is invariant under
translations. Thus, we here simply consider
the action of A 2 GL(2), which can be described
by four independent vector fields, v_i (i = 1, ..., 4), that is the divergence, curl, and the two components of deformation (Cipolla and Blake, 1992; Kanatani, 1990; Koenderink and van Doorn):

v_1 = x ∂/∂x + y ∂/∂y
v_2 = −y ∂/∂x + x ∂/∂y
v_3 = x ∂/∂x − y ∂/∂y
v_4 = y ∂/∂x + x ∂/∂y    (21)
Since the general linear group, GL(2), is not semi-
simple, the Killing form (9) is degenerate and
there is no unique choice of vector fields for the
group (see section 3.3). It is however decomposed
into the radical, which corresponds to the diver-
gence, and the special linear group, SL(2), which
is semi-simple and whose intrinsic vector fields coincide
with three of the vector fields in (21). Thus, we use
the vector fields in (21) for computing the quasi-
invariance of differential forms under general affine
transformations.
From (12), (13), and (21), the second prolongations
of these vector fields are computed by:
v_1^(2) = x ∂/∂x + y ∂/∂y − y_xx ∂/∂y_xx

v_2^(2) = −y ∂/∂x + x ∂/∂y + (1 + y_x²) ∂/∂y_x + 3 y_x y_xx ∂/∂y_xx

v_3^(2) = x ∂/∂x − y ∂/∂y − 2 y_x ∂/∂y_x − 3 y_xx ∂/∂y_xx

v_4^(2) = y ∂/∂x + x ∂/∂y + (1 − y_x²) ∂/∂y_x − 3 y_x y_xx ∂/∂y_xx    (22)
These are vector fields in four dimensions, whose coordinates are x, y, y_x and y_xx, and the projection of these vector fields onto the x–y plane coincides with the original affine vector fields in two dimensions. Since g̃ dx is of second order, the prolonged vector fields, v_1^(2), ..., v_4^(2), describe how the parameter, τ, is going to change
under general affine transformations.
6.2. Affine Quasi-Invariant Parameterisation
The measurement of the invariance, Q a , under a
general affine transformation is derived by substituting
the prolonged vector fields, v_1^(2), ..., v_4^(2), of (22) into (18):

Q_a = ∫_C L dx    (23)

where L is a function of y_x, y_xx, g̃ and its derivatives as follows:

L = Σ_{i=1}^{4} [ v_i^(2)(g̃) + g̃ D_x ξ_i ]² / g̃    (24)
Fig. 5. Variation of g̃. The image curve C is transformed to C^(n) in the prolonged space. The curve g̃ varies only on the surface Σ defined by C^(n). What we need to do is to find a curve g̃* on Σ which minimises Q_a. Thus, there is no variation in y_x and y_xx.
where ξ_i denotes the ∂/∂x component of the ith vector field, v_i, and g̃_{y_x} and g̃_{y_xx} are the first derivatives of g̃ with respect to y_x and y_xx respectively.
We now find a function, g̃, which minimises Q_a of (23). The necessary condition for Q_a to have a minimum is that its first variation, δQ_a, vanishes:

δQ_a = 0    (25)

This is a variational problem (Gelfand and Fomin, 1963; Olver, 1995) of one independent variable, x, and two dependent variables, y and g̃, and the integrand, L, of (23) is called the Lagrangian of the variational problem. It is known that (25) holds if and only if its Euler-Lagrange expression vanishes as follows (Olver, 1986):

E(L) = 0

where E denotes the Euler operator. Since, in our case, one of the dependent variables, g̃, is a
function of the derivatives, y x , y xx , of the other
dependent variable, y, the Euler-Lagrange expression
is different from the standard form of one independent
and two dependent variables. We now
investigate how this variational problem can be
formalised.
Suppose g̃ changes to g̃ + Δg̃, so that g̃_{y_x} changes to g̃_{y_x} + Δg̃_{y_x} and g̃_{y_xx} changes to g̃_{y_xx} + Δg̃_{y_xx} respectively, where Δg̃_{y_x} and Δg̃_{y_xx} denote the derivatives of Δg̃ with respect to y_x and y_xx. Then, the first variation of the function, Q_a, caused by the change in g̃ is described as follows:

δQ_a = ∫_C ( ∂L/∂g̃ Δg̃ + ∂L/∂g̃_{y_x} Δg̃_{y_x} + ∂L/∂g̃_{y_xx} Δg̃_{y_xx} ) dx
Note that the variation occurs only on the surface
Σ as shown in Fig. 5, and therefore the variation, δQ_a, does not include the change in y_x and y_xx.
As shown in Appendix A, assuming that g̃ has the form y_xx^α (1 + y_x²)^β (where α and β are real values), we find that δQ_a vanishes for any curve if the following function, g̃, is chosen:

g̃(y_x, y_xx) = y_xx^(2/5) (1 + y_x²)^(−1/10)    (27)

We conclude that for any curve the following parameter τ is quasi-invariant under general affine transformations:

dτ = y_xx^(2/5) (1 + y_x²)^(−1/10) dx    (28)
The quasi-invariance, Q_a, of an example curve computed by varying the powers of y_xx and (1 + y_x²) in (28) is shown in Fig. 6. We find that Q_a takes a minimum when we choose τ as shown in (28). By rewriting (28), we find that the parameter, τ, is described by the Euclidean arc-length, dv, and the Euclidean curvature, κ, as follows:

dτ = κ^(2/5) dv    (29)

Thus, dτ is in fact an exact invariant under rotation, and quasi-invariant under divergence and deformation. Note that the invariant parameter under similarity transformations is κ dv and that of special affine transformations is κ^(1/3) dv. The derived quasi-invariant parameter τ for general affine transformations is between these two, as expected. We call τ the affine quasi-invariant parameter (arc-length). Since the new parameter requires only second order derivatives, it is expected to be less sensitive to noise than the exact invariant parameter under general affine transformations.
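In practice dτ can be accumulated along a discretised curve directly from first and second derivatives. The sketch below uses finite differences (in the experiments of this paper the derivatives come from a B-spline fit); names are illustrative and a small guard is added against vanishing curvature.

import numpy as np

def quasi_invariant_arclength(points):
    # points: array of shape (N, 2) sampling the curve in order
    x, y = points[:, 0], points[:, 1]
    dx, dy = np.gradient(x), np.gradient(y)        # first derivatives
    ddx, ddy = np.gradient(dx), np.gradient(dy)    # second derivatives
    dv = np.sqrt(dx**2 + dy**2)                    # Euclidean arc-length element
    kappa = np.abs(dx * ddy - dy * ddx) / np.maximum(dv, 1e-12)**3
    dtau = kappa**0.4 * dv                         # d(tau) = kappa^(2/5) dv, as in (29)
    # cumulative quasi-invariant arc-length (trapezoidal accumulation)
    return np.concatenate(([0.0], np.cumsum(0.5 * (dtau[1:] + dtau[:-1]))))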
Fig. 6. Quasi-invariance of an artificial curve. The quasi-invariance, Q_a, of an example curve is computed varying the powers, α and β, in the parameter dτ = y_xx^α (1 + y_x²)^β dx. As we can see, Q_a takes a minimum at α = 2/5 and β = −1/10. This agrees with (28).
7. Quasi Affine Integral Invariants
In this section, we apply the extracted affine quasi-invariant
parameterisation to the semi-local integral
invariants described in section 2, and derive
quasi integral invariants under general affine
transformations.
Since dτ defined in (29) is quasi-invariant, we can derive quasi integral invariants under general affine transformations by substituting τ in (29) into w in (1). For a suitable choice of F, as in (2), we obtain the following quasi semi-local integral invariant:

I(τ_1) = |[ C(τ_1 + Δτ) − C(τ_1), C(τ_1 − Δτ) − C(τ_1) ]|    (30)

which is actually the area made by the two vectors C(τ_1 + Δτ) − C(τ_1) and C(τ_1 − Δτ) − C(τ_1). The points C(τ_1 + Δτ) and C(τ_1 − Δτ) are identified by computing the affine quasi-invariant arc-length, ∫ dτ. (30) is a relative quasi-invariant of weight one under general affine transformations, that is, it is multiplied by the determinant of the linear part, A, of the transformation. Since I(τ_1)
can be computed from just second order
derivatives, the derived invariants are much less
sensitive to noise than differential invariants (i.e.
affine curvature). This is shown in the experiment
in section 9.2.
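Combining the previous sketch with (30), a signature can be computed by locating the points at τ_1 ± Δτ by interpolation. The following is one possible discretisation (illustrative names; it assumes τ is strictly increasing along the curve):

import numpy as np

def point_at(tau, points, t):
    # interpolate the curve position at quasi-invariant arc-length t
    x = np.interp(t, tau, points[:, 0])
    y = np.interp(t, tau, points[:, 1])
    return np.array([x, y])

def quasi_integral_signature(points, tau, delta_tau):
    sig = np.full(len(points), np.nan)
    for i, t1 in enumerate(tau):
        if t1 - delta_tau < tau[0] or t1 + delta_tau > tau[-1]:
            continue                                # interval leaves the curve
        a = point_at(tau, points, t1 + delta_tau) - points[i]
        b = point_at(tau, points, t1 - delta_tau) - points[i]
        sig[i] = a[0] * b[1] - a[1] * b[0]          # determinant of the two chords
    return sig                                      # plotted against tau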
8. Validity of Quasi-Invariant Parameter-
isation
Up to now we have shown that there exists a quasi-invariant
parameter under general affine transfor-
mations, namely τ. In this section we investigate
the systematic error of the quasi-invariant
parameterisation, that is the difference from the
exact invariance, and show under how wide range
of transformations the quasi-parameterisation is
valid. As we have seen in (29), the new parameterisation
is an exact invariant under rotational
motion. We thus investigate the systematic error
Fig. 7. Valid area of the affine quasi-invariant parame-
terisation. The tilt and the slant motion of a surface is
represented by a point on the sphere which is pointed by
the normal to the oriented disk. The motion allowed for
the affine quasi-invariant parameterisation is shown by the
shaded area on the sphere, which is approximately less than 35° in slant, and there is no preference in tilt angle.
[Figure 8 panels: (a) original curve; (b) distorted curves; (c) invariant signature (invariants plotted against affine quasi-invariant arc-length).]
Fig. 8. Systematic error in invariant signatures. The original curve on a fronto-parallel planar surface shown in (a) is
distorted by the slant motion of the surface with tilt of 60 degrees (dashed lines) and slant of 30, 40, 50, and 60 degrees as
shown in (b). The distortions of the curve caused by this slant motion can be modeled by general affine transformations.
(c) shows invariant signatures made of quasi semi-local integral invariants (30) extracted from the curves in (b). If the slant
motion of the plane is 40 degrees or more, the invariant signature suffers from large systematic error, while if the slant is
less than 40 degrees, the proposed quasi-invariants are useful.
caused by the remaining components of the affine
transformation, that is the divergence and the deformation
components.
From (17), the systematic error of dτ normalised by dτ itself is computed to the first order:

e = Σ_{i=1}^{4} a_i [ v_i^(2)(g̃)/g̃ + D_x ξ_i ]    (31)

where a_1, a_2, a_3 and a_4 are the magnitudes of the divergence, curl, and deformation vector fields. Substituting (22) and (27) into (31), and since the curl component of the vector field does not cause any systematic error, we have:

e = (3/5) a_1 − (1/5) [ (1 − y_x²)/(1 + y_x²) ] a_3 − (1/5) [ 2 y_x/(1 + y_x²) ] a_4    (32)

Since both (1 − y_x²)/(1 + y_x²) and 2 y_x/(1 + y_x²) in (32) vary only from −1 to 1, we have the following inequality:

|e| ≤ (3/5) |a_1| + (1/5) |a_3| + (1/5) |a_4|    (33)

Thus, if |a_1| ≤ 0.1, |a_3| ≤ 0.1 and |a_4| ≤ 0.1, then e ≤ 0.1, and the affine quasi-invariant parameterisation
is valid. Fig. 7 shows the valid area of
the affine quasi-invariant arc-length (parameteri-
represented by the tilt and slant angles
[Figure 9 panels: (a) integral invariants, noise std 0.1; (b) differential invariants, noise std 0.1; (c) integral invariants, noise std 0.5; (d) differential invariants, noise std 0.5; invariants plotted against arc-length.]
Fig. 9. Results of noise sensitivity analysis. The invariant signatures of an artificial curve are derived from the proposed
invariants (semi-local invariants based on affine quasi invariant parameterisation) and the affine differential invariants (affine
curvature), and are shown by thick lines in (a) and (b) respectively. The dots in (a) and (b) show signatures after adding
random Gaussian noise of std 0.1 pixels, and the dots in (c) and (d) show signatures after adding random Gaussian noise of
std 0.5 pixels. The thin lines show the uncertainty bounds of the signatures estimated by the linear perturbation method.
The signatures from the proposed method are much more stable than those of differential invariants.
which result in the systematic error, e, smaller
than 0:1. In Fig. 7, we find that if the slant motion
is smaller than 35 ffi , the systematic error is
approximately less than 0.1, and the affine quasi-invariant
parameterisation is valid.
9. Experiments
9.1. Systematic Error of Quasi Invariants
In this section, we present the results of systematic
error analysis of the quasi semi-local integral
invariants, that is the semi-local integral invariants
based on quasi-invariant parameterisation defined
in (30), and show how large distortion is allowed
for the quasi semi-local integral invariants.
Fig. 8 (a) shows an image of a fronto-parallel planar
surface with a curve. We slant the surface
with tilt angle of 60 degrees and slant angle of 30,
40, 50 and 60 degrees as shown in Fig. 8 (b), and
compute the invariant signatures of curves at each
slant angle. Fig. 8 (c) shows invariant signatures
of the curves computed from the quasi-invariant
arc-length and the semi-local integral invariant
(30). In this graph, we find that the invariant signature
is distorted more under large slant motion
as expected, and the proposed invariants are not
valid if the slant motion is more than 40 degrees.
[Figure 10 panels: (a) viewpoint 1; (b) viewpoint 2; (c) invariant signature from viewpoint 1; (d) invariant signature from viewpoint 2 (invariants vs. affine quasi-invariant arc-length).]
Fig. 10. Curve matching experiment. Images of natural leaves from the first and the second viewpoint are shown in (a)
and (b). The white lines in these images show extracted contour curves. The quasi-invariant arc-length and semi-local
integral invariants are computed from the curves in (a) and (b), and shown in (c) and (d) respectively. In this example, a fixed Δτ was chosen. It is clearly shown in these two signatures that the contour curve is partially occluded in (a).
9.2. Noise Sensitivity of Quasi Invariants
We next compare the noise sensitivity of the proposed
quasi semi-local integral invariants shown
in (30) and the traditional differential invariants,
i.e. affine curvature.
The invariant signatures of an artificial curve
have been computed from the proposed quasi-
invariants and the affine curvature, and are shown
by solid lines in Fig. 9 (a) and (b). The dots in (a)
and (b) show the invariant signatures after adding
random Gaussian noise of standard deviation of 0.1
pixels to the position data of the curve, and
the dots in (c) and (d) show those of standard deviation
of 0.5 pixels. As we can see in these signa-
tures, the proposed invariants are much less sensitive
to noise than the differential invariants. This
is simply because the proposed invariants require
only second order derivatives while differential in-
[Figure 11 panels: (a) invariant signatures; (b), (c) matched curves.]
Fig. 11. Results of curve matching experiment. The solid and dashed lines in (a) show the invariant signatures of the curves
shown in Fig. 10 (a) and (b), which are shifted horizontally minimising the total difference between the two signatures. (b)
and (c) show the corresponding curves extracted from the invariant signatures (a).
variants require fourth order derivatives. The thin
lines show the results of noise sensitivity analysis
derived by the linear perturbation method.
9.3. Curve Matching Experiments
Next we show preliminary results of curve matching
experiments under relative motion between
an observer and objects. The procedure of curve
matching is as follows (a short sketch of the matching step follows the list):
1. Cubic B-spline curves are fitted (Cham
and Cipolla, 1996) to the Canny edge
data (Canny, 1986) of each curve. This allows
us to extract derivatives up to second order.
2. The quasi affine arc-length and the quasi
affine semi-local integral invariants (30) with
an arbitrary but constant Δτ are computed
at all points along a curve, and subsequently
plotted as an invariant signature with quasi
affine arc-length along the horizontal axis and
the integral invariant along the vertical axis.
The derived curve on the graph is an invariant
signature up to a horizontal shift. We extract
the invariant signatures of both the original
and deformed curves.
3. To match curves we simply shift one invariant
signature horizontally minimising the total
difference between two signatures.
4. Corresponding points are derived by taking
identical points on these two signatures. Even
though a curve may be partially occluded
or partially asymmetric, the corresponding
points can be distinguished by the same procedure.
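The shift-matching of step 3 can be sketched as follows (illustrative names): both signatures are resampled on a common grid and the shift with the smallest mean absolute difference over the overlap is kept.

import numpy as np

def match_signatures(tau1, sig1, tau2, sig2, step, max_shift):
    shifts = np.arange(-max_shift, max_shift + step, step)
    best_shift, best_cost = 0.0, np.inf
    for s in shifts:
        lo = max(tau1[0], tau2[0] + s)
        hi = min(tau1[-1], tau2[-1] + s)
        if hi <= lo:
            continue                                    # no overlap for this shift
        grid = np.arange(lo, hi, step)
        a = np.interp(grid, tau1, sig1)
        b = np.interp(grid, tau2 + s, sig2)
        cost = np.mean(np.abs(a - b))
        if cost < best_cost:
            best_shift, best_cost = s, cost
    return best_shift, best_cost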
Fig. 10 (a) and (b) show the images of natural
leaves taken from two different viewpoints. The
white lines in these images show example contour
[Figure 12 panels: (a), (b), (c) invariant signatures (invariants vs. affine quasi-invariant arc-length).]
Fig. 12. Comparison of signatures. The invariant signatures in (a), (b) and (c) are computed from Fig. 10 (a) and (b) using three different values of Δτ.
curves extracted from B-spline fitting. As we can
see in these curves, because of the viewer motion,
the curves are distorted and occluded partially.
Since the leaf is nearly flat and the extent of the
leaf is much less than the distance from the camera
to the leaf, we can assume that the corresponding
curves are related by a general affine transformation
The computed invariant signatures of the original
and the distorted curves are shown in Fig. 10
(c) and (d) respectively. One of these two signatures
was shifted horizontally minimising the total
difference between these two signatures (see
Fig. 11 (a)). The corresponding points on the
contour curves were extracted by taking identical
points in these two signatures, and are shown
in Fig. 11 (b) and (c). Note that the extracted
corresponding curves are fairly accurate. In this
experiment, we have chosen a fixed Δτ for computing the invariant signatures. For readers' reference, we compare in Fig. 12 the invariant signatures computed with three different values of Δτ.
9.4. Extracting Symmetry Axes
We next apply the quasi integral invariants for
extracting the symmetry axes of three dimensional
objects. Extracting symmetry (Brady and
Asada, 1984; Friedberg, 1986; Giblin and Bras-
sett, 1985; Gross and Boult, 1994; Van Gool et al.,
1995a) of objects in images is very important for
recognising objects (Mohan and Nevatia, 1992;
Van Gool et al., 1995b), focusing attention (Re-
isfeld et al., 1995) and controlling robots (Blake,
[Figure 13 diagram: (a) bilateral symmetry with rotation about the axis L; (b) tangent-line construction.]
Fig. 13. Bilateral symmetry with rotation. The left and the right parts of an object with bilateral symmetry are rotated with respect to the symmetry axis, L, in (a). The intersection point, O_1, of the two tangent lines, l_1 and l̃_1, at corresponding points, P_1 and P̃_1, of a bilateral symmetry with rotation lies on the symmetry axis, L, in (b). If we have N cross points, O_1, ..., O_N, the symmetry axis can be computed by fitting a line to these cross points.
1995) reliably. It is well known that the corresponding
contour curves of a planar bilateral symmetry
can be described by special affine transformations
(Kanade and Kender, 1983; Van Gool
et al., 1995a). In this section, we consider a class
of symmetry which is described by a general affine
transformation.
Consider a planar object to have bilateral symmetry
with an axis, L. Suppose the planar object
can be separated into two planes at the axis, L,
and is connected by a hinge so that two planes can
rotate around this axis, L, as shown in Fig. 13
(a). The objects derived by rotating these two
planes have a 3D bilateral symmetry. This class
of symmetry is also common in artificial and natural
objects such as butterflies and other flying in-
sects. Since the distortion in images caused by a
three dimensional motion of a planar object can be
described by a general affine transformation, this
class of symmetry can also be described by general
affine transformations under the weak perspective
assumption. Thus, the corresponding two curves
of this symmetry have the same invariant signatures
under general affine transformations. We
must note the following properties:
1. The skewed symmetry proposed by Kanade
(Kanade, 1981; Kanade and Kender, 1983) is
a special case of this class of symmetry, where
the rotational angle is equal to zero and the
distortion can be described by a special affine
transformation with determinant of 01.
2. Unlike the skewed symmetry of planar ob-
jects, 3D bilateral symmetry takes both negative
and positive determinant in its affine ma-
trix. The positive means that the two planes
are on the same side of projected symmetry
axis, and the negative means that the planes
are on opposite sides of the symmetry axis in
the image. The extracted invariant signatures
of corresponding curves of 3D bilateral symmetry
are therefore either the same (i.e. positive
determinant) or reflections of each other
(i.e. negative determinant).
3. Unlike the skewed symmetry of planar ob-
jects, the symmetry axis of 3D bilateral symmetry
is no longer on the bisecting line of
corresponding symmetric curves. Instead, the
cross points of the tangent lines at corresponding
points on the symmetric curves lie on the
symmetry axis as shown in Fig. 13 (b). Thus
the symmetry axis can be extracted by computing
a line which best fits to the intersection
points of corresponding tangent lines, as in the sketch below.
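For illustration, this line-fitting step can be sketched as follows (this is not the authors' implementation; the helper names line_intersection and fit_symmetry_axis are hypothetical, and the axis is fitted to the cross points by a total-least-squares line fit):

```python
import numpy as np

def line_intersection(p1, d1, p2, d2):
    """Intersect the 2D lines p1 + s*d1 and p2 + t*d2; return None if (near) parallel."""
    A = np.array([[d1[0], -d2[0]], [d1[1], -d2[1]]], dtype=float)
    b = np.array([p2[0] - p1[0], p2[1] - p1[1]], dtype=float)
    if abs(np.linalg.det(A)) < 1e-9:
        return None
    s, _ = np.linalg.solve(A, b)
    return np.asarray(p1, dtype=float) + s * np.asarray(d1, dtype=float)

def fit_symmetry_axis(points, tangents, points_mirror, tangents_mirror):
    """Fit the symmetry axis as the total-least-squares line through the
    intersection points of corresponding tangent lines."""
    crossings = []
    for p, d, q, t in zip(points, tangents, points_mirror, tangents_mirror):
        x = line_intersection(p, d, q, t)
        if x is not None:
            crossings.append(x)
    X = np.array(crossings)
    centre = X.mean(axis=0)
    _, _, vt = np.linalg.svd(X - centre)   # principal direction of the cross points
    return centre, vt[0]                   # a point on the axis and its direction
```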
We next show the results of extracting symmetry
axes of 3D bilateral symmetry. Fig. 14
(a) shows an image of a butterfly (Small White)
with a flower. Since the two wings of the but-
(Figure 14: panels (a)-(d); the signature plots in (c) and (d) show the invariants against the affine quasi-invariant arc-length.)
Fig. 14. Extraction of axis of bilateral symmetry with rotation. (a) shows the original image of a butterfly (Small White),
perched on a flower. (b) shows an example of contour curves extracted by fitting B-spline curves (Cham and Cipolla, 1996)
to the edge data (Canny, 1986). The invariant signatures of these curves are computed from the quasi-invariant arc-length
and semi-local integral invariants. (c) and (d) are the extracted signatures of the left and the right curves in (b). In this
example, we chose to compute semi-local integral invariants.
terfly are not coplanar, the corresponding contour
curves of the two wings are related by a general
affine transformation as described above. Fig. 14
(b) shows example contour curves extracted from
(a). Note that not all the points on the curves
have correspondences because of the lack of edge
data and the presence of spurious edges. Fig. 14
(c) and (d) shows the invariant signatures computed
from the left and the right wings shown in
Fig. 14 (b) respectively. (In this example, we chose
to compute semi-local integral
invariants.) Since the signatures are invariant up to a
shift, we have simply reflected and shifted one invariant
signature horizontally, minimising the total
difference between the two signatures (see Fig. 15 (a)).
As shown in these signatures, semi-local invariants
based on quasi-invariant parameterisation are
(Figure 15: panels (a) invariant signatures, (b) tangent lines, (c) symmetry axis; the signature plots show the invariants against the affine quasi-invariant arc-length.)
Fig. 15. Results of extracting symmetry axis of 3D bilateral symmetry. The solid and dashed lines in (a) show the invariant
signatures of the curves shown in Fig. 14 (b), which are reflected and shifted horizontally minimising the total difference
between two signatures. The black lines in (b) connect pairs of corresponding points extracted from the invariant signatures
in (a). The white lines and the square dots show the tangent lines for the corresponding points and their cross points. The
white line in (c) shows the symmetry axis of the butterfly extracted by fitting a line to the cross points.
quite accurate and stable. Corresponding points
are derived by taking the identical points on these
two signatures, and shown in Fig. 15 (b) by connecting
the corresponding points. Tangent lines at
every corresponding pair of points are computed
and displayed in Fig. 15 (b) by white lines. The
cross points of every pair of tangent lines are extracted
and shown in Fig. 15 (b) by square dots.
The symmetry axis of the butterfly is extracted
by fitting a line to the cross points of tangent
lines and shown in Fig. 15 (c). Although the extracted
contour curves include asymmetric parts
as shown in Fig. 14 (b), the computed axis of
symmetry agrees with the body of the butterfly
quite well. In contrast, purely global methods, e.g.
moment-based methods (Friedberg, 1986; Gross
and Boult, 1994), would not work in such cases.
These results show the power and usefulness of the
proposed semi-local invariants and quasi-invariant
parameterisation.
10. Discussion
In this paper, we have shown that there exist
quasi-invariant parameterisations which are not
exactly invariant but approximately invariant under
group transformations and do not require high
order derivatives. The affine quasi-invariant parameterisation
is derived and applied for matching
of curves under the weak perspective assumption.
Although the range of transformations is lim-
ited, the proposed method is useful for many cases
especially for curve matching under relative motion
between a viewer and objects, since the movements
of a camera and objects are, in general,
limited. We now discuss the properties of the proposed
parameterisation.
1. Noise Sensitivity
Since quasi-invariant parameters enable us to
reduce the order of derivatives required, they
are much less sensitive to noise than exact
invariant parameters. Thus using the quasi-invariant
parameterisation is the same as finding
the best tradeoff between the systematic
error caused by the approximation and the error
caused by the noise. The derived parameters
are more feasible than traditional invariant
parameters.
2. Singularity
The general affine arc-length (Olver et al.,
1994) suffers from a singularity problem.
That is, the general affine arc-length goes to
infinity at inflection points of curves, while
the affine quasi-invariant parameterisation defined
in (29) does not. This allows the new parameterisation
to be more applicable in practice.
3. Limitation of the Amount of Motion
As we have seen in section 8, the proposed
quasi-invariant parameter assumes the group
motion to be limited to a small amount. In the
affine case, this limitation is about 0.1 for a 1 ,
a 3 and a 4 , the divergence and
the deformation components (there is no limitation
on the curl component, a 2 ). Since, in
many computer vision applications, the distortion
of the image is small due to the limited
speed of the relative motion between a
camera and the scene or the finite distance
between two cameras in a stereo system, we
believe the proposed parameterisation can be
exploited in many applications.
Appendix
A
In this section, we derive the affine quasi-invariant
arc-length, '
. As we have seen in (26), ffiQ a is
described as follows:
Z
@'g
@'g yx
@'g yxx
1'g yxx
dx
By computing the Lie derivatives of ' g with respect
to v (2)
3 and v (2)
4 in (22), we have:
v (2)
@'g
@y xx
v (2)
@y x
@'g
@y xx
v (2)
@'g
@y x
@'g
@y xx
v (2)
@y x
@'g
@y xx
Note, that @'g
@x and @'g
@y components vanish. This is
because ' g does not include x and y components,
and by definition the prolonged vector fields act
only on the corresponding components given explicitly
(e.g. @
@x acts only on x component and
does not act on y, y x or other components). Substituting
into (24), we find that the La-
grangian, L, is computed from:
@y x
04'gy xx (2+3y 2
@y xx
@y xx
@y x
@y x
@'g
@y xx
Consider a derivative:
d
dy x
@'g yx
1'g dx
dy x
dy x
@'g yx
1'g dx
dy x
@'g yx
1'g yx
dx
dy x
@'g yx
dy 2
x
By integrating both sides of (A4) with respect to
dy x , the second term of (A1) can be described by:
Z
@'g yx
@L
@'g yx
1'g dx
dy x
aZ
d
dy x
@'g yx
Z
@'g yx
1'gd dx
dy x
Similarly the third term of (A1) can be described
by:
Z
@'g yxx
@L
@'g yxx
1'g dx
dy xx
aZ
d
dy xx
@'g yxx
Z
@'g yxx
1'gd dx
dy xx
Thus, the variation, ffiQ a , is computed from (A5)
and (A6) as follows:
Z
where,
@L
@'g yx
1'g dx
dy x
a
@L
@'g yxx
1'g dx
dy xx
a
@'gd
dy x
@'g yx@L
@'g yx
d
dx
dx
dy xd
dy xx
@'g yxx@L
@'g yxx
d
dx
dx
dy xx
where, a and b are the limit of integration specified
by the curve, C. From (A3), the derivatives of L
in (A8) and (A9) can be computed by:
@'g
@y x
@y xx
024y x y xx (1+y 2
@y x
@'g
@y xx
x )'g @'g
@y x
x )'g @'g
@y xx
@'g yx
@y xx
@y x
@'g yxx
@y xx
x )'g
@y x
Since 1'g in (A7) must be able to take any value,
ffiQ a vanishes if and only if:
The question is what sort of function, ' g, makes
the condition (A13) hold. Here, we assume that ' g
takes the following form:
and investigate the unknown parameters ff and fi
for (A13) to hold for arbitrary curves. Substituting
into (A10), (A11), and (A12), we have:
@'g
@'g yx
@'g yxx
Since y x and y xx in (A15), (A16) and (A17) take
arbitrary values, the condition (A13) holds for arbitrary
curves if:
Thus, ff and fi must be:
ff =5
Substituting (A19) and (A20) into (A14), we find
that the following form for ' g gives the extremal to
Acknowledgements
The authors acknowledge the support of the EP-
SRC, grant GR/K84202.
Notes
1. The adjoint representation, ad(v_i), provides a
matrix representation of the algebra, whose (j, k) component
is described by a structure constant C^j_{ik}
(Sattinger and Weaver, 1986).
--R
Recognitive aspects of moment invariants.
General methods for determining projective invariants in imagery.
A symmetry theory of planar grasp.
Smoothed local symmetries and their implementation.
Invariant signatures for planar shape recognition under partial occlusion.
A computational approach to edge detection.
Automated B-spline curve representation with MDL-based active contours
Surface orientation and time to contact from image divergence and deformation.
In Sandini
An affine transformation invariant curvature function.
Finding axes of skewed symmetry.
Calculus of Variations.
Local symmetry of plane curves.
Analyzing skewed symmetries.
Differential Geometry.
Visual pattern recognition by moment invariants.
Lie algebras.
Recovery of the three-dimensional shape of an object from a single view
Mapping image properties into shape constraints: Skewed symmetry.
Geometry of binocular vision and a model for stereopsis.
Gesammelte Abhandlungen
Perceptual organization for scene segmentation and description.
Pattern Analysis and Machine Intelligence
Foundations of semi-differential invariants
Geometric Invariance in Computer Vision.
Applications of Lie Groups to Differential Equations.
Differential invariant signatures and flows in computer vision.
Recognition of planar shapes under affine distortion.
Recognizing planar objects using invariant image features.
Planar object recognition using projective shape representation.
Affine integral invariants and matching of curves.
Affine integral invariants for extracting symmetry axes.
Lie groups and algebras with applications to physics
Object recognition based on moment (or algebraic) invariants.
The characterization and detection of skewed symmetry.
International Journal of Robotics Research
In
Projective invariants of shapes.
Geometric invariants and object recognition.
Recognizing general curved objects efficiently.
--TR
--CTR
Tat-Jen Cham , Roberto Cipolla, Automated B-Spline Curve Representation Incorporating MDL and Error-Minimizing Control Point Insertion Strategies, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.1, p.49-53, January 1999 | bilateral symmetry;integral invariants;semi-local invariants;differential invariants;curve matching;quasi-invariant parameterisations |
293932 | Rational Filters for Passive Depth from Defocus. | A fundamental problem in depth from defocus is the measurement of relative defocus between images. The performance of previously proposed focus operators is inevitably sensitive to the frequency spectra of local scene textures. As a result, focus operators such as the Laplacian of Gaussian result in poor depth estimates. An alternative is to use large filter banks that densely sample the frequency space. Though this approach can result in better depth accuracy, it sacrifices the computational efficiency that depth from defocus offers over stereo and structure from motion. We propose a class of broadband operators that, when used together, provide invariance to scene texture and produce accurate and dense depth maps. Since the operators are broadband, a small number of them are sufficient for depth estimation of scenes with complex textural properties. In addition, a depth confidence measure is derived that can be computed from the outputs of the operators. This confidence measure permits further refinement of computed depth maps. Experiments are conducted on both synthetic and real scenes to evaluate the performance of the proposed operators. The depth detection gain error is less than irrespective of texture frequency. Depth accuracy is found to be 0.5-1.2% of the distance of the object from the imaging optics. | Introduction
A pertinent problem in computational vision is the recovery of three-dimensional scene
structure from two-dimensional images. Of all problems studied in vision, the above
has, by far, attracted the most attention. This has resulted in a variety of sensors and
algorithms [Jarvis-1983, Besl-1988] that can be broadly classified into two categories:
active and passive. Active techniques produce relatively reliable depth maps, and have
been applied to many industrial applications. However, when the environment cannot
be controlled, as in the case of distant objects in outdoor scenes, active methods prove
impractical. As a consequence, passive techniques are always desirable.
Passive sensing methods, such as stereo and structure from motion, rely on algorithms
that establish local correspondences between two or more images. From the
resulting disparity estimates or motion vectors, the depths of points in the scene are
computed. The process of determining correspondence is widely acknowledged as being
computationally expensive. In addition, the above techniques suffer from the occlusion
or missing part problems; it is not possible to compute depths of scene points that are
visible in only one of the images. Alternative passive techniques are based on focus anal-
ysis. Depth from focus uses a sequence of images taken by changing the focus setting
of the imaging optics in small steps. For each pixel, the focus setting that maximizes
image contrast is determined. This, in turn, can be used to compute the depth of the corresponding
scene point [Horn-1968, Jarvis-1983, Krotkov-1987, Darrell and Wohn-1988,
Nayar and Nakagawa-1994] .
In contrast, depth from defocus uses only two images with different optical settings
[Pentland-1987, Subbarao-1988, Ens and Lawrence-1991, Bove, Jr.-1993, Subbarao
and Surya-1994, Nayar et al.-1995, Xiong and Shafer-1995]. The relative defocus in the
two images can, in principle, be used to determine three-dimensional structure. The
focus level in the two images can be varied by changing the focus setting of the lens, by
moving the image sensor with respect to the lens, or by changing the aperture size. Depth
from defocus is not confronted with the abovementioned missing part and correspondence
problems. This makes it an attractive prospect for structure estimation.
Despite these merits, at this point in time, fast, accurate, and dense depth from
defocus has only been demonstrated using active illumination that constrains the dominant
frequencies of the scene texture [Nayar et al.-1995, Watanabe et al.-1995] . Past
investigations of passive depth from defocus indicate that it can prove computationally
expensive to obtain a reliable depth map. This is because the frequency characteristics of
scene textures are, to a large extent, unpredictable. Furthermore, the texture itself can
vary dramatically over the image. Since the response of the defocus (blur) function varies
with texture frequency, a single broadband filter that produces an aggregate estimate of
defocus for an unknown texture cannot lead to accurate depth estimates. The obvious
solution is to use an enormous bank of narrow-band filters and compute depth in a
least-squares sense using all dominant frequencies of the texture [Xiong and Shafer-1995,
Gokstorp-1994]. This requires one to forego computational efficiency. To worsen
matters, a depth map of high spatial resolution can be obtained only if all the filters in the
bank have small kernel sizes. The uncertainty relation [Bracewell-1965] tells us that the
frequency resolution of the filter bank reduces proportional to the inverse of the kernel
size used. In short, one cannot design a filter with narrow enough response if the support
area of the filter kernel is small.
Xiong and Shafer [Xiong and Shafer-1995] proposed an attractive way to cope with
this problem. They used moment filters to compensate for the frequency spectrum of
the texture within the passband of each of the narrowband filters. This approach results
in accurate depth estimates but requires the use of four additional filters for each of the
tuned filters in the filter bank. This translates to five times as many convolutions as is
needed for any typical filter bank method. Xiong and Shafer [Xiong and Shafer-1995] use
convolutions in total, which makes their approach computationally expensive.
Ens and Lawrence [Ens and Lawrence-1991] have proposed a method based on a
spatial-domain analysis of two blurred images. They estimate the convolution matrix,
which is convolved with one of the two images to produce the other image. The matrix
corresponds to the relative blur between the two images. Once the matrix is computed,
it can be mapped to depth estimates. This method produces accurate depth maps. How-
ever, the iterative nature of the convolution matrix estimation makes it computationally
expensive.
Subbarao and Surya [Subbarao and Surya-1994] proposed the S-Transform and
applied it to depth from defocus. They modeled the image as a third-order polynomial
in spatial domain, and arrived at a simple and elegant expression [Subbarao and Surya-
1994]:
where i 1 and i 2 are the far and near focused images, respectively. The blur circle diameters
in images i 1 and i 2 are expressed by their second central moments, σ2² and σ1²,
respectively. Since an additional relation between σ2 and σ1 can be obtained from the
focus settings used for the two images, oe 2 and oe 1 can be solved for and mapped to a
depth estimate. As we see no terms that depend on scene frequency in equation (1), this
can be considered to be a sort of texture-frequency invariant depth from defocus method.
It produces reasonable depth estimates for large planar surfaces in the scene. However, it
does not yield depth maps with high spatial resolution that are needed when depth variations
in the scene are significant. We argue that this requires a more detailed analysis
of image formation as well as the design of novel filters based on frequency analysis.
In this paper, we propose a small set of filters, or operators, for passive depth
from defocus. These operators, when used in conjunction, yield invariance to texture
frequency while computing depth. The underlying idea is to precisely model relative
image blur in frequency domain and express this model as a rational function of two
linear combinations of basis functions. This rational expression leads us to a texture-
invariant set of operators. The outputs of the operators are used as coefficients in a depth
recovery equation that is solved to get a depth estimate. The attractive feature of this
approach is that it uses only a small number of broadband linear operators with small
kernel supports. Consequently, depth maps are computed not only with high efficiency
and accuracy but also with high spatial resolution. Since our operators are derived using a
rational expression to model relative image blur, they are referred to as rational operators.
Rational operators are general, in that, they can be derived for any blur model.
The paper is structured as follows. First, the concept of a texture invariant operator
set is described. Next, all the operations needed for depth from defocus are discussed,
including the use of prefiltering and coefficient smoothing. An efficient algorithm for obtaining
a confidence measure from the operator outputs is outlined. These confidence
measures are effectively used for further refinement of computed depth maps. In our specific
implementation of rational operators, we have used three basis functions to model
the relative blurring function. This has resulted in a set of three rational operators with
kernel sizes of 7\Theta7. This operator set has been used to compute depth maps for both
synthetic scenes and real scenes. The experimental results are analyzed to quantify the
performance of the proposed depth from defocus approach.
Depth From Defocus
2.1 Principle
Fundamental to depth from defocus is the relationship between focused and defocused
images[Born and Wolf-1965] . Figure 1 shows the basic image formation geometry. All
light rays that are radiated by object point P and pass the aperture A are refracted by
the lens to converge at point Q on the image plane. The relationship between the object
distance d, focal length of the lens f, and the image distance d_i is given by the lens law:
1/d + 1/d_i = 1/f.        (2)
Each point on the object plane is projected onto a single point on the image plane,
causing a clear or focused image i f to be formed. If, however, the sensor plane does not
coincide with the image plane and is displaced from it, the energy received from P by
the lens is distributed over a patch on the sensor plane. The result is a blurred image of P.
It is clear that a single image does not include sufficient information for depth
estimation, as two different scenes defocused to different degrees could produce identical
images. A solution to the depth estimation problem is achieved by using two images,
separated by a known physical distance 2e [Ens and Lawrence-1991, Subbarao and
Surya-1994]. The distance fl of the image i 1 from the lens should also be known. Given
the above described setting, the problem is reduced to analyzing the relative blurring of
each scene point in the two images and computing the position of its focused image. A
restriction here is that the images of all of the scene points must lie between the far-
focused sensor plane i 1 and the near-focused sensor plane i 2 . For ease of description,
we introduce the normalized depth α, which equals −1 at i 1 and 1 at i 2 . Then, using
d_i = γ + (1 + α)e in the lens law (2), we obtain the depth d of the scene point.
Figure
1: Image formation and depth from defocus. The two images, i 1 and i 2 , include all
the information required to recover scene structure between the focused planes in the scene
corresponding to the two images.
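A minimal sketch of the resulting depth mapping, assuming the reconstruction d_i = γ + (1 + α)e used above; the numerical values in the example call are illustrative only and are not taken from the paper:

```python
def depth_from_alpha(alpha, gamma, e, f):
    """Map normalized depth alpha in [-1, 1] to metric depth d via the lens law
    1/d + 1/d_i = 1/f, assuming d_i = gamma + (1 + alpha) * e."""
    d_i = gamma + (1.0 + alpha) * e
    return 1.0 / (1.0 / f - 1.0 / d_i)

# illustrative values: f = 35 mm, gamma = 35.3 mm, e = 0.15 mm
print(depth_from_alpha(0.0, gamma=35.3, e=0.15, f=35.0))
```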
2.2 Defocus Function
Precise modeling of the defocus function is critical to accurate depth estimation. The
defocus function is described in detail in previous works [Born and Wolf-1965, Horn-
1986]. In
Figure
ff)e is the distance between the focused image of a scene point
and its defocused image formed on the sensor plane. The light energy radiated by the
scene point and collected by the imaging optics is uniformly distributed on the sensor
plane over a circular patch with a radius of (1 \Sigma ff)e a=d i 1 . This distribution, also called
the pillbox, is the defocus function:
is used for image used for image i 2 , and \Pi(r) is the rectangular
function which takes the value 1 for jrj ! 1and 0 otherwise. F e is the effective F-number
of the optics. In the optical system shown in Figure 1, F e equals d i =2a. In order to
eliminate magnification differences between the near and far focused images, we have
used telecentric optics, which is described in Appendix 7.1.1 and detailed in [Watanabe
and Nayar-1995b]. In the telecentric case, F e equals f=2a 0 .
In Fourier domain, the defocus function in (3) is:
H_{1,2}(u, v) = 2 J_1(2π R f_r) / (2π R f_r),   with R = (1 ± α)e/(2F_e) and f_r = √(u² + v²),        (4)
where J 1 is the first-order Bessel function of the first kind, and u and v denote spatial
frequency parameters in the x and y directions, respectively 2 .
above expression, defocus serves as a low-pass filter. The bandwidth of the filter decreases
as the radius of the blur circle increases, i.e. as the plane of focus gets farther from the
sensor plane. Figure 2 illustrates this effect. Figure 2(a) shows the image i f (x; y) formed
at the focused plane and its Fourier spectrum I f (u; v). When the sensor plane is displaced
by a distance (1 \Gamma ff)e, the defocused image i 2 is the convolution of the focused image
with the pillbox h 2 (x; y), as shown in Figure 2(b). The effect of defocus in spatial
This geometric model is valid as far as the image is not exactly focused, in which case, a wave
optics model is needed to describe the point spread function. Further, it is assumed that lens induced
aberrations are small compared to the radius of the blur circle [ Born and Wolf-1965 ] .
2 In the past, most investigators have used the Gaussian model instead of the pillbox model for the blur
function. This is mainly to facilitate mathematical manipulations; the Fourier transform of a Gaussian
function is also a Gaussian which can be converted into a quadratic function by using the logarithm. As
we will see, in our approach to depth from defocus, any form of blur function can be used.
and frequency domains can be written as:
i_2(x, y; α) = h_2(x, y; α) ∗ i_f(x, y),   I_2(u, v; α) = H_2(u, v; α) I_f(u, v).        (5)
Since ff can vary from point to point in the image, strictly speaking, we have a space-variant
system that cannot be expressed as a convolution. Therefore, equation (5) does
not hold in a rigorous sense. However, if we assume that ff is constant in a small patch
around each pixel, equation (5) remains valid within the small patch. Hereon, when we
use the terms Fourier transform or spectrum, they are assumed to be those of a small
image patch. For the assumption that ff variation in a patch is small to be valid, the
patch itself must be small. In practice, to realize this requirement, one is forced to
use broadband filters; the kernel size of a linear filter is inversely proportional to the
bandwidth of the filter.
Figure
2(c) is similar to (b) except that the sensor lies at the distance (1
from the focused plane to produce the defocused image i 1 . Again:
i_1(x, y; α) = h_1(x, y; α) ∗ i_f(x, y),   I_1(u, v; α) = H_1(u, v; α) I_f(u, v).        (6)
Note that in the spectrum plots we have used the polar coordinates (f r ; f ' ) for spatial
frequencies, rather than Cartesian coordinates (u; v). This is because the defocus function
is usually rotationally symmetric. This symmetry allows us to express the defocus
spectrum using a single parameter, namely, the radial frequency f
We see in
Figure
2 that, since the image in (c) is defocused more than the one in (b), the low-pass
response of H 1 (u; v) is greater than that of H 2 (u; v).
2.3 Depth from Two Images
We now introduce the normalized ratio, M/P, where M(u, v) ≡ I_1(u, v) − I_2(u, v)
and P(u, v) ≡ I_1(u, v) + I_2(u, v). Equivalently, in the spatial domain, we have
m(x, y) = i_1(x, y) − i_2(x, y) and p(x, y) = i_1(x, y) + i_2(x, y). Since the spectrum I_f(u, v)
of the focused image, which appears in equations (5) and (6), gets cancelled, the above
normalized ratio is simply:
M/P (u, v; α) = ( H_1(u, v; α) − H_2(u, v; α) ) / ( H_1(u, v; α) + H_2(u, v; α) ).        (7)
(Figure 2 annotations: the pillbox h 2 has radius (1 − α)e/(2F_e) and its spectrum's first zero lies at 1.22F_e/((1 − α)e); the pillbox h 1 has radius (1 + α)e/(2F_e) and its spectrum's first zero lies at 1.22F_e/((1 + α)e).)
Figure
2: The effect of blurring on the near and far focused images. (a) Focused image i f
and its Fourier spectrum. (b) Pillbox defocus model h 2 and the Fourier spectrum I 2 of the
blurred image. (c) Pillbox defocus model h 1 and the Fourier spectrum I 1 of the image for
larger blurring. f
is the radial frequency.
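The following numerical sketch evaluates the pillbox OTF of equation (4) and the normalized ratio M/P of equation (7) as reconstructed above; scipy's j1 is the first-order Bessel function of the first kind, and the value e/F_e = 2 pixels is an illustrative assumption rather than the setting used in the paper:

```python
import numpy as np
from scipy.special import j1

def pillbox_otf(f_r, blur_radius):
    """OTF of a pillbox of radius R: H = 2 J1(2 pi R f_r) / (2 pi R f_r)."""
    x = 2.0 * np.pi * blur_radius * np.asarray(f_r, dtype=float)
    out = np.ones_like(x)
    nz = x != 0
    out[nz] = 2.0 * j1(x[nz]) / x[nz]
    return out

def normalized_ratio(alpha, f_r, e, F_e):
    """M/P = (H1 - H2) / (H1 + H2) for the far and near focused images."""
    H1 = pillbox_otf(f_r, (1.0 + alpha) * e / (2.0 * F_e))
    H2 = pillbox_otf(f_r, (1.0 - alpha) * e / (2.0 * F_e))
    return (H1 - H2) / (H1 + H2)

# sweep alpha for a few radial frequencies (cycles/pixel); e/F_e = 2 pixels assumed
alphas = np.linspace(-1.0, 1.0, 101)
for fr in (0.05, 0.15, 0.25):
    ratio = normalized_ratio(alphas, fr, e=1.0, F_e=0.5)
    print(fr, float(ratio[0]), float(ratio[-1]))
```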
Figure
3 shows the relationship between the normalized image ratio M=P and the
normalized depth α for several spatial frequencies. It is seen that M/P is a monotonic
function of α as long as the radial frequency f_r is not too
large. As a rule of thumb, this frequency range equals the width of the main lobe of
the defocus function H when it is maximally defocused, i.e. when the distance between
the focused image i f and the sensor plane is 2e. From the zero-crossing of the defocus
function in Figure 2, the highest frequency below which the normalized image ratio M/P
is monotonic is found to be:
f_r < 1.22 F_e / (2e) = 0.61 F_e / e.        (8)
For any given frequency within the above bound, since M/P is a monotonic function of
ff, M/P can be unambiguously mapped to a depth estimate fi, as shown in Figure 3.
Figure
3: Relation between the normalized image ratio M=P and the defocus parameter ff. An
upper frequency bound can be determined, below which, M=P is a monotonic function of the
defocus parameter ff. For any given frequency within this bound, M=P can be unambiguously
mapped to a depth estimate fi.
Besides serving a critical role in our development, Figure 3 also gives us a new way of
viewing previous approaches to depth from defocus: If one can by some method determine
the amplitudes, I 1 and I 2 , of the spectra of the two defocused images at a predefined
radial frequency f_r , a unique depth estimate can be obtained. This is the
basic idea that most of the previous work is based on [Pentland-1987, Gokstorp-1994,
Xiong and Shafer-1995], although the ratio used in the past is simply I 1 =I 2 rather than
the normalized ratio M=P introduced here.
Magnitudes of the two image spectra, at a predefined frequency, can be determined
using linear operators (convolution). However, this is not a trivial problem. The image
texture is unknown and can include unpredictable dominant frequencies and hence it
is not possible to fix a priori the frequency of interest. This problem may be resolved
by using a large bank of narrowband filters that densely samples the frequency space
to estimate powers at a large number of individual frequencies. However, important
trade-offs emerge while implementing narrowband linear operators [Gokstorp-1994, Xiong
and Shafer-1995]. First, such an approach is clearly inefficient from a computational
perspective. Furthermore, the uncertainty relation [Bracewell-1965] tells us that, when
we apply frequency analysis to a small image area, the frequency resolution reduces
proportional to the inverse of the area used. To obtain a dense depth map, one must
estimate H 1 I and H 2 I using a very small area around each pixel. A narrow filter in
spatial domain corresponds to a broadband filter in frequency domain. As a result, any
operator output is inevitably an average of the local image spectrum over a band of
frequencies. Since the response of the defocus function H depends on the local depth ff,
and is not uniform within the pass-band of the operator, the output of the operator is,
at best, an approximate focus measure and can result in large errors in depth.
Given that all linear operators, however carefully designed, end up having a pass-
band, it would be desirable to have a set of broadband operators that together provide
focus measures that are invariant to texture. Further, if the operators are broadband,
a small number of them could cover the entire frequency space and avoid the use of an
extensive filter bank. The result would be efficient, robust, and high-resolution depth
estimation. In the next section, we describe a method to accomplish this.
3 Rational Operator Set
3.1 Modeling Relative Defocus using a Rational Expression
We have established the monotonic response of the normalized image ratio M=P to the
normalized depth (or defocus) ff over all frequencies (see equation (7) and Figure 3).
Our objective here is to model this relation in closed form. In doing so, we would like
the model to be precise and yet lead us to a small number of linear operators for depth
recovery. To this end, we model the function M=P by a rational expression of two linear
combinations of basis functions:
M/P (u, v; α) = [ Σ_i G_Pi(u, v) b_Pi(α) ] / [ Σ_i G_Mi(u, v) b_Mi(α) ] + ε(u, v; α),        (9)
where b_Pi(α) and b_Mi(α) are the basis functions, G_Pi(u, v) and G_Mi(u, v)
are the coefficients which are functions of frequency (u, v), and ε(u, v; α) is the
residual error of the fit of the model to the function M=P . If the model is accurate,
the residual error is negligible, and it becomes possible to use the model to map the
normalized image ratio M=P to the normalized depth ff. The above expression can be
rewritten as:
M/P (u, v; α) = [ Σ_i G_Pi(u, v) b_Pi(β) ] / [ Σ_i G_Mi(u, v) b_Mi(β) ].        (10)
Here, α on the left hand side represents the actual depth of the scene point while β on
the right is the estimated depth. A difference between the two can arise only when the
residual error is non-zero. If the normalized ratio on the left side is given to us for any
frequency (u; v), we can obtain the depth estimate fi by solving equation (10).
The above model for the normalized image ratio is general. In principle, any basis
that captures the monotonicity and structure of the normalized ratio can be used. To
be specific in our discussion, we use the basis we have chosen in our implementation.
Since the response of M=P to ff is odd-symmetric and is almost linear for small radial
frequencies f r (see Figure 3), we could model the response using three basis functions
that are powers of β:
b_M1(β) = 1,   b_P1(β) = β,   b_P2(β) = β³.        (11)
Then, equation (10) becomes 3 :
M/P (u, v) = [ G_P1(u, v) β + G_P2(u, v) β³ ] / G_M1(u, v).        (12)
The term including fi 3 can be seen as a small correction that compensates for the discrepancy
of M=P from a linear model. From the previous section, we know that the
blurring model completely determines M=P for any given depth ff and frequency (u; v).
3 We found that replacing b P2 (ff) by
a tanh aff) gives us a slightly better fit when the defocus
model is the pillbox function. Yet, to reduce the computational cost of solving equation (10) for depth
fi, we have chosen this simple polynomial model.
The above polynomial model, R(fi; u; v), can therefore be fit to the theoretical M=P in
equation (7) by assuming fi to be ff. This gives us the unknown ratios G P1 =GM1 and
G P2 =GM1 as functions of frequency (u; v). In the case of a rotationally symmetric blurring
model, such as the pillbox function, these ratios reduce to functions of just the radial
frequency f r .
Now, if we fix any one of the coefficient functions, say, G P1 (u; v), all the other
coefficients can be determined from the ratios 4 . Therefore, it is possible to determine
all the coefficient functions that ensure that the above polynomial model accurately fits
the normalized image ratio M=P given by equation (7). Figure 4 shows an example set
(based on an arbitrary selection of G P1 (u; v)) of the coefficient functions, G P1 , G P2 and
GM1 , for the case of the pillbox blur model.
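For illustration, such ratio functions can be obtained numerically by fitting the model of equation (12) to the theoretical M/P at each radial frequency in a least-squares sense; the sketch below reuses the normalized_ratio helper from the sketch in Section 2.3 and is only one possible fitting procedure, not the one used to produce Figure 4:

```python
import numpy as np

def fit_ratio_functions(f_r_samples, e, F_e, n_alpha=81):
    """Fit M/P(alpha) ~ r1*alpha + r2*alpha**3 at each radial frequency;
    r1 and r2 play the role of G_P1/G_M1 and G_P2/G_M1."""
    alphas = np.linspace(-0.95, 0.95, n_alpha)
    A = np.stack([alphas, alphas ** 3], axis=1)          # design matrix
    r1, r2 = [], []
    for fr in f_r_samples:
        mp = normalized_ratio(alphas, fr, e, F_e)        # helper from the Section 2.3 sketch
        coeff, *_ = np.linalg.lstsq(A, mp, rcond=None)
        r1.append(coeff[0])
        r2.append(coeff[1])
    return np.array(r1), np.array(r2)
```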
In the general form of the rational expression in equation (9), the coefficients of
the rational expression can only be determined up to a multiplicative constant at each
frequency. Therefore, we have:
Here, -(u; v) is the unknown scaling function of all the coefficient functions and G P i (u; v)
and GM i (u; v) represent the structures of the ratios obtained by fitting R(fi; u; v) to
ff). The frequency response of the unknown scaling function -(u; v) is needed to
determine all the coefficient functions without ambiguity. How this can be accomplished
for the general rational expression will be described in section 4.1.
We now examine how well the polynomial model fits the plots in Figure 3 of the
normalized ratio M/P. More
precisely, we are interested in knowing how well the
model can be used to estimate depth. To this end, for each frequency, we select a "true"
depth value ff and find the corresponding ratio M=P using the analytical expression
in (7). This ratio is then plugged into the polynomial model of (12) to calculate the
depth estimate fi using the Newton-Raphson method. This process is repeated for all
frequencies.
Let us rewrite equation (12) as:
G_P1(u, v) β + G_P2(u, v) β³ − G_M1(u, v) (M/P) = 0.        (14)
As the third-order term can be considered to be a small correction, the following initial
4 In practice, GP1 (u; v) cannot be selected arbitrarily. There are other restrictions that need to be
considered. The exact selection procedure is discussed later in section 4.1.
(Figure 4 panels: G_P1(f_r, f_θ), G_P2(f_r, f_θ), G_M1(f_r, f_θ).)
Figure
4: An example set of the coefficient functions obtained by fitting the polynomial model
to the normalized image ratio M=P . Here, G P1 (u; v) was chosen and the remaining two functions
determined from the fit.
Figure
5: Depth fi, estimated using the polynomial model in equation (12), is plotted as a
function of spatial frequency for different values of actual depth ff. We see that the estimated
depth equals the actual depth and is invariant to frequencies within the upper bound f r max
given by equation (17).
value can be provided to the Newton-Raphson method:
β_0 = ( G_M1(u, v) / G_P1(u, v) ) (M/P).        (15)
Then, the solution after one iteration is:
β_1 = β_0 − G_P2(u, v) β_0³ / ( G_P1(u, v) + 3 G_P2(u, v) β_0² ).        (16)
Figure
5 shows that the estimated depth fi is, for all practical purposes, equal
to the actual depth, indicating that the polynomial model is indeed accurate. Further,
the estimated depth is invariant (insensitive) to texture frequency as far as the radial
frequency f r is below f r max . Above this frequency limit f r max , the response of M
to ff, shown in Figure 3, becomes non-monotonic within the region
an accurate depth estimate is not obtainable. In practice, any image can be convolved
using a passband filter to ensure that all frequencies above f r max are removed. The rule
of thumb used to determine f r max is given by equation (8). However, for the pillbox blur
model, we have found via numerical simulation that f r max is in fact 1.2 times larger 5
than the limit given by equation (8):
f_r max ≈ 1.2 × 0.61 F_e / e = 0.73 F_e / e.        (17)
This is a valuable side-effect of introducing the normalized image ratio M=P ; we can
utilize 20% more frequency spectrum information than conventional methods which use
the ratio I 1 =I 2 .
3.2 Rational Operator Set
We have introduced a rational expression model for the normalized ratio M=P and shown
that the solution of equation (10) gives us robust depth estimates for all frequencies
within a permissible range. Thus far, this robustness was demonstrated for individual
frequencies. In this section, we show how the rational model can be used to design a
small set of broadband operators that can handle arbitrary textures.
5 This number can be increased from 1.2 to 1.3 if a larger number of Newton-Raphson iterations are
used. However, depth results in this additional range are not numerically stable in the presence of noise
since the response curves of M=P tend to flatten out. Hence, we use only one iteration.
Taking cross-products in equation (10), we get:
Σ_i G_Mi(u, v) b_Mi(β) M(u, v) = Σ_i G_Pi(u, v) b_Pi(β) P(u, v).        (18)
By integrating over the entire frequency space, we get:
Σ_i c_Mi b_Mi(β) = Σ_i c_Pi b_Pi(β),        (19)
where:
c_Mi = ∫∫ G_Mi(u, v) M(u, v) du dv,   c_Pi = ∫∫ G_Pi(u, v) P(u, v) du dv.
Here, we invoke the power theorem [Bracewell-1965]:
∫∫ F(u, v) G*(u, v) du dv = ∫∫ f(x, y) g*(x, y) dx dy,        (20)
where F(u, v) and G(u, v) are the Fourier transforms of functions f(x, y) and g(x, y),
respectively. Since we are conducting a spatial-frequency analysis, that is, we are analyzing
the frequency content in a small area centered around each pixel, the right hand
side of equation (20) is nothing but a convolution. This implies that cM i (ff) and c P i (ff)
are actually functions of (x, y) and can be determined by convolutions as:
c_Mi(x, y) = g_Mi(x, y) ∗ m(x, y),        (21)
c_Pi(x, y) = g_Pi(x, y) ∗ p(x, y),        (22)
where g_Mi(x, y) and g_Pi(x, y) are the inverse Fourier transforms of G_Mi(u, v) and
G_Pi(u, v), respectively. In short, all the coefficients needed to compute depth using the
polynomial in equation (19) can be determined by convolving the difference image m(x; y)
and the summed image p(x; y) with linear operators that are spatial domain equivalents
of the coefficient functions. We refer these as rational operators. The outputs of these
operators at each pixel (x; y) are plugged into equation (19) to determine depth fi(x; y).
As an example, if we use the model in equation (12), the depth recovery equation
becomes:
c_M1(x, y) = c_P1(x, y) β(x, y) + c_P2(x, y) β³(x, y).        (23)
By substituting equation (22), we have:
( g_M1 ∗ m )(x, y) = ( g_P1 ∗ p )(x, y) β(x, y) + ( g_P2 ∗ p )(x, y) β³(x, y).        (24)
Again, the above rational operators are nothing but inverse Fourier transforms of the
coefficient functions shown in Figure 4. We see that, though the operators are all broad-band
(see Figure 4), the above recovery equation is independent of scene texture and
provides an efficient means of computing precise depth estimates.
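A minimal sketch of this recovery step, under the reconstruction of equations (21)-(24) above; the prefilter and the kernels g_M1, g_P1, g_P2 are assumed to be given (e.g. the 7×7 kernels of Figure 6), and a small eps term guards against near-zero c_P1 in place of the coefficient smoothing described in Section 4.4:

```python
import numpy as np
from scipy.signal import convolve2d

def recover_depth(i1, i2, g_m1, g_p1, g_p2, prefilter, eps=1e-6):
    """Rational operators applied to the difference and sum images, followed by
    one Newton-Raphson iteration on c_P1*beta + c_P2*beta**3 = c_M1."""
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    conv = lambda a, k: convolve2d(a, k, mode='same', boundary='symm')
    m = conv(i1 - i2, prefilter)
    p = conv(i1 + i2, prefilter)
    c_m1, c_p1, c_p2 = conv(m, g_m1), conv(p, g_p1), conv(p, g_p2)
    beta0 = c_m1 / (c_p1 + eps)                      # cubic term ignored initially
    f = c_p1 * beta0 + c_p2 * beta0 ** 3 - c_m1      # residual of the recovery equation
    df = c_p1 + 3.0 * c_p2 * beta0 ** 2              # derivative w.r.t. beta
    return np.clip(beta0 - f / (df + eps), -1.0, 1.0)
```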
4 Implementation of Rational Operators
The previous section described the theory underlying rational operators. In this section,
we discuss various design and implementation issues that must be addressed to ensure
that the rational operators produce accurate depth from defocus. In particular, we
describe the design procedure used to optimize rational operator kernels, the estimation
of a depth confidence measure, prefiltering of images prior to application of the rational
operators, and the post-processing of the outputs of the operators.
Since the rational expression model of equation (9) is too general, we focus on
the simpler model of equation (12) which we used in our experiments. However, the
procedures described here can be applied to other forms of the rational model.
4.1 Design of Rational Operators Kernels
Since our rational operators are broadband linear filters, we can implement them with
small convolution kernels. This is beneficial for two reasons; (a) low computational cost
and (b) high spatial resolution. However, as we shall see, the design problem itself is not
trivial.
Note that, after deriving the operators, the functions G
equation must have a ratio that equals the one obtained by fitting the polynomial
model to the normalized image ratio. Any discrepancy in this ratio would naturally cause
depth estimation errors. Fortunately, the base form function -(u; v) of equation (13)
remains at our discretion and can be adjusted to minimize such discrepancies. This
does not imply that -(u; v) will be selected arbitrarily, but rather that it will be given
a convenient initial form that can be optimized. Clearly, the effect of discrepancies in
the ratio would vary with frequency and hence depend on the textural properties of the
scene. The design of the operator kernels is therefore done by minimizing an objective
function that represents ratio errors over all frequencies. The relation between depth
estimation error and ratio error is derived in appendix 7.2. We argue in the appendix
that, for the depth error to be kept at a minimum, the ratio errors must satisfy the
following
oe GM1 (u;
oe G P2 (u;
oe GM1 (u; v) and oe G P2 (u; v) determine the weighting functions to be used in the minimization
of errors in GM1 (u; v) and G P2 (u; v). Here, - is a constant and in the derivation of
these expressions we have set -(u; v) equal to G P1 (u; v), i.e. G P1 (u; Therefore,
from equation (13) we have -(u;
G P2 (u;
Now, we are in a position to formulate our objective function for operator design
as follows:
oe GM1
oe GM1
P2 (u; v) are the actual ratios of the designed discrete kernels,
GM1 (u; v) and G P2 (u; v) are the ratios obtained in the previous section by fitting the
polynomial model to the normalized image ratio, and oe GM1 o is a constant used to ensure
that the minimization of χ² does not produce the trivial result of zero-valued operators.
Ĝ_M1(0, 0) is the actual DC response of the designed discrete kernel g_M1, and G_M1(0, 0)
is its initial value. In the above summation, the discrete frequency samples should
be sufficiently dense. When the kernel size is n \Theta n, the frequency samples should be at
least 2n × 2n in order to avoid the Gibbs phenomenon [Oppenheim and Schafer-1989]. In
our optimization, we use 32 × 32 sample points for 7 × 7 kernels. Since χ² is non-linear,
its minimization is done using the Levenberg-Marquardt algorithm [Press et al.-1992].
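A sketch of such a kernel-design loop is given below; the target ratios, the weighting array and the omission of the symmetry-reduced parameterisation are simplifying assumptions, so this is not the paper's exact objective. scipy.optimize.least_squares with method='lm' provides a Levenberg-Marquardt solver:

```python
import numpy as np
from scipy.optimize import least_squares

def kernel_dft_real(kernel, n=32):
    """Real part of the n x n DFT of a small (symmetric) kernel, zero-padded."""
    return np.fft.fftshift(np.fft.fft2(kernel, s=(n, n))).real

def residuals(params, shape, target_m1_over_p1, target_p2_over_p1, weights):
    """Weighted discrepancy between the DFT ratios of the candidate kernels and the
    target ratios obtained from the model fit; weights is broadcastable to (2, n, n)."""
    k = int(np.prod(shape))
    g_m1 = params[:k].reshape(shape)
    g_p1 = params[k:2 * k].reshape(shape)
    g_p2 = params[2 * k:].reshape(shape)
    G_m1, G_p1, G_p2 = (kernel_dft_real(g) for g in (g_m1, g_p1, g_p2))
    eps = 1e-8
    r1 = G_m1 / (G_p1 + eps) - target_m1_over_p1
    r2 = G_p2 / (G_p1 + eps) - target_p2_over_p1
    return (weights * np.stack([r1, r2])).ravel()

# x0: three flattened 7x7 kernels (e.g. a Laplacian-of-Gaussian-like initialization);
# method='lm' selects a Levenberg-Marquardt solver (valid here, since there are far
# more residuals, 2*32*32, than parameters, 3*49).
# result = least_squares(residuals, x0, method='lm',
#                        args=((7, 7), target1, target2, weights))
```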
We still need to define P (u; v; ff) in equation (25), which is dependent on the
unknown texture of the image. However, since P (u; v; ff) is only used to fix the weighting
functions in equation (25), a rough approximation suffices. To this end, we assume the
distribution of the image spectrum to be:
In our optimization we have used which corresponds to Brownian motion 6 .
Though P (u; v; ff) changes with ff, we can use the approximation P (u; v;
6 If we denote fractal dimension [ Peitgen and Saupe-1988 ] by D h , in the two dimensional case the
The last issue concerns the base form function -(u; in equation (25).
An initial selection can be made for this function that will be refined by the optimization
of χ² . As GM1 (u;
must be 0 in order to realize GM1 (u; using a finite kernel. Also,
G P1 (u; v) must be smooth (without rapid fluctuations) to obtain rational operators with
small kernels. In our implementation, we have imposed rotational symmetry as an added
constraint and used the Laplacian of Gaussian to initialize G P1 (u; v):
G_P1(f_r) = (f_r / f_peak)² exp( 1 − (f_r / f_peak)² ),        (28)
where, f peak is the radial frequency at which G P1 is maximum. This frequency is set to
0.4 f_Nyquist in our optimization. Once again, the above function is only used for initialization
and is further refined by the optimization of χ². An example set of discrete rational
operators obtained from the optimization of χ² will be presented shortly.
4.2 Prefiltering
We now discuss prefiltering that needs to be applied to the input images i 1 (x; y) and
i 2 (x; y). The purpose is to remove the DC component and very
high frequencies before applying the rational filters. The DC component is harmful
because a small change in the illumination, between the two images,
can cause an unanticipated bias in the image m(x; y). Such a bias would propagate
errors to the coefficient image cM1 (x; y) since the GM1 operator applied to m(x; y) is
essentially a low-pass filter. This, in turn, would cause depth errors. At the other end of
the spectrum, radial frequencies greater than f r max (see equation (17)) are also harmful
as they violate the monotonicity property of M=P , which is needed for rational operators
to work. Therefore, such high frequencies must also be removed.
Although it is possible to embed the desired prefilter within the rational filters
(given that prefiltering can be done using linear operators), we have chosen to use a
separate prefilter for the following reason. Since the prefilter attempts to cut low and
high frequencies, it tends to have a large kernel. Embedding such a prefilter in the
rational operators would require the operators also to have large kernels, thus, resulting
in low spatial resolution as well as unnecessary additional computations.
holds true. D corresponds to the case of extreme fractal, D
1:5 corresponds to Brownian motion and D corresponds to a smooth image. Finally,
corresponds to white noise (completely random image).
7 In equation (12), M=P is zero when j(u; v)j ! 0. Since ff can be non-zero, 1=GM1 (u;
must be zero for equation (12) to be valid.
As with the rational operators, the design of the prefilter can be posed as the
optimization of an objective function. Let us define the desirable frequency response of
the prefilter as f(u; v). For reasons stated earlier, this frequency response must cut both
the DC component and high frequencies. In addition, the frequency response should be
smooth and rotationally symmetric to ensure a small kernel size. A function with these
desired properties is again the Laplacian of Gaussian given by the right hand side of
equation (28), but using a different value of f_peak. We define the objective function as:
where the designed prefilter kernel's frequency response is compared against f(u, v), and σ_pass and σ_stop
represent the weights assigned to the passband and the stopband regions of the prefilter,
respectively, the stopband covering the frequencies that must be suppressed. The Levenberg-Marquardt
algorithm [Press et al.-1992] is used to determine the prefilter kernel that minimizes this objective.
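For illustration, a simple frequency-sampling stand-in for such a band-pass prefilter is sketched below; it samples a Laplacian-of-Gaussian-style response, inverts it, and crops the result, and is not the optimized kernel obtained from the objective above:

```python
import numpy as np

def bandpass_prefilter(ksize=7, f_peak=0.15):
    """Frequency-sampling sketch of a small band-pass prefilter kernel."""
    n = 8 * ksize                                   # dense frequency sampling
    f = np.fft.fftfreq(n)                           # cycles per pixel
    fr = np.sqrt(f[:, None] ** 2 + f[None, :] ** 2)
    response = (fr / f_peak) ** 2 * np.exp(1.0 - (fr / f_peak) ** 2)
    kernel = np.fft.fftshift(np.fft.ifft2(response).real)
    c, h = n // 2, ksize // 2
    kernel = kernel[c - h:c + h + 1, c - h:c + h + 1].copy()
    kernel -= kernel.mean()                         # force zero DC response
    return kernel
```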
4.3 An Example Set of Discrete Rational Filters
Figures
6 and 7 show the kernels and their frequency responses for the rational operators
and the prefilter, derived with kernel size set to 7×7 and e/F_e pixels. In order
to make the operators uniformly sensitive to textures in all directions, we imposed the
constraint that the kernels must be symmetric with respect to the x and y axes as well as
the lines These constraints reduce the number of degrees of freedom
(DOF) in the kernel design problem. In the case of a 7\Theta7 kernel, the DOF is reduced
to 10. This further reduces to 6 for a 6\Theta6 or a 5\Theta5 kernel. This DOF of 6 is too small
to design operators with the desired frequency responses. Therefore, the smallest kernel
size was chosen to be k 7. Note that the passband response of the prefilter in Figure
7 can be further refined if its kernel size is increased.
The final design issue pertains to the maximum frequency f r max . Since the discrete
Fourier transform of a kernel of size k s has the minimum discrete frequency period of
1=k s , it is difficult to obtain precisely any response in the frequency region below 1=k s .
Further, the spectrum in this region is going to be suppressed by the prefilter as it is
close to the DC component. Therefore, the maximum frequency f r max must be well
above 1/k_s . We express this condition as f_r max ≥ 2/k_s . Using equation (17), we obtain:
2e/F_e ≤ 0.73 k_s .        (30)
This condition can be interpreted as follows: The maximum blur circle diameter 2e/F_e
must be smaller than 73% of the kernel size k s . This is also intuitively reasonable as the
kernel should be larger than the blur circle as it seeks to measure blur 8 .
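A small helper expressing this rule of thumb, assuming the bounds as reconstructed in equations (8) and (17):

```python
def defocus_design_check(e, F_e, ksize):
    """Check that the maximal blur circle 2e/F_e stays below roughly 73% of the
    kernel size, and report the corresponding f_r_max (cycles/pixel)."""
    blur_diameter = 2.0 * e / F_e
    f_r_max = 1.2 * 1.22 * F_e / (2.0 * e)
    return blur_diameter <= 0.73 * ksize, f_r_max
```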
4.4 Coefficient Image Smoothing
By applying the prefilter and the rational operators in Figure 6 to the images m(x; y)
and p(x, y), we obtain coefficients that can be plugged into equation (23) to compute depth
β. However, a problem can arise in solving for depth. If c_P1(x, y)
in equation (24) is close to zero, the depth estimate becomes unstable as is evident from
the solution step in equation (15). Since the frequency response of g P1 (x; y) cuts the
DC component (Figure 5 (a)), zero-crossings are usually common in the coefficient image
c_P1(x, y; α). It is also obvious that, for image areas with weak texture 9 , c_P1(x, y)
approaches zero.
To solve this problem, we apply a smoothing operator to the coefficient image.
This enables us to avoid unstable depth estimates at zero-crossings in the coefficient
image, which otherwise must be removed by some ad hoc post-filtering. To optimize this
smoothing operation, so as to minimize depth errors, we need an analytic model of depth
error. Using the depth recovery equation (23), we get:
Here, we have dropped the parameter (x; y) for brevity. Solving for dfi, we get:
As c P2 is only a small correction factor, the following approximation can be made:
c P1
We denote the standard deviations (errors) of cM1 , c P1 and c P2 by oe cM1 , oe cM1 and oe cM1 ,
respectively. To simplify matters, it is assumed that the errors are independent of each
other. Then, we get [Hoel-1971]:
oe fi
8 Since the above conditions related to kernel size are rough, we suggest that the linearity of depth
estimation be checked (using synthetic images) to find the best kernel size k s . Such an evaluation is
reported in the experimental section.
9 Weak texture is equivalent to low spectrum power in the high frequency region.
\Gamma0:00133 0:0453 0:1799 0:297 0:1799 0:0453 \Gamma0:00133
0:0453 0:4009 0:8685 1:093 0:8685 0:4009 0:0453
0:1799 0:8685 2:957 4:077 2:957 0:8685 0:1799
0:297 1:093 4:077 6:005 4:077 1:093 0:297
0:1799 0:8685 2:957 4:077 2:957 0:8685 0:1799
0:0453 0:4009 0:8685 1:093 0:8685 0:4009 0:0453
\Gamma0:00133 0:0453 0:1799 0:297 0:1799 0:0453 \Gamma0:00133C C C C C C C C C C C C A
\Gamma0:03983 \Gamma0:09189 \Gamma0:198 \Gamma0:259 \Gamma0:198 \Gamma0:09189 \Gamma0:03983
\Gamma0:09189 \Gamma0:3276 \Gamma0:4702 \Gamma0:4256 \Gamma0:4702 \Gamma0:3276 \Gamma0:09189
\Gamma0:259 \Gamma0:4256 1:393 3:385 1:393 \Gamma0:4256 \Gamma0:259
\Gamma0:09189 \Gamma0:3276 \Gamma0:4702 \Gamma0:4256 \Gamma0:4702 \Gamma0:3276 \Gamma0:09189
0:05685 \Gamma0:02031 \Gamma0:06835 \Gamma0:06135 \Gamma0:06835 \Gamma0:02031 0:05685
\Gamma0:02031 \Gamma0:06831 0:05922 0:1454 0:05922 \Gamma0:06831 \Gamma0:02031
\Gamma0:06835 0:05922 0:1762 \Gamma0:01998 0:1762 0:05922 \Gamma0:06835
\Gamma0:06135 0:1454 \Gamma0:01998 \Gamma0:698 \Gamma0:01998 0:1454 \Gamma0:06135
\Gamma0:06835 0:05922 0:1762 \Gamma0:01998 0:1762 0:05922 \Gamma0:06835
\Gamma0:02031 \Gamma0:06831 0:05922 0:1454 0:05922 \Gamma0:06831 \Gamma0:02031
0:05685 \Gamma0:02031 \Gamma0:06835 \Gamma0:06135 \Gamma0:06835 \Gamma0:02031 0:05685C C C C C C C C C C C C A
prefilter
\Gamma0:143 \Gamma0:1986 \Gamma0:1056 \Gamma0:07133 \Gamma0:1056 \Gamma0:1986 \Gamma0:143
\Gamma0:1986 \Gamma0:1927 0:01795 0:07296 0:01795 \Gamma0:1927 \Gamma0:1986
\Gamma0:1056 0:01795 0:2843 0:4601 0:2843 0:01795 \Gamma0:1056
\Gamma0:07133 0:07296 0:4601 0:6449 0:4601 0:07296 \Gamma0:07133
\Gamma0:1056 0:01795 0:2843 0:4601 0:2843 0:01795 \Gamma0:1056
\Gamma0:1986 \Gamma0:1927 0:01795 0:07296 0:01795 \Gamma0:1927 \Gamma0:1986
\Gamma0:143 \Gamma0:1986 \Gamma0:1056 \Gamma0:07133 \Gamma0:1056 \Gamma0:1986 \Gamma0:143C C C C C C C C C C C C A
Figure
6: Rational operator kernels derived using kernel size of 7×7 and e/F_e pixels.
Regardless of scene texture, passive depth from defocus can be accomplished using this small
operator set.
(a) g_M1 (b) g_P1
(c) g_P2 (d) prefilter
Figure
7: Frequency responses of the rational operators shown in Figure 6.
This expression is useful as it gives us an estimate of depth error. The inverse of this
estimate, 1=oe fi 2 , can be viewed as a depth confidence measure and be used to combine
adjacent depth estimates in a maximum likelihood sense to obtain more accurate depth
estimates. Also, when one wishes to apply depth from defocus at different scales using
a pyramid framework [Jolion and Rosenfeld-1994, Burt and Adelson-1983, Darrell and
Wohn-1988, Gokstorp-1994] , the above confidence measure can be used to combine depth
values at different levels of the pyramid.
In equation (34), oe c M1 , oe c P1 and oe c P2 are constants because they are defined by
the readout noise of the image sensor used and the frequency responses of the rational
operators. On the other hand, fi can be assumed to be locally constant, since depth can
be expected to vary smoothly at most points in the image. These facts lead us to:
1/σ_β²(x, y) ∝ c_P1²(x, y).        (35)
With the above error model in place, we can develop a method for coefficient
image smoothing. If we multiply equation (23) by c P1 (x; y; ff), and sum up depth values
in the neighborhood R of each pixel, we get:
Σ_R c_P1 c_M1 = [ Σ_R c_P1² ] β_a + [ Σ_R c_P1 c_P2 ] β_a³ ,        (36)
where β_a is the depth estimate after coefficient smoothing. Since the last terms in
equations (36) and (23) are small corrections, β_a can be approximated by:
β_a ≈ Σ_R c_P1²(x, y) β(x, y) / Σ_R c_P1²(x, y).        (37)
From equation (35), we see that fi a is the weighted average of raw depth estimates fi in
the neighborhood R, where the weights are 1/σ_β²(x, y).
From statistics [Hoel-1971] we know that the optimal weighted average of independent
variables X_i whose variances are σ_i² is obtained by weighting the X_i
with 1/σ_i² . Therefore, the above weighted average of depth estimates can be viewed as
optimal. The variance σ_a² of the resulting depth estimate β_a is given by:
σ_a² = 1 / Σ_R ( 1/σ_β²(x, y) ).        (38)
Hence, the coefficient smoothing of equation (36) is optimal, in that, it minimizes 10 the
error in estimated depth β_a . In addition, the resulting smoothed coefficient c_P1(x, y)
is proportional to the inverse of the variance of β_a , i.e. 1/σ_βa² , which is clear from
equations (35), (37) and (38). Therefore, the smoothed coefficient c_P1(x, y; α) can be
used as a confidence measure to post-process computed depth maps.
10 Another method to cope with zero-crossings in the c_P1 coefficient image is based on the Hilbert
transform. This approach is detailed in [Watanabe and
Nayar-1995a].
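A sketch of the smoothing and confidence computation, using plain local averaging of the coefficient images (as in the algorithm of Section 4.5) and the smoothed c_P1 as a confidence map; the exact weighting of equations (36)-(38) is not reproduced here:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def smooth_and_recover(c_m1, c_p1, c_p2, window=5, eps=1e-6):
    """Local averaging of the coefficient images before depth recovery; the
    smoothed c_P1 doubles as a per-pixel confidence map."""
    s_m1 = uniform_filter(c_m1, size=window)
    s_p1 = uniform_filter(c_p1, size=window)
    s_p2 = uniform_filter(c_p2, size=window)
    beta0 = s_m1 / (s_p1 + eps)
    f = s_p1 * beta0 + s_p2 * beta0 ** 3 - s_m1
    df = s_p1 + 3.0 * s_p2 * beta0 ** 2
    beta = np.clip(beta0 - f / (df + eps), -1.0, 1.0)
    return beta, np.abs(s_p1)
```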
(Figure 8 blocks: far focused image and near focused image → prefilter → rational operators → coefficient smoothing → depth computation.)
Figure
8: The flow of the depth from defocus algorithm. Using Datacube's MV200 pipeline
processor, the entire algorithm can be executed in as little as 0.16 msec to obtain a 512×480
depth map.
4.5 Algorithm
Figure
8 illustrates the flow of the depth from defocus algorithm we have implemented.
The far and near focused images are first added and subtracted to produce p(x; y) and
m(x; y), respectively. Then they are convolved with the prefilter and subsequently with
the three rational operators. The resulting coefficient images are then smoothed by local
averaging. The final step is the computation of depth from the coefficients using a single
iteration of the Newton-Raphson method using equations (15) and (16). Alternatively,
depth computation can be achieved using a precomputed two-dimensional look-up table.
The look-up table is configured to take the smoothed coefficients as
inputs and provides depth β(x, y) as output.
as few as 5 two-dimensional convolutions, simple smoothing of the coefficient images, and
a straightforward depth computation step.
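The whole flow can be tied together as in the sketch below, which reuses the smooth_and_recover helper from the Section 4.4 sketch; the kernels dictionary is a hypothetical container for the prefilter and the three rational operator kernels:

```python
import numpy as np
from scipy.signal import convolve2d

def depth_map_pipeline(i1, i2, kernels, window=5):
    """Figure-8-style flow: prefilter, rational operators, coefficient smoothing,
    depth computation. Returns a (depth, confidence) pair of images."""
    conv = lambda a, k: convolve2d(a, k, mode='same', boundary='symm')
    i1 = np.asarray(i1, dtype=float)
    i2 = np.asarray(i2, dtype=float)
    m = conv(i1 - i2, kernels['prefilter'])
    p = conv(i1 + i2, kernels['prefilter'])
    c_m1 = conv(m, kernels['g_m1'])
    c_p1 = conv(p, kernels['g_p1'])
    c_p2 = conv(p, kernels['g_p2'])
    return smooth_and_recover(c_m1, c_p1, c_p2, window=window)
```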
The above operations can be executed efficiently using a pipelined image processor.
If one uses Datacube's MV200 pipeline processor, all the computations can be realized
using as few as 10 pipelines. The entire depth from defocus algorithm can then be
executed in 0.16 msec for an image size of 512×480. The efficiency of the algorithm,
which comes from the use of the rational operator set, is far superior to any existing
depth from defocus algorithm that attempts to compute accurate depth estimates [Xiong
and Shafer-1993, Gokstorp-1994] .
5 Experiments
5.1 Experiments with Synthetic Images
We first illustrate the linearity of depth estimation and its invariance to texture frequency
using synthetic images. The synthetic images shown in Figure 9 correspond to a planar
surface that is inclined away from the sensor such that its normalized depth value is 0
at the top and 255 at the bottom. The plane includes 10 vertical strips with different
textural properties. The left 7 strips have textures with narrow power spectra whose
central frequencies are 0.015, 0.03, 0.08, 0.13, 0.18, 0.25 and 0.35, from left to right. The
8th strip is white noise. The next two strips are fractals with dimensions of 3 and 2.5,
respectively [Peitgen and Saupe-1988]. The near and far focused images were generated using the pillbox blur model. The defocus condition used was e = F pixels. In all our experiments, the digital images used are of size 640×480. The depth map estimated using the 7×7 rational operators and 5×5 coefficient smoothing is shown as a gray-
coded image in Figure 9(c) and a wireframe in Figure 9(d). As is evident, the proposed
algorithm produces high accuracy despite the significant texture variations between the
vertical strips.
Figure
summarizes quantitative results obtained from the above experiment.
The figure includes plots of (a) the gradient of the estimated depth map, (b) RMS (root
mean square) error (oe) in computed depth, and (c) the averaged confidence value. Each
point (square) in the plots corresponds to one of the strips in the image, and is numbered
from left to right (see numbers next to the squares). Note that the gradient of the
estimated depth map is nothing but the depth detection gain. Figure 10(a) shows that
the gain is invariant except for the left three strips. The slight gain error in the left three
strips is because the ratio G_M1(u, v)/G_P1(u, v) is high for low frequencies. As a result,
Figure 9: Depth from defocus applied to synthetic images of an inclined plane: (a) far-focused image, (b) near-focused image, (c) gray-coded depth map, (d) wireframe of depth map. The inclined plane is accurately recovered despite the significant texture variations.
G_P1(u, v) is small in the low frequency region and a small error in G_P1(u, v) causes a large error in the ratio. The low values of G_P1(u, v) for low-frequency textures are reflected by the extremely low confidence values for the corresponding strips. However, as such
low frequencies are cut by the prefilter, depth errors are suppressed if there exist other
frequency components. When one wants to utilize low frequencies, a pyramid [Jolion
and Rosenfeld-1994, Darrell and Wohn-1988] can be constructed and the rational filters
can be applied to each level of the pyramid. Depth maps computed at different levels of
the pyramid can be combined in a maximum-likelihood sense using confidence measures
which are easily computed along with the coefficient image using equation (34). Figure 10(b) and (c) show a rough agreement between the confidence measure plot and the function 1/σ².
Figure 10: Analysis of depth errors for the textured inclined plane shown in Figure 9. Each point (square) in the plots corresponds to a single texture strip on the inclined plane (numbered 1-10). (a) The gradient of the computed depth map, which corresponds to the depth detection gain. The invariance of depth estimation to image texture is evident. (b) The RMS error (σ) in computed depth. (c) The depth confidence value, which is seen to be in rough agreement with 1/σ².
In
Figure
11, the synthetic images were generated assuming a staircase like three-dimensional
structure. The steps of the staircase have textures that are the same as those
used in Figure 9. The computed depth map is again very accurate. The depth discontinuities
are sensed with sharpness preserved, demonstrating the high spatial resolution
of the proposed algorithm. Spikes in the two left strips are again due to extremely low
depth confidence values in these areas. In the case of natural textures with enough texture
contrast, such low confidence values are unlikely as other frequencies in the texture
will provide sufficient information for robust depth estimation.
Figure 11: Depth from defocus applied to synthetic images of a staircase: (a) gray-coded depth map, (b) wireframe plot of the depth map. The textures of the stairs are the same as those of the strips in Figure 9. The depth discontinuities are estimated with high accuracy, reflecting the high spatial resolution produced by the proposed algorithm.
5.2 Experiments on Real Images
Images of real scenes were taken using a SONY XC-77 monochrome camera. The lens
used is a Cosmicar B1214D-2 with f=25mm. The lens was converted into a telecentric
lens by using an additional aperture to make its magnification invariant to defocus (see
appendix and [Watanabe and Nayar-1995b]). As a result of telecentricity, image shifts
between the far and near focused images are lower than 1/10 of a pixel. The lens aperture
was set to F/8.3. The far-focused image i 1 was taken with the lens focused at 869mm
from the camera, and the near-focused image i 2 with the lens focused at 529mm. These
two distances were chosen so that all scene points lie between them. The above focus
settings result in a maximum blur circle radius of e=F pixels. For each of
the two focus settings, 256 images were averaged over 8.5 sec to get images with high
signal-to-noise ratio.
Figure
12 shows results obtained for a scene that includes a variety of textures.
Figure
12(a) and (b) are the far-focused and near-focused images, respectively. Figure
12(c) and (d) are the computed depth map and its wireframe plot. Depth maps of
all the curved and planar surfaces are detected with high fidelity and high resolution
without any post-filtering. After 9\Theta9 median filtering, we get an even better depth map
as shown in Figure 12(e).
Figure 12: The depth from defocus algorithm applied to a real scene with complex textures: (a) far-focused image, (b) near-focused image, (c) gray-coded depth map without post-filtering, (d) wireframe plot of (c), (e) wireframe plot after 9×9 median filtering.
Figure
13 shows results for a scene which includes areas with extremely weak
textures, such as, the white background and the clay cup. Figure 13(a) and (b) are the
far-focused and near-focused images, respectively. Figure 13(c) and (d) are the computed
depth map and its wireframe plot. All image areas, except the white background area,
produce accurate depth estimates. The depth confidence value oe in the textured background
is 0.5% of the object distance. 11 The error on the table surface is 1.0% relative
to object distance. Even the white background area has a reasonable depth map despite
the fact that its texture is very weak. We see that the confidence map in Figure 13(e)
reflects this lack of texture. This has motivated us to develop a modified algorithm,
called adaptive coefficient smoothing, that repeatedly averages the coefficients computed
by the rational operators until the confidence value reaches a certain acceptable level.
Figure
13(f) shows the depth map computed using this algorithm.
The last experiment seeks to quantify the accuracy of depth estimation. The
target used is a plane paper similar to the textured background in the scene in Figure 12.
This plane is moved in steps of 25mm and a depth map of the plane is computed for
each position. Since the estimated depth fi is measured on the image side, it is mapped
to the object side using the lens law of equation (2). The optical settings and processing
conditions are the same as those used in the previous experiments. The plot in Figure 14
illustrates that the algorithm has excellent depth estimation linearity. The RMS error of
a line fit to the measured depths is 4.2 mm. The slight curvature of the plot is probably
due to errors in optical settings, such as, focal length and aperture.
Depth values for a 50×50 area were used to estimate the RMS depth error for each position of the planar surface. In Figure 14 the RMS errors are plotted as ±σ error bars. The RMS error relative to object distance is seen to vary with object distance. It is 0.4% - 0.8% for close objects and 0.8% - 1.2% for objects farther than 880 mm. This is partly because of the mapping from the depth measured on the image side to depth on the object side. The other reason is that the error in estimated depth σ_β is larger for a scene point with larger β, as seen from equation (34). Note that this RMS error depends on the coefficient smoothing and post-filtering stages. We found empirically that the error has a Gaussian-like distribution. Using this distribution, one can show that the error reduces by a factor of 1/8 if the depth map is convolved with an 8×8 averaging filter.
This definition of error is often used to quantify the performance of range sensors.
Figure 13: Depth from defocus applied to a scene that includes very weak texture (white background): (a) far-focused image, (b) near-focused image, (c) gray-coded depth map, (d) wireframe of the depth map, (e) (confidence value)^{1/2} map, (f) wireframe after adaptive coefficient smoothing. The larger errors in the region of weak texture are reflected by the confidence map. An adaptive coefficient smoothing algorithm uses the confidence map to refine depth estimates in regions with weak texture.
Figure 14: Depth estimation linearity for a textured plane (estimated depth and line-fit error plotted against actual depth in mm). The plane is moved in increments of 25mm, away from the lens. All plotted distances are measured from the lens. The RMS error relative to object distance is 0.4% - 1.2%.
6 Conclusions
We proposed the class of rational operators for passive depth from defocus. Though the
operators are broadband, when used together, they provide invariance to scene texture.
Since they are broadband, a small number of operators are sufficient to cover the entire
frequency spectrum. Hence, rational operators can replace large filter banks that are
expensive from a computational perspective. This advantage comes without the need
to sacrifice depth estimation accuracy and resolution. We have detailed the procedure
used to design rational operators. As an example, we constructed 7\Theta7 operators using
a polynomial model for the normalized image ratio. However, the notion of rational
operators is more general and represents a complete class of filters. The design procedure
described here can be used to construct operators based on other rational models for the
normalized image ratio. Further, rational operators can be derived for any desired blur
function.
In addition to the rational operators, we discussed a wide range of issues that are
pertinent to depth from defocus. In particular, detailed analyses and techniques were
provided for prefiltering near and far focused images as well as post-processing the outputs
of the rational operators. The operator outputs have also been used to derive a depth
confidence measure. This measure can be used to enhance computed depth maps. The
proposed depth from defocus algorithm requires only a total of 5 convolutions. We tested
the algorithm using both synthetic scenes and real scenes to evaluate performance. We
found the depth detection gain error to be less than 1%, regardless of texture frequency.
Depth accuracy was found to be 0.5 - 1.2% of object distance from the sensor.
These results have several natural extensions. (a) Since some scene areas are expected to have very low texture frequency, it would be meaningful to embed the proposed scheme in a pyramid-based processing framework. Image areas with dominant low frequencies will have higher frequencies at higher levels of the pyramid. The proposed algorithm can be applied to all levels of the pyramid and the resulting depth maps can be combined using the depth confidence measures. (b) Given the efficiency of the algorithm, it is worth implementing a real-time version using a pipeline image processing architecture such as the Datacube MV200. We estimate that such an algorithm would result in at least 6 depth maps per second of 512×480 resolution. (c) In our present implementation, we have varied the position of the image sensor to change the focus setting. Alternatively, the aperture size can be varied. Rational operators can be derived for such an optical setup using the corresponding basis functions (see [Watanabe and Nayar-1995a]). (d) Finally, it would be worthwhile applying the
Appendix
7.1 Problem of Image Registration
For the rational operators to give accurate results, the far-focused image i 1 and near-
focused image i 2 need to be precisely registered (within 0.1 pixel) with respect to one
another. However, in most conventional lenses, magnification varies with focus setting
and hence misregistration is introduced. Further, in our experiments, we have mechanically
changed the focus setting and, in the process, introduced some translation between
the two images. If the lens aberrations are small, the misregistration is decomposed into
two factors - a global magnification change and a global translation. Of the two factors,
magnification change proves much more harmful. This change can be corrected using
image warping techniques [Darrell and Wohn-1988, Wolberg-1990]. However, this generally
introduces undesirable effects such as smoothing and aliasing since warping is based
on spatial interpolation and resampling techniques. We have used an optical solution
to the problem that is described in the following section and detailed in [Watanabe and
Nayar-1995b].
7.1.1 Telecentric Optics
In the imaging system shown in Figure 1, the effective image location of point P moves
along the principal ray R as the sensor plane is displaced. This causes a shift in image
coordinates of the image of P . This variation in image magnification with defocus manifests
as a correspondence like problem in depth from defocus, as corresponding points in
images are needed to estimate blurring.
We approach the problem from an optical perspective rather than a computational
one. Consider the image formation model shown in Figure 15. The only modification
made with respect to the model in Figure 1 is the use of the external aperture A′. The aperture is placed at the front-focal plane, i.e. a focal length in front of the principal point O of the lens. This simple addition solves the problem of magnification variation with distance α of the sensor plane from the lens. Simple geometrical analysis reveals that a ray of light R′ from any scene point that passes through the center O′ of aperture A′ emerges parallel to the optical axis on the image side of the lens [Kingslake-1983]. As a result, despite blurring, the effective image coordinates of point P in both images i_1 and i_2 are the same as the coordinates of its focused image Q on i_f. Given an off-the-
shelf lens, such an aperture is easily appended to the casing of the lens. The resulting
optical system is called a telecentric lens. While the nominal and effective F-numbers
of the classical optics in Figure 1 are f/a and d i /a, respectively, they are both equal to
f/a 0 in the telecentric case. The magnification change can be reduced to an order of less
than 0.03%, i.e. 0.1 pixel for a 640\Theta480 image. A detailed discussion on telecentricity
and its implementation can be found in [Watanabe and Nayar-1995b]. We recently used
this idea to develop a real-time active depth from defocus sensor [Nayar et al.-1995,
Watanabe et al.-1995].
7.1.2 Translation Correction
We have seen in the previous section how magnification changes between the far-focused
and near-focused images can be avoided. When the focus setting is changed, translations
may also be introduced. Translation correction can be done using image processing without
introducing any harmful image artifacts. However, the processing must be carefully
Figure 15: A constant-magnification imaging system for depth from defocus is achieved by simply placing an aperture at the front-focal plane of the optics. The resulting telecentric optics avoids the need for registering the far-focused and near-focused images [Watanabe and Nayar-1995b].
implemented since we seek 0.1 pixel registration between the two images. The procedure
we use is briefly described here and is detailed in [Watanabe and Nayar-1995b]. We use
FFT-phase based local shift detection to estimate shift vectors with sub-pixel accuracy.
We divide the Fourier spectra of corresponding local areas of the two images. Then we
fit a plane to the phases of the ratio of the spectra. The gradient of the fitted plane
is nothing but the relative shift between the two images. Once we get shift vectors at
several positions in the image, similarity transform is used to model the shift vector field.
By fitting the vectors to the similarity model, we can estimate the global translation and
any residual magnification changes, separately [Watanabe and Nayar-1995b]. The residual
magnification is corrected by tuning the aperture position of the telecentric optics.
The translation is corrected by shifting both images in opposite directions. As we need
sub-pixel accuracy, we interpolate the image and resample it to generate the registered
images. The interpolating function is the Lanczos4 windowed sinc function [Wolberg-
1990]. Since the translation correction remains constant over the entire image, a single
shift invariant convolution achieves the desired shift. Though this convolution distorts
the image spectrum, since both images undergo the same amount of shift, the distortion is
the same for both images. This common distortion is eliminated when the normalized image
ratio M=P is computed before the application of the rational filters. After the above
translation correction, we found the maximum registration error in our experiments to
be as small as 0.02 pixels.
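A rough sketch of the FFT-phase shift-detection step described above is given below (Python; the frequency cutoff, the reference/shift sign convention, and the plane-fit details are illustrative assumptions, not the production code).

```python
import numpy as np

def local_shift(patch1, patch2):
    """Estimate the sub-pixel translation between two image patches by
    dividing their Fourier spectra and fitting a plane to the phase of the
    ratio; the plane gradient is the shift (up to a sign convention)."""
    F1, F2 = np.fft.fft2(patch1), np.fft.fft2(patch2)
    phase = np.angle(F2 / (F1 + 1e-12))
    fy = np.fft.fftfreq(patch1.shape[0])
    fx = np.fft.fftfreq(patch1.shape[1])
    FX, FY = np.meshgrid(fx, fy)
    # use only low frequencies, where phase wrapping is unlikely
    mask = (np.abs(FX) < 0.15) & (np.abs(FY) < 0.15)
    A = 2 * np.pi * np.column_stack([FX[mask], FY[mask]])
    dx, dy = np.linalg.lstsq(A, phase[mask], rcond=None)[0]
    return dx, dy
```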
7.2 Operator Response and Depth Error
The deviation of the ratio functions G_Pi(u, v) or G_Mi(u, v) after filter design, from those obtained by fitting the polynomial model to the normalized image ratio, varies with frequency (u, v), and hence depends on the texture of the scene. For the filter design described in section 4.1, we need a relation between this ratio error and the depth estimation error. Starting with equation (12), the estimated depth can be written in terms of the ratio G_P1(u, v)/G_P2(u, v); here, β_f(u, v) is the depth estimated at a single frequency (u, v). Since the normalization in the ratio condition (13) has not been fixed, G_P1(u, v) can be rescaled and the expression rewritten accordingly. By differentiation we obtain dβ_f(u, v), where M(u, v; α) and P(u, v; α) can be treated as constants since we wish to find the error in β_f(u, v) caused by errors in G_M1(u, v) and G_P2(u, v). Since G_P2(u, v) is a small correction factor, it can be approximated correspondingly. From equations (23) and (20), since c_P2 ≪ c_P1 (c_P2 represents a small correction), the depth estimate β can be approximated by an integral of β_f(u, v) over all frequencies (du dv). Hence, the error in β caused by the error in β_f(u, v) follows, and combining this with equation (43) expresses dβ in terms of dG_M1(u, v) and dG_P2(u, v).
What are the optimal values of dG_M1(u, v) and dG_P2(u, v) that would minimize the depth error dβ? This question is not trivial, as dG_M1(u, v) and dG_P2(u, v) influence each other in a complex way. To avoid either of the two terms in the integrand in the numerator from taking on a disproportionately large value, we have decided to assume both terms to be constant. This gives bounds on dG_M1(u, v) and dG_P2(u, v) in terms of σ_{G_M1}(u, v) and σ_{G_P2}(u, v), where |β_f(u, v)| was set to 1 as this represents the worst case, i.e. the largest normalized depth error.
Acknowledgements
This research was conducted at the Center for Research in Intelligent Systems, Department
of Computer Science, Columbia University. It was supported in part by the Production
Engineering Research Laboratory, Hitachi, and in part by the David and Lucile
Packard Fellowship. The authors thank Yasuo Nakagawa of Hitachi Ltd. for his support
and encouragement of this work.
References
Range imaging sensors.
Principles of Optics.
The Fourier Transform and Its Applications.
The Laplacian pyramid as a compact image code.
Pyramid based depth from focus.
A matrix based method for determining depth from focus.
Computing depth from out-of-focus blur using a local frequency representation
Introduction to Mathematical Statistics.
Focusing. Memo 160
Robot Vision.
A perspective on range finding techniques for computer vision.
A Pyramid Framework for Early Vision.
Optical System Design.
Journal of Computer Vision
Shape from focus: An effective approach for rough surfaces.
The Science of Fractal Images.
A new sense for depth of field.
Numerical Recipes in C.
Depth from defocus: A spatial domain approach.
Parallel depth recovery by changing camera parameters.
Minimal operator set for texture invariant depth from defocus.
Telecentric optics for constant-magnification imaging
Digital Image Warping.
Depth from focusing and defocusing.
Moment filters for high precision computation of focus and stereo.
Keywords: scene textures; texture invariance; depth estimation; blur function; depth confidence measure; normalized image ratio; real-time performance; passive depth from defocus; broadband rational operators
The Efficient Computation of Sparse Jacobian Matrices Using Automatic Differentiation

Abstract. This paper is concerned with the efficient computation of sparse Jacobian matrices of nonlinear vector maps using automatic differentiation (AD). Specifically, we propose the use of a graph coloring technique, bicoloring, to exploit the sparsity of the Jacobian matrix J and thereby allow for the efficient determination of J using AD software. We analyze both a direct scheme and a substitution process. We discuss the results of numerical experiments indicating significant practical potential of this approach.

Introduction
The efficient numerical solution of nonlinear systems of algebraic equations, F(x) = 0, usually requires the repeated calculation or estimation of the matrix of first derivatives, the Jacobian matrix J. In large-scale problems matrix J is often sparse and it is important to exploit
this fact in order to efficiently determine, or estimate, matrix J at a given argument x. This paper
is concerned with the efficient calculation of sparse Jacobian matrices by the judicious application of
automatic differentiation techniques. Specifically, we show how to define "thin" matrices V and W
such that the nonzero elements of J can easily be extracted from the calculated pair (W T J; JV ).
This research was partially supported by the Applied Mathematical Sciences Research Program (KC-04-02) of
the Office of Energy Research of the U.S. Department of Energy under grant DE-FG02-90ER25013, and in part by
the Advanced Computing Research Institute, a unit of the Cornell Theory Center which receives major funding from
the National Science Foundation and IBM Corporation, with additional support from New York State and members
of its Corporate Research Institute. This technical report also appears as Cornell Computer Science Technical Report
TR 95-1557.
y Computer Science Department and Center for Applied Mathematics, Cornell University, Ithaca NY 14850.
z Computer Science Department, Cornell University, Ithaca NY 14850.
Given an arbitrary n-by-t_V matrix V, the product JV can be directly calculated using automatic differentiation in the "forward mode"; given an arbitrary m-by-t_W matrix W, the product W^T J can be calculated using automatic differentiation in the "reverse mode", e.g., [11, 13].
The forward mode of automatic differentiation allows for the computation of the product JV in time proportional to t_V · ω(F), where ω(F) is the time required to evaluate F. This fact leads to the following practical question. Given the structure of a sparse Jacobian matrix J, how can a matrix V be chosen so that the nonzeros of J can easily be determined from the product JV? A good solution is offered by the sparse finite-differencing literature [4, 5, 6, 7, 8, 10] and adapted to the automatic differentiation setting [1]. Partition the columns of J into a set of groups G_C, where the number of groups in G_C is denoted by |G_C|, such that the columns in each group G ∈ G_C are structurally orthogonal^1. Each group G ∈ G_C defines a column v of V: v_i = 1 if and only if column i is in group G; otherwise, v_i = 0. It is clear that the nonzeros of J can be immediately "identified" from the computed product JV. Graph coloring techniques, applied to the column intersection graph of J, can be used to try and produce a partition G_C with low cardinality |G_C|. This, in turn, induces a thin matrix V with t_V = |G_C| columns, one per group. However, it is not always possible to ensure that |G_C| is small: consider a sparse matrix J with a single dense row.
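To make the column-partition idea concrete, the following sketch (Python, hypothetical names) greedily groups structurally orthogonal columns of a 0/1 sparsity pattern and builds the corresponding boolean seed matrix V; production codes first order the columns (e.g., by incidence degree) to reduce the number of groups.

```python
import numpy as np

def column_seed_matrix(pattern):
    """Greedy structurally orthogonal grouping of columns; returns V."""
    m, n = pattern.shape
    groups = []                       # union of row supports per group
    col_group = np.empty(n, dtype=int)
    for j in range(n):
        rows = set(np.nonzero(pattern[:, j])[0])
        for g, support in enumerate(groups):
            if support.isdisjoint(rows):          # structurally orthogonal
                support |= rows
                col_group[j] = g
                break
        else:
            groups.append(set(rows))
            col_group[j] = len(groups) - 1
    V = np.zeros((n, len(groups)))
    V[np.arange(n), col_group] = 1.0
    return V
```

Once JV is formed (here by the forward mode), each nonzero J_ij can be read off directly because no other column in column j's group has a nonzero in row i.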
Alternatively, the reverse mode of automatic differentiation allows for the computation of the product W^T J in time proportional to t_W · ω(F), where ω(F) is the time required to evaluate F and t_W is the number of columns of W. The "transpose" of the argument above can lead to an efficient way to determine J. That is, apply graph coloring techniques to the row intersection graph of J to induce a thin matrix W; compute W^T J via the reverse mode of automatic differentiation (this takes time proportional to t_W · ω(F)), and trivially extract the nonzeros of J from the computed matrix W^T J. Of course it is easy to construct examples where defining a thin matrix W is not possible - consider the case where J has a dense column.
Clearly there are problems where a row-oriented approach is preferable, there are problems
where a column-oriented approach is better. Unfortunately, it is easy to devise problems where
neither approach is satisfactory: let J have both a dense row and a dense column. This is exactly
when it may pay to use both modes of automatic differentiation simultaneously: compute a pair (W^T J, JV) for suitable choices of W and V, and extract the nonzero elements of J from this computed pair of thin matrices.
Our concern in this paper is with efficiency with respect to the number of floating point operations, or flops. We do not concern ourselves with space requirements in this study. However, it should
be noted that the reverse mode of automatic differentiation often requires significantly more space
than the forward mode: if space is tight then our suggested approach, which involves application
of both forward and reverse modes, may not be possible. There is current research activity on
reducing the space requirements of the reverse mode of automatic differentiation, e.g., [12].
We note that an independent proposal regarding sparse Jacobian calculation is made by Hossain
and Steihaug [15]: a graph-theoretic interpretation of the direct determination problem is given and
an algorithm based on this interpretation is provided. In this paper we proffer a new direct method
and we also propose a substitution method, both based directly on the Jacobian structure. We
compare our direct and substitution methods, numerically, and we discuss the round-off properties
of the substitution method. In addition, we interface our graph coloring software to the automatic
differentiator ADOL-C, [14], and report on a few preliminary computational results.
The remainder of the paper is organized as follows. In §2, we review the relevant aspects of automatic differentiation, both forward and reverse modes. In §3, we formalize the combinatorial problems to be solved both from a matrix point of view and in terms of graph theory. We propose both a direct determination problem and a substitution problem. In §4, we propose "bi-coloring" approaches to both the direct determination and "determination by substitution" problems. The bi-coloring technique produces matrices V and W where JV is subsequently determined via the forward mode of automatic differentiation, and W^T J is determined via the reverse mode. Typically, the column dimensions of V and W will be small: the cost of the application of automatic differentiation is proportional to the sum of the column dimensions of V and W (times the work to evaluate F). In §5 we present and discuss various numerical experiments. The experiments indicate that our bi-coloring approach can significantly reduce the cost of determining J (over one-sided Jacobian determination).
The substitution method we propose consistently outperforms the direct method. However, the substitution calculation increases the chance of round-off contamination. This effect is discussed in §6. We end the paper in §7 with some concluding remarks and observations on possible directions for future research. Specifically, we note that while sparsity is a symptom of underlying structure in a nonlinear problem, it is not a necessary symptom. Moreover, it is often possible to exploit structure in the absence of sparsity and apply AD tools "surgically" to efficiently obtain the Jacobian matrix J.
^1 Two nonzero n-vectors v, w are structurally orthogonal if v_i w_i = 0 for all i.
Basics of automatic differentiation
Automatic differentiation is a chain-rule-based technique for evaluating derivatives analytically (and hence without any truncation errors) with respect to the input variables of functions defined by a high-level language computer program. In this section we briefly review the basics of automatic differentiation, borrowing heavily from [11, 13].
A program computing the function F can be viewed as a sequence of scalar assignments v_i = ψ_i(·), where the vector v can be thought of as a set of ordered variables such that v_i is computed before v_j for i < j, and the arguments of ψ_i are drawn from the set of variables {v_k | k < i}. Here the ψ_j represent elementary functions, which can be arithmetic operations and/or univariate transcendental functions.
Ordering the variables as above, we can partition the variables v_j into three vectors: the n independent (input) variables x, the p intermediate variables, and the m dependent (output) variables y. In general, the number of intermediate variables p is much larger than the dimensions of the problem, m and n.
Assume that all these elementary functions ψ_i are well defined and have continuous elementary partial derivatives; let c_{ij} denote the partial derivative of the elementary function defining v_i with respect to v_j. Assuming without loss of generality that the dependent variables y do not themselves occur as arguments of elementary functions, we can combine the partials c_{ij} into the (p + m) × (n + p) matrix
C = [ c_{n+i,j} ],   1 ≤ i ≤ p + m,   1 ≤ j ≤ n + p.
Unless elementary functions with more than two arguments are included in the library, each row of C contains either one or two nonzero entries. We define the number q = nnz(C), the total number of nonzero entries of C. Also, since the work involved in an elementary function is proportional to its number of arguments, it follows that q and ω(F) agree up to a constant of proportionality. Because of the ordering relation (c_{n+i,j} = 0 for j ≥ n + i), C can be partitioned as
C = ( A  L )
    ( B  M ),
where A is p × n, B is m × n, M is m × p, and L is p × p and strictly lower triangular. Application of the chain rule yields:
( Δy )   ( A  L ) ( Δx )
( Δz ) = ( B  M ) ( Δy ),                                   (1.4)
where Δx, Δy and Δz denote perturbations of the input, intermediate and output variables, respectively. If we eliminate the intermediate vector Δy from (1.4), we get an expression (the Schur complement) for the Jacobian:
J = B + M (I − L)^{-1} A.                                    (1.5)
Since (I − L) is a unit lower triangular matrix, the calculation of the matrix products Ã = (I − L)^{-1} A and M̃ = M (I − L)^{-1} leads to two natural ways to compute J:
J = B + M Ã    or    J = B + M̃ A.                            (1.6)
The alternative expressions for J given in (1.6) define the two basic modes of automatic differentiation, forward and reverse.
The forward mode corresponds to computing the rows of Ã, one by one, as the corresponding rows of [A  L] are obtained from successive evaluation of the elementary functions. Since this amounts to the solution of n linear systems with the lower-triangular matrix (I − L), followed by multiplication of the dense columns of Ã by M, the total computational effort is roughly n · q, or n · ω(F).
The reverse mode corresponds to computing M̃ as the solution of the linear system (I − L)^T M̃^T = M^T. This back-substitution process can begin only after all elementary functions and their partial derivatives have been evaluated. Since this amounts to the solution of m linear systems with the lower-triangular matrix (I − L), followed by multiplication of the dense rows of M̃ by A, the total computational effort is roughly m · q, or m · ω(F).
We are interested in computing products of the form JV and W^T J. The product JV can be computed by propagating each of the t_V columns of V through the forward mode, which can clearly be done in time proportional to t_V · ω(F), since V is n × t_V. Analogously, the product W^T J can be computed by propagating each of the t_W columns of W through the reverse mode, which can be done in time proportional to t_W · ω(F), assuming W is m × t_W.
An important subcase worthy of special attention is when F is a scalar function, i.e., m = 1.
In this case the Jacobian matrix corresponds to the transpose of the gradient of F and is a single
row vector. Note that the complexity arguments applied to this case imply that the reverse mode
of AD yields the gradient in time proportional to ω(F) whereas the forward mode costs n · ω(F).
The efficiency of the forward mode evaluation of the gradient can be dramatically increased - i.e.,
the dependence on n is removed - if F has structure that can be exploited [2].
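As a toy illustration of the forward-mode recurrence used to form Jv, the sketch below propagates a single directional derivative with dual numbers; it is only a minimal model of the mechanism (not how ADOL-C works internally), and the example function is invented for illustration.

```python
class Dual:
    """Minimal forward-mode value: a number and a directional derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def jacobian_times_v(F, x, v):
    """One forward sweep yields J(x) v at a cost proportional to one
    evaluation of F."""
    xs = [Dual(xi, vi) for xi, vi in zip(x, v)]
    return [y.dot for y in F(xs)]

# example: F(x) = (x0*x1, x0 + 3*x1); seeding v = e1 extracts column 1 of J
F = lambda x: [x[0] * x[1], x[0] + 3 * x[1]]
print(jacobian_times_v(F, [2.0, 5.0], [1.0, 0.0]))   # -> [5.0, 1.0]
```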
Partition problems and graph theory
Our basic task is to efficiently determine thin matrices V; W so that the nonzero elements of J can
be readily extracted from the information (W T J; JV ). The pair of matrices (W T J; JV ) is obtained
from the application of both modes of automatic differentiation: matrix W T J is computed by the
reverse mode, the forward mode determines JV . The purpose of this section is to more rigorously
formulate the question of determining suitable matrices V; W , first in the language of "partitions",
and then using graph theoretic concepts.
We begin with an example illustrating the usefulness of simultaneously applying both modes of
AD, forward and reverse. Consider the following n-by-n Jacobian, symmetric in structure but not
in value: a matrix whose first row and first column are dense and whose remaining nonzeros lie on the diagonal.
It is clear that a partition of columns consistent with the direct determination of J requires n
groups. This is because a "consistent column partition" requires that each group contain columns
that are structurally orthogonal and the presence of a dense row implies each group consists of
exactly one column. Therefore, if matrix V corresponds to a "consistent column partition" then V
has n columns and the work to evaluate JV by the forward mode of AD is proportional to n \Delta !(F ).
By a similar argument, and the fact that a column of J is dense, a "consistent row partition"
requires n groups. Therefore, if matrix W corresponds to a "consistent row partition" then W has
rows and the work to evaluate W T J by the reverse mode of AD is proportional to n \Delta !(F ).
Definition 2.1 A bi-partition of a matrix A is a pair (GR ; GC ) where GR is a row partition of
a subset of the rows of A , GC is a column partition of a subset of the columns of A.
In this example the use of a bi-partition dramatically decreases the amount of work required to determine J. Specifically, the total amount of work required is proportional to 3 · ω(F). To see this, let W consist of the single column e_1 (covering the dense row) and let V consist of the two columns e_1 and e_2 + ··· + e_n, using the usual convention of representing the ith column of the identity matrix with e_i. Clearly the elements of the dense column and the diagonal elements are directly determined from the product JV; the elements of the dense row are directly determined from the product W^T J.
The basic idea is to partition the rows into a set of groups G_R and the columns into a set of groups G_C, with |G_R| + |G_C| as small as possible, such that every nonzero element of J can be directly determined from either a group in G_R or a group in G_C.
Definition 2.2 A bi-partition (GR ; GC ) of a matrix A is consistent with direct determination
if for every nonzero a ij of A, either column j is in a group of GC which has no other column having
a nonzero in row i, or row i is in a group of GR which has no other rows having a nonzero in column
j.
Clearly, given a bi-partition (G_R, G_C) consistent with direct determination, we can trivially construct boolean matrices W (of size m × |G_R|) and V (of size n × |G_C|) such that A can be directly determined from the pair (W^T A, AV).
If we relax the restriction that each nonzero element of J be determined directly then it is
possible that the work required to evaluate the nonzeroes of J can be further reduced. For example
we could allow for a "substitution" process when recovering the nonzeroes of J from the pair (W^T J, JV). Figures 2.1 and 2.2 illustrate that a substitution method can win over direct determination: Figure 2.1 corresponds to direct determination, Figure 2.2 corresponds to determination using substitution.
Figure 2.1: Optimal partition for direct method
In both cases, elements labelled with one marker are computed from the column grouping, i.e., calculated using the product JV; elements labelled with the other marker are calculated from the row grouping, i.e., calculated using the product W^T J. The matrix in Figure 2.1 indicates that we can choose G_C and G_R with |G_C| + |G_R| = 3 and determine all elements directly. Therefore in this case the work to compute J satisfies ω(J) ≤ 3 · ω(F). Note that some elements can be determined twice, e.g., J_11.
However, the matrix in Figure 2.2 shows how to obtain the nonzeroes of J, using substitution, in work proportional to 2 · ω(F).
Figure 2.2: Optimal partition for substitution method
Let q = Jv be the (forward) computed vector and p^T = w^T J the (reverse) computed row vector, where v and w are the boolean vectors defined by the single column group and the single row group, respectively. Most of the nonzero elements are determined directly (no conflict). The remaining elements can be resolved, in turn, by substitution.
It is easy to extend this example so that the difference between the number of groups needed,
between substitution and direct determination, increases with the dimension of the matrix. For
example, a block generalization is illustrated in Figure 2.3: if we assume l ? 2w it is straightforward
to verify that in the optimal partition the number of groups needed for direct determination will
be 3w and determination by substitution requires 2w groups.
Definition 2.3 A bi-partition (GR ; GC ) of a matrix A is consistent with determination by
substitution, if there exists an ordering - on elements a ij , such that for every nonzero a ij of A,
either column j is in a group where all nonzeros in row i, from other columns in the group, are
ordered lower than a ij , or row i is in a group where all the nonzeros in column j, from other rows
in the group, are ordered lower than a ij .
Figure 2.3: Block example
In the usual way we can construct a matrix V from the column grouping G_C and a matrix W from the row grouping G_R: for example, to construct the columns of V, associate with each group in G_C a boolean vector, with unit entries indicating membership of the corresponding columns. We
can now state our main problem(s) more precisely:
The bi-partition problem (direct): Given a matrix A, obtain a bi-partition (G_R, G_C) consistent with direct determination, such that the total number of groups, |G_R| + |G_C|, is minimized.
The bi-partition problem (substitution): Given a matrix A, obtain a bi-partition (G_R, G_C) consistent with determination by substitution, such that the total number of groups, |G_R| + |G_C|, is minimized.
The bi-partition problems can also be expressed in terms of graphs and graph coloring. This
graph view is important in that it more readily exposes the relationship of the bi-partition problems
with the combinatorial approaches used in the sparse finite-differencing literature, e.g., [4, 5, 6, 7, 8].
However, we note that the remainder of this paper, with the exception of the error analysis in §6,
does not rely directly on this graph interpretation.
To begin, we need the usual notion of a coloring of the vertices of a graph, the definition of a
bipartite graph, and the concept of path coloring [4, 7] specialized to the bipartite graph case.
A p-coloring of a graph G = (V, E), where V is the set of vertices or nodes and E is the set of edges, is a function φ: V → {1, ..., p} such that φ(u) ≠ φ(v) if (u, v) ∈ E. The chromatic number χ(G) is the smallest p for which G has a p-coloring. A p-coloring φ of G induces a partition of the vertices into groups, one group per color.
Given a matrix A ∈ R^{m×n}, define a bipartite graph G_b(A) = (V_1, V_2, E), where vertex c_j corresponds to the jth column of A and vertex r_i corresponds to the ith row of A; the column vertices and the row vertices form the two vertex sets. There is an edge (r_i, c_j) ∈ E if and only if A_ij is a nonzero in A.
In [4, 7] a path p-coloring of a graph is defined to be a vertex coloring using p colors with
the additional property that every path of at least 3 edges uses at least 3 colors. Here we need a
slight modification of that concept appropriate for the direct determination problem. We note that
"color 0" is distinguished in that it corresponds to the lack of a true color assignment: i.e.,
indicates that vertex i is not assigned a color.
Definition 2.4 Let G_b = (V_1, V_2, E) be a bipartite graph. A mapping φ: V_1 ∪ V_2 → {0, 1, ..., p} is a bipartite path p-coloring of G_b if
1. Adjacent vertices have different assignments, i.e., φ(i) ≠ φ(j) if (i, j) ∈ E.
2. The set of positive colors used by vertices in V_1 is disjoint from the set of positive colors used by vertices in V_2.
3. If vertices i and j are both adjacent to a vertex k with φ(k) = 0, then φ(i) ≠ φ(j).
4. Every path of 3 edges uses at least 3 colors.
The smallest p for which graph G_b is bipartite path p-colorable is denoted by χ_p(G_b).
Figure
2.4 shows a valid bipartite path p-coloring. Numbers adjacent to the vertices denote colors.
We note that Hossain and Steihaug [15] define a similar concept. However, their definition of path
p-coloring does not allow for the "uncolor assignment", i.e., φ(v) = 0. Consequently, a technique to
remove empty groups is needed [15].
We are now in position to state the graph analogy to the concept of a bi-partition consistent
with direct determination.
Theorem 2.1 Let A be an m × n matrix with corresponding bipartite graph G_b(A) = (V_1, V_2, E). A mapping φ: V_1 ∪ V_2 → {0, 1, ..., p} induces a bi-partition (G_R, G_C), with one group per positive color, that is consistent with direct determination if and only if φ is a bipartite path p-coloring of G_b(A).
Proof. (⇐) Assume that φ is a bipartite path p-coloring of G_b(A), inducing a bi-partition (G_R, G_C) of rows and columns of A. If this bi-partition is not consistent with direct determination, then there is a nonzero element a_ij in the matrix for which the definition "either column j is in a group of G_C which has no other column having a nonzero in row i, or row i is in a group of G_R which has no other rows having a nonzero in column j" doesn't hold. This can happen only if one of the following cases hold:
(i) φ(r_i) = 0 and there exists a column q with a_iq ≠ 0 such that φ(c_j) = φ(c_q); this contradicts condition 3.
(ii) φ(c_j) = 0 and there exists a row p with a_pj ≠ 0 such that φ(r_i) = φ(r_p); this contradicts condition 3.
(iii) There exist a column q and a row p such that columns j and q are in the same group with a_iq ≠ 0, and rows i and p are in the same group with a_pj ≠ 0. This implies φ is a 2-coloring of the path (c_q, r_i, c_j, r_p), a contradiction of condition 4.
Figure 2.4: A valid bipartite path coloring
(⇒) Conversely, assume that φ induces a bi-partition consistent with direct determination of A. It is clear that conditions 1-3 must be satisfied. It remains for us to establish condition 4: i.e., every path of 3 edges uses at least 3 colors. Suppose there is a bi-colored path of 3 edges, say (c_i, r_j, c_k, r_l); by condition 3 the two colors on this path are positive. It is easy to see that element a_jk cannot be determined directly: there is a conflict in the row group φ(r_j) (since φ(r_l) = φ(r_j) and a_lk ≠ 0), and there is a conflict in the column group φ(c_k) (since φ(c_i) = φ(c_k) and a_ji ≠ 0); these are the only two chances to determine a_jk.
To capture the substitution notion, the cyclic p-coloring definition [4] is modified slightly and applied to a bipartite graph.
Definition 2.5 Let A be an m × n matrix with corresponding bipartite graph G_b(A) = (V_1, V_2, E). A mapping φ: V_1 ∪ V_2 → {0, 1, ..., p} is a bipartite cyclic p-coloring of G_b if
1. Adjacent vertices have different assignments, i.e., φ(i) ≠ φ(j) if (i, j) ∈ E.
2. The set of positive colors used by vertices in V_1 is disjoint from the set of positive colors used by vertices in V_2.
3. If vertices i and j are both adjacent to a vertex k with φ(k) = 0, then φ(i) ≠ φ(j).
4. Every cycle uses at least 3 colors.
The smallest p for which graph G_b is bipartite cyclic p-colorable is denoted by χ_c(G_b).
Figure 2.5 shows a valid bipartite cyclic p-coloring; note that only 2 colors are necessary, whereas the bipartite path p-coloring in Figure 2.4 requires 3 colors.
Figure 2.5: A valid bipartite cyclic coloring
The notion of a bi-partition consistent with determination via substitution can now be cleanly stated in graph-theoretic terms.
Theorem 2.2 Let A be an m × n matrix with corresponding bipartite graph G_b(A) = (V_1, V_2, E). A mapping φ: V_1 ∪ V_2 → {0, 1, ..., p} induces a bi-partition (G_R, G_C), with one group per positive color, that is consistent with determination by substitution if and only if φ is a bipartite cyclic p-coloring of G_b(A).
Proof. (⇒) Assume φ induces a bi-partition consistent with determination by substitution but φ is not a bipartite cyclic p-coloring of G_b(A). Clearly conditions 1 and 2 must hold; it is easy to see that if condition 3 does not hold then not all nonzero elements can be determined. The only non-trivial violation is condition 4: there is a cycle which uses only two colors, i.e. all the row vertices in the cycle have the same color c_1, and all the column vertices in the cycle have the same color c_2. Note that neither c_1 nor c_2 can be equal to 0, since a node colored 0 in a cycle would imply
that its adjacent vertices are both colored differently, implying that there are at least 3 colors.
Consider the submatrix A s of A, corresponding to this cycle. Submatrix A s has at least two non
zeros in each row and in each column, since each vertex has degree 2 in the cycle. But since we are
considering substitution methods only, at least one element of A s needs to be computed directly.
Clearly there is no way to get any element of this submatrix directly, a contradiction.
(⇐) Conversely, assume that φ is a bipartite cyclic p-coloring of G_b(A) but that the bi-partition induced by φ is not consistent with determination by substitution. But, edges (nonzeros)
with one end assigned color "0" can be determined directly: by the definition of bi-coloring there
will be no conflict. Moreover, every pair of positive colors induces a forest (i.e., a collection of
trees); therefore, the edges (nonzeros) in the induced forest can be resolved via substitution [4].
The two bi-partition problems can now be simply stated in terms of optimal bipartite path and
cyclic p-colorings:
The bipartite path p-coloring problem: Determine a bipartite path p-coloring of G_b(A) with the smallest possible value of p, i.e., p = χ_p(G_b(A)).
The bipartite cyclic p-coloring problem: Determine a bipartite cyclic p-coloring of G_b(A) with the smallest possible value of p, i.e., p = χ_c(G_b(A)).
The graph theoretic view is useful for both analyzing the complexity of the combinatorial
problem and suggesting possible algorithms, exact or heuristic. In fact, using the p-coloring notions discussed above, and an approach similar to that taken in [4], it is easy to show that the corresponding decision problems are NP-complete.
Bipartite cyclic p-coloring decision problem (CCDP): Given an integer p - 3 and an arbitrary
bipartite graph G, is it possible to assign a cyclic p-coloring to nodes of G?
Bipartite path p-coloring decision problem (PCDP) : Given an integer p - 3 and an arbitrary
bipartite graph G, is it possible to assign a bipartite path p-coloring to nodes of G?
The proofs are a straightforward adaptation of those in [4] and we omit them here. The upshot
of these (negative) complexity results is that in practice we must turn our attention to (fast)
heuristics to approximately solve the cyclic and path coloring problems. In the next section we
present simple, effective, and "easy-to-visualize" heuristics for these two combinatorial problems.
Finally, it is easy to establish a partial ordering of chromatic numbers:
χ_c(G_b(A)) ≤ χ_p(G_b(A)) ≤ min{ χ(G(A)), χ(G(A^T)) },          (2.2)
where G(M) refers to the column intersection graph of matrix M, and χ(G(M)) is the (usual) chromatic number of graph G(M).
The first inequality in (2.2) holds because if OE is a bipartite path p-coloring then OE is a bipartite
cyclic p-coloring; the second inequality holds because a trivial way to satisfy conditions 1-4 of
Definition 2.4 is to assign "0" to all the row (column) nodes and then use positive colors on all
the column (row) nodes to satisfy condition 3. This ordering supports the tenet that use of bi-
partition/bi-coloring is never worse than one-sided calculation and that a substitution approach is
never worse than a direct approach (in principle).
Bi-coloring
The two combinatorial problems we face, corresponding to direct determination and determination
by substitution, can both be approached in the following way. First, permute and partition the structure of J: J̃ = PJQ, as indicated in Figure 3.1. The construction of this partition is crucial; however, we postpone that discussion until after we illustrate its utility. Assume J̃ is partitioned into two regions, J_C and J_R.
Figure 3.1: Possible partitions of the matrix J̃
R based on the partition [J C jJ R ]; a coloring
of G I
C yields a partition of a subset of the columns, GC , which defines matrix V . Matrix W is
defined by a partition of a subset of rows, GR , which is given by a coloring of G I
R . We call this
double coloring approach bi-coloring. The difference between the direct and substitution cases is
in how the intersection graphs, G I
R , are defined, and how the nonzeroes of J are extracted from
the pair (W T J; JV ).
3.1 Direct determination
In the direct case the intersection graph G_C^I = (V_C, E_C^I) is defined on the columns of J: (r, s) ∈ E_C^I if and only if ∃k such that J_kr ≠ 0, J_ks ≠ 0, and either (k, r) ∈ J_C or (k, s) ∈ J_C. The key point in the construction of graph G_C^I, and why G_C^I is distinguished from the usual column intersection graph, is that columns r and s are said to intersect if and only if their nonzero locations overlap, in part, in J_C: i.e., columns r and s intersect if J_kr · J_ks ≠ 0 and either (k, r) ∈ J_C or (k, s) ∈ J_C, for some k.
The "transpose" of the procedure above is used to define G I
R ). Specifically, G I
I
R if nnz(row
If M is a matrix or a vector then "nnz(M)" is the number of nonzeroes in M
In this case the reason graph G I
R is distinguished from the usual row intersection graph is that
rows r and s are said to intersect if and only if their nonzero locations overlap, in part, in JR : rows
r and s intersect if J rk \Delta J
The bi-partition (G_R, G_C), induced by the colorings of graphs G_R^I and G_C^I, is consistent with direct determination of J. To see this, consider a nonzero element (i, j) with (i, j) ∈ J_C: column j is in a group of G_C (corresponding to a color) with the property that no other column in that group has a nonzero in row i; hence, element (i, j) can be directly determined. Analogously, consider a nonzero element (r, s) with (r, s) ∈ J_R: row r will be in a group of G_R (corresponding to a color) with the property that no other row in that group has a nonzero in column s; hence, element (r, s) can be directly determined. Since every nonzero of J is covered, the result follows.
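A sketch of how G_C^I can be built and colored for the direct scheme is given below (Python; the `in_JC` mask marking the J_C region and the natural-order greedy coloring are simplifications - our experiments use the incidence degree ordering). The row graph G_R^I is handled analogously on the transpose.

```python
import numpy as np

def color_columns_restricted(pattern, in_JC):
    """Columns r, s are adjacent if some row k has nonzeros in both and at
    least one of (k, r), (k, s) lies in the J_C region; color greedily."""
    m, n = pattern.shape
    adj = [set() for _ in range(n)]
    for k in range(m):
        cols = np.nonzero(pattern[k])[0]
        for a in range(len(cols)):
            for b in range(a + 1, len(cols)):
                r, s = cols[a], cols[b]
                if in_JC[k, r] or in_JC[k, s]:
                    adj[r].add(s)
                    adj[s].add(r)
    color = {}
    for v in range(n):
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(n + 1) if c not in used)
    return color          # column groups G_C, one group per color
```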
Example: Consider the example Jacobian matrix structure shown in Figure 3.2, with the partition [J_C | J_R] indicated.
Figure 3.2: Example partition
The graphs GC and GR formed by the algorithm outlined above are given in Figure 3.3. Coloring
GC requires 3 colors, while GR can be colored in two. Boolean matrices V and W can be formed in
the usual way: each column corresponds to a group (or color) and unit entries indicate column (or row) membership in that group. Clearly, all nonzero entries of J can be identified in either JV or W^T J.
Figure 3.3: Graphs G_C and G_R (direct approach)
3.2 Determination by substitution
The basic advantage of determination by substitution, used in conjunction with the partition [J_C | J_R], is that sparser intersection graphs G_C^I and G_R^I can be used. Sparser intersection graphs mean thinner matrices V, W which, in turn, result in reduced cost.
In the substitution case the intersection graph G_C^I = (V_C, E_C^I) is defined: (r, s) ∈ E_C^I if and only if ∃k such that J_kr ≠ 0, J_ks ≠ 0, and both (k, r) ∈ J_C and (k, s) ∈ J_C. Note that this intersection graph captures the notion of two columns intersecting if there is overlap in nonzero structure in J_C: columns r and s intersect if J_kr · J_ks ≠ 0 and both (k, r) ∈ J_C and (k, s) ∈ J_C, for some k. It is easy to see that E_C^I is a subset of the set of edges used in
the direct determination case.
The "transpose" of the procedure above is used to define G I
R ). Specifically, G I
I
R if row i 2 JR and nnz(row
The intersection graph G I
R ) captures the notion of two rows intersecting if there is overlap
in nonzero structure in JR : rows r and s intersect if J rk \Delta J
for some k. It is easy to see that E I
R is a subset of the set of edges used in direct determination.
All the elements of J can be determined from (W T J; JV ) by a substitution process. This is
evident from the illustrations in Figure 3.4.
Figure 3.4 illustrates two of the four possible nontrivial types of partitions. In both cases it is clear
that nonzero elements in the section labelled "1" can be solved for directly - by the construction
process they will be in different groups. Nonzero elements in "2" can either be determined directly,
or will depend on elements in section "1". But elements in section "1" are already determined
(directly) and so, by substitution, elements in "2" can be determined after "1". Elements in
section "3" can then be determined, depending only on elements in "1" and "2", and so on until
the entire matrix is resolved.
Figure 3.4: Substitution orderings
Example. Consider again the example Jacobian matrix structure shown in Figure 3.2.
Column and row intersection graphs corresponding to substitution are given in Figure 3.5. Note
that GC is disconnected and requires 2 colors; GR is a simple chain and also requires 2 colors.
Figure 3.5: Graphs G_C and G_R for the substitution process
The coloring of G_C and G_R leads to the corresponding matrices V and W and the resulting products JV and W^T J.
It is now easy to verify that all nonzeroes of J can be determined via substitution.
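To illustrate how the nonzeros are recovered once JV and W^T J are available, the sketch below resolves entries directly where there is no conflict and otherwise by repeated substitution; it assumes per-column/per-row group indices (−1 for uncolored) and is a simplified stand-in for the ordered substitution our implementation uses.

```python
import numpy as np

def recover_by_substitution(pattern, col_group, row_group, JV, WTJ):
    """Recover nonzeros of J from the compressed products JV and W^T J."""
    m, n = pattern.shape
    J = {}
    unresolved = {(i, j) for i, j in zip(*np.nonzero(pattern))}
    while unresolved:
        progress = set()
        for (i, j) in unresolved:
            g = col_group[j]
            if g >= 0:
                others = [q for q in np.nonzero(pattern[i])[0]
                          if q != j and col_group[q] == g]
                if all((i, q) in J for q in others):
                    J[(i, j)] = JV[i, g] - sum(J[(i, q)] for q in others)
                    progress.add((i, j)); continue
            h = row_group[i]
            if h >= 0:
                others = [p for p in np.nonzero(pattern[:, j])[0]
                          if p != i and row_group[p] == h]
                if all((p, j) in J for p in others):
                    J[(i, j)] = WTJ[h, j] - sum(J[(p, j)] for p in others)
                    progress.add((i, j))
        if not progress:
            raise ValueError("bi-partition not consistent with substitution")
        unresolved -= progress
    return J
```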
3.3 How to partition J.
We now consider the problem of obtaining a useful partition [J C jJ R ], and corresponding permutation
matrices P ,Q, as illustrated in Figure 3.1. A simple heuristic is proposed based on the
knowledge that the subsequent step, in both the direct and the substitution method, is to color
intersection graphs based on this partition.
Algorithm MNCO builds partition J_C from the bottom up, and partition J_R from right to left. At the kth major iteration either a new row is added to J_C or a new column is added to J_R: the choice depends on comparing the effect on a lower bound, ρ(J_C ∪ {row r}) versus ρ((J_R ∪ {column c})^T), where ρ(A) is the maximum number of nonzeroes in any row of matrix A, r is a row under consideration to be added to J_C, and c is a column under consideration to be added to J_R. Hence, the number of colors needed to color G_C^I is bounded below by ρ(J_C); the number of colors needed to color G_R^I is bounded below by ρ(J_R^T).
In algorithm MNCO, matrix M = M(R, C) is the submatrix of J defined by row indices R and column indices C: M consists of rows and columns of J not yet assigned to either J_C or J_R.
Minimum Nonzero Count Ordering (MNCO)
1. Initialize J_C and J_R to be empty; R = {1, ..., m}, C = {1, ..., n}.
2. Find r ∈ R with fewest nonzeros in M(R, C).
3. Find c ∈ C with fewest nonzeros in M(R, C).
4. Repeat until R and C are empty:
   if ρ(J_C ∪ {row r}) ≤ ρ((J_R ∪ {column c})^T)
      add row r to J_C;  R = R − {r}
   else
      add column c to J_R;  C = C − {c}
   return to step 2.
Note that, upon completion, JR ; JC have been defined; the requisite permutation matrices are
implicitly defined by the ordering chosen in MNCO.
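A compact sketch of MNCO follows, under one reading of the comparison in step 4 (the lower bound for a candidate move is taken as the larger of the current bound and the candidate's nonzero count within the remaining submatrix); the names and tie-breaking are illustrative, not the exact implementation.

```python
import numpy as np

def mnco(pattern):
    """Assign each row to J_C or each column to J_R, greedily keeping the
    lower bounds rho(J_C) and rho(J_R^T) small."""
    m, n = pattern.shape
    R, C = set(range(m)), set(range(n))
    JC_rows, JR_cols = [], []
    rho_C = rho_R = 0
    while R or C:
        rows, cols = sorted(R), sorted(C)
        r = min(rows, key=lambda i: pattern[np.ix_([i], cols)].sum()) if rows else None
        c = min(cols, key=lambda j: pattern[np.ix_(rows, [j])].sum()) if cols else None
        bound_r = max(rho_C, int(pattern[np.ix_([r], cols)].sum())) if r is not None else np.inf
        bound_c = max(rho_R, int(pattern[np.ix_(rows, [c])].sum())) if c is not None else np.inf
        if bound_r <= bound_c:
            JC_rows.append(r); R.remove(r); rho_C = bound_r
        else:
            JR_cols.append(c); C.remove(c); rho_R = bound_c
    return JC_rows, JR_cols   # rows assigned to J_C, columns assigned to J_R
```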
Bi-coloring performance
In this section we present results of numerical experiments. The work required to compute the sparse Jacobian matrix is the work needed to compute (W^T J, JV) which, in turn, is proportional to the work to evaluate the function F times the sum of the column dimensions of the boolean matrices V and W. This column dimension sum, t_V + t_W, is equal to the number of colors used in the bi-coloring. In our experiments we compare the computed coloring numbers
required for the direct and substitution approaches. We also compute the number of colors required
by one-sided schemes: a column partition alone corresponds to the construction of V based on
coloring the column intersection graph of J , a row partition alone corresponds to the construction
of W based on coloring the row intersection graph of J . The latter case leads to the application of
the reverse mode of AD (alone), whereas the former case leads to use of the forward mode.
Both the direct and substitution methods require colorings of their respective pairs of intersection graphs, G_C^I and G_R^I. Many efficient graph coloring heuristics are available: in our experiments we use the incidence degree (ID) ordering [3, 8].
We use three sources of test matrices: a linear programming testbed with results reported
in Table 1 and summarized in Table 2; the Harwell-Boeing sparse matrix collection, with results
reported in Tables 3 and 4; and self-generated m-by-n "grid matrices" with results given in Tables 5 and 6.
A grid matrix is constructed in the following way. First, approximately √n of the columns are
chosen, spaced uniformly. Each chosen column is randomly assigned DENS · m nonzeroes. Second,
approximately √m of the rows are chosen, spaced uniformly. Each chosen row is randomly assigned
DENS · n nonzeroes. We vary DENS as recorded in Table 5.
For each problem we cite the dimensions of the matrix A and the number of nonzeros(nnz).
The experimental results we report are the number of colors required by our bi-coloring approach,
both direct and substitution, and the number of colors required by one-sided schemes.
Bi-coloring One-sided
Name m n nnz Direct Substitution column row
standata
stair 356 620 4021 36 29 36 36
blend 74 114 522
vtp.base 198 347
agg 488 615 2862 19 13 43 19
agg2 516 758 4740 26 21 49 43
agg3 516 758 4756 27 21 52 43
bore3d 233 334 1448 28 24 73 28
israel 174 316 2443 61
adlittle 56 138
Table 1: LP Constraint Matrices (http://www.netlib.org/lp/data/)
Table 2: Totals for LP Collection
Bi-coloring 1-sided Coloring
Name M N NNZ Direct Substitution column row
cannes 256 256 256 2916
cannes 268 268 268 1675
cannes
cannes 634 634 634 7228 28 21 28 28
cannes 715 715 715 6665 22
cannes 1054 1054 1054 12196 31 23
cannes 1072 1072 1072 12444
chemimp/impcolc 137 137 411 6 4 8 9
chemimp/impcold 425 425 1339 6 5 11 11
chemimp/impcole 225 225 1308 21 14
chemwest/west0067 67 67 294 9 7 9 12
chemwest/west0381 381 381 2157 12 9 29 50
chemwest/west0497 497 497 1727 22 19 28 55
Table 3: The Harwell-Boeing collection (ftp from orion.cerfacs.fr)
4.1 Observations
First, we observe that the bi-coloring approach is often a significant win over one-sided determination.
Occasionally, the improvement is spectacular, e.g., "cannes 715". Improvements on the
Harwell-Boeing problems are generally more significant than on the LP collection, in the sense that
bi-coloring significantly outperforms both one-sided possibilities. This is partially due to the fact
that the matrices in the LP collection are rectangular whereas the matrices in the Harwell-Boeing
collection are square: calculation of the nonzeroes of J from W T J alone can be quite attractive
when J has relatively few rows. The grid collection displays the advantage of bi-coloring to great
effect - grid matrices are ideal bi-coloring candidates.
In general the advantage of substitution over direct determination is not as great as the difference
between bi-coloring and one-sided determination. Nevertheless, fewer colors are almost always
needed and for expensive functions F this can be important. For most problems the gain is about
20% though it can approach 50%, e.g., "watt2".
Table 4: Totals for Harwell-Boeing Collection
Bi-coloring 1-sided coloring
M N DENS Direct Substitution column row
100 100 0.52 20 20 84 74
100 100 0.64 20 20 95 93
100 100 1.00 20 20 100 100
100 400 0.53
100 400 0.64
100 400 1.00
Table 5: Grid Matrices
4.2 Interface with ADOL-C
We have interfaced our coloring and substitution routines with the ADOL-C software. The C++
package ADOL-C [14] facilitates the evaluation of first and higher order derivatives of vector
functions defined by programs written in C or C++.
We compare the time needed on a sample problem with respect to five approaches:
- AD/bi-coloring (direct)
- AD/bi-coloring (substitution)
- AD/column coloring (forward mode)
- AD/row coloring (reverse mode)
- FD (sparse finite differencing based on column coloring)
The test function F we use is a simple nonlinear function: define N_i to be the index set of nonzeroes in row i of the Jacobian matrix, and define F_i as a nonlinear function of the variables x_j with j in N_i.
Table 6: Totals for Grid Matrices
The Jacobian matrix (and thus the sparsity pattern) is a 10-by-3 block version of Figure 2.3. Problem dimensions, as indicated in Figure 4.1, were used in the experiments.
Our results, portrayed in Figure 4.1, suggest a consistent ordering of the execution times of the different techniques.
Note that FD requires more time than AD/column even though the same coloring is used for both.
This is because the work estimate t_V · ω(F) is actually an upper bound on the work required by
the forward mode, where t_V is the number of columns of V. This bound is often loose in practice,
whereas for finite differencing the subroutine to evaluate F is actually called once per column of V.
Figure 4.1: A comparison of different sparse techniques (execution time in seconds versus problem size, for AD Column, AD Row, FD, AD-BiColoring-Direct, and AD-BiColoring-Substitution)
Another interesting observation is that the reverse mode calculation (AD/Row) is about twice
as expensive as the forward calculation (AD/Column). This is noteworthy because in this example,
based on the structure of Figure 2.1, the column dimensions of V and W are equal. This suggests
that it may be practical to weigh the cost of the forward calculation of Jv against the calculation
of w^T J, where w, v are vectors. We comment further on this aspect in §7.
5 Substitution and round-off
In general, the substitution approach requires fewer colors and therefore is more efficient 3, in
principle, than direct determination. However, there is a possibility of increased round-off error due
to the substitution process. In fact, an analogous issue arises in the sparse Hessian approximation
context [4, 7, 16] where, indeed, there is considerable cause for concern. The purpose of this
section is to examine this question in the AD context. The bottom line here is that there is less to
worry about in this case. In the sparse Hessian approximation case significant error growth occurs
when the finite-difference step size varies over the set of finite-difference directions; however, in our
current setting this is not an issue since the "step size" is equal to unity in all cases.
First we consider the number of substitutions required to determine any nonzero of J from
(W^T J, JV), where V and W are chosen using our substitution strategy. There is good news:
similar to the sparse Hessian approximation situation [4, 7, 16], the number of dependencies, or
substitutions, needed to resolve a nonzero of J can be bounded above by (m + n - 2)/2.
Theorem 5.1 Let φ be a bipartite cyclic p-coloring of G_b(J). Then φ corresponds to a substitution
determination of J and each unknown in J is dependent on at most m + n - 2 unknowns. Moreover,
it is possible to order the calculations so that the maximum number of substitutions is less than or
equal to (m + n - 2)/2.
Proof: First, edges (nonzeros) with one end assigned color "0" can be determined directly: by
the definition of bi-coloring there will be no conflict. Second, every pair of positive colors induces a
forest (i.e., a collection of trees) in G_b(J); therefore, the edges (nonzeros) in the induced forest can
be resolved via substitution [4]. Hence, all edges (nonzeros) can be resolved either directly or by
a substitution process, and the worst case corresponds to a tree on m + n vertices, yielding an
upper bound of m + n - 2 substitutions. However, it is easy to see that the substitution calculations
can be ordered, working in from both ends, to halve the worst-case bound, yielding at most (m + n - 2)/2
substitutions.
Next we develop an expression to bound the error in the computed Jacobian. Except for the
elements that can be resolved directly, the nonzero elements of the Jacobian matrix can be solved
for by considering each subgraph induced by 2 positive colors (directions), one color corresponding
to a subset of rows, one color corresponding to a subset of columns. Let us look at the subgraph
G_{p,q} induced by colors p (columns) and q (rows). Let y^p = Jv^p and z^q = (w^q)^T J, let R_q be the set of rows
colored q, and let C_p be the set of columns colored p, where v^p and w^q are the columns of V and W corresponding to colors p and q.
Let ỹ^p and z̃^q denote the quantities computed via AD. Note that the errors introduced here are only due to the
automatic differentiation process and are typically very small.
In the solution process an element J_ij is determined via
J_ij = ỹ^p_i - Σ_{k in N(r_i), k ≠ j} J_ik      (5.1)
or
J_ij = z̃^q_j - Σ_{k in N(c_j), k ≠ i} J_kj,      (5.2)
depending on whether a column equation (of form Jv) or a row equation (of form w^T J) is used.
Here, N(r_i) denotes the set of neighbours of row i in G_{p,q}, and N(c_j) denotes the set of neighbours of
column j in G_{p,q}.
3 Of course a substitution method does incur the extra cost of performing the substitution calculation. However,
this can be done very efficiently and the subsequent cost is usually negligible.
Assume that J^actual denotes the actual Jacobian matrix; hence,
J^actual_ij = y^p_i - Σ_{k in N(r_i), k ≠ j} J^actual_ik,
or
J^actual_ij = z^q_j - Σ_{k in N(c_j), k ≠ i} J^actual_kj.
Define an error matrix E = J - J^actual, and let ε_ij be the difference ỹ^p_i - y^p_i or z̃^q_j - z^q_j,
depending on the way element J_ij was computed.
We take into account the effect of rounding errors by letting ε̄_ij be equal to ε_ij plus the
contribution from rounding errors due to use of the equation that determines J_ij. We can now write
E_ij = ε̄_ij - Σ_{k in N(r_i), k ≠ j} E_ik   or   E_ij = ε̄_ij - Σ_{k in N(c_j), k ≠ i} E_kj,
again, depending on the way J_ij is calculated.
Moreover, we let ε_max be the constant ε_max = max_{i,j} |ε̄_ij|.
Note that ε_max has no contribution from step sizes, unlike results for finite differencing [4, 16].
Theorem 5.2 If J is obtained by our substitution process then
|J_ij - J^actual_ij| <= (1 + (m + n - 2)/2) ε_max   for all (i, j).
Proof: From equations (5.1) and (5.2),
E_ij = ε̄_ij - Σ_k E_ik   or   E_ij = ε̄_ij - Σ_k E_kj.
Let us assume, without loss of generality, that equation (5.1) holds. This implies a bound
|E_ij| <= |ε̄_ij| + Σ_k |E_ik|.
But the same decomposition can be applied recursively to each E_ik, and using Theorem 5.1, the
result follows.
There are two positive aspects to Theorem 5.2. First, unlike the sparse finite-difference substitution
method for Hessian matrices [4, 7, 16], there is no dependence on a variable "step size": in AD
the "step size" is effectively uniformly equal to unity. Second, there is no cumulative dependence
on nnz(J) but rather just on the matrix dimensions, m + n. However, there is one unsatisfactory
aspect of the bound in Theorem 5.2: the bound is expressed in terms of ε_max, but ε_max is not
known to be restricted in magnitude. A similar situation arises in [4, 7, 16]. Nevertheless, as
illustrated in the example discussed below, ε_max is usually modest in practice.
We conclude this section with a small experiment where we inspect final accuracies of the
computed Jacobian matrices. The test function F is a simple nonlinear function as described in
x4.2.
In Table 7, "FD1" is the sparse finite difference computation [8] using a fixed stepsize, and
"FD2" refers to the sparse finite difference computation [8] using a variable stepsize: the stepsize α is
varied uniformly over the finite-difference directions. The column labelled "Rel error" records ||ERR||_2, where the
nonzeros of ERR correspond to the nonzeros of J and record the relative errors of the computed entries
with respect to J^actual.
The general trends we observe are the following. First, similar to the results reported in [1]
for forward-mode direct determination, the Jacobian matrices determined by our bi-coloring/AD
approach are significantly and uniformly more accurate than the finite-difference approximations.
This is true for both direct determination and the substitution approach. Second, the direct
approach is uniformly more accurate than the substitution method; however, the Jacobian matrices
determined via substitution are sufficiently accurate for most purposes, achieving at least 10 digits
of accuracy and usually more. Finally, on these problems there is relatively little difference in
accuracy between the fixed step method and the variable step method. However, as
illustrated in [6], it is easy to construct examples where the variable step approach produces
unacceptable accuracy.
Direct Substitution FD1 FD2
size Rel error Rel error Rel error Rel error
Table 7: Errors (sample nonlinear problem)
6 Concluding remarks
We have proposed an effective way to compute a sparse Jacobian matrix, J , using automatic
differentiation. Our proposal uses a new graph technique, bi-coloring, to divide the differentiation
work between the two modes of automatic differentiation, forward and reverse. The forward mode
computes the product JV for a given matrix V ; the reverse mode computes the product W T J
for a given matrix W . We have suggested ways to choose thin matrices V; W so that the work
to compute the pair (W T J; JV ) is modest and so that the nonzero elements of J can be readily
extracted.
Our numerical results strongly support the view that bi-coloring/AD is superior to one-sided
computations (both AD and FD) with respect to the order of work required. Of course AD
approaches offer additional advantages over FD schemes: significantly better accuracy, no need to
heuristically determine a step size rule, and the sparsity pattern need not be determined a priori
[14].
Implicit in our approach is the assumption that the cost to compute Jv by forward mode AD
is equal to the cost of computing w T J by reverse mode AD, where v; w are vectors. This is true
in the order of magnitude sense - both computations take time proportional to !(F ) - but the
respective constants may differ widely. It may be pragmatic to estimate "weights" w
respect to a given AD tool, reflecting the relative costs of forward and reverse modes. It is very
easy to introduce weights into algorithm MNCO (x4:3) to heuristically solve a "weighted" problem,
is the number of row groups (or colors assigned to the rows), and - 2
is the number of column groups (or colors assigned to the columns). The heuristic MNCO can be
changed to address this problem by simply changing the conditional (LB) to:
R
Different weights produce different allocations of work between forward and reverse modes, skewed
to reflect the relative costs. For example, consider a 50-by-50 grid matrix (§5), and let us vary the
relative weighting of forward versus reverse mode, w_1 : w_2. The results of our weighted bi-coloring
approach are given in Table 8.
Table 8: Weighted problem results
Finally, we note that the bi-coloring ideas can sometimes be used to efficiently determine relatively
dense Jacobian matrices provided structural information is known about the function F .
For example, suppose F(x) = F_1(x) + F_2(x) + ... + F_t(x) is a partially separable function,
and each component function F_i typically depends on only a few
components of x. Clearly each Jacobian function J_i is sparse while the summation
J = J_1 + ... + J_t
may or may not be sparse depending on the sparsity patterns. However, if we define a "stacked"
function F̃,
F̃(x) = (F_1(x); F_2(x); ...; F_t(x)),
then the Jacobian of F̃ is
J̃ = (J_1; J_2; ...; J_t).
Clearly J̃ is sparse and the bi-coloring/AD technique can be applied to J̃, possibly yielding a
dramatic decrease in cost. Specifically, if J is dense (a possibility) then the work to compute J
without exploiting structure is n · ω(F), whereas the cost of computing J̃ via bi-coloring/AD is
approximately χ_c(G_b(J̃)) · ω(F), where χ_c(G_b(J̃)) is the minimum number of colors required
for a bipartite cyclic coloring of graph G_b(J̃). Typically, χ_c(G_b(J̃)) is much smaller than n. The idea of applying the
bi-coloring/AD technique in a structured way is not restricted to partially separable functions [9].
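A small sketch of how the stacked pattern is assembled may be useful; it assumes (hypothetically) that every component function F_i maps R^n to R^m so that the patterns are stackable, and it manipulates Boolean sparsity patterns only.

import numpy as np

def stacked_pattern(patterns):
    """Sparsity pattern of the stacked Jacobian J~ = (J_1; ...; J_t),
    given the Boolean m-by-n patterns of the component Jacobians J_i."""
    return np.vstack(patterns)

def summed_pattern(patterns):
    """Sparsity pattern of the summed Jacobian J = J_1 + ... + J_t (may be dense)."""
    acc = np.zeros_like(patterns[0], dtype=bool)
    for p in patterns:
        acc |= p
    return acc

The point of the structured approach is that the number of colors needed for stacked_pattern(...) is typically far smaller than n, even when summed_pattern(...) is completely dense.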
Acknowledgements
We are very grateful to Andreas Griewank, his student Jean Utke, and his colleague David Juedes
for helping us with the use of ADOL-C.
References
New methods to color the vertices of a graph
The cyclic coloring problem and estimation of sparse Hessian matrices
Structure and efficient Jacobian calculation
On the estimation of sparse Jacobian matrices
Direct calculation of Newton steps without accumulating Jacobians
Computing a sparse Jacobian matrix by rows and columns
On the estimation of sparse Hessian matrices
Cited by
Shahadat Hossain , Trond Steihaug, Sparsity issues in the computation of Jacobian matrices, Proceedings of the 2002 international symposium on Symbolic and algebraic computation, p.123-130, July 07-10, 2002, Lille, France
Dominique Villard , Michael B. Monagan, ADrien: an implementation of automatic differentiation in Maple, Proceedings of the 1999 international symposium on Symbolic and algebraic computation, p.221-228, July 28-31, 1999, Vancouver, British Columbia, Canada
L. F. Shampine , Robert Ketzscher , Shaun A. Forth, Using AD to solve BVPs in MATLAB, ACM Transactions on Mathematical Software (TOMS), v.31 n.1, p.79-94, March 2005
Shaun A. Forth, An efficient overloaded implementation of forward mode automatic differentiation in MATLAB, ACM Transactions on Mathematical Software (TOMS), v.32 n.2, p.195-222, June 2006
Thomas F. Coleman , Arun Verma, ADMIT-1: automatic differentiation and MATLAB interface toolbox, ACM Transactions on Mathematical Software (TOMS), v.26 n.1, p.150-175, March 2000 | sparse finite differencing;NP-complete problems;ADOL-C;automatic differentiation;nonlinear systems of equations;bicoloring;computational differentiation;sparse Jacobian matrices;partition problem;nonlinear least squares;graph coloring |
Inertias of Block Band Matrix Completions

Abstract. This paper classifies the ranks and inertias of hermitian completions of the partially specified 3 x 3 block band hermitian matrix (also known as a "bordered matrix")
P = ( A  B  ? ; B*  C  D ; ?  D*  E ).
The full set of completion inertias is described in terms of seven linear inequalities involving inertias and ranks of specified submatrices. The minimal completion rank for P is computed. We study the completion inertias of partially specified hermitian block band matrices, using a block generalization of the Dym-Gohberg algorithm. At each inductive step, we use our classification of the possible inertias for hermitian completions of bordered matrices. We show that when all the maximal specified submatrices are invertible, any inertia consistent with Poincar'e's inequalities is obtainable. These results generalize the nonblock band results of Dancis [SIAM J. Matrix Anal. Appl., 14 (1993), pp. 813-829]. All our results remain valid for real symmetric completions.

1 Introduction
We address the following completion problem: given a partially specified hermitian matrix P,
characterize all the possible inertias In H = (p, n, d) of the various hermitian completions H of
P. We call this set the "inertial set" or "inertial polygon" of P.
The issue of classifying positive definite and semidefinite completions of partial matrices is relevant
to various applications involving interpolation, and has been studied thoroughly, e.g. [AHMR],
[D5], [GJSW]. Invertible completions have been studied in [DG] and [EGL2], for band patterns, in
[L] for general patterns, and are associated with maximum entropy and statistics. For other results
concerning ranks and general inertias, see [D1-D6], [EL], [G], [H], [JR1], [CG3].
Following some preliminary material (Sections 2-4), we present in Sections 5 and 6 several
contributions to the inertia classification problem.
will denote the inertia, that is the number of positive, negative,
and zero eigenvalues of a hermitian matrix H. They are also called the positivity, negativity and
nullity of H.
The main result is:
Theorem 1.1 Given the block "bordered" matrix
P(Z) = ( A  B  Z ; B*  C  D ; Z*  D*  E ),
where A, C and E are Hermitian (real or complex) matrices of sizes α × α, β × β and γ × γ,
respectively, and B and D are matrices of sizes α × β and β × γ, respectively. Set:
In R_1 = In ( A  B ; B*  C ) = (π', ν', δ'),   In R_2 = In ( C  D ; D*  E ) = (π'', ν'', δ''),   and   In C = (π, ν, δ).
For given integers n and p, there exists an α × γ (real or complex, respectively) matrix Z such that
In P(Z) = (p, n, α + β + γ - p - n)
if and only if the pair (p, n) satisfies the seven linear inequalities (1.1)-(1.4), which involve the quantities defined above together with the ranks of the specified submatrices.
Figure 1.1. Graph of the inertial polygon for the bordered matrix
The block partition is not required to be uniform; rectangular (non-square) blocks are permitted.
For this partial matrix P (Z), Theorem 1.1 shows that the inertial set is a (possibly degenerate)
convex seven sided lattice polygon as depicted above.
The proof of Theorem 1.1 is presented in Section 5, along with a variety of corollaries including
a small application to the algebraic matrix Riccati equation.
Cain and S'a established the 2 × 2 case (i.e., the case of two specified diagonal blocks and an unspecified off-diagonal block; see Lemma 4.5) in [CS]. The result was generalized to an
arbitrary number of diagonal blocks by Cain in [C], with further results by Dancis in [D1]. The
case with one given diagonal block was proven as Theorem 1 of [S] and as Theorem 1.2 of
[D1]. These cases are reviewed in detail in Section 4, and are later used as milestones in computing
the inertial polygon of Theorem 1.1.
The possible inertias for a bordered matrix missing a single (scalar) entry were catalogued by
the second author in [D6], mostly using his Extended Poincar'e's Inequalities (3.3). In [JR] the
lower bounds in (5.5) and (5.6) were determined for the case when the given principal blocks are
invertible; this extends to the case of "chordal patterns".
In computing the inertial set for Theorem 1.1, we combine four simple elements: (i) Schur
complements, (ii) Poincar'e and Extended Poincar'e Inequalities (3.3) as necessary conditions on
the inertia, (iii) the technique of "restricted congruence" (presented in Subsection 2.4), including
a new formula (2.7), for simplifying a partial hermitian matrix and (iv) elimination of variables in
systems of linear inequalities (see Sections 2-3 for details). These techniques enable us to reduce
Theorem 1.1 to a combination of the simpler cases presented in Section 4. These four elements
of the proof, without (2.7), are commonly used in the matrix literature, and in particular in the
completion literature cited above.
Staircase hermitian matrices are the mild generalization of block band matrices described as
generalized block band matrices in the appendix of [JR2]; they look like a double staircase which is
symmetric about and includes the main diagonal.
A staircase (or generalized block band) matrix with s steps is a partial hermitian n × n matrix
R with precisely s specified hermitian submatrices, {R_1, ..., R_s}; each R_i is the fully specified
principal submatrix of R based on a set of consecutive indices a_i, a_i + 1, ..., k_i, where
a_1 <= a_2 <= ... <= a_s and k_1 <= k_2 <= ... <= k_s. The inertia of each of the
R_i's is denoted by In R_i = (π(R_i), ν(R_i), δ(R_i)).
Staircase matrices allow the non-diagonal blocks to be non-square rectangles. Note that R_1
need not overlap R_3, as would be required in a block band matrix. In fact, R_1 need not even
overlap R_2, but the main diagonal must be contained in the union of the R_i's. Also, the definition
includes block diagonal matrices.
The next theorem shows that a staircase matrix, with all maximal submatrices being invertible,
has hermitian completions with all the inertias consistent with Poincare's Inequalities.
Theorem 1.2 (An inertial triangle) Given an s-step hermitian staircase m × m matrix R,
suppose that each of the maximal submatrices R_1, ..., R_s of R is invertible. Then the
inertial polygon of R is the triangle
{ (p, n) : p >= max_i π(R_i), n >= max_i ν(R_i), p + n <= m }.
The proof of Theorem 1.2 is presented in Section 6, along with several theorems about the
possible inertias of hermitian completions of staircase matrices. We will employ the method of
Dym and Gohberg [DG], which decomposes the completion process into a succession of simple
steps, each of which is a Theorem 1.1 step.
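Reading the triangle of Theorem 1.2 as the set of pairs (p, n) with p >= max_i π(R_i), n >= max_i ν(R_i) and p + n <= m, the following Python sketch (ours; names and the eigenvalue tolerance are assumptions) lists its lattice points from the specified maximal submatrices.

import numpy as np

def poincare_triangle(R_blocks, m, tol=1e-9):
    """Lattice points of the inertial triangle of Theorem 1.2, computed from the
    (fully specified, invertible) maximal submatrices R_1, ..., R_s of an m-by-m staircase matrix."""
    def pi_nu(M):
        w = np.linalg.eigvalsh(M)
        return int((w > tol).sum()), int((w < -tol).sum())
    p_min = max(pi_nu(R)[0] for R in R_blocks)
    n_min = max(pi_nu(R)[1] for R in R_blocks)
    return [(p, n) for p in range(p_min, m + 1)
                   for n in range(n_min, m + 1) if p + n <= m]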
These results generalize the (scalar) band hermitian completion results of the second author in
[D6]. A related result of Johnson and Rodman is restated as Lemma 5.7 herein.
We state our results for complex hermitian matrices, but they are all equally valid in the real
symmetric case.
2 Preliminaries
2.1. Notation: We shall denote by p; n and p; n the minimal, resp. maximal possible values
of the positivity and the negativity of the completions of a given partially specified matrix.
Similarly, r and r will denote the minimal and maximal possible values for the rank of
completion matrices of a given partially specified matrix. We have the obvious inequality r - p+n;
which is generally strict. The determination of the maximal rank r for arbitrary (including non-
band) hermitian completion problems is done in [CD]. In fact, it is shown there that the maximal
completion rank does not increase if the assumption that the completion is hermitian is dropped;
consequently, this rank can be computed explicitly using a result of [CJRW].
The inequality r - p + n is similarly obvious; however, it becomes an equality (i.e.
in many cases, including that of Theorem 1.1, see Corollary 5.4, and certain block band matrices,
see Theorem 6.2.
J(p, n, d) will denote the square matrix I_p ⊕ (-I_n) ⊕ 0_d of inertia (p, n, d). Sometimes we
shall use the triple (π, ν, δ) to denote the inertia of a given (maximal) specified submatrix and
(p, n, d) to denote the inertia of a hermitian completion of a given partial matrix. Congruence of
matrices is denoted by ≅.
If a square matrix X is written in block form, say X = (X_{ij}), where X_{ij} is of
size a_i × a_j, we shall describe X as having block sizes (a_1, ..., a_k).
2.2. Schur Complements: Let
H = ( A  B ; B*  C ).
If A is invertible then the Schur complement of A is
C^Θ = C - B* A^{-1} B.   (2.1)
Haynesworth [H] has shown that H is congruent to A ⊕ C^Θ. In particular,
In H = In A + In C^Θ.   (2.2)
More generally, if H is a k × k block matrix, and T is a subset of {1, ..., k}, let A be the
principal submatrix whose block indices are in T. We can move A to the left upper corner by a
permutation of the coordinates, and proceed as before, provided A is invertible. This procedure
will be referred to as complementation with respect to coordinates T. The block division will always
be clear from the context.
A similar procedure is available for non-hermitian matrices: for
H = ( A  B ; C  D )
with A invertible, it yields the weaker identity
rank H = rank A + rank (D - C A^{-1} B).   (2.3)
We shall refer to this procedure as non-hermitian complementation.
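The inertia additivity (2.2) is easy to check numerically; the following Python sketch (our illustration, with assumed names) compares In H with In A + In C^Θ for a given hermitian pair of blocks with A invertible.

import numpy as np

def inertia(H, tol=1e-9):
    """Inertia (p, n, d) of a hermitian matrix, counted from its eigenvalues."""
    w = np.linalg.eigvalsh(H)
    return (int((w > tol).sum()), int((w < -tol).sum()), int((np.abs(w) <= tol).sum()))

def schur_complement_inertia(A, B, C):
    """Return (In H, In A + In C_theta) for H = [[A, B], [B*, C]] with A invertible."""
    H = np.block([[A, B], [B.conj().T, C]])
    C_theta = C - B.conj().T @ np.linalg.solve(A, B)   # Schur complement (2.1)
    lhs = inertia(H)
    rhs = tuple(x + y for x, y in zip(inertia(A), inertia(C_theta)))
    return lhs, rhs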
2.3. Canonical Forms: We shall repeatedly use the following terminology:
(i) Equivalence canonical form: It is well known that two matrices A and B are equivalent if
there exist two invertible matrices S and T such that A = SBT. Every matrix A can be transformed
by equivalence to the form ( I_a  0 ; 0  0 ), where a = rank A.
We shall also need the following case of "block equivalence": for every matrix ( B  D ) in block form
there are invertible matrices S, T_1 and T_2, respectively, such that S ( B  D ) (T_1 ⊕ T_2) is in a
canonical form (2.4) whose only nonzero blocks are identity blocks.
(ii) Strong congruence canonical form: Every hermitian matrix A of inertia (p; n; d) is congruent
to a matrix of the form J(p; n; d):
(iii) Weak congruence canonical form: Every hermitian matrix A of rank r is congruent to a
matrix of the form A_0 ⊕ 0, where A_0 is an invertible r × r matrix.
2.4. Restricted Congruence: If P is a partial matrix, and S is invertible, the matrix P' = S* P S
can be interpreted as a partial matrix, in the following sense: an entry p'_{ij} of P' is
determined if it is equal to {S* H S}_{ij} for every possible completion H of P. We call P → S* P S
a restricted congruence if p_{ij} being a specified entry of P implies that p'_{ij} is specified in P'.
There are some similarities between our concept of "restricted congruence" and Ball et al's
concept of "lower similarity" in [BGRS].
We will use restricted congruence in two ways:
block-diagonal congruence is used to put some (specified or unspecified) blocks of P in
canonical form;
some row and column operations are used to annihilate blocks in P:
In some cases, unspecified blocks may become specified (in fact, annihilated) by congruence.
For example, consider the partial matrix ( ?  I ; I  0 ). Whatever hermitian value X the unknown 1,1 block takes in a completion, that block can be annihilated by the process
( I  Z ; 0  I ) ( X  I ; I  0 ) ( I  0 ; Z*  I ) = ( X + Z + Z*   I ; I  0 ),
with Z = -X/2; hence every completion may be assumed to have the form ( 0  I ; I  0 ).
Similarly, a specified block X in a 3 × 3 block partial matrix whose third block row and column are (I, 0, 0) can be annihilated by row and column operations of the same kind, using the identity block in the corner for the elimination.
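The first example above is easy to verify numerically. The following sketch (our own, with assumed names) applies the congruence with Z = -X/2 and returns the transformed matrix, whose 1,1 block vanishes for any hermitian X.

import numpy as np

def annihilate_11_block(X):
    """For the partial matrix [[?, I], [I, 0]] of Subsection 2.4: the congruence
    S H S* with S = [[I, -X/2], [0, I]] sends any completion H = [[X, I], [I, 0]]
    to [[0, I], [I, 0]] (a sketch of the restricted-congruence trick)."""
    k = X.shape[0]
    I, Z = np.eye(k), -X / 2.0
    H = np.block([[X, I], [I, np.zeros((k, k))]])
    S = np.block([[I, Z], [np.zeros((k, k)), I]])
    return S @ H @ S.conj().T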
3 Inequalities
Necessary conditions. The shape of an inertial polygon for a hermitian completion problem is,
to a large extent, determined by a few inequalities relating matrices and submatrices:
(i) Rank Interlacing. If A is a k × l rectangular block of the m × m matrix H, we have
rank A <= rank H <= rank A + (m - k) + (m - l).   (3.1)
(ii) Poincar'e Inequalities. Let A be a k × k principal submatrix of the m × m hermitian
matrix H. Let λ_i and μ_i be the ordered eigenvalues of A and H, respectively. The Cauchy
interlacing Theorem states that μ_i <= λ_i <= μ_{i+m-k} (see e.g. Theorem 4.3.15 in [HJ]). An equivalent
statement are the Poincar'e inequalities:
π(A) <= π(H) <= π(A) + (m - k),   ν(A) <= ν(H) <= ν(A) + (m - k).   (3.2)
The upper and lower bounds of (3.2) were strengthened in Theorem 1.2 of [D2]; the lower bound
was strengthened as follows:
(iii) Extended Poincar'e's Inequalities. ([D2]) Given a hermitian matrix in block form,
H = ( A  B ; B*  C ),
set Δ = rank ( A  B ) - rank A. Then
π(H) >= π(A) + Δ   and   ν(H) >= ν(A) + Δ.   (3.3)
Inequalities (3.1), (3.2), (3.3) form a set of a priori bounds on completion inertias. In fact, in
[CJRW] it is proved that the upper bound in (3.1) is sufficient for the determination of the maximal
completion rank in the case of non-hermitian completions, and in [CD] the same is shown in the case
of hermitian completions. It turns out that the necessary conditions of type (3.1), (3.2), (3.3) are
also sufficient in determining the full inertial polygon in many cases, including the bordered matrix
case (Theorem 1.1) and the block diagonal case ([D1]). We shall emphasize cases of sufficiency of
these conditions in the text.
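The a priori bounds (3.2) and (3.3) can be spot-checked numerically. The sketch below, with assumed names and a crude eigenvalue threshold, tests both for the leading principal block of a hermitian matrix.

import numpy as np

def _inertia(H, tol=1e-9):
    w = np.linalg.eigvalsh(H)
    return int((w > tol).sum()), int((w < -tol).sum())

def check_poincare(H, k, tol=1e-9):
    """Check Poincar'e (3.2) and Extended Poincar'e (3.3) for the leading
    k-by-k principal block A of the hermitian matrix H."""
    m = H.shape[0]
    A, B = H[:k, :k], H[:k, k:]
    pA, nA = _inertia(A, tol)
    pH, nH = _inertia(H, tol)
    ok_32 = pA <= pH <= pA + (m - k) and nA <= nH <= nA + (m - k)
    delta = np.linalg.matrix_rank(np.hstack([A, B])) - np.linalg.matrix_rank(A)
    ok_33 = pH >= pA + delta and nH >= nA + delta
    return ok_32, ok_33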
4 Towards the 3 × 3 Bordered Case
This section paves the way for the analysis of the bordered case, which is carried out in Section
5. The material in this section has independent value, and much of it is well-known. We shall
compute the inertial polygon for a 3 × 3 block pattern of the form
P = ( F  ?  ? ; ?  G  ? ; ?  ?  ? )   (4.1)
as Lemma 4.8. The special cases
( A  ? ; ?  ? )   and   ( A  ? ; ?  B ),
originally due to Cain and S'a, will
also be reviewed, as Lemmas 4.1 and 4.5. We shall also compute the possible inertias of a matrix
of the form A+X with inertia limitations on the unknown matrix X as Lemma 4.3. The results
of Lemma 4.1 will be used to establish Lemmas 4.3 and 4.8 . The result of Lemma 4.3 will be used
to establish Lemma 4.5 which will be used in the proof of Lemma 4.8 which will be used in the
proof of Theorem 1.1. The proof of Theorem 1.1 and Lemma 5.10 consist of reduction to the case
of (4.1), which itself, is of independent interest.
Lemma 4.1 Let
H = ( H_1  ? ; ?  ? )
be a partially specified hermitian matrix of block sizes (α, β).
Then the inertial polygon for H is the pentagon that contains all the lattice points (π(H), ν(H))
which satisfy the inequalities:
π(H_1) <= π(H) <= π(H_1) + β,   ν(H_1) <= ν(H) <= ν(H_1) + β,   π(H) + ν(H) <= α + β.   (4.2)
Proof. The necessity of (4.2) follows from (3.1) and (3.2). For sufficiency, put H 1 in diagonal
form, and complete H to a diagonal matrix. It is easy to show that every inertia in (4.2) is obtained.
See also Theorem 1 in [S] and Theorem 1.2 in [D2].
Corollary 4.2 In Lemma 4.1 the extremal values are
p = π(H_1), n = ν(H_1) (minimal values) and p = π(H_1) + β, n = ν(H_1) + β (maximal values).
Moreover, H in Lemma 4.1 admits positive definite, non-negative definite, or invertible completions
if and only if H_1 is positive definite, non-negative definite, or rank(H_1) >= α - β,
respectively.
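As an illustration of Lemma 4.1, one can sample completions of ( H_1 ? ; ? ? ) numerically; random choices of the unknown blocks typically recover the invertible inertias of the pentagon, while the degenerate boundary points require structured completions as in the proof. The sketch below is ours; names and the sampling scheme are assumptions (real symmetric case).

import numpy as np

def sample_inertias_one_block(H1, beta, trials=2000, tol=1e-8, seed=0):
    """Empirically sample completion inertias of [[H1, ?], [?, ?]] by filling the
    unknown blocks with random entries at several magnitudes."""
    rng = np.random.default_rng(seed)
    alpha = H1.shape[0]
    found = set()
    for _ in range(trials):
        scale = 10.0 ** rng.integers(-2, 3)
        B = scale * rng.standard_normal((alpha, beta))
        C = scale * rng.standard_normal((beta, beta)); C = (C + C.T) / 2
        H = np.block([[H1, B], [B.T, C]])
        w = np.linalg.eigvalsh(H)
        found.add((int((w > tol).sum()), int((w < -tol).sum())))
    return sorted(found)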
The following result can be deduced with some effort from Theorem 2 in [S]:
Lemma 4.3 Let A and X be m × m hermitian matrices. We consider A to be fixed and X to be
a variable matrix with π(X) <= a and ν(X) <= b. Then the possible inertias of A + X are the
nonnegative lattice points (p, n) satisfying the following inequalities:
π(A) - b <= p <= π(A) + a,   ν(A) - a <= n <= ν(A) + b,   p + n <= m.   (4.3)
Proof. Necessity is obvious due to Sylvester's inertia principle. For sufficiency, take A to be
diagonal, and restrict X to be diagonal as well. It is easy to show that every inertia in (4.3) is
obtained.
Corollary 4.4 In Lemma 4.3, p = max{π(A) - b, 0} and n = max{ν(A) - a, 0}. Also, A admits positive
definite completions A + X if and only if ν(A) + δ(A) <= a; non-negative definite completions if
and only if ν(A) <= a; and invertible completions if and only if δ(A) <= a + b.
For the values of n and n̄ in Lemma 4.3, see also [CG3], Lemma 2.2.
The following result is due to Cain and S'a.
Lemma 4.5 ([CS]) Let
H = ( F  X ; X*  G ),
with F and G specified hermitian blocks and X unknown, be a partial hermitian matrix of block sizes (α, γ). Then the inertial
polygon of H is determined by the inequalities (4.4): two-sided bounds relating π(H) and ν(H) to
π(F), ν(F), π(G), ν(G) and the block sizes.
A generalization of Lemma 4.5 and (4.4) to more than two diagonal blocks can be found in
Brian Cain's paper [C]. A short proof of the necessity part of Cain's result was obtained by J.
Dancis ([D1] Corollary 11.1 and Lemma 11.2). In the 2 \Theta 2 block case, this proof is given below.
Proof of necessity of Inequalities (4.4).
be a completion of H. Set
F
G
Noting that
rank
F
we apply the Extended Poincare Inequalities (3.3) to both F and G and we obtain
Subtracting this inequality from -(H) yield the right side of (4:4). A
symmetric argument produces the left inequality.
Proof of Lemma 4.5. We show by reduction to Lemma 4.3 that Inequalities (4.4) describe the
inertial polygon. Let
be a completion of H. Putting F in weak canonical
form, and taking the Schur complement of the new first coordinate, F 0 , we calculate, using (2.2):
@
A
Hence
Putting G \Theta in weak canonical form, we get H
@
Putting X 4 in
equivalence canonical form, we get
@
I r
A
We set
In(G \Theta
Removing coordinate 5 and taking the Schur complement of coordinates 1,3, and 4, in H 0 , we
calculate using Equations (2.2) and (2.7):
We now develop the relevant inequalities involving the dummy variables in (4.6). By Lemma 4.3
the inertial polygon of G \Theta is determined by the inequalities on
\Gamma-
By Lemma 4.3 again, we compute the inertial polygon for
The only restriction on r is the size of X
Now (4.4) is obtained from Equations (4.5) - (4.9) by eliminating -
- and r:
The values of p; p; n; n can easily be computed from Lemma 4.5. The values n and n; were
computed in [CS].
Lemma 4.6 [D1] In Lemma 4.5 we have
p = max{π(F), π(G)}   and   n = max{ν(F), ν(G)}.
Moreover, there exists a matrix X which simultaneously achieves the minimal possible ranks
for the two block rows
( F  X )   and   ( X*  G ),
namely
rank ( F  X ) = rank F   and   rank ( X*  G ) = rank G.
Proof. The values of p and n follow directly from Lemma 4.5. To show the rest, we may put F
and G in strong canonical form,
F ≅ I_{π(F)} ⊕ (-I_{ν(F)}) ⊕ 0   and   G ≅ I_{π(G)} ⊕ (-I_{ν(G)}) ⊕ 0,
and partition X conformally. Choose the diagonal entries of X_{1,1} and X_{2,2} by the rule (X_{1,1})_{ii} = (X_{2,2})_{ii} = 1. Choose all
other entries of X to be zero. We get a completion with the desired minimal ranks.
The original result of J. Dancis ([D1] Theorem 1.3) is in fact more general in two respects: first,
it extends to more than two block diagonals. Moreover, it is not restricted to minimal ranks: it
shows, more generally, that any choice of kernels of a column decomposition as well as any choice
of inertia which is consistent with the Extended Poincar'e inequalities, can be obtained:
Theorem 4.7 constrained hermitian completion.) [D1] Given hermitian matrices H ii ;
In H ii be the block diagonal
matrix of size Choose a subspace K i ae ker H ii such that dim ker H ii \Gamma dim K i -
s:
Then an integer triple (-; ffi) satisfying the equality - is the inertia of a hermitian
completion H of S with column block structure:
each M i is an n \Theta n i matrix; and ker M
if and only if (-; ffi) satisfies the inequalities
(The notation here is different than the one used in [D1]: the r i and \Delta i here correspond to -
The next lemma is the main result of this section; it combines Lemmas 4.1 and 4.5.
Lemma 4.8 Let P be a partial matrix of the form
P = ( F  ?  ? ; ?  G  ? ; ?  ?  ? ),
of block sizes (α, γ, ε), with only the diagonal blocks F and G specified.
Then the inertial polygon of P consists of the lattice points determined by the inequalities (4.12), obtained by combining (4.4) with (4.13) below.
The sufficiency proof of Inequalities (4.12) is the same as for Inequalities (4.4).
Proof. Every completion H of P has the form
H = ( H_1  Y ; Y*  W ),   where   H_1 = ( F  X ; X*  G )
is a completion of the specified part; the inertial polygon of H_1 was computed in Lemma 4.5. By Lemma 4.1, In(H_1) and In(H) are connected by
inequalities (4.13) of the form (4.2), with H_1 in place of the specified block and ε in place of β.
Eliminating In(H_1) from Inequalities (4.4) and (4.13), and using the identity π(H_1) + ν(H_1) + δ(H_1) = α + γ,
we get Inequalities (4.12).
5 Inertias of Block Bordered Matrices
In this section we establish Theorem 1.1 using the results stated in Sections 3 and 4. The material
in this section is new. The scalar case was classified in [D3]. Other special cases of Theorem 1.1
occur in [D1], [L],[G], and [CG3].
Subsections 5.3-5.5 contain additional results and corollaries of of Theorem 1.1 concerning
minimal rank completions of various types, and the case where the two maximal specified hermitian
submatrices R 1 and R 2 of P (Z) of Theorem 1.1 are invertible. In Subsection 5.6 we present a
small application to the algebraic matrix Riccati equation A + AZ + ZB + ZCZ = 0: a criterion
for solvability and a characterization of the possible inertias of the solution matrix Z (which need
not be hermitian).
5.1. Internal Relations for Bordered Matrices.
For the bordered matrix P(Z) of Theorem 1.1, we note that R_1 = ( A  B ; B*  C ) and R_2 = ( C  D ; D*  E )
are the maximal specified hermitian submatrices of P(Z), and the middle block row ( B*  C  D ) is
the maximal specified non-hermitian submatrix of P(Z).
Observation 5.1 (Internal relations for a bordered matrix) With the notation of Theorem
1.1, we define
Δ' = rank ( B*  C ) - rank C   and   Δ'' = rank ( C  D ) - rank C.
Then the relations (5.1)-(5.4) hold; they relate Δ', Δ'' and Δ = Δ' + Δ'' to the inertias of R_1, R_2 and C.
Proof. The inequality of (5.1) follows from rank considerations. The equality, as well as (5.4),
follows from the definitions. Applying the Extended Poincar'e's Inequalities to C as a submatrix
of R 1 or R 2 provides (5.2). (5.3) also follows from the Extended Poincar'e's Inequalities.
5.2. Proof of Theorem 1.1.
Before proving the theorem, let us comment on the necessity and minimality of its conditions:
Minimality: The inertial polygon for a simple matrix, with all seven edges present, is illustrated
in Fig. 5.1. The matrix chosen was of block sizes (6; 1;
and B; C and D are zero matrices of appropriate order. This results in r and the three \Delta's
being zero. This example shows that the set of seven inequalities defining the inertial diamond is
not redundant.
Figure 5.1. A seven-sided inertial polygon.
Necessity: Necessity of each one of the seven inequalities can be easily demonstrated: inequality
(1.4) follows from (3.1). The upper bounds in Inequalities (1.1) - (1.2) are a consequence of (3.2).
The lower bounds in (1.1) and (1.2) are just the Extended poincar'e's inequalities (3.3). It remains
to derive (1.3).
Lemma 5.2 The Extended Poincar'e's Inequalities (3.3) imply (1.3).
Proof. We define the partial matrices
N' = ( A  B ; B*  C ; ?  D* )   and   N'' = ( B  ? ; C  D ; D*  E ),   (5.5)
and their completions
N'(Z) = ( A  B ; B*  C ; Z*  D* )   and   N''(Z) = ( B  Z ; C  D ; D*  E ).   (5.6)
Let
Using the Extended Poincar'e's Inequalities(3.3) twice, we have:
Substitute for one of the p's on the left hand side
Using the identities -
We note that
But this translates into:
or equivalently
Finally, (5.8) and (5.9) establish (1.3).
We will establish Theorem 1.1 by using Schur complements (Equation (2.2)) and row and
column operations (Equation (2.7)) and the other forms presented in Section 2, repeatedly, in
order to reduce Theorem 1.1 to Lemma 4.8.
Proof of Theorem 1.1 We begin by putting C in weak canonical form:
with block sizes (ff; rank C; Taking the Schur complement of C 0 in P (Z) as in
yields
In In -
In
where
@
Y D" GC C
Next we put [B" ; D"] in the canonical form (2.4). Using (5.10), we obtain the matrix
@
F
F
I
22 X
A
The block sizes here are are
blocks of Y , the F ij 's and G ij 's are conforming blocks of F and G. The new block sizes are related
to ff; fi; fl via
Next we use restricted congruence, see Section 2. By row and column operations based on
and H
, we may assume without loss of generality, that
are all zero. This modifies the matrix -
without changing its inertia, to
@
I \Delta 00
22 X
A
We may discard row 7 and column 7, which are all zero. Next we complement H 0 with respect
to the block
[1;2;4;5;6;10] =B
@
I \Delta 00
A
The Schur complement turns out to be
@
@
22
@
I
@
A
@
22
@
with block sizes (ff . By the Schur complement inertia formula (2.2), we get
In In H 0
In
In H 00
(The fi 0 accounts for removing coordinate 7 from H 0 ). Thus we have
In In In
We will use Equation (5.17) and Lemma 4.8 to calculate In P (Z); to find In H 00 , we must first
calculate In F 33 and In G 11 :
5.3
In F In In C 0
In G In In
Proof. By restricting Equations (5.10)-(5.14) to the upper left corner R 1 , we may take the
Schur complement of C 0 as a submatrix of R 1 ; this yields:
In
@
In In
F B"
where F is as in Equation (5.11). Using elimination, Equation (2.7), we note that
In
F B"
@
I
A
In F 33
where F 33 is as in (5.12). Solving for In F 33 in Equations (5.19) and (5.20) yields the first part
of (5.18). A similar argument holds for the second part.
We proceed with the proof of Theorem 1.1. Applying the size identities
and Lemma 4.8 (with its submatrices F and G corresponding to F 33 and G 11 here), we obtain these
inequalities for In H 00 :
Inequalities (1.1)-(1.4) are obtained by plugging Inequalities (5.21) and Equation (5.18) into
Equation (5.17) and then using Equations (5.1), (5.4) and (5.13) to eliminate all the intermediary
inertias.
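The inertial polygon of Theorem 1.1 can also be explored empirically: fixing A, B, C, D, E and sampling the unknown corner Z at several scales usually reveals the invertible part of the polygon, while the degenerate boundary requires structured choices of Z. The sketch below is our illustration for the real symmetric case; all names are assumptions.

import numpy as np

def bordered_inertias(A, B, C, D, E, trials=3000, tol=1e-8, seed=0):
    """Record the inertias (p, n) of P(Z) for randomly sampled corners Z."""
    rng = np.random.default_rng(seed)
    alpha, gamma = A.shape[0], E.shape[0]
    seen = set()
    for _ in range(trials):
        Z = (10.0 ** rng.integers(-2, 3)) * rng.standard_normal((alpha, gamma))
        P = np.block([[A, B, Z], [B.T, C, D], [Z.T, D.T, E]])
        w = np.linalg.eigvalsh(P)
        seen.add((int((w > tol).sum()), int((w < -tol).sum())))
    return sorted(seen)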
5.3. Extremal inertia values and inertia preserving completions
In this subsection and the next, we use the geometry of the inertial polygon as the basis (i) for
establishing a minimum rank completion for the bordered matrix P of Theorem 1.1 (Corollary
5.5); and (ii) for showing that, assuming invertibility of R 1 and R 2 , all the inertias which are
consistent with Poincare's Inequalities can be obtained by completion (Corollary 5.8). In Section
6 we will use these results as building blocks for our proofs of completion theorems for "staircase"
matrices.
First we show, under the notation of Theorem 1.1, that there is always a completion whose
positivity and negativity are the minimal ones allowed by the Extended Poincar'e's Inequalities.
This implies that the minimal rank is
Corollary 5.4 (Minimal rank completions.) With the notation of Theorem 1.1, there exists a
hermitian completion
Proof. We argue by inspection on Figure 5.1. If the vertex not in the inertial
polygon for P (z), this vertex must be cut off by one of the extremal lines defining Inequalities (1.2
- 1.4). Since these inequalities are consistent, it is clear that (p must satisfy (1.2) and (1.4).
It remains to check the two inequalities (1.3). We have to consider four possible choices for p 0 and
in the notation of Theorem 1.1:
Suppose that
Rank considerations imply that Rank C Equation (5.1) implies that -
This and Equation (5.22) imply that
proving the right inequality in (1.3). Interchanging the roles of the - 0 s and the - 0 s
establishes the left inequality.
(II) A similar proof applies to the case (p 0
then a combination of Equations (5.1),(5.3) and (5.4) yields rank R 1
algebra we get - This and (5.23) implies that
proving the right inequality in (1.3). Interchanging the roles of the - 0 s and the - 0 s
establishes the left inequality.
(IV) A similar proof applies to the case
Remark. This Corollary 5.4 implies that the minimal rank of the set of hermitian completions of
P(Z) is r = p_0 + n_0. T. Constantinescu and A. Gheondea presented in [CG3] another formula for n.
Of particular interest is the case where the minimal rank solutions inherit their inertia values p
and n from the specified blocks R_1 and R_2, i.e.
p_0 = max{π(R_1), π(R_2)}   and   n_0 = max{ν(R_1), ν(R_2)}.   (5.24)
We call such completions inertia preserving. Note that (5.24) does not guarantee that the minimal
completion rank is max{rank R_1, rank R_2}. The following simple result will play a major role in
finding inertia preserving completions for block band matrices in Section
Corollary 5.5 (Inertia preserving completions.) Assume the notation of Theorem 1.1. Suppose
that P satisfies an "equality of ranks" condition:
Then P admits inertia preserving completions.
Indeed, under condition (5.25), the formulas of the inertial polygon simplify, since we have
In particular, (5.24) holds.
Condition (5.24) and Corollary 5.5 are implied by the stronger condition
In the notation of Observation 5.1, Equation 5.26 is equivalent to any of the following:
This condition is satisfied if e.g. C is invertible.
Condition (5.25) is not necessary for the existence of inertia preserving completions. As an
example, consider the partial matrix
@
A simple argument, using the Extended Poincar'e's Inequalities(3.3), shows that if (5.25) is not
satisfied, then the inertia preserving completion must inherit both p and n, from the same block.
Hence In the above example,
5.4. The width of the inertial set
We define the width w of the inertial polygon to be the maximal value of
all pairs of points (p; n) and (p belonging to the inertial polygon. See Figure 1.1. This width
equals the sum of the lengths of the two perpendicular sides of the inertial polygon. It is clear
that Inequality (1.3) puts a limitation on the width; namely, w cannot exceed the modulus of the
difference of the right and left hand sides in this inequality:
(it can be shown directly that this value is always non-negative).
The sides, with slope of minus 1, come from Inequality (1.3), which may be rewritten as
In this way
is related to the width of the inertial polygon. The width of the inertial polygon tends to increase
as we increase the ranks of R_1 and R_2. In this section we study the two extreme cases. The "slim"
case is when rank R_1 = rank R_2 = rank C. Here the polygon degenerates to a segment with a 45
degree inclination. The "fat" case occurs under the maximal rank condition det R_1 · det R_2 ≠ 0;
here the polygon extends to maximum capacity, and fills a triangle (Corollary 5.8 ). We start with
the "slim" case.
Corollary 5.6 Given the notation of Theorem 1.1. Suppose that
Then the inertial polygon coincides with the segment
Moreover, the minimal rank completion is unique.
Proof. Condition (5.30) together with the Poincar'e inequalities implies that C, R_1 and R_2 all have
the same positivity and negativity (π, ν) (with possibly different nullities). The same condition also implies (5.26),
hence (5.25), and Corollary 5.5 can be used. We conclude that
That condition (5.31) implies zero width, is clear from (5.27). The rest which restricts the
polygon to a line segment of the form (- K: The value
follows from Theorem 1.1 with some algebra. It can also be deduced from the maximal rank
considerations in [CD].
To prove uniqueness of the minimal rank completion, we note that (5.30) forces the factorizations
I
I
can check that
the completion rank is rank C the unique solution requires that Z
Corollary 5.6 represents the extreme case of a "slim" inertial set. We now turn to examine
the other extreme case of a "fat" inertial set. Under the assumption that R 1 and R 2 are invertible
matrices, four inequalities among (1.1-1.4) are redundant, and the inertial polygon becomes a
triangle, admitting any inertia compatible with the Poincar'e inequalities and the size limitation.
First we quote the following known result about matrices with chordal graphs. Chordality is
discussed in [GJSW], [JR1] and [JR2], and it suffices to say that block bordered 3 \Theta 3 patterns (and
in fact the general staircase patterns of Section have chordal graphs.
Lemma 5.7 (Corollary 6 of [JR1]) In any hermitian partial matrix P of size m × m whose
pattern has a chordal graph and all of whose maximal hermitian specified submatrices are invertible, the
points (max_i π(R_i), m - max_i π(R_i)) and (m - max_i ν(R_i), max_i ν(R_i)), together with all the lattice points on the straight
line segment connecting them, belong to the inertial set of P.
In the bordered case we can say more:
Corollary 5.8 Assume the notation of Theorem 1.1. Suppose that R_1 and R_2 are invertible.
Then the inertial polygon is the triangle whose vertices are
v_0 = (max{π', π''}, max{ν', ν''}),   v_1 = (α + β + γ - max{ν', ν''}, max{ν', ν''}),   v_2 = (max{π', π''}, α + β + γ - max{π', π''}).
In other words, every inertia consistent with the Poincar'e inequalities
p >= max{π', π''},   n >= max{ν', ν''},   p + n <= α + β + γ
is attainable.
Proof. Let T be the triangle defined by the above three inequalities. Let D be the inertial polygon.
It is easy to check that v_0, v_1 and v_2 are the three vertices of T. Since the Poincar'e inequalities are a
subset of (1.1)-(1.4), we get the inclusion D ⊆ T. On the other hand, v_1 and v_2 belong to D by
Lemma 5.7, and v_0 belongs to D. By convexity, we conclude that T ⊆ D.
5.5. Simultaneous rank minimization
We now strengthen the minimal rank result obtained in the last subsection (Corollary 5.4). Consider
the partial matrices N 0 and N 00 of Equation 5.5. We wish to find a matrix Z which will
simultaneously induce minimal rank completions in N 0 and N 00 as well as in the full bordered
matrix P .
Before we tackle the general case, let us make the simplifying assumption (5.26), for which a
slightly stronger result is available. This very simple special case also serves as an outline and
motivation for the general case. Also readers who are only interested in Theorems 6.2 and 1.2 and
Corollary 6.4 but not in Theorem 6.7 may read the proof of Lemma 5.9 and skip the calculations
of Lemma 5.10.
Lemma 5.9 Assume, along with the notation of Theorem 1.1, that
rank ( B*  C ) = rank ( C  D ) = rank C.   (5.26)
Then there exists a matrix Z_0 satisfying simultaneously the inertia preserving condition
In P(Z_0) = (max{π', π''}, max{ν', ν''}, α + β + γ - max{π', π''} - max{ν', ν''}),   (5.32)
and (using the notation of Equations 5.5 and 5.6) the two minimal rank conditions
rank N'(Z_0) = rank R_1   and   rank N''(Z_0) = rank R_2.   (5.33)
Such a completion also satisfies the kernel condition
Ker P(Z_0) ⊇ Ker (R_1 ⊕ I_γ) + Ker (I_α ⊕ R_2).
Proof. Since Equation (5.26) implies Corollary 5.5, P admits inertia preserving completions,
(5.32). The fact that rank(R 1 ) and Rank(R 2 ) are the minimal completion ranks for N 0 and N 00
is obvious. To prove that conditions (5.32-5.33) are attainable simultaneously, we re-examine the
proof of Theorem 1.1, and reduce the situation to Lemma 4.6, where a positive answer is available.
We assume the rank condition (5.26), which implies Δ' = Δ'' = 0, and follow the proof of Theorem
1.1. The matrices B'' and D'' in (5.10) turn out to be zero:
@
A
of block sizes (ff; rank C; ffi(C); fl) . Now removing the zero row and column and then taking the
Schur complement with respect to C 0 yields
Y G
We get therefore
Y
G
Lemma 4.8 shows that a matrix exists for which -
G
are simultaneously
minimum rank completions. Choosing Z we see from (5.34) that Z 0
minimizes all the three ranks involved.
It remains to show the kernel condition. First, we observe, for all Z, that
Ker P (Z) oe Ker (N 0 (Z) \Phi I) and Ker(N 0 (Z)) ae Ker(R 1
For Z 0 just obtained, we actually have (5.33), hence the second containment must be equality:
Now the first containment becomes
Ker
Similarly Ker P (Z 0 ) oe Ker (I \Phi R 2
Ker
In the general situation, when the simplifying assumption rank
is not assumed, a simultaneous minimal rank solution still exists, but it is not necessarily an inertia
preserving solution, and the additional kernel condition cannot be guaranteed.
Lemma 5.10 (A simultaneous minimal rank completion lemma) With the notation of Theorem
1.1, the minimal completion ranks for are, respectively,
Moreover, there exists a matrix Z 0 which produces these ranks simultaneously.
Proof. Assume the notation of Theorem 1.1 and Observation 5.1. First we verify the expressions
for the minimal ranks involved. Corollary 5.4 implies that matrices.
Using our definitions of \Delta 0 and \Delta 00 , the identities r(N 0
are obvious.
Having computed the minimum completion ranks for these 3 matrices, we now demonstrate
that the three minimum ranks can be achieved simultaneously. Our plan is to perform all the
steps of the proof of Theorem 1.1 simultaneously on the three matrices involved. We call a step
permissible if a completion exists which preserves the three minimal ranks. As will be seen, not all
steps are permissible, and some modification will be necessary.
The reduction of H to -
H in (5.12) is permissible, since C is a common block in all three
matrices. Besides -
H; this reduction applied to N 0 and N 00 yields
@
F
F
I
A
@
I
A
Using (2.3), these operations preserve ranks:
Our aim now is to minimize simultaneously rank -
H in (5.12) and rank -
The passage from -
H in (5.12) to H 0 in (5.14) is also permissible, and may be followed by a
similar passage from -
H i to new matrices H 0
all the F; G; X entries located in first and
last block rows and columns in -
H i are made zero. We also discard zero rows and columns in these
matrices (the seventh block coordinate in H 0 ).
Permissibility is violated in the passage from H 0 to H 00 in (5.15). More precisely, complementation
of H 0 with respect to coordinates 1,4,6,10 is permissible; unfortunately, symmetric
complementation with respect to coordinates 2 and 5 is not permissible, since these coordinates are
not present in both H 0
Consequently, the proof of Theorem 1.1 has to be modified: we
perform on H 0 non-hermitian complementation (2.3) with respect to block rows 1,2,5,6 and block
columns 4,5,9,10, i.e. with respect to the matrix
[1;2;5;6][4;5;9;10] =B
@
I \Delta 00
A
. The Schur complement of H 0 with respect to H 0
[1;2;5;6][4;5;9;10] isB
@
I \Delta 00
22 X
A
\GammaB
@
A
@
I \Delta 00
A
@
A
@
I
22 X
A
This matrix is of the general formB
@
I
A
where the W i 's are unspecified. Indeed, it is easy to see that any arbitrary choice of the W i 's
can be achieved by appropriate choice of the X i 's. The respective Schur complements of H 0
1 and
2 with respect to H 0
[1;2;5;6][4;5;9;10] turn out to be
~
I
@
A
Taking zero is obviously a minimal rank choice for all three matrices.
We have reduced the original problem to the simpler problem of simultaneously minimizing the
ranks of the three matrices
F
G 11
Reduction to Lemma 4.8 is completed.
In Section 6 we shall use the following weakened form of Lemma 5.10, which has better propagation
properties.
Corollary 5.11 (Propagation of internal inequalities of bordered matrices) Given the notation
of Theorem 1.1 and Observation 5.1, then there exists a matrix Z 1 such that
Corollary 5.11 follows directly from Lemma 5.10 and Observation 5.1.
5.6. Solvability of the Riccati equation.
We end this section with a small contribution connected to the theory of Lyapunov and Riccati
equations.
Lemma 5.12 Given matrices A, B, and C of sizes α × α, α × β and β × β, respectively, with
A and C hermitian, define
Then the possible inertias of matrices of the form P arbitrary
Z, form the septagon
Proof. This is an easy corollary of Theorem 1.1, using complementation on the last two block
coordinates of the bordered matrixB B
@
Z I 0C C
In [CG3] the values of n and n were determined for this case.
Corollary 5.13 The Riccati equation A + AZ + ZB + ZCZ = 0 is solvable if and only if
These results apply also for the Lyapunov or Stein equations: simply assume that C or B is a
zero matrix. We emphasize, however, that in the classical context of these equations Z is assumed
hermitian (at least), and then it is not clear whether this puts additional restrictions on the set of
inertias.
6 Some Completion Results for General Band Matrices
In this section, we consider hermitian matrices with general block band or "staircase structure",
where again the blocks may vary in size. We follow the method of Dym and Gohberg [DG], which
decomposes the completion process into a succession of simple steps, each involving the completion
of one bordered submatrix. Combining this procedure with the results of Section 5, we are able to
draw several interesting conclusions:
(I) In Subsection 6.3 we identify certain classes of staircase hermitian matrices which admit
inertia preserving completions; these are completions which inherit their inertia values p and n
from (possibly two different) specified blocks of the given partial matrix P . Such completions are
obviously minimal rank completions (See Theorem 6.2 and Corollaries 6.3 and 6.9). These results
generalize the (scalar) band hermitian completion results of the second author in [D6].
(II) In Subsection 6.3 we consider block band or staircase matrices for which all maximal
specified hermitian submatrices are invertible. We show that such matrices admit all the possible
completion inertias consistent with Poincar'e's inequalities (See Theorem 1.2) which includes inertia
preserving completions. These results generalize the (scalar) band hermitian completion results of
the second author in [D6].
(III) Not every partial matrix admits inertia preserving completions. In Subsection 6.4 we
establish modest upper bounds on the minimal possible rank for hermitian completions of staircase
hermitian matrices.
6.1. The Staircase Matrix Notation
In dealing with staircase partial matrices, we shall adhere to the following notation and observations,
which shall collectively be referred to as the Staircase Notation.
We recognize the bordered matrix of Theorem 1.1 in each pair (R_i, R_{i+1}) of successive
maximal hermitian submatrices. We therefore define the s bordered partial submatrices P_i to be
P_i = ( A_i  B_i  ? ; B_i*  C_i  D_i ; ?  D_i*  E_i ),
where these specified submatrices of R are determined by
R_i = ( A_i  B_i ; B_i*  C_i )   and   R_{i+1} = ( C_i  D_i ; D_i*  E_i ).
As in the notation of Theorem 1.1, we observe that the R_i are the maximal specified block submatrices
and that each C_i is the overlap of R_i and R_{i+1}. The submatrices R_1, C and R_2 of the
notation of Theorem 1.1 correspond to the submatrices R_i, C_i and R_{i+1} of each bordered
submatrix P_i.
We shall denote by P_i(Z_i) the completions of P_i, using Z_i in the right upper corner block of P_i.
We denote the inertias of P_i(Z_i) by (p_i, n_i, d_i).
In dealing with the i-th bordered submatrix P_i, the incremental ranks Δ'_i, Δ''_i and the other quantities
defined in the notation of Theorem 1.1 and Observation 5.1 will be distinguished by the
subscript i.
6.2. The Diagonal Completion Formalism
(I) A diagonal [partial] completion R_+ of an s-step staircase partial matrix R is an
staircase partial hermitian matrix obtained by completing all the s bordered matrices P i of R. This
entails the addition of a matched pair of whole block diagonals alongside the specified band. In a
diagonal completion the different bordered completions P i (Z i ) are independent of each other, that
is the Z i do not overlap.
(II) A standard procedure for completing a staircase matrix is by a succession of diagonal
completions. Let F denote a full hermitian completion of R. We may obtain F via a chain of s
staircase partial matrices,
Each matrix in (6.1) is obtained from its precursor via diagonal completion, and its staircase pattern
is reduced by one step. We shall distinguish between the relevant submatrices of each
matrix in this chain by attaching to them the appropriate number of (subscript) plus signs.
(III) The N-th matrix (1 <= N <= s) in the above chain is called an N-diagonal [partial]
completion of R.
We will identify certain submatrices of R with their counterparts in R+ : For example,
the completed bordered matrices P i (Z i ) of R will be identified with the matrices R+i of R+ : In
addition, the maximal submatrices R i of R will be identified with the submatrices C+i of R+ :
The ordering of these matrices will always be from top left to bottom right.
This technique of completing a hermitian scalar band matrix by adding successive pairs of
diagonals was developed by Dym and Gohberg in [DG]; it was later used in many papers, including
[D5], [D6], [EGL1] and [EL]. We shall use the more general staircase or general block band approach
of the appendix of [JR2].
In applying the bordered case (for example, Theorem 1.1) to sections of a band matrix or the
more general staircase matrices, the key concept is propagation. Those properties which survive a
single (Theorem 1.1) completion step, will by induction, survive the full completion process. Our
results in this section are all based on properties which propagate.
6.3. Inertia preserving completions
We call an N-step completion R 0 of R inertia preserving if it satisfies the equations:
In particular, if F is a fully specified completion then it is inertia preserving if
Such a completion is necessarily a minimal rank completion.
Not every partial staircase matrix admits an inertia preserving completion. In fact, Example
6.10 will present an infinite sequence of partial staircase matrices whose maximal specified
submatrices all have rank 2 but all the full hermitian completions are invertible.
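Since all of the statements below are phrased in terms of the inertia of a hermitian matrix (the triple counting its positive, negative and zero eigenvalues), a small numerical sketch may help fix ideas. The reading of inertia preservation used in preserves_inertia below (the positivity and negativity of the completion equal the maxima over the specified maximal submatrices R i) is an assumption made for illustration, as are the function names and the numerical tolerance; numpy is assumed.

import numpy as np

def inertia(H, tol=1e-9):
    # Inertia of a hermitian matrix H: (#positive, #negative, #zero) eigenvalues.
    w = np.linalg.eigvalsh(H)
    p = int(np.sum(w > tol))
    n = int(np.sum(w < -tol))
    return p, n, len(w) - p - n

def preserves_inertia(F, maximal_blocks, tol=1e-9):
    # F: a candidate full hermitian completion (square numpy array).
    # maximal_blocks: the specified maximal hermitian submatrices R_i.
    pF, nF, _ = inertia(F, tol)
    return (pF == max(inertia(R, tol)[0] for R in maximal_blocks)
            and nF == max(inertia(R, tol)[1] for R in maximal_blocks))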
Lemma 6.1 (Propagation of inertia preservation) Given an s-step hermitian staircase matrix
R. Using the notation of Subsection 6.1, suppose that the blocks of R satisfy these propagation
equations:
Rank (B
for each (using the notation of Subsection 6.2):
(i) There exists a one step inertia preserving completion R+ of R for which
Rank (B
for each
(ii) This completion R+ satisfies the kernel condition
Ker R+i oe Ker (R i \Phi I)
Proof. The proof of (i) is a straightforward application of Lemma 5.9 repeated s times. (6.4)
implies that each P i fulfills the hypothesis of that lemma. Therefore each P i admits a completion
simultaneously
using the notation of Equations 5.5 and 5.6. For the condition rank N
into
For the condition rank N
Rank (B
Together this is precisely (6.5). The condition -(P of Lemma 5.7 applied to each
Part (ii) follows from part (i), (3.3) and Lemma 5.9.
We observe that (6.4) is a propagation condition: Lemma 6.1 shows that this condition can be
made to survive a single diagonal completion. Repeating Lemma 5.12 s times along the chain (6.1),
we get
Theorem 6.2 (Inertia preserving completions) Given an s-step hermitian staircase matrix
R fulfilling the propagation Equation (6.4):
rank (B
Then R admits an inertia preserving fully specified completion F for which
Moreover, for this completion F , Ker F contains all the appropriate kernels of the form Ker (I \Phi R i \Phi I).
Theorem 6.2 and its proof are largely a block generalization of Dancis' proof in [D6]. The
Poincaré inequalities show that the expressions in the theorem are lower bounds for the positivity
and negativity of every hermitian completion of R; Theorem 6.2 shows that they are achieved.
Corollary 6.3 If all the matrices C i and R i in a staircase matrix R have the same rank r,
then there exists a hermitian completion F with rank r.
Proof. The condition that all the C i and R i matrices have the same rank implies (6.4) and
hence Theorem 6.2 is applicable.
This corollary was established for (hermitian and nonhermitian) completions of hermitian and
nonhermitian, resp., band matrices in [EL].
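As a small concrete illustration of Corollary 6.3 (this example is not from the paper), consider the scalar tridiagonal partial matrix
\[
R = \begin{pmatrix} 1 & 1 & ? \\ 1 & 1 & 1 \\ ? & 1 & 1 \end{pmatrix}.
\]
Here R_1 and R_2 are the two specified 2x2 principal submatrices and C_1 = (1) is their overlap; all three have rank 1. Setting the two unspecified corner entries to 1 yields the all-ones matrix, a hermitian completion of rank 1, as the corollary predicts.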
An important special case where Theorem 6.2 is applicable is when all the C i submatrices of
R are invertible.
Corollary 6.4 (Inertia preserving completions) Given an s-step staircase hermitian matrix R.
Suppose that the s submatrices C i of R are all invertible. Then R admits an inertia preserving
completion F whose kernel contains all the appropriate kernels of the form Ker (I \Phi R i \Phi I):
Proof. Since all the C i are invertible, the propagation Equation (6.4) holds and Theorem 6.2
applies.
6.4. Incremental bounds on inertia growth
In general, even assuming a minimal rank completion in each step, the ranks of the matrices in (6.1)
may increase. At present, we cannot compute the minimal rank for the completions of a general
staircase matrix or even for a general scalar band matrix. The reason is lack of propagation: the
inertias of P i (Z i ) do not depend exclusively on the inertias of R i and C i ; as is evident from Theorem
1.1. However, use of Corollary 5.11 will enable us to obtain an upper bound on the inertia increase.
In the next observation and lemma, the B+i ; C+i and D+i matrices are the B
matrices of a diagonal completion, R+ ; the d+i will be
which is consistent with our general notation.
Observation 6.5 Let R be a staircase matrix, together with the notation of Subsections 6.1 and
6.2. Suppose that R+ is a diagonal completion of R for which all bordered completions
satisfy the simultaneous minimal rank completion Lemma 5.10. Then the incremental ranks of the
bordered submatrices of R and of R+ are related via
i and d 00
Proof. We have the following connections between R-related and R+ -related objects:
From Lemma 5.10 we note that
Combining these equations yields d 0
. The other equation may be similarly observed.
Lemma 6.6 (Propagation of incremental ranks) Let R be an s-step hermitian staircase ma-
trix. We use the notation
which is consistent with our bordered and band matrix notation. Set -
d+i g:
Then there exists a diagonal completion R+ of R for which
Proof. The proof is a straightforward application of Corollary 5.11. The definition of -
d and
implies that -
Therefore for each P i , there exists a completion P i (Z i ) which achieves
simultaneously
and
The last inequalities (6.9) translate to d 0
d and d 00
d; which repeated over all i implies that
d: The former inequalities (6.8) prove the rest of (6.7).
We observe that the inequalities above combine to form a propagation condition: Lemma 6.6 shows
that these inequalities may be transferred from a staircase matrix to a diagonal completion. Repeating
Lemma 6.6 until R is fully completed, we get:
Theorem 6.7 (Incremental bounds on inertia growth) Given an s-step hermitian staircase
matrix R (together with the notation of Subsection 6.1) then there exists a hermitian completion F
of R whose inertia (p; n; d) satisfies
Corollary 6.8 Given a hermitian staircase matrix R (together with the notation of Subsection
6.1). Suppose there is an integer t such that
then there exists a hermitian completion F of R such that
st st
Proof. The hypotheses and (5.3) provide: rank C
d i are integers, this inequality becomes -
t. In this way, one
sees that t - maxf -
and the theorem is applicable.
The case is of particular interest:
Corollary 6.9 Given a hermitian staircase matrix R (together with the notation of Subsection
6.1). Suppose that
then there exists an inertia preserving hermitian completion F of R .
That Theorem 6.7 is best possible without additional hypotheses is demonstrated by the next
example:
Example 6.10 Consider the matrix
I
where U is an
strictly upper triangular matrix. We consider R(U) as a partial s-step band matrix, in which U is
unspecified. In the notation of Theorem 6.7, all -
Thus, by this theorem,
we expect to find a completion with
However, since det R(U) is nonzero, (6.10) can only be satisfied with
equality. In fact, using the extended Poincaré inequalities, we see that every completion R(U)
satisfies (6.10) with equality.
6.5. Proof of Theorem 1.2: a staircase of invertible maximal hermitian submatrices
Throughout this subsection we shall assume that R is an s-step staircase partial matrix with all
maximal submatrices R i invertible. In this case, all inertias compatible with Poincaré's inequalities
are achievable with a hermitian completion.
First we present the minimal-rank inertia-preserving case as the next lemma.
Lemma 6.11 (An inertia preserving lemma) Given an s-step staircase hermitian matrix R
(together with the notation of Subsection 6.1). Suppose that each of the maximal submatrices
s of R is an invertible matrix. Then there is a hermitian completion F of R such
that
and such that
Ker F contains Ker R 1 +Ker R 2
Proof. We use Corollary 5.13 s times as we construct an inertia preserving diagonal completion
R+ of R. Then R+ will satisfy the hypotheses of Corollary 6.4, which will produce the desired
hermitian completion F with -(F
Proof of Theorem 1.2. We will use Corollary 5.8 repeatedly to construct successive diagonal
completions with invertible maximal matrices and increasing inertias.
Construction of the (first) diagonal completion (R+ ).
Case 1. If the size of P then Corollary 5.8 is used to choose a matrix Z i such that
is an invertible matrix with
Case 2. If the size of P then Corollary 5.8 is used to choose a matrix Z i such that
In
Depending only on its size, P i (Z i ) may be an invertible or a non-invertible matrix.
In both cases, the new fC+i g are the previous maximal submatrices, fR i g, and hence all the
new fC+i g of R+ are invertible matrices.
If Case 2 occurred at least once, then the desired positivity and negativity have been achieved.
Then Corollary 6.4 applied to R+ will provide the desired full completion F , with the desired inertia.
If Case 2 has not occurred, but only Case 1, then all the maximal submatrices of R+ are invertible.
In this manner, one constructs a number of successive diagonal completions until Case 2 is
used. With each successive diagonal completion, the values of the positivity and negativity grow.
At some point, at least one of the new -(R i ) and one of the new -(R j ) (for the latest successive
diagonal completion reach the desired values p and n . Then
This will occur when Case 2 is used, possibly sooner. With the possible
exception of the current diagonal completion, all the maximal specified submatrices of the various
successive diagonal completions were invertible (since only Case 1 was used). Therefore all the
C i of the current diagonal completion were the invertible maximal submatrices of the previous
diagonal completion. Therefore Corollary 6.4 is applicable and it completes the proof.
--R
Positive definite matrices with a given sparsity pattern.
On the eigenvalues of matrices with given upper triangular part
Determinantal formulae for matrix completions associated with chordal graphs.
The inertia of a Hermitian matrix having prescribed diagonal blocks.
Maximal rank Hermitian completions of partially specified Hermitian matrices.
Minimal signature in lifting of operators I
Minimal signature in lifting of operators II
The negative signature of some Hermitian matrices
Ranks of completions of partial matrices.
The possible inertias for a Hermitian matrix and its principal submatrices.
On the inertias of symmetric matrices and bounded self-adjoint operators
Several consequences of an inertia theorem.
Positive semidefinite completions of partial Hermitian matrices.
Choosing the inertias for completions of certain partially specified matrices
Ranks and Inertias of Hermitian Toeplitz matrices Report
Extensions of band matrices with band inverses.
maximum entropy and the permanence principle.
Invertible self adjoint extensions of band matrices and their entropy.
Completing Hermitian partial matrices with minimal negative signature.
Determination of the ii
On the inertia of some classes of partitioned matrices.
Matrix Analysis I
Inertia possibilities for completions of partial Hermitian matrices.
Chordal inheritance principles and positive definite completions of partial matrices over function rings
chordal graphs and matrix completions.
On the matrix equation AX
--TR | matrices;minimal rank;inertia;hermitian;completion |
295659 | Finitary fairness. | Fairness is a mathematical abstraction: in a multiprogramming environment, fairness abstracts the details of admissible (fair) schedulers; in a distributed environment, fairness abstracts the relative speeds of processors. We argue that the standard definition of fairness often is unnecessarily weak and can be replaced by the stronger, yet still abstract, notion of finitary fairness. While standard weak fairness requires that no enabled transition is postponed forever, finitary weak fairness requires that for every computation of a system there is an unknown bound k such that no enabled transition is postponed more than k consecutive times. In general, the finitary restriction fin(F) of any given fairness requirement Fis the union of all &ohgr;-regular safety properties contained in F. The adequacy of the proposed abstraction is shown in two ways. Suppose we prove a program property under the assumption of finitary fairness. In a multiprogramming environment, the program then satisfies the property for all fair finite-state schedulers. In a distributed environment, the program then satisfies the property for all choices of lower and upper bounds on the speeds (or timings) of processors. The benefits of finitary fairness are twofold. First, the proof rules for verifying liveness properties of concurrent programs are simplified: well-founded induction over the natural numbers is adequate to prove termination under finitary fairness. Second, the fundamental problem of consensus in a faulty asynchronous distributed environment can be solved assuming finitary fairness. | Introduction
Interleaving semantics provides an elegant and abstract way of modeling concurrent computation.
In this approach, a computation of a concurrent system is obtained by letting, at each step, one of
the enabled processes execute an atomic instruction. If all interleaving computations of a system
satisfy a property, then the property holds for all implementations of the program independent of
whether the tasks are multiprogrammed on the same processor and which scheduling policy is used,
or whether the system is distributed and what the speeds of different processors are. Furthermore,
the interleaving model is very simple as it reduces concurrency to nondeterminism.
A preliminary version of this paper appears in Proceedings of the Ninth IEEE Symposium on Logic in Computer
Science, pp. 52-61, 1994.
y On leave from Bell Laboratories, Lucent Technologies.
z Supported in part by the ONR YIP award N00014-95-1-0520, by the NSF CAREER award CCR-9501708, by
the NSF grant CCR-9504469, by the AFOSR contract F49620-93-1-0056, by the ARPA grant NAG2-892, and by the
contract 95-DC-324A.
The interleaving abstraction is adequate for proving safety properties of systems (a safety property
is of the form "something bad never happens," for example, mutual exclusion). However, it is
usually not suitable to prove guarantee properties (a guarantee property is of the form "something
good will eventually happen," for example, termination) or more general liveness properties. The
traditional approach to establishing guarantee properties is to require that all fair computations,
instead of all computations, satisfy the property. Intuitively, fairness means that no individual
process is ignored forever. Since all reasonable implementations of the system, whether in multi-programming
or in multiprocessing, are expected to be fair, if we prove that a program satisfies
a property under the assumption of fairness, it follows that the property holds for all possible
implementations of the program.
While the theory of specification and verification using different forms of fairness is well understood
(see, for example, [LPS82, Fra86, MP91]), fairness has two major drawbacks. First, the
mathematical treatment of fairness, both in verification and in semantics, is complicated and requires
higher ordinals. Second, fairness is too weak to yield a suitable model for fault-tolerant
distributed computing. This is illustrated by the celebrated result of Fischer, Lynch, and Paterson
that, under the standard fairness assumption, processes cannot reach agreement in an asynchronous
distributed system even if one process fails. We quote from their paper [FLP85]:
These results do not show that such problems [distributed consensus] cannot be solved
in practice; rather, they point out the need for more refined models of distributed
computing that better reflect realistic assumptions about processor and communication
timings.
We propose one such "more refined" model by introducing the notion of finitary fairness. We argue
that finitary fairness (1) is sufficiently abstract to capture all possible implementations, both in
the context of multiprogramming and in the context of distributed computing, and (2) does not
suffer from either of the two aforementioned disadvantages associated with the standard notion of
fairness.
Justification of finitary fairness
A fairness requirement is specified as a subset F of the set of all possible ways of scheduling different
processes of a program. Let us first consider a multiprogramming environment, where all tasks are
scheduled on a single processor. A scheduler that meets a given fairness requirement F is a program
whose language (i.e., set of computations) is contained in F . The language of any program is a
safety property (i.e., it is closed under limits). Furthermore, if the scheduler is finite-state, then
its language is !-regular. Thus, to capture all finite-state schedulers that implement F , it suffices
to consider the (countable) union of all !-regular safety properties that are contained in F . There
are several popular definitions of F , such as strong fairness, weak fairness, etc. [LPS82, Fra86].
For every choice of F , we obtain its finitary version fin(F ) as the union of all !-regular safety
properties contained in F . In the case of weak fairness F , we show that the finitary version fin(F )
is particularly intuitive: while F prohibits a schedule if it postpones a task forever, fin(F ) also
prohibits a schedule if there is no bound on how many consecutive times a task is postponed. In
general, a fairness requirement F is an !-regular liveness property [AFK88]. We show that the
finitary version fin(F ), then, is still live, but not !-regular.
Now let us consider a distributed environment, where all tasks are executed concurrently on
different processors. Here, finitary fairness corresponds to the assumption that the execution speeds
of all processors stay within certain unknown, but fixed, bounds. Formally, a distributed system
can be modeled as a transition system that imposes lower and upper time bounds on the transitions
[HMP94]. We show that a timed transition system satisfies a property for all choices of lower
and upper time bounds iff the underlying untimed transition system satisfies the same property
under finitary weak fairness. This correspondence theorem not only establishes the adequacy of
finitary fairness for distributed systems, but in addition provides a method for proving properties
of timed systems whose timing is not known a priori.
To summarize, finitary fairness abstracts the details of fair finite-state schedulers and the details
of the independent speeds (timings) of processors with bounded drift. The parametric definition
of finitary fairness also lends itself to generalizations such as computable fairness : the computable
version com(F ) of a fairness assumption F is the (countable) union of all recursive safety properties
that are contained in F . In a multiprogramming environment, computable fairness abstracts the
details of fair computable schedulers; in a distributed environment, computable fairness abstracts
the independent speeds of processors whose drift is bounded by any recursive function.
Benefits of finitary fairness
Verification. We address the problem of verifying that a program satisfies a property under a
finitary fairness assumption fin(F ). Since fin(F ) is not !-regular, it is not specifiable in temporal
logic. This, however, is not an obstacle for verification. For finite-state programs, we show that a
program satisfies a temporal-logic specification under fin(F ) iff it satisfies the specification under
F itself. This means that for finite-state programs, the move to finitary fairness does not call for a
change in the verification algorithm.
For general programs, the proof rules for verifying liveness properties are simplified by the
use of finitary fairness. Suppose we wish to prove that a program terminates. To prove that all
computations of a program terminate, one typically identifies a ranking (variant) function from the
states of the program to the natural numbers such that the rank decreases with every transition
of the program. This method is not complete for proving the termination of all fair computations.
First, there may not be a ranking function that decreases at every step. The standard complete
verification rule, rather, relies on a ranking function that never increases and is guaranteed to
decrease eventually [LPS82, Fra86]. For this purpose, one needs to identify so-called "helpful"
transitions that cause the ranking function to decrease. Second, induction over the natural numbers
is not complete for proving fair termination and one may have to resort to induction over ordinals
higher than !.
We show that proving the termination of a program under finitary weak fairness can be reduced
to proving the termination of all computations of a transformed program. The transformed
program uses a new integer variable, with unspecified initial value, to represent the bound on how
many consecutive times an enabled transition may be postponed. Since the termination of all computations
of the transformed program can be proved using a strictly decreasing ranking function on
the natural numbers, reasoning with finitary fairness is conceptually simpler than reasoning with
standard fairness.
Distributed consensus. A central problem in fault-tolerant distributed computing is the consensus
problem, which requires that the non-faulty processes of a distributed system agree on a
common output value [PSL80]. Although consensus cannot be reached in the asynchronous model
if one process fails [FLP85], in practice, consensus is achieved in distributed applications using
constructs like timeouts. This suggests that the asynchronous model with its standard fairness
assumption is not a useful abstraction for studying fault-tolerance. One proposed solution to this
problem considers the unknown-delay model (also called partially synchronous model) in which there
is fixed upper bound on the relative speeds of different components, but this bound is not known a
priori [DLS88, AAT97, RW92]. The asynchronous model with the finitary fairness assumption is an
abstract formulation of the unknown-delay model. In particular, we prove that the asynchronous
model with the finitary fairness assumption admits a wait-free solution for consensus that tolerates
an arbitrary number of process failures, by showing that finitary fairness can substitute the timing
assumptions of the solution of [AAT97].
Informal Motivation: Bounded Fairness
Before introducing the general definition of finitary fairness (Section 3) and its applications (Sec-
tions 4 and 5), we begin by motivating the finitary version of weak fairness through the intuitive
concept of bounded fairness. Consider the following simple program P 0 with a boolean variable x
and an integer variable y:
initially x = true and y = 0:
repeat x := ¬x forever ∥ repeat y := y + 1 forever
The program P 0 consists of two processes, each with one transition. The transition l complements
the value of the boolean variable x; the transition r increments the value of the integer variable y.
A computation of P 0 is an infinite sequence of states, starting from the initial state
and such that every state is obtained from its predecessor by applying one of the two
transitions. For the purpose of this example, a schedule is an infinite word over the alphabet fl; rg.
Each computation of P 0 corresponds, then, to a schedule, which specifies the order of the transitions
that are taken during the computation. The two processes of P 0 can be executed either by
multiprogramming or in a distributed environment.
Multiprogramming
In a multiprogramming environment, the two processes of P 0 are scheduled on a single processor.
A scheduler is a set of possible schedules. One typically requires that the scheduler is "fair"; that
is, it does not shut out one of the two processes forever. Formally, a schedule is fair iff it contains
infinitely many l transitions and infinitely many r transitions; a scheduler is fair iff it contains only
fair schedules.
Let F1 be the set of fair schedules. If we restrict the set of computations of the
program P 0 to those that correspond to fair schedules, then P 0 satisfies a property OE iff every
computation of P 0 whose schedule is in F1 satisfies OE. For instance, under the fairness assumption
F1 , the program P 0 satisfies the property OE 1 = □◇ x ∧ □◇ even(y);
that is, in any fair computation, the value of x is true in infinitely many states, and the value of
y is even in infinitely many states. Note that there are computations of P 0 that correspond to
unfair schedules, and do not satisfy the formula OE 1 . Thus, the fairness assumption is necessary to
establish that the program P 0 satisfies the property OE 1 .
The fairness requirement F1 is an abstraction of all admissible real-life schedulers, namely, those
that schedule each transition "eventually." Any (non-probabilistic) real-life scheduler, however, is
finite-state and therefore must put a bound on this eventuality.
Consider, for instance, a round-robin scheduler that schedules the transitions l and r alternately.
For round-robin schedulers, we can replace the fairness assumption F1 by the much stronger
assumption F 1 that contains only two schedules, (lr) ! and (rl) ! . Under F 1 , the program P 0
satisfies the property
which implies the property OE 1 . We call F 1 a 1-bounded scheduler. In general, for a positive integer
k, a k-bounded scheduler never schedules one transition more than k times in a row. Formally,
a schedule is k-bounded, for k - 1, iff it contains neither the subsequence l k+1 nor r k+1 ; a scheduler
is k-bounded iff it contains only k-bounded schedules (a similar definition is considered in [Jay88]).
Let F k be the set of k-bounded schedules. The assumption F k of k-boundedness is, of course,
not sufficiently abstract, because for any k, it is easy to build a fair finite-state scheduler that is
not k-bounded. So let us say a schedule is bounded iff it is k-bounded for some positive integer k,
and a scheduler is bounded iff it contains only bounded schedules. Clearly, every fair finite-state
scheduler is bounded. In order to prove a property of the program for all implementations, then,
it suffices to prove the property for all bounded schedulers.
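As a concrete illustration (not part of the original development), checking k-boundedness of a finite schedule prefix over the alphabet {l, r} amounts to checking that no letter occurs more than k times in a row; a minimal Python sketch, with illustrative names, is:

def is_k_bounded(prefix, k):
    # prefix: a finite schedule prefix, e.g. the string "lrllrl".
    # Returns True iff no symbol occurs more than k consecutive times.
    run = 0
    for i, symbol in enumerate(prefix):
        run = run + 1 if i > 0 and symbol == prefix[i - 1] else 1
        if run > k:
            return False
    return True

# For example, is_k_bounded("lrllrl", 2) is True, whereas for any fixed k
# a sufficiently long prefix of the fair but unbounded schedule
# l r l^2 r l^3 r ... contains the block l^(k+1) and is rejected.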
Let F ! be the set of bounded schedules. If we restrict the set of computations of
the program P 0 to those that correspond to bounded schedules, then P 0 satisfies a property OE iff
every computation of P 0 whose schedule is in F ! satisfies OE. We call F ! the finitary restriction of
the fairness assumption F1 .
Three observations about F ! are immediate. First, the finitary version F ! is a proper subset
of F1 ; in particular, a schedule such as l r l 2 r l 3 r ... is fair but unbounded, and therefore belongs
to F1 but not to F ! . Second, the set F ! itself is not a finite-state scheduler, but is the countable union of
all fair finite-state schedulers. Third, F ! is again a liveness property, in the sense that a stepwise
scheduler cannot paint itself into a corner [AFK88]: every finite word over fl; rg can be extended
into a bounded schedule 1 .
Since the finitary fairness assumption F ! is stronger than the fairness assumption F1 , a program
may satisfy more properties under F ! . Consider, for example, the property OE ! = ◇ (taken(l) ∧ ¬power-of-2 (y)),
where the state predicate power-of-2 (y) is true in a state iff the value of y is a power of 2. If a
computation of P 0 does not satisfy OE ! , then it must be the case that the transition l is scheduled
only when power-of-2 (y) holds. It follows that for every positive integer k, there is a subsequence of
length greater than k that contains only r transitions. Such a schedule does not belong to F ! and,
hence, the program P 0 satisfies the property OE ! under F ! . On the other hand, it is easy to construct
a fair schedule that does not satisfy OE ! , which shows that P 0 does not satisfy OE ! under F1 .
Multiprocessing
In a distributed environment, the two processes of P 0 are executed simultaneously on two processors.
While the speeds of the two processors may be different, one typically requires of a (non-faulty)
processor that each transition consumes only a finite amount of time. Again, the fairness requirement
F1 is an abstraction of all admissible real-life processors, namely, those that complete each
transition "eventually." Again, the fairness assumption F1 is unnecessarily weak.
Assume that the transition l, executed on Processor I, requires at least time ' l and at most
time u l , for two unknown rational numbers ' l and u l with u l - ' l ? 0. Similarly, the transition r,
It should also be noted that the set F! does not capture randomized schedulers. For, given a randomized scheduler
that chooses at every step one of the two transitions with equal probability, the probability that the resulting schedule
is in F! is 0. On the other hand, the probability that the resulting schedule is in F1 is 1.
executed on Processor II, requires at least time ' r ? 0 and at most time u r - ' r . Irrespective of
the size of the four time bounds, there is an integer k - 1 such that both k \Delta ' l ? u r and k \Delta ' r ? u l .
Each computation corresponds, then, to a k-bounded schedule. It follows that finitary fairness is
an adequate abstraction for speed-independent processors. It should be noted that finitary fairness
is not adequate if the speeds of different processors can drift apart without bound. For this case,
we later generalize the notion of finitary fairness.
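The integer k used in this argument can be read off directly from the four constants; for instance (a routine calculation, spelled out here only for concreteness),
\[
k = \Big\lfloor \max\Big(\tfrac{u_r}{\ell_l}, \tfrac{u_l}{\ell_r}\Big) \Big\rfloor + 1
\]
satisfies both k \cdot \ell_l > u_r and k \cdot \ell_r > u_l, since \ell_l and \ell_r are positive.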
Finitary Fairness
3.1 Sets of infinite words
An !-language over an alphabet \Sigma is a subset of the set \Sigma ! of all infinite words over \Sigma. For instance,
the set of computations of a program is an !-language over the alphabet of program states.
Regularity
An !-language is !-regular iff it is recognized by a Büchi automaton, which is a nondeterministic
finite-state machine whose acceptance condition is modified suitably so as to accept infinite
words [Büc62]. The class of !-regular languages is very robust with many alternative characterizations
(see [Tho90] for an overview of the theory of !-regular languages). In particular, the
set of models of any formula of (propositional) linear temporal logic (PTL) is an !-regular language
[GPSS80]. The set of computations of a finite-state program is an !-regular language. The
set F1 (Section 2) of fair schedules over the alphabet fl; rg is an !-regular language (23 l - 23 r),
and so is the set F k of k-bounded schedules, for every k - 1.
Safety and Liveness
For an !-language \Pi ' \Sigma ! , let pref (\Pi) ' \Sigma be the set of finite prefixes of words in \Pi. The
!-language \Pi is a safety property (or limit-closed) iff for all infinite words w, if all finite prefixes of
w are in pref (\Pi) then w 2 \Pi [ADS86]. Every safety property \Pi is fully characterized by pref (\Pi).
Since a program can be executed step by step, the set of computations of a program is a safe
!-language over the alphabet of program states.
A safety property is !-regular iff it is recognized by a Büchi automaton without acceptance
conditions. Properties defined by temporal-logic formulas of the form 2 p, where p is a past formula
of PTL, are safe and !-regular. For every k - 1, the set F k (Section 2) of k-bounded schedules is
an !-regular safety property.
The !-language \Pi is a liveness property iff pref that is, every finite word can
be extended into a word in \Pi. The set F1 (Section 2) of fair schedules is an !-regular liveness
property.
Topological characterization
Consider the Cantor topology on infinite words: the distance between two distinct infinite words
w and w 0 is 1/2^i , where i is the largest nonnegative integer such that w and w 0 agree on their first i symbols.
The closed sets of the Cantor topology are the safety properties; the dense sets are the liveness
properties. All !-regular languages lie on the first two-and-a-half levels of the Borel hierarchy:
every !-regular language is in F σδ ∩ G δσ . 2
There is also a temporal characterization of the first two-and-a-half levels of the Borel hierarchy
[MP90]. Let p be a past formula of PTL. Then every formula of the form 2 p defines an
F-set; every formula of the form 3 p, a G-set; every formula of the form 23 p, a G δ -set; and every
formula of the form 32 p, an F σ -set. For example, the set F1 of fair schedules is a G δ -set.
3.2 The finitary restriction of an !-language
Now we are ready to define the operator fin:
The finitary restriction fin(\Pi) of an !-language \Pi is the (countable) union of all !-regular
safety languages that are contained in \Pi.
By definition, the finitary restriction of every !-language is in F σ . Also, by definition, fin(\Pi) ' \Pi.
The following theorem states some properties of the operator fin.
Theorem 1 Let \Pi, \Pi 0 be !-languages:
1.
2. fin is monotonic: if \Pi ae \Pi 0 then fin(\Pi) ae fin(\Pi 0 ).
3. fin distributes over intersection: fin(\Pi ∩ \Pi 0 ) = fin(\Pi) ∩ fin(\Pi 0 ).
Proof. The first two follow immediately from the definition of fin. Since \Pi " \Pi 0 is contained in
\Pi as well as in \Pi 0 , from the monotonicity, we have To prove the
inclusion From the definition
of fin, there exist !-regular safety properties
1 .
The class of safety properties is closed under intersection, and so is the class of !-regular languages.
Hence,
1 is an !-regular safety property. Since w
The following proposition formalizes the claims we made about the example in Section 2. It also
shows that the finitary restriction of an !-regular language is not necessarily !-regular.
Proposition 2 Let F1 be the set of fair schedules from Section 2, and let F ! be the set of bounded
schedules. Then F ! is the finitary restriction of F1 (that is, fin(F1 ) = F ! ), and F ! is neither
!-regular nor safe.
Proof. Recall that F is the union of !-regular safety properties contained
in F1 . Each F k is an !-regular safety property and F k ae F1 . Hence,
Now consider an !-regular safety property G contained in F1 . Suppose G is accepted by a
Büchi automaton MG over the alphabet fl; rg. Without loss of generality, assume that every state
of MG is reachable from some initial state, and every state is an accepting state (since G is a safety
property). We wish to prove that if MG has k states then G ' F k . Suppose not. Then there is a
word w such that MG accepts w and w contains consecutive symbols of the same type, say
l. Thus Since MG has only k states, it follows
2 The first level of the Borel hierarchy consists of the class F of closed sets and the class G of open sets; the second
level, of the class G ffi of countable intersections of open sets and the class F oe of countable unions of closed sets; the
third level, of the class F oeffi of countable intersections of F oe -sets and the class G ffioe of countable unions of G ffi -sets.
that there is a state s of MG such that there is a path from the initial state to s labeled with w 0 l i
for some 0 - i - k, and there is a cycle that contains s and all of whose edges are labeled with
l. This implies that MG accepts the word w which is not a fair schedule, a contradiction to the
inclusion G ' F1 .
Observe that, for each k - 1, the schedule l k can be extended to a word in F k , and hence
implying that F ! is not closed under limits, that is, F ! is not a
safety property.
Now we prove that F ! is not !-regular. Suppose F ! is !-regular. From the closure properties
of !-regular languages, the set of unbounded fair schedules is also !-regular. We
know that G is nonempty (it contains the schedule l r l 2 r l 3 r ... ). From the properties of !-
regular languages it follows that G contains a word w such that w = w 1 (w 2 ) ! for two finite words
w 1 and w 2 , where w 2 contains at least one l and one r symbol. This means that, for a sufficiently large k, w is k-bounded and thus w 2 F ! ,
a contradiction to the assumption that w 62 F ! .
In other words, although F ! is a countable union of safety properties that are definable in PTL, F !
itself is neither a safety property nor definable in PTL. To define F ! in temporal logic, one would
need infinitary disjunction. In general, the operator fin does not preserve liveness; that is,
it may happen that \Pi is live but fin(\Pi) is not. However, when applied to !-regular
properties, liveness is preserved.
Theorem 3 If \Pi is an !-regular language then pref (fin(\Pi)) = pref (\Pi).
Proof. Since fin(\Pi) ' \Pi and pref is monotonic, pref (fin(\Pi)) ' pref (\Pi). To prove the inclusion
pref (\Pi) ' pref (fin(\Pi)), suppose \Pi is an !-regular language over \Sigma, and consider w 2 pref (\Pi). From
!-regularity of \Pi, it follows that there is a word w 0 2 \Pi such that w is a prefix of w 0 and
w 0 = w 1 (w 2 ) ! for finite words w 1 and w 2 . The language containing the single word w 0 is !-regular,
safe, and contained in \Pi.
Hence, w 0 2 fin(\Pi) and therefore w 2 pref (fin(\Pi)).
This immediately leads to the following corollary:
Corollary 4 If \Pi is an !-regular liveness property then fin(\Pi) is live.
Observe that the language F1 is !-regular and live, and hence, F ! is also live: pref (F ! ) = pref (F1 ).
This means that when executing a program, the fairness requirement F ! , just like the original
requirement F1 , can be satisfied after any finite number of steps.
The operator fin is illustrated on some typical languages below:
fin(□◇ p) = { w | ∃k such that every subsequence of length k has some p };
fin(□ (p → ◇ q)) = { w | ∃k such that every subsequence with k p's has some q }.
3.3 Transition systems
From standard fairness to finitary fairness
Concurrent programs, including shared-memory and message-passing programs, can be modeled as
transition systems [MP91].
A transition system P is a triple (Q; T; Q 0 ), where Q is a set of states, T is a finite set of
transitions, and Q 0 ' Q is a set of initial states. Each state q 2 Q is an assignment of values to all
program variables; each transition - 2 T is a binary relation on the states (that is, - ' Q 2 ). For a
state q and a transition - , let - (q) = { q 0 | (q; q 0 ) 2 - } be the set of - -successors of q. A computation
q of the transition system P is an infinite sequence of states such that q 0 2 Q 0 and for all i - 0,
there is a transition - 2 T with q i+1 2 - (q i ). We write \Pi(P ) for the set of computations of P . The
set \Pi(P ) is a safe !-language over Q. If Q is finite, then \Pi(P ) is !-regular.
A transition - is enabled at the i-th step of a computation q iff -(q i ) is nonempty, and - is
taken at the i-th step of q iff q i+1 2 - (q i ). Without loss of generality, we assume that the set of
program variables contains for every transition - 2 T a boolean variable enabled(-) and a boolean
variable taken(- ). Let the scheduling alphabet \Sigma T be the (finite) set of interpretations of these
boolean variables, that is, \Sigma T is the power set of the set fenabled(-); taken(-) j - 2 Tg. Given a
computation q of P , the schedule oe(q) of q is the projection of q to the scheduling alphabet. The
set of schedules of P , then, is a safety property over \Sigma T .
A fairness requirement F for the transition system P is an !-language over the finite scheduling
alphabet \Sigma T . The fairness requirement restricts the set of allowed computations of the program.
In general, F is an !-regular liveness property [AFK88]. The requirement of liveness ensures that,
when executing a program, a fairness requirement can be satisfied after any finite number of steps.
In particular, the requirement of weak fairness WF for P is the set of all infinite words w
such that for every transition - 2 T , there are infinitely many integers i - 0 with taken(-) 2 w i or
enabled(-) 62 w i ; that is, no transition is enabled forever without being taken. It is specified by the
formula ∧_{τ ∈ T} □◇ (taken(τ) ∨ ¬enabled(τ)).
The requirement WF is !-regular and live.
The requirement of strong fairness SF for P is the set of all infinite words w
over \Sigma T such that
for every transition - 2 T , if there are infinitely many steps i with enabled(-) 2 w i , then
there are infinitely many steps i with taken(-) 2 w i ; that is, no transition is enabled infinitely
often without being taken. It is a stronger requirement than weak fairness (SF ae WF ), and is
specified by the formula ∧_{τ ∈ T} (□◇ enabled(τ) → □◇ taken(τ)).
The weak-fairness requirement WF is a G δ -set; the strong-fairness requirement SF is neither
in G δ nor in F σ , but lies in F σδ ∩ G δσ . Since both sets are !-regular, their finitary restrictions
fin(WF) and fin(SF ), which belong to F σ , are again live (Corollary 4).
The next theorem (a generalization of Proposition 2) shows that the finitary restriction of weak
and strong fairness coincide with the appropriate notions of bounded fairness. Define a schedule
w over \Sigma T to be weakly-k-bounded , for a nonnegative integer k, iff for all transitions - of P , - cannot be
enabled for more than k consecutive steps without being taken, that is, for all integers i - 0, there
is an integer j with i - j - i + k such that taken(-) 2 w j or enabled(-) 62 w j . A schedule is weakly-bounded
if it is weakly-k-bounded for some k - 0. Similarly, a schedule w is strongly-k-bounded iff for all
transitions - , if a subsequence of w contains k distinct positions where - is enabled, then it contains
a position where - is taken; that is, whenever the positions i through j of w include k distinct
positions at which - is enabled, they also include a position at which - is taken. A schedule is strongly-bounded if it is
strongly-k-bounded for some k - 0.
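To make the weak case concrete, a finite prefix of a schedule over the scheduling alphabet can be checked for weak k-boundedness as sketched below; this is an illustration only, and the representation of a prefix as a list of (enabled, taken) set pairs is a choice made for this sketch.

def weakly_k_bounded(prefix, k):
    # prefix: finite list of steps; each step is a pair (enabled, taken)
    # of sets of transition names.  Returns True iff no transition is
    # enabled at more than k consecutive steps without being taken.
    transitions = set()
    for enabled, _ in prefix:
        transitions |= enabled
    for t in transitions:
        run = 0
        for enabled, taken in prefix:
            if t in enabled and t not in taken:
                run += 1
                if run > k:
                    return False
            else:
                run = 0          # t was taken or not enabled at this step
    return True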
Theorem 5 Let P be a transition system with the transition set T and the weak and strong fairness
requirements WF and SF . For all infinite words w over \Sigma T , w 2 fin(WF ) iff w is weakly-bounded,
and w 2 fin(SF) iff w is strongly-bounded.
Proof. We will consider only the weak fairness. The set of weakly-k-bounded schedules, for a
fixed k, is defined by the formula
where wf (-) stands for the disjunction taken(-) ∨ ¬enabled(-). It follows that the set of weakly-k-
bounded schedules is safe and !-regular, for all k - 0. Thus, every weakly-bounded schedule is in
Now consider an !-regular safety property G contained in WF . Suppose G is accepted by a
Büchi automaton MG over the alphabet \Sigma T . Without loss of generality, assume that every state
of MG is reachable from its initial state, and every path in MG is an accepting path. It suffices to
prove that if MG has k states then every schedule in G is weakly-k-bounded. Suppose not. Let us
say that a symbol of \Sigma T is weakly-unfair to a transition - , if it contains enabled(-) and does not
contain taken(- ). From assumption, there is a word w accepted by MG and a transition - such
that w contains k consecutive symbols all of which are weakly-unfair to - . Since MG has only
states, it follows that there is a cycle in MG all of whose edges are labeled with symbols that
are weakly-unfair to - . This implies that MG accepts a schedule that is not weakly-fair to - , a
contradiction to the inclusion G ' WF .
Observe that for the example program P 0 of Section 2, with T = fl; rg, the propositions enabled(l) and
enabled(r) are true in every state, and the fairness requirement F1 equals both WF and SF . This
implies that
Corollary 6 If P is a transition system with at least two transitions, then both fin(WF ) and
fin(SF) are neither !-regular nor safe.
A computation q of the transition system P is fair with respect to the fairness requirement F
iff oe(q) 2 F . We write \Pi F (P ) for the set of fair computations of P . A specification \Phi for the
transition system P is a set of infinite words over the alphabet Q. The transition system P satisfies
the specification \Phi under the fairness requirement F iff \Pi F (P ) ' \Phi. If we prove that P satisfies \Phi
under the fairness assumption F , then P satisfies \Phi for all implementations of F ; if we prove that P
satisfies \Phi under the finitary restriction fin(F ), then P satisfies \Phi for all finite-state implementations
of F . In Section 4, we show that proving the latter is conceptually simpler than proving the former.
3.4 Timed transition systems
From timing to finitary fairness
Standard models for real-time systems place lower and upper time bounds on the duration of
delays [HMP94, MMT91]. Since the exact values of the time bounds are often not known a priori,
it is desirable to design programs that work for all possible choices of time bounds. It has been
long realized that the timing-based model with unknown delays is different from, and often more
appropriate than, the asynchronous model (with standard fairness) [DLS88, AAT97, RW92]. We
show that the unknown-delay model is equivalent to the asynchronous model with finitary fairness.
Real-time programs can be modeled as timed transition systems [HMP94]. A timed transition
system P ';u consists of a transition system P = (Q; T; Q 0 ) and two functions ' and u from the set
T of transitions to the set Q?0 of positive rational numbers. The function ' associates with each
transition - a lower bound ' - ? 0; the function u associates with - an upper bound u - .
The interleaving semantics of transition systems is extended to timed transition systems by
labeling every state of a computation with a real-valued time stamp. A time sequence t is an
infinite nondecreasing and unbounded sequence of real numbers. For t to be consistent with a
given computation q of the underlying transition system P , we require that a transition - has
to be enabled continuously at least for time ' - before it is taken, and it must not stay enabled
continuously longer than time u - without being taken. Note that if a transition - is enabled in
all states q n , for i - n - j, and - is not taken in all states q n , for then it has been
continuously enabled for t Then, the time
sequence t is consistent with the computation q iff for every transition - 2 T ,
[Lower bound ] if taken(-) 2 q j , then for all steps i with
and taken(-) 62 q
[Upper bound ] if enabled(-) 2 q k for all steps k with i - k - j, and taken(-) 62 q k for all steps k
with
A timed computation (q; t) of the timed transition system P ';u consists of a computation q of P
together with a consistent time sequence t. The first component of each timed computation of P ';u
is an untimed computation of P ';u . We write \Pi ';u (P ) for the set of untimed computations of P ';u .
In general, \Pi ';u (P ) is a strict subset of \Pi(P ); that is, the timing information ' and u plays the same
role as fairness, namely, the role of restricting the admissible interleavings of enabled transitions.
If Q is finite then, like \Pi(P ), \Pi ';u (P ) is also !-regular [AD94] (but not necessarily safe).
While the timed computations are required for checking if a system satisfies a specification that
refers to time, the untimed computations suffice for checking if the system satisfies an untimed
specification \Phi ' Q ! : the timed transition system P ';u satisfies the specification \Phi iff \Pi ';u (P ) ' \Phi.
In the unknown-delay model, we do not know the bound functions ' and u, but rather wish to prove
that a transition system P satisfies the specification \Phi for all possible choices of bound functions;
that is, we wish to prove that the union [ ';u \Pi ';u (P ) is contained in \Phi. The following theorem
shows that in order to verify a system in the unknown-delay model, it suffices to verify the system
under finitary weak fairness; that is, the union [ ';u \Pi ';u (P ) is same as the set \Pi fin(WF) (P ).
Theorem 7 Let P be a transition system with the set T of transitions, let WF be the weak fairness
requirement for P , and let q be a computation of P . Then q 2 \Pi fin(WF) (P ) iff for some function '
and u from T to Q?0 , q 2 \Pi ';u (P ).
Proof. Consider a weakly-bounded computation q 2 \Pi fin(WF) (P ). From Theorem 5, there is a
nonnegative integer k such that the schedule corresponding to q is weakly-k-bounded. Let the
bound functions be defined as ' . Consider the time
sequence increases by 1 at every step, it is clear that the
lower bound requirement is trivially satisfied. Since q is weakly-k-bounded, no transition is enabled
for more than k consecutive steps (and hence, for more than k being taken.
Thus, the consistency requirements are satisfied, and (q; t) is a timed computation of P ';u , implying
To prove the converse, suppose q 2 \Pi ';u (P ) for some choice of ' and u. Let t be a time
sequence such that (q; t) is a timed computation of P ';u . Let ' be the (nonzero) minimum of all
the lower bounds ' - , and u be the (finite) maximum of all the upper bounds u - . Let the number
of transitions in T be n. Let k be an integer such that k ? nu
' . We claim that the schedule
corresponding to q is weakly-k-bounded. Suppose not. Then there is a transition - and i - 0 such
that - is enabled but not taken at all states q . At every step of q, taken(- 0 ) holds for
some transition - 0 . Since k ? nu
' , it follows that, there is a transition - 0 such that taken(- 0 ) holds
at more than u
' distinct states between q i and q i+k . Since '(- 0 , from the assumption that t
satisfies the lower bound requirement, we have t
implies that t violates the upper bound requirement of consistency, a contradiction. In conclusion,
q is weakly-k-bounded, and hence, q 2 \Pi fin(WF) (P ).
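The second half of the proof is constructive, and the bound it yields is easy to compute; a small sketch (illustrative only) is:

def weak_bound_from_timing(lower, upper):
    # lower, upper: dicts mapping each transition to its lower bound l_t > 0
    # and its upper bound u_t, as in the proof above.  Returns an integer k
    # with k > n * max(u) / min(l), so that every timed computation of the
    # system with these bounds is weakly-k-bounded.
    n = len(lower)
    l_min = min(lower.values())
    u_max = max(upper.values())
    return int(n * u_max / l_min) + 1

# Example: three transitions, each with l_t = 0.5 and u_t = 2, give
# weak_bound_from_timing(...) = int(3 * 2 / 0.5) + 1 = 13.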
We point out that all lower bounds are, although arbitrarily small, nonzero, and all upper bounds
are finite. This is necessary, and justified because we universally quantify over all choices of bound
functions. We also point out that reasoning in a timing-based model with specific bound functions-
i.e., reasoning about timed computations-can be significantly more complicated than untimed
reasoning [HMP94]. Our analysis shows, therefore, that the verification of specifications that do
not refer to time is conceptually simpler in the unknown-delay model than in the known-delay
model.
3.5 The gap between finitary and standard fairness
The definition of finitary fairness replaces a given !-language \Pi by the union of all !-regular safety
properties contained in \Pi. While this definition seems satisfactory in practice, there are obvious
mathematical generalizations.
First, observe that the (uncountable) union of all safety properties contained in \Pi is \Pi itself.
Not all safety properties, however, are definable by programs. We can obtain the computable
restriction com(\Pi) of \Pi by taking the (countable) union of all recursive safety properties that are
contained in \Pi (an !-language is recursive iff it is the language of a Turing machine). Clearly,
com(\Pi) captures all possible implementations of \Pi, finite-state or not, and typically falls strictly
between fin(\Pi) and \Pi. Computable fairness, however, does not have the two advantages of finitary
fairness, namely, simpler verification rules and a solvable consensus problem.
There are also alternatives between fin(\Pi) and com(\Pi), which capture all implementations of
\Pi with limited computing power. Recall the sample program P 0 from Section 2. For every schedule
in fin(\Pi), there is a bound, unknown but fixed, on how long a transition can be postponed. Suppose
that we let this bound vary, and call a schedule linearly bounded iff the bound is allowed to increase
linearly with time. While every bounded schedule is linearly bounded, the schedule
is linearly bounded but not bounded. In general, given a function f(n) over the natural numbers,
a schedule w is O(f)-bounded iff there exists a constant k such that each of the two
transitions l and r appears at least once in the subsequence w n w
Finitary fairness, then, is O(1)-fairness. Moreover, for any fairness requirement F , we obtain a
strict hierarchy of stronger fairness requirements f(F ), where f(F ) is the union of all O(f)-bounded
schedulers that are contained in F . The algorithm presented in Section 5 can be modified so that it
solves distributed consensus under the fairness requirement f(F ) for any fixed, computable choice
of f .
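The hierarchy just described can again be made concrete on finite prefixes; the following sketch has an illustrative interface, and the exact window convention (k · f(n) + 1 consecutive positions starting at position n) is an assumption of this sketch.

def is_Of_bounded(prefix, f, k):
    # prefix: finite schedule prefix over {'l', 'r'} (a string).
    # f: rate function on the naturals; k: the constant of the definition.
    # Checks that both transitions occur in every window of k*f(n) + 1
    # consecutive positions starting at position n, for every window that
    # fits entirely inside the prefix.
    for n in range(len(prefix)):
        end = n + k * f(n) + 1
        if end > len(prefix):
            continue                 # window extends past the known prefix
        window = prefix[n:end]
        if 'l' not in window or 'r' not in window:
            return False
    return True

# With f = lambda n: 1 this coincides with k-boundedness; with
# f = lambda n: n + 1 and k = 1, arbitrarily long prefixes of the fair but
# unbounded schedule l r l^2 r l^3 r ... are accepted.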
4 Application: Program Verification
We now consider the problem of verifying that a program satisfies a specification under a finitary
fairness assumption.
4.1 Model checking
If all program variables range over finite domains, then the set of program states is finite. The
problem of verifying that such a finite-state program satisfies a temporal-logic specification is called
model checking . Automated tools for model checking have been successfully used to check the
correctness of digital hardware and communication protocols [CK96]. Here we examine the effects
of finitary fairness on the algorithms that underlie these tools.
Untimed systems
Consider a finite-state transition system P with the state set Q. The set \Pi(P ) of computations
of P is an !-regular safety property. Since Q is finite, we choose the scheduling alphabet
to be Q itself. Let F ' Q ! be an !-regular fairness requirement, and let \Phi ' Q ! be an !-regular
specification (given, say, by a PTL formula or a Büchi automaton). The verification question, then,
is a problem of language inclusion: P satisfies \Phi under F iff \Pi(P ) ∩ F ' \Phi. This problem can be
solved algorithmically, because all involved languages are !-regular. Assuming finitary fairness, we
need to check the language inclusion \Pi(P ) ∩ fin(F ) ' \Phi. It is, however, not obvious how to check
this, because fin(F ) is not necessarily !-regular (Corollary 6). The following theorem shows that
finite-state verification for finitary fairness can be reduced to verification under standard fairness.
Theorem 8 For all !-regular languages \Pi 1 and \Pi 2 , the intersection \Pi 1 ∩ \Pi 2 is empty iff fin(\Pi 1 ) ∩ \Pi 2 is empty.
Proof. We have fin(\Pi 1 ) ' \Pi 1 , so if \Pi 1 ∩ \Pi 2 is empty then so is fin(\Pi 1 ) ∩ \Pi 2 . Conversely,
suppose that \Pi 1 ∩ \Pi 2 is nonempty. Since both \Pi 1 and \Pi 2 are !-regular, so is \Pi 1 ∩ \Pi 2 , and it
contains a word w such that w = w 1 (w 2 ) ! for finite words w 1 and w 2 . The language containing the
single word w is safe, !-regular, and contained in \Pi 1 . Hence, w 2 fin(\Pi 1 ), and also in fin(\Pi 1 ) ∩ \Pi 2 ,
implying that fin(\Pi 1 ) ∩ \Pi 2 is nonempty.
As a corollary we obtain that for model checking under finitary fairness, we can continue to use
the algorithms that have been developed to deal with standard fairness:
Corollary 9 For a finite-state program P with set Q of states, an !-regular fairness requirement
F ' Q ! , and an !-regular specification \Phi ' Q ! , P satisfies \Phi under the fairness assumption F iff
P satisfies \Phi under the fairness assumption fin(F ).
Proof. We want to show that \Pi(P ) ∩ F ' \Phi iff \Pi(P ) ∩ fin(F ) ' \Phi.
Let G be the !-language \Pi(P ) ∩ \Phi^c , where \Phi^c denotes the complement of \Phi. From the assumption that P is finite-state, \Pi(P ) is
!-regular. Since \Phi is also !-regular, so is G. Now \Pi(P ) ∩ F ' \Phi iff G ∩ F is empty, which by Theorem 8
is the case iff G ∩ fin(F ) is empty, that is, iff \Pi(P ) ∩ fin(F ) ' \Phi.
Timed systems
Consider a finite-state timed transition system P ';u with set Q of states. Suppose the specification
does not refer to time at all, and is given as an !-regular specification \Phi ' Q ! . To verify that P ';u
satisfies \Phi, we want to check the inclusion \Pi ';u (P This problem is solved by constructing
a Büchi automaton that recognizes the language \Pi ';u (P ) [AD94]. This method is applicable only
when the bound functions ' and u are fully specified. In the parametric verification problem, the
bound maps are not fully specified [AHV93]. The bounds are viewed as parameters:
values of these parameters are not known, but are required to satisfy certain (linear) constraints
(such as ' - 1
,
). The parametric verification problem, then, is specified by
1. a finite-state transition system
2. an !-regular specification \Phi '
3. a set LU consisting of pairs ('; u) of functions from T to Q?0 .
The verification problem is to check that, for every choice of ('; u) 2 LU , the resulting timed
transition system P ';u satisfies the specification \Phi. Define
\Pi LU (P ) = ∪ { \Pi ';u (P ) | ('; u) 2 LU }.
Then, we want to check \Pi LU (P ) ' \Phi. Theorem 8, together with Theorem 7, implies that the
parametric verification problem is decidable when the set LU consists of all function pairs.
If P is a finite-state transition system, \Phi is !-regular, and LU consists of all pairs of bound functions, then
the parametric verification problem of checking the inclusion \Pi LU (P ) ' \Phi is decidable.
The parametric verification problem, in general, is undecidable if the class LU constrains the allowed
choices of the bound maps [AHV93]. For instance, if LU requires that '
then the parametric verification problem is undecidable.
4.2 Proof rules for termination
We now turn to the verification of programs that are not finite-state. Since safety specifications
are proved independent of any fairness assumptions, we need to be concerned only with liveness
specifications. We limit ourselves to proving the termination of programs (or, equivalently, to
proving specifications of the form 3 p for a state predicate p) under finitary weak fairness. It
is straightforward to extend the proposed method to the verification of arbitrary temporal-logic
specifications under the finitary versions of both weak and strong fairness.
Total termination versus just termination
Let P = (Q; T; Q 0 ) be a transition system. The standard method for proving the termination of
sequential deterministic programs can be adopted to prove that all computations of the (nondeter-
ministic) transition system P terminate, which is called the total termination of P . Essentially, we
need to identify a well-founded domain (W; OE) and a ranking (variant) function from the program
states to W such that the rank decreases with every program transition. As an example, consider
the rule T from [LPS82].
Rule T for proving total termination:
Find a ranking function ae from Q to a well-founded domain (W; OE), and a state predicate
(T2) For all states q; q
Figure 1: Program P 1
The rule T is complete for proving total termination; that is, all computations of a transition
system P terminate iff the rule T is applicable [LPS82]. Furthermore, it is always sufficient to
choose the set N of natural numbers as the well-founded domain W .
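For a finite-state system, the essence of this method (a ranking function into the natural numbers that strictly decreases on every program step) can be checked mechanically; the following sketch is illustrative only and assumes an explicit successor relation.

def rank_decreases_everywhere(states, successors, rank):
    # states: iterable of states; successors(q): set of all successors of q
    # over every transition; rank(q): a natural number.
    # Returns True iff the rank strictly decreases on every program step,
    # which (with W = N) suffices for total termination.
    return all(rank(q2) < rank(q1)
               for q1 in states
               for q2 in successors(q1))

# Note: for the program P 1 discussed below no such function into N exists,
# since P 1 has nonterminating (unfair) computations; this is exactly why
# rule J, ordinals beyond the naturals, or finitary fairness enter the picture.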
Now consider the requirement that all weakly-fair computations of P terminate, which is called
the just termination of P . While the rule T is obviously sound for proving just termination, it is
not complete. The problem is that there may not be a ranking function that decreases with every
program transition. The standard solution is to identify a ranking function that never increases, and
that is guaranteed to decrease eventually. The decrease is caused by so-called "helpful" transitions,
whose occurrence is ensured by the weak-fairness requirement. As an example, consider the rule J
from [LPS82].
Rule J for proving just termination:
Find a ranking function ρ from Q to a well-founded domain (W, ≺), and a set R τ of
state predicates, one for each transition τ ∈ T. Let R be the union of all R τ for τ ∈ T.
Show, for all states q, q 0 ∈ Q and all transitions τ ∈ T, the premises (J1)–(J5) of [LPS82]; in particular,
no transition increases the rank, every transition τ taken from a state in R τ decreases the rank, and if q ∈ R τ and
some transition is enabled in q, then τ is enabled in q.
The rule J is complete for proving just termination: all weakly-fair computations of a transition
system P terminate iff the rule J is applicable. Completeness, however, no longer holds if we require
the well-founded domain to be the set N of natural numbers: there are transition systems for which
transfinite induction over ordinals higher than ! is needed to prove just termination.
An example
Before we present the method for proving termination of finitary fair computations, let us consider
an example. Consider the transition system P 1 of Figure 1. A state of the program P 1 is given by
the values of its two variables: the location variable λ ranges over {0, 1}, and the data variable x
is a nonnegative integer; initially λ = 0. The four transitions e 1 , e 2 , e 3 , e 4 are as shown in the
figure.
We want to prove that all weakly-fair computations of P 1 terminate. Initially λ = 0, and
transitions e 1 and e 2 are continuously enabled. Fairness to e 2 ensures that eventually λ = 1. Then
e 4 is enabled as long as x is positive, and decrements x each time; fairness to e 4 ensures that x reaches 0
Figure 2: Transformed program fin(P 1 ).
eventually, resulting in termination. To prove termination formally, we apply the rule J. As
the well-founded domain, we choose the set ℕ ∪ {ω} of natural numbers together with the ordinal ω.
Choose R e 1 and R e 3 to be the empty sets; a state (λ, x) belongs to R e 2 iff λ = 0, and
belongs to R e 4 iff λ = 1 and x ≥ 1. The ranking function is defined as ρ(0, x) = ω and ρ(1, x) = x. The transitions e 1 and
e 3 leave the rank unchanged, while e 2 and e 4 cause a decrease. The reader can check that the five
premises (J1)-(J5) of the rule J are indeed satisfied. Notice that there is no bound on the number
of steps before P 1 terminates. This unbounded nondeterminism is what makes the mathematical
treatment of fairness difficult.
Proving the termination of P 1 under finitary weak fairness-that is, proving the finitary just
termination of P 1 -is conceptually simpler. Recall that for every computation in fin(WF ), there is
an integer k such that a transition cannot be enabled continuously for more than k steps without
being taken (Theorem 5). It follows that, under finitary weak fairness, P 1 must terminate within a
bounded number of steps, where the bound depends on the unknown constant k. To capture this
intuition, we transform the program P 1 by introducing the two auxiliary variables b and c. The
initial value of the variable b is an unspecified nonnegative integer, and the program transitions do
not change its value. The integer variable c is used to ensure that no transition is enabled for more
than b steps without being taken. We thus obtain the new program fin(P 1 ) of Figure 2.
The original program P 1 terminates under the finitary weak fairness assumption fin(WF) iff all
computations of the transformed program fin(P 1 ) terminate. Thus, we have reduced the problem
of proving the finitary just termination of P 1 to the problem of proving the total termination of
That is, the simple rule T with induction over the natural numbers is sufficient to prove
finitary just termination. A state of fin(P 1 ) is a tuple (λ, x, c, b). To apply the rule T, we choose
the set R to be the set of reachable states of fin(P 1 ), in which the counter c never exceeds the
bound b. The ranking function is a mapping from R to the natural numbers defined in terms of
λ, x, c, and b. The reader should check that every
transition, applied to any state in R, causes the ranking function to decrease.
Notice that the transformed program fin(P 1 ) has infinitely many initial states, but for any
given initial state, it terminates within a bounded number of steps. Consequently, fin(P 1 ) does not
suffer from the problems caused by unbounded nondeterminism. The tradeoff between proving just
termination of P 1 , and total termination of fin(P 1 ) should be clear: while the rule J used for the
former is more complex than the rule T used for the latter, the program fin(P 1 ) is more complex
than P 1 .
Finitary transformation of a program
Let us consider a general transformation for a given transition system P = (Q, Q 0 , T ), where T
consists of m transitions τ 1 , . . . , τ m . The finitary transformation fin(P ) of the transition system
P is obtained by introducing a new integer variable b, and for each transition τ i a
new integer variable c i . Thus, the state space of fin(P ) is Q × ℕ^{m+1}. The initial value of b is
arbitrary; the initial value of each c i is 0. Thus, the set of initial states of fin(P ) is Q 0 × ℕ × {0}^m.
For every transition τ i ∈ T , the transition system fin(P ) contains a transition fin(τ i ) such that
1. the Q-component is updated according to τ i , that is, fin(τ i ) moves from a state with Q-component q to a state with Q-component q 0 only if τ i moves from q to q 0 ;
2. the counter c i of the taken transition is reset to 0;
3. for 1 ≤ j ≤ m with j ≠ i, the counter c j is incremented if τ j is enabled (but not taken) in q and is reset to 0 otherwise, and fin(τ i ) is enabled only if each incremented counter remains at most b.
The following theorem establishes the transformation fin together with the simple rule T as a sound
and complete proof method for finitary just termination.
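To make the construction concrete, the following Python sketch applies the finitary transformation to an explicitly represented transition system. It is only an illustration of the definition above: a transition is modeled as a pair of an enabledness test and a successor function, and the helper name fin_step is ours; the handling of the counters follows our reading of conditions 1-3.

def fin_step(transitions, b, state, counters, i):
    # One step of fin(tau_i) from (state, counters); returns the successor pair,
    # or None if fin(tau_i) is not enabled, i.e., tau_i is disabled or taking it
    # would push the counter of another enabled-but-not-taken transition above b.
    enabled, apply = transitions[i]
    if not enabled(state):
        return None
    new_counters = list(counters)
    new_counters[i] = 0                       # counter of the taken transition is reset
    for j, (enabled_j, _) in enumerate(transitions):
        if j == i:
            continue
        if enabled_j(state):                  # enabled but not taken: increment
            new_counters[j] = counters[j] + 1
            if new_counters[j] > b:           # the bound would be violated
                return None
        else:
            new_counters[j] = 0               # disabled transitions keep counter 0
    return apply(state), new_counters

Starting from an initial state of P with all counters 0 and an arbitrary but fixed value of b reproduces a computation of fin(P).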
Theorem 11 A transition system P terminates under finitary weak fairness iff all computations
of the transition system fin(P ) terminate.
Proof. Suppose the program fin(P ) has a nonterminating computation q. Consider the projection
q 0 of q on the state-space of P . From the transition rules of fin(P ), it is clear that q 0 is also a
computation of P . The value of the bound variable b stays unchanged throughout q; let it be
k. Furthermore, c i ≤ k is an invariant over the computation q for all 1 ≤ i ≤ m. Since for each
transition τ i , c i is incremented each time τ i is enabled but not taken, it follows that the computation
q 0 is weakly-k-bounded. Hence, P has a weakly-fair nonterminating computation.
Conversely, consider a weakly-fair nonterminating computation q of P . From Theorem 5, q is
weakly-k-bounded for some k. Define the sequence q 0 over the state-space of fin(P ) as follows.
For all i ≥ 0, the i-th state of q 0 is (q i , k, c 1 , . . . , c m ), where each c j is the maximum nonnegative
integer n such that the transition τ j is enabled but not taken in all states q n 0 with i − n ≤ n 0 < i. It is easy to check that,
since q is weakly-k-bounded, if the transition τ j is enabled, but not taken, in state q i , then c j < k.
Consequently, q 0 is a (nonterminating) computation of fin(P ).
Thus, the language Π fin(WF) (P ) of finitary weakly-fair computations of the transition system P is
the projection of the language Π(fin(P )) of the transformed program fin(P ). It is known that, given
a transition system P and a fairness requirement F , there exists a transition system P 0 such that
Π F (P ) is the projection of Π(P 0 ); this transformation, however, requires uncountably many states (the
transformed program P 0 has one initial state for every fair computation in F ), and does not yield
a proof principle for which well-founded induction over ℕ is adequate.
5 Application: Distributed Consensus
We consider the consensus problem in a shared-memory model where the only atomic operations
allowed on a shared register are read and write. Formally, the consensus problem is defined as
follows. There are n processes P 1 , . . . , P n , each with a boolean input value in i ∈ {0, 1}. A
process decides on a value v ∈ {0, 1} by executing the statement decide(v). To model failures,
we introduce a special transition fail i for each process. The transition fail i is enabled only if the
Shared registers, initially: out = ⊥; y[r] = ⊥ and x[r, 0] = x[r, 1] = 0 for every round r
Local registers of P i : v i := in i ; r i := 1
1.  while out = ⊥ do
2.    x[r i , v i ] := 1;
3.    if y[r i ] = ⊥ then y[r i ] := v i fi;
4.    if x[r i , ¬v i ] = 0 then out := v i
5.    else for j := 1 to r i do skip od;
6.         v i := y[r i ];
7.         r i := r i + 1
8.    fi
9.  od;
10. decide(out).
Figure 3: Consensus, assuming finitary weak fairness (program for process P i with input in i ).
process P i has not yet decided on a value. When P i takes the transition fail i , all of its transitions
are disabled, and P i stops participating.
A solution to the consensus problem must satisfy agreement-that is, no two processes decide
on conflicting values-and validity-that is, if a process decides on the value v, then v is equal to
the input value of some process. Apart from these two safety requirements, we want the nonfailing
processes to decide eventually: wait-freedom asserts that each process P i eventually either decides
on some value or fails. Thus a process must not prevent another process from reaching a decision,
and the algorithm must tolerate any number of process failures. The implicit fairness assumption
in the asynchronous model is the weak-fairness requirement WF for all program transitions except
the newly introduced fail i transitions. It is known that, even for n = 2, there is no program that
satisfies all three consensus requirements under the weak-fairness assumption WF [FLP85, LA87].
On the other hand, consensus can be solved in the unknown-delay model, where it is assumed
that there is an upper bound \Delta on memory-access time, but the bound is unknown to the processes
a priori and a solution is required to work for all values of \Delta [AAT97]. We show that the consensus
algorithm of [AAT97] for the unknown-delay model solves, in fact, consensus under the finitary
weak-fairness requirement fin(WF ). The algorithm is shown in Figure 3. The algorithm proceeds
in rounds and uses the following shared data structures: an infinite two-dimensional array x[ , ]
of bits whose second index ranges over {0, 1}, and an infinite array y[ ] whose elements have the value ⊥, 0, or 1. The decision value
(i.e., the value that the processes decide on) is written to the shared bit out, which initially has the
value ?. In addition, each process P i has a local register v i that contains its current preference for
the decision value, and a local register r i that contains its current round number.
If all processes in a round r have the same preference v, then the bit x[r, ¬v] is never set to 1,
and consequently, processes decide on the value v in round r. Furthermore, if a process decides on
a value v in round r, then y[r] is never set to the conflicting value ¬v, and every process that reaches
round r + 1 has the preference v for that round. This ensures agreement (see [AAT97] for more
details of the proofs). It is easy to check that if all processes have the same initial input v, then no
process will ever decide on ¬v, implying the requirement of validity.
It is possible that two processes with conflicting preferences for round r cannot resolve their
conflict in round r, and proceed to round r + 1 with conflicting preferences. This happens
only if both of them find y[r] =? first (line 3), and one of them proceeds and chooses its preference
for the next round (line 7) before the other one finishes the assignment to y[r]. The finitary fairness
requirement ensures that this behavior cannot be repeated in every round. In every finitarily fair
computation, there is a bound k such that every process that has neither failed nor terminated
takes a step at least once every k steps. Once the round number exceeds the (unknown) bound k,
while a process is executing its for loop, all other processes are forced to take at least one step.
This suffices to ensure termination.
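The behavior can be observed with a small Python simulation. The sketch below runs processes in the style of Figure 3 under a strict round-robin interleaving, which is a finitarily fair schedule (every live process takes a step at least once every n steps). It is not the algorithm of Figure 3 verbatim: the names are ours, a few consecutive statements are executed atomically between scheduling points, and process failures are not modeled.

def process(inp, shared):
    v, r = inp, 1                             # local preference and round number
    while shared["out"] is None:
        shared["x"][(r, v)] = 1; yield        # announce the preference for round r
        if shared["y"].get(r) is None:
            shared["y"][r] = v                # first writer fixes y[r]
        yield
        if shared["x"].get((r, 1 - v), 0) == 0:
            shared["out"] = v                 # no conflicting preference observed: decide
        else:
            for _ in range(r):                # delay loop whose length grows with the round
                yield
            v = shared["y"][r]                # adopt the preference recorded for this round
            r += 1
        yield
    return shared["out"]                      # decide(out)

def run(inputs):
    shared = {"out": None, "x": {}, "y": {}}
    live = [process(b, shared) for b in inputs]
    decisions = []
    while live:
        for p in list(live):                  # round-robin: one step per live process
            try:
                next(p)
            except StopIteration as stop:
                live.remove(p)
                decisions.append(stop.value)
    return decisions

For example, run((0, 1)) and run((1, 1)) both terminate, with every process deciding the same value.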
Theorem 12 The program of Figure 3 satisfies the requirements of agreement, validity, and wait-
freedom under the finitary fairness assumption fin(WF ).
By contrast, the program does not satisfy wait-freedom under the standard fairness assumption WF .
Also observe that the algorithm uses potentially unbounded space, and therefore is not a finite-state
program. The results of Section 4 imply that there is no algorithm that uses a fixed number of
bounded registers and solves consensus under finitary fairness.
Theorem 13 For two processes, there is no algorithm that uses finite memory, and satisfies the requirements
of agreement, validity, and wait-freedom under the finitary fairness assumption fin(WF)
(or equivalently, in the unknown-delay model).
The unknown-delay model of [DLS88] consists of distributed processes communicating via messages,
where the delivery time for each message is bounded, but is not known a priori. The authors establish
bounds on the number of process failures that can be tolerated by a consensus protocol under various
fault models. These bounds can also be established using finitary weak fairness. A similar observation
applies to the results on the session problem for the unknown-delay model [RW92].
Acknowledgments. Notions that are similar to k-bounded fairness, for a fixed k, have been
defined in several places [Jay88]; the notion of bounded fairness seems to be part of the folklore, but
we do not know of any published account. We thank Leslie Lamport, Amir Pnueli, Fred Schneider,
Gadi Taubenfeld, and Sam Toueg for pointers to the literature and for helpful discussions.
--R
A theory of timed automata.
Safety without stuttering.
Appraising fairness in languages for distributed programming.
Parametric real-time reasoning
Defining liveness.
On a decision method in restricted second-order arithmetic
Consensus in the presence of partial synchrony.
Impossibility of distributed consensus with one faulty process.
On the temporal analysis of fairness.
Temporal proof methodologies for timed transition systems.
Communication and Synchronization in Parallel Computation.
Memory requirements for agreement among unreliable asynchronous processes.
Time constrained automata.
A hierarchy of temporal properties.
The temporal logic of reactive and concurrent systems.
Reaching agreement in the presence of faults.
The impact of time on the session problem.
Automata on infinite objects.
Verification of concurrent programs: the automata-theoretic framework
--TR
Fairness
Safety without stuttering
Consensus in the presence of partial synchrony
A hierarchy of temporal properties (invited paper, 1989)
Automata on infinite objects
The temporal logic of reactive and concurrent systems
The impact of time on the session problem
Parametric real-time reasoning
A theory of timed automata
Temporal proof methodologies for timed transition systems
Impossibility of distributed consensus with one faulty process
Computer-aided verification
Formal methods
Time-Adaptive Algorithms for Synchronization
Reaching Agreement in the Presence of Faults
On the temporal analysis of fairness
Impartiality, Justice and Fairness
Time-Constrained Automata (Extended Abstract)
Design and Synthesis of Synchronization Skeletons Using Branching-Time Temporal Logic
Communication and synchronization in parallel computation | distributed consensus;modeling of asynchronous systems;fairness;program verification |
295660 | Space/time-efficient scheduling and execution of parallel irregular computations. | In this article we investigate the trade-off between time and space efficiency in scheduling and executing parallel irregular computations on distributed-memory machines. We employ acyclic task dependence graphs to model irregular parallelism with mixed granularity, and we use direct remote memory access to support fast communication. We propose new scheduling techniques and a run-time active memory management scheme to improve memory utilization while retaining good time efficiency, and we provide a theoretical analysis on correctness and performance. This work is implemented in the context of the RAPID system which uses an inspector/executor approach to parallelize irregular computations at run-ti me. We demostrate the effectiveness of the proposed techniques on several irregular applications such as sparse matrix code and the fast multipole method for particle simulation. Our experimental results on Cray-T3E show that problems large sizes can be solved under limited space capacity, and that the loss of execution efficiency caused by the extra memory management overhead is reasonable. | INTRODUCTION
Considerable effort in parallel system research has been spent on time-efficient par-
allelizations. This article investigates the trade-off between time and space efficiency
in executing irregular computations when memory capacity on each processor is
limited, since a time-efficient parallelization may lead to extra space requirements.
The definition of "irregular" computation in the literature is actually not very clear.
Normally in scientific computing, code with chaotic or adaptive computation and
communication patterns is considered "irregular". Providing effective system support
with low overhead for irregular problems is difficult and has been identified
as one of the key issues in parallel system research [Kennedy 1996]. A number
of projects have addressed software techniques for parallelizing different classes of
irregular code [Wen et al. 1995; Das et al. 1994; Lain and Banerjee 1994; Gerasoulis et al. 1995; Fink et al. 1996].
Authors' addresses: T. Yang, Department of Computer Science, University of California, Santa
Barbara, CA 93106. C. Fu, Siemens Pyramid, Mailstop SJ1-2-10, San Jose, CA 95134. This work
was supported in part by NSF CCR-9409695, CDA-9529418, INT-9513361, and CCR-9702640,
and by a DARPA subcontract through UMD (No. Z883603). A preliminary version of this
article appeared in the 6th ACM Symposium on Principles & Practice of Parallel Programming
(PPoPP'97).
This article addresses scheduling issues for parallelism that can be modeled as
directed acyclic dependence graphs (DAGs) [Sarkar 1989] with mixed granularity.
This model has been found useful in performance prediction and code optimization
for parallel applications which have static or slowly changing dependence patterns,
such as sparse matrix problems and particle simulations [Chong et al. 1995; Fu and
Yang 1996b; Gerasoulis et al. 1995; Jiao 1996]. In these problems, communication
and computation are irregularly interleaved with varying granularity. Asynchronous
scheduling and fast communication techniques are needed for exploiting
data locality and balancing loads with low synchronization costs. However, these
techniques may impose extra space requirements. For example, direct memory access
normally has much lower software overhead than high-level message-passing
primitives. Remote addresses however need to be known at the time of performing
data accesses, and thus accessible remote space must be allocated in advance.
A scheduling optimization technique may prefetch or presend some data objects
to overlap computation and communication, but it may require extra temporary
space to hold these data objects. Therefore, using advanced optimization techniques
adds difficulties in designing software support to achieve high utilization of
both processor and memory resources.
In this article, we present two DAG scheduling algorithms that minimize space
usage while retaining good parallel time efficiency. The basic idea is to use volatile
objects as early as possible so that their space can be released for reuse. We
have designed an active memory management scheme that incrementally allocates
necessary space on each processor and efficiently executes a DAG schedule using
direct remote memory access. We provide an analysis on the performance and
correctness of our techniques.
The proposed techniques are implemented in the RAPID programming tool [Fu
and Yang 1996a] which parallelizes irregular applications at run-time. The original
schedule execution scheme in RAPID does not support incremental memory allo-
cation. With sufficient space, RAPID delivers good performance for several tested
irregular programs such as sparse Cholesky factorization, sparse triangular solvers
[Fu and Yang 1997], and Fast Multipole Method for N-body simulation [Fu 1997].
In particular we show that RAPID can be used to parallelize sparse LU (Gaussian
elimination) with dynamic partial pivoting, which is an important open parallelization
problem in the literature, and deliver high megaflops on the Cray-T3D/T3E [Fu
and Yang 1996b]. Another usage of RAPID is performance prediction, since the
static scheduler in RAPID can predict the potential speedup for a given DAG with
reasonable accuracy. For example, the RAPID sparse LU code achieves 70% of
the predicted speedup [Fu and Yang 1996b]. We have found that the size of problems
that RAPID can solve is restricted by the available amount of memory, which
motivates us to study space optimization.
It should be noted that other forms of space overhead exist, such as space for
the operating system kernel, hash tables for indexing irregular objects, and task
graphs. This article focuses on the optimization of space usage dedicated to storing
the content of data objects. The rest of the article is organized as follows. Section 2
summarizes related work. Section 3 describes our computation and memory model.
Section 4 presents space-efficient scheduling algorithms. Section 5 discusses the use
of the scheduling algorithms in RAPID and the active memory management scheme
for executing schedules. Section 6 gives experimental results.
2. RELATED WORK
Most of the previous research on static DAG scheduling [Sarkar 1989; Wolski and
Feo 1993; Yang and Gerasoulis 1992; 1994] does not address memory issues. A
scheduling algorithm for dynamic DAGs proposed Blelloch et al. [1995] requires
space usage on each processor, where S 1 is the sequential space re-
quirement, p is the total number of processors, and D is the depth of a DAG. This
work provides a solid theoretical ground for space-efficient scheduling. Their space
model is different from ours, since it assumes a globally shared memory pool. In
our model, a space upper bound is imposed to each individual processor, which is
a stronger constraint. The Cilk run-time system [Blumofe et al. 1995] addresses
space efficiency issues, and its space complexity is O(S 1 ) per processor, which is
good in general, but too high for solving large problems if space is limited. The
memory-optimizing techniques proposed in this article has space usage close to S 1 =p
per processor. Another distinct difference between our work and that of Blelloch et
al. [1995] and Blumofe et al. [1995] is that for RAPID, a DAG is obtained at the run-time
inspector stage before its execution, thus making our scheduling scheme static.
In the other models, DAGs grow on-the-fly as computation proceeds, and dynamic
scheduling is preferred. There are two reasons why we use static scheduling: (1) in
practice it is difficult to minimize the run-time control overhead of dynamic scheduling
when parallelizing sparse code with mixed granularity on distributed memory
machines; (2) the application problems we consider all have an iterative nature, so
optimized static schedules can be used for many iterations. Our work uses hardware
support for directly accessing remote memory, which is available in several modern
parallel architectures and workstation clusters [Ibel et al. 1996; Stricker et al. 1995;
Schauser and Scheiman 1995]. Advantages of direct remote memory access have
been identified in fast communication research such as active messages [von Eicken
et al. 1992]. Thus, we expect that other researchers can benefit from our results
when using fast communication support to design software layers.
3. THE COMPUTATION AND MEMORY MODEL
Our computation model consists of a set of tasks and a set of distinct data objects.
Each task reads and writes a subset of data objects. Data dependence graphs
(DDG) derived from partitioned code normally have three types of dependence
between tasks: true dependence, antidependence, and output dependence [Poly-
chronopoulos 1988]. In a DDG, some anti- or output dependence edges may be
redundant if they are subsumed by other true data dependence edges. Other
anti/output dependence edges can be eliminated by program transformation. This
article deals with a transformed dependence graph that contains only acyclic true
dependencies. An extension to the classical task graph model is that commuting
tasks can be marked in a task graph to capture parallelism arising from commutative
operations. The details on this parallelism model are described in Fu and
Yang [1996a;1997].
We define some terms used in our task model as follows:
-A DAG contains a set of tasks, directed dependence edges between tasks, and
data objects accessed by each task. Let R(T x ) be a set of objects that task T x
reads and W (T x ) be a set of objects that task T x writes.
-Each data object is assigned to a unique owner processor. Each processor may
allocate temporary space for objects which it does not own. If a processor P x
owns m, then m is called a permanent object of P x ; otherwise m is called a
volatile object of P x . Define PERM(P x ) as the set of permanent data objects
on processor P x and V OLAT ILE(P x ) as the set of volatile data objects on
processor P x . A permanent object will stay allocated during execution on its
owner processor.
-A static schedule gives a processor assignment of all tasks and an execution
order of tasks on each processor. Define TA(P x ) as the set of tasks executed
on processor P x . Let the term "T x ⇒ T y " denote that T x is executed before
T y on the same processor in a schedule, and let "T x → T y " denote that task T x
is executed immediately before T y . Given a DAG G and a static schedule for
G, a scheduled graph is derived by marking execution edges in G (e.g., add an
edge from T x to T y in G if T x → T y ). A schedule is legal if the corresponding
scheduled graph is acyclic and T x ⇒ T y for each dependence edge (T x , T y ) of G whose two tasks are assigned to the same processor.
-Each task T x has a weight denoting predicted computation time (called - x ) to
complete this task. Each edge from T x to T y has a weight denoting predicted
communication delay (called c x;y ) when sending a corresponding message between
two processors. Using weight information, a static scheduling algorithm can
optimize the expected parallel time of execution.3711158T 1
Fig. 1. (a) A DAG. (b) A schedule for the DAG on 2 processors. (c) Another schedule.
Figure
1(a) shows a DAG with 20 tasks that access 11 data objects d 1 , . . . , d 11 .
Each task is represented as either T i;j that reads d i and d j , and writes d j , or T j that
reads and writes d j . Table I lists read and write sets of a few tasks. Parts (b) and
(c) of Figure 1 are two schedules for the DAG in (a). Permanent objects are evenly
distributed between the two processors, so that PERM(P 0 ) and PERM(P 1 ) partition
the data objects d 1 , . . . , d 11 . The "owner computes" rule is used to assign tasks to processors.
In this example, we assume that each task and each message cost one unit of time,
messages are sent asynchronously, and processor overhead for sending and receiving
messages is ignored.
Table I. Read and Write Sets of a Few Tasks in Figure 1(a)
Notice that our task graph model does not use the SSA form (Static Single
Assignment) [Cytron et al. 1991], and notice that different tasks can modify the
same data object. The main reason is that in our targeted applications, a task
graph derived at run-time is normally data dependent; if the SSA form were used,
there would be too many objects to manage.
A space/time-efficient scheduling algorithm needs to balance two conflicting goals:
shortening the length of a schedule and minimizing the space requirement. If memory
is not sufficient to hold all objects during the execution stage, space recycling
for volatile data will be necessary. The following two strategies can be used to release
the space of a volatile data object at a certain execution point: (1) the value
of this object is no longer used; (2) this object is no longer accessed. As we discuss
in Section 5, the second strategy reduces the overhead of data management, and
we therefore choose it in our approach. For such a strategy, the space requirement
is estimated as follows.
Definition 1. Given a task execution order (⇒) on each processor, a volatile
object m on a processor is called live at some task T y if it is accessed at this task
or it has been accessed before and will still be accessed in the future. Formally, m is live at T y if
∃T x ∃T z ((T x ⇒ T y or T x = T y ) and (T y ⇒ T z or T y = T z ) and m ∈ R(T x ) ∪ W (T x ) and m ∈ R(T z ) ∪ W (T z ));
otherwise m is called dead.
Definition 2. Let size(m) be the size of data object m. For any task T w on
processor P x (i.e., T w ∈ TA(P x )), we compute the volatile space requirement
at T w on P x as the sum of size(m) over all volatile objects m that are live at T w .
Then the memory requirement of a schedule, MEM REQ, is the maximum over all processors P x of
the permanent space Σ m ∈ PERM(P x ) size(m) plus the largest volatile space requirement at any task T w ∈ TA(P x ).
In the schedule of Figure 1(b), volatile object d 3 on processor P 1 is dead after
task T 3;10 , and d 5 is dead after T 5;10 . If we assume each data object is of unit
size, it is easy to calculate that MEM REQ is 9. However, for the schedule in
Figure
1(c), MEM REQ is 8. This is because the lifetime of volatile objects d 7
and d 3 are disjoint on P 1 , and space sharing between them is possible.
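The quantity MEM REQ is easy to compute from a schedule. The following Python sketch does so under the conventions of this section; the argument names (schedule, owner, access, size) are ours and stand for the ordered task list per processor, the object-to-owner map, the per-task set of accessed objects, and the object sizes.

def mem_req(schedule, owner, access, size):
    worst = 0
    for proc, tasks in schedule.items():
        perm = sum(s for m, s in size.items() if owner[m] == proc)
        first, last = {}, {}                   # first/last position a volatile object is accessed
        for pos, t in enumerate(tasks):
            for m in access[t]:
                if owner[m] != proc:
                    first.setdefault(m, pos)
                    last[m] = pos
        peak = 0
        for pos in range(len(tasks)):
            live = sum(size[m] for m in first
                       if first[m] <= pos <= last[m])    # live volatile objects at this task
            peak = max(peak, live)
        worst = max(worst, perm + peak)
    return worst

With unit object sizes this reproduces the kind of calculation carried out above for the schedules of Figure 1.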
4. SPACE- AND TIME- EFFICIENT SCHEDULING
Given a DAG and p processors, our scheduling approach contains two stages:
-The "owner computes" rule is used to assign tasks to processors. Tasks that
modify the same data objects are mapped to the same cluster, and clusters are
then evenly assigned to processors. For example, given the DAG in Figure 1(a)
and two processors, the task sets TA(P 0 ) and TA(P 1 ) are as shown in
Figure 1(b). Based on the above task assignment,
we can determine permanent and volatile data objects on each processor.
-Tasks at each processor are ordered, following DAG dependence edges.
We focus on the optimization of task ordering. Previously, an ordering algorithm
called RCP [Yang and Gerasoulis 1992] has been proposed. It is time efficient, but
it may require extra space to hold volatile objects in order to aggressively execute
time-critical tasks. In this section, we propose two new ordering algorithms called
MPO and DTS. The idea is to have volatile objects referenced as early as possible
once they are available in the local memory. This shortens the lifetime of volatile
objects and potentially reduces the memory requirement on each processor.
4.1 The RCP Algorithm
We briefly discuss this algorithm; a detailed description can be found in Yang and
Gerasoulis [1992]. This heuristic orders tasks by simulating the execution of tasks.
Let an exit task be a task with no children in a given DAG. The time priority of a
task T x , called TP (T x ), is the length of the longest path from this task to an exit
task. Namely, TP (T x ) is τ x if T x is an exit task. Otherwise,
TP (T x ) = τ x + max { c x;y + TP (T y ) : T y is a child of T x }.
During simulated execution a task is called ready if its parents have been selected
for execution and the needed data objects can be received at this point. At each
scheduling cycle, each processor selects the ready task with the highest time priority.
(1) while there is at least one un-scheduled task
(2) Find the processor Px that has the earliest idle time;
(3) Schedule the ready task Tx that has the highest priority on processor Px;
(6) Update the ready task list on each processor;
end-while
Fig. 2. The RCP algorithm.
The RCP algorithm is summarized in Figure 2. Lines (4) and (5) do not exist in this
description, because we want to illustrate the difference between RCP and MPO.
4.2 MPO: Memory-Priority-Guided Ordering
The MPO ordering algorithm is listed in Figure 3. The difference between MPO
and RCP is (1) the priority used in MPO for selecting a ready task is the size of
total object space that has been allocated divided by the size of total object space
needed to execute this task (if there is a tie, the RCP time priority (TP ()) is used
to break the tie); (2) MPO needs to estimate the total object space allocated at
each scheduling cycle and update the memory priority of tasks (lines (4) and (5)).
(1) while there is at least one un-scheduled task
(2) Find the processor Px that has the earliest idle time;
(3) Schedule the ready task Tx that has the highest priority on processor Px;
(4) Allocate all volatile objects that Tx uses and that have not been
allocated yet on processor Px;
(5) Update the priorities of Tx's children and siblings on processor Px;
(6) Update the ready task list on each processor;
end-while
Fig. 3. The MPO algorithm.
Example. Figure 1(c) shows a schedule produced by MPO while (b) is produced
by RCP. The ordering difference between Figure 1(b) and (c) starts from time 6 on
processor 1. At that time, T 7;8 is selected by RCP while T 3;10 is chosen by MPO.
As illustrated in Figure 4, there are three ready tasks on processor 1 at time 6. RCP
selects T 7;8 because it has the longest path to an exit
task (the path has length 4). For MPO, T 3;10 has the highest space
priority (1) because d 3 and d 10 are available locally at time 6. T 7;8 's space priority
is 0.5 because the space for d 7 has not been allocated before time 6, and we assume
each object has a unit size. As a result, the MPO schedule requires less memory,
but longer parallel time.
Algorithm Complexity. A potentially time consuming part is the update of space
priorities of unscheduled tasks. At line (5), it is sufficient to update the space
priorities of the children and the siblings of the newly scheduled task, because only
those tasks are possible candidates for ready tasks in the next round. The space
priorities of the children of these candidate tasks will be updated later after the
candidate tasks are scheduled.
Fig. 4. The scheduling scenario at time 6. Numbers in parentheses next to the ready tasks are the
MPO space priority and RCP time priority at that time. (a) The remaining unscheduled tasks.
(b) A partial schedule at time 6.
We use e in (x) and e out (x) to denote the number of incoming edges and outgoing edges of task T x , respectively. The complexity for
line (5) is O(Σ x (e in (x) · v + e out (x))), where v is the number of tasks
and e is the number of dependence edges. In the term e in (x) · v, we have used v
to bound the number of children that any parent of T x may have. Additionally,
the complexities for lines (2), (3), (4), and (6) are v log p, v log v, m, and vp log v,
respectively. Here, p is the total number of processors, and m is the total number
of data objects. Since p is usually small compared to v, the total time complexity
of MPO is O(ve + v log v + m).
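The scheduling loop itself can be sketched in a few lines of Python. The version below is a simplification of Figure 3: it ignores communication delays when deciding readiness, charges one time unit per task, and uses our own helper names; the priority is the fraction of needed volatile space already allocated, with TP breaking ties as in the text.

def mpo(tasks, parents, assign, access, owner, size, tp):
    procs = set(assign.values())
    idle = {p: 0 for p in procs}               # earliest idle time per processor
    allocated = {p: set() for p in procs}      # volatile objects already allocated there
    done, order = set(), {p: [] for p in procs}

    def ready(t, p):
        return t not in done and assign[t] == p and all(u in done for u in parents[t])

    def priority(t, p):
        vols = [m for m in access[t] if owner[m] != p]
        if not vols:
            return (1.0, tp[t])
        have = sum(size[m] for m in vols if m in allocated[p])
        need = sum(size[m] for m in vols)
        return (have / need, tp[t])             # memory priority first, TP breaks ties

    while len(done) < len(tasks):
        p = min(idle, key=idle.get)             # processor with the earliest idle time
        cand = [t for t in tasks if ready(t, p)]
        if not cand:
            idle[p] += 1                        # nothing ready here yet: advance its clock
            continue
        t = max(cand, key=lambda t: priority(t, p))
        order[p].append(t); done.add(t); idle[p] += 1
        allocated[p].update(m for m in access[t] if owner[m] != p)   # line (4)
    return order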
4.3 DTS: Data-Access-Directed Time Slicing
DTS is more aggressive in space optimization and is thus intended for cases when
memory usage is of primary importance. Its design is based on the fact that memory
usage of a processor can be improved if each volatile object has a short life
span. In other words, the time period from allocation to deallocation is short. The
basic idea of DTS is to slice computation based on data access patterns of tasks, so
that all tasks within the same slice access a small group of volatile objects. Tasks
are scheduled on physical processors slice by slice, and tasks within each slice are
ordered using dependence and critical-path information.
Algorithm. For a given DAG in which a set of tasks operates
on a set of data objects we describe the steps of the
DTS algorithm as follows.
-Step 1: Construct a data connection graph (DCG). Each node of a DCG represents
a distinct data object, and each edge represents a temporal order of data
access during computation. A cycle may occur if accesses of two data objects
are interleaved. For simplicity, we use the same name for a data object and its
corresponding data node unless it will cause confusion. To construct a DCG, the
following rules are applied based on the original DAG.
-If a task T x 2 V uses but does not modify data object d i , T x is associated with
data object node d i .
-If T x modifies more than one object (it may use those objects during computation,
and it does not use any other objects), then T x is associated with each of
those modified objects.
-It is possible that a task is associated with multiple data nodes. In that case,
doubly directed edges are added among those data nodes to make them strongly
connected.
-A directed edge is added from data node d i to data node d j if there exists a
task dependence edge (T x ; T y ) such that T x is associated with data node d i
and T y is associated with data node d j .
The last two rules reflect the temporal order of data access during computation.
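A direct transcription of these rules into Python is given below. The sketch associates a writing task with all the objects it writes and a read-only task with all the objects it reads, which matches the rules above for the cases they spell out; the function name and argument names are ours.

def build_dcg(tasks, dag, reads, writes):
    assoc = {}                                  # task -> data nodes it is associated with
    for t in tasks:
        assoc[t] = set(writes[t]) if writes[t] else set(reads[t])
    edges = set()
    for t in tasks:
        ds = assoc[t]
        if len(ds) > 1:                         # several modified objects: make the nodes
            edges.update((a, b) for a in ds for b in ds if a != b)   # strongly connected
    for t in tasks:
        for u in dag[t]:                        # a task dependence edge induces a DCG edge
            edges.update((a, b) for a in assoc[t] for b in assoc[u] if a != b)
    return edges

The strongly connected components of the resulting edge set are then the slices of Step 2.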
-Step 2: Derive strongly connected components from a DCG, the edges between
the components constitute a DAG. A task only appears in one component. Each
component is associated with a set of tasks that use/modify data objects in
this component, and is defined as one slice. All tasks in the same slice will
be considered for scheduling together. At run-time, each processor will execute
tasks slice by slice, following a topological order of slices imposed by dependencies
among corresponding strongly connected components. It should be noted that a
topological order of slices only imposes a constraint on task ordering. A processor
assignment of tasks following the "owner computes" rule must be supplied before
using DTS to produce an actual schedule.
-Step 3: Use a priority based precedence scheduling approach to generate a DTS
schedule from the slices derived at Step 2. Priorities are assigned to tasks based
on the slices they belong to. For two ready tasks in the same slice, the task with
a higher critical-path priority is scheduled first. If there is a ready task that has
a slice priority lower than some other unscheduled tasks on the same processor,
this task will not be scheduled until all the tasks that have higher slice priorities
on this processor are scheduled. Using such a priority can guarantee that tasks
are executed according to the derived slice order.
Example. Figure 5 shows an example of DTS ordering for the DAG in Figure 1(a).
Part (a) is the DCG, and we mark each node with the corresponding data name.
Tasks within a rectangle are associated with a corresponding data object. Since
this DCG is acyclic, each data node is a maximal strongly connected component
and is treated as one slice. A topological ordering of these nodes produces a slice
order of the data nodes. Each processor will execute tasks following this slice
order, as shown in part (b), where slices are marked numerically to illustrate their
execution order. The memory requirement MEM REQ is 7, compared to 9 in Figure
1(b) produced by RCP and 8 in Figure 1(c) by MPO. On the other hand, the
schedule length increases from RCP through MPO to DTS, because less and less
critical path information is used.
Algorithm Complexity. For Step 1, the complexity of deriving the access pattern
of each task (i.e., read and/or write access to each data object) is O(e log v);
the complexity of mapping tasks to data nodes is O(v); and the complexity of
generating edges between data nodes is O(e log m), because a check is needed to
prevent duplicate edges from being added. Thus, the total complexity for Step 1 is
O(e(log v + log m) + v).
Fig. 5. (a) A sample DCG derived from a DAG. (b) A DTS schedule for the DAG on 2 processors.
For Step 2, deriving strongly connected components costs O(m + e);
generating precedence edges among slices costs O(m log m); and the
topological sorting of slices costs O(m + e). Therefore, the total complexity of Step
2 is O(m log m + e). The cost of Step 3 is O(v log v + e). This gives an overall
complexity for DTS of O(e(log v + log m) + v log v + m log m), where v is the number of tasks, e
is the number of edges, and m is the number of data objects in the original DAG.
Space Efficiency. DTS can lead to good memory utilization; the following theorem
gives a memory bound for DTS. First a definition is introduced.
Definition 3. Given any processor assignment R for tasks, the volatile space
requirement of slice L on processor P x , denoted as VPx (R; L), is defined as the
amount of space needed to allocate for the volatile objects used in executing tasks
of L on P x . The maximum volatile space requirement for L under R is then defined
as H(R, L) = max over all processors P x of V Px (R, L).
Assuming that a processor assignment R of tasks using the "owner computes"
rule produces an even distribution of data space for permanent data objects among
processors, we can show the following results.
Theorem 1. Given a processor assignment R of tasks and a DTS schedule on
p processors with slices ordered as L 1 , . . . , L k , this schedule is executable with S 1 /p + h
space usage per processor, where S 1 is the sequential space complexity and h = max 1≤i≤k H(R, L i ).
PROOF: First of all, since R leads to an even distribution of permanent objects,
the permanent data space needed on each processor is S 1 =p. Suppose that a task
T x in slice L i needs to allocate space for a volatile object d. If i = 1, there should be
enough space for d according to the definition of h. If i > 1, then we claim that all
the space allocated to the volatile data objects associated with slices L 1 , . . . , L i−1
can be freed. Therefore the extra h space on each processor will be enough to
execute tasks in L i .
Now we need to show the above claim is correct. Suppose not, and there is a
volatile data object d 0 that cannot be deallocated after slice L i−1 , and d 0 is associated
with L a , a < i. Then there is at least one task T y ∈ L j , j ≥ i, that uses d 0 . If
T y modifies d 0 , then d 0 is a permanent data object. If T y does not modify d 0 , then
according to the DTS algorithm, T y should belong to slice L a instead of L j . Thus
there is contradiction. 2
If a DCG is acyclic, then each data node in the DCG constitutes a strongly
connected component. Therefore, each slice is associated with only one data object,
and this implies that the h defined in Theorem 1 will be the size of the largest data
object. Thus, we have the following corollary.
Corollary 1. If the DCG of a task graph is acyclic and the maximum size
of an object is of unit 1, the DTS produces a schedule which can be executed on
processors using S 1 per processor, where S 1 is the sequential space
complexity.
We can apply Theorem 1 and Corollary 1 to some important application graphs.
For the 1D column-block-based sparse LU DAGs in Fu and Yang [1996b], a matrix
is partitioned into a set of sparse column blocks, and each task T k;j uses one sparse
column block k to modify column block j. Figure 1 is actually a sparse LU graph.
DTS produces an acyclic DCG for a 1D column-block-based sparse LU task graph
as illustrated in Figure 5(a). Let w be the maximum space needed for storing a
column block. According to Corollary 1, each processor will at most need w volatile
space to execute a DTS schedule for sparse LU.
For the 2D block-based sparse Cholesky approach described in Fu and Yang
[1997], a matrix A is divided into N sparse column blocks, and a column block
is further divided into at most N submatrices. The submatrix with index (i; j) is
marked as A i;j . A Cholesky task graph can be structured with N layers. Layer k
represents an elimination process that uses column block k
to modify column blocks from k + 1 to N . More specifically, the Cholesky factor
computed from the diagonal block A k;k will be used to scale all nonzero submatrices
within the kth column block, i.e., A i;k where i > k. Those nonzero submatrices
will then be used to update the rest of the matrix, i.e., A i;j where i, j > k. All
update tasks at step k belong to the same slice associated with
data objects A i;k where i > k. Hence, the extra space needed to execute a slice
is the summation of the submatrices in column block k. According to Theorem 1, a
DTS schedule for the Cholesky can execute with S 1 =p +w space on each processor
where w is again the maximum space needed to store a column block.
The above results are summarized in the following corollary. Normally, w ≪ S 1 /p,
and a DTS schedule is space efficient for these two problems.
Corollary 2. For 1D column-block-based sparse LU task graphs and 2D block-based
sparse Cholesky graphs, a DTS schedule is executable using S 1 /p + w space
per processor, where w is the size of the largest column block in the partitioned input
matrix.

Fig. 6. The DTS slice-merging algorithm.
Further Optimization. If the available memory space for each processor is known,
say AV AIL MEM , the time efficiency of the DTS algorithm can be further optimized
by merging several consecutive slices if memory is sufficient for those slices,
and then applying the priority-based scheduling algorithm on the merged slices. Assuming
there are k slices and a valid slice order is for a given task
assignment R, the merging strategy is summarized in Figure 6. A set of new slices
will be generated. Since calculating memory requirements for all
slices takes O(e log m) time, the complexity of the merging process is O(v+e log m).
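A minimal Python rendering of this merging strategy is shown below; volatile_req stands for the quantity H(R, .) evaluated on the candidate merged slice and is assumed to be given, and the names are ours.

def merge_slices(slices, avail_mem, volatile_req):
    merged = [list(slices[0])]
    for s in slices[1:]:
        if volatile_req(merged[-1] + list(s)) <= avail_mem:
            merged[-1].extend(s)                # the merged slice still fits: keep growing it
        else:
            merged.append(list(s))              # otherwise start a new merged slice
    return merged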
It can be shown that the merging algorithm above produces an optimal solution
for a given slice ordering.
Theorem 2. Given an ordered slice sequence L 1 , . . . , L k , the slice-merging
algorithm in Figure 6 produces a solution with the minimum number of slices.
PROOF: The theorem can be proven by contradiction. Let the new slice sequence
produced by the algorithm in Figure 6 be E 1 , . . . , E u , and let F 1 , . . . , F t
be an optimal sequence where t < u. Each E or F slice contains a set of
consecutive L slices from L 1 , . . . , L k . The merging algorithm in Figure 6 groups
as many of the first L slices as possible into E 1 (adding the next L slice would make H(R, E 1 ) exceed AV AIL MEM ),
so F 1 contains no more L slices than E 1 does. We can take some
of the original L slices from slice F 2 and add them to F 1 so that the new F 1 is identical
to E 1 . Let the new F 2 be called F′ 2 . Thus we can produce an optimal sequence
E 1 , F′ 2 , F 3 , . . . , F t . We can apply the same transformation to F′ 2 by comparing it with E 2 , and so on.
Finally we can transform the sequence F 1 , . . . , F t into another optimal sequence
E 1 , . . . , E t . That is a contradiction, since this new sequence cannot
completely cover all L slices unless some of the slices E t+1 , . . . , E u are empty. 2
It should be noted that the optimality of the above merging algorithm is restricted
for a given slice ordering. An interesting topic is to study if there exists a slice-
sequencing algorithm which follows the partial order implied by the given DCG and
leads to the minimum number of slices or the minimum parallel time. It can be
shown [Tang 1998] that a heuristic using bin-packing techniques can be developed,
and the number of slices is within a factor of two of the optimum.
5. MEMORY MANAGEMENT FOR SCHEDULE EXECUTION
In this section, we first briefly describe the RAPID run-time system to which our
scheduling techniques are applied. Then, we discuss the necessary run-time support
for efficiently executing DAG schedules derived from the proposed scheduling
algorithms.
5.1 The RAPID System
Fig. 7. The process of run-time parallelization in RAPID.
Figure
7 shows the run-time parallelization process in RAPID. Each circle is an
action performed by the system, and boxes on either side of a circle represent the
input and output of the action. The API of RAPID includes a set of library functions
for specifying irregular data objects and tasks that access these objects. At
the inspector stage depicted in the left three circles of Figure 7, RAPID extracts
a DAG from data access patterns and produces an execution schedule. At the
executor stage (the rightmost circle), the schedule of computation is executed it-
eratively, since targeted applications have such an iterative nature. For example,
sparse matrix factorization is used extensively for solving a set of nonlinear differential
equations. A numerical method such as Newton-Raphson iterates over
the same sparse dependence graph derived from a Jacobian matrix, and the sparse
pattern of a Jacobian matrix remains the same from one iteration to another. In
the nine applications studied in Karmarkar [1991], the typical number of iterations
for executing the same computation graphs ranges from 708 to 16,069, and the
average is 5973. Thus the optimization cost spent for the inspector stage pays off
for long simulation problems. It is shown in Fu [1997] that the RAPID inspector
stage for the tested sparse matrix factorization, triangular solvers, and fast multipole
method (FMM) with relatively large problem sizes takes 1-2% of the total
time when the schedules are reused for 100 iterations. The inspector idea can be
found in the previous scientific computing research [George and Liu 1981], where
inspector optimization is called "preprocessing." Compared with the previous in-
spector/executor systems for irregular computations [Das et al. 1994], the executor
phase in RAPID deals with more complicated dependence structures.
At the executor stage, RAPID uses direct Remote Memory Access (RMA) to
execute a schedule derived at the inspector stage. RMA is available in modern
multiprocessor architectures such as Cray-T3E (SHMEM), Meiko CS-2 (DMA),
and SCI clusters (memory mapping). With RMA, a processor can write to the
memory of any other processor, given a remote address. RMA allows data transfer
directly from source to destination location without buffering, and it imposes much
lower overhead than a higher-level communication layer such as MPI. The use of
RMA complicates the design of our run-time execution control for data consistency.
However, we find that a DAG generated in RAPID satisfies the following properties,
which simplifies our design.
-D1 (Distinct data objects): A task T x does not receive data objects with the same
identification from different parents.
-D2 (Read/write ordering): If W (T x ) ∩ R(T y ) ≠ ∅ for two distinct tasks T x and T y , there is a dependence
path either from T x to T y or from T y to T x .
-D3 (Write/write ordering): If W (T x ) ∩ W (T y ) ≠ ∅ for two distinct tasks T x and T y , there is a dependence
path either from T x to T y or from T y to T x .
-D4 (DAG sequentialization): Tasks can be executed consistently during sequential
execution following a topological sort of this DAG. Namely, if m 2 R(T x ),
the value of m that T x reads from memory during execution is produced by one
of T x 's parents. In general, a DAG that satisfies D1, D2, and D3 may not always
be sequentializable [Yang 1993].
A DAG with the above properties is called dependence-complete. For example,
DAGs discussed in Figure 1 and in Section 6 are dependence-complete. A DDG derived
from sequential code can be transformed into a dependence-complete DAG [Fu
and Yang 1997].
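Properties D2 and D3 can be checked mechanically on a task graph. The Python sketch below tests, for every pair of tasks that share a written object, whether a dependence path connects them in one direction or the other; reachability is computed with a plain depth-first search, and the names are ours.

def reachable(dag, src):
    seen, stack = set(), [src]
    while stack:
        t = stack.pop()
        for u in dag.get(t, ()):
            if u not in seen:
                seen.add(u)
                stack.append(u)
    return seen

def check_d2_d3(tasks, dag, reads, writes):
    reach = {t: reachable(dag, t) for t in tasks}
    for i, tx in enumerate(tasks):
        for ty in tasks[i + 1:]:
            d2 = (writes[tx] & reads[ty]) or (writes[ty] & reads[tx])
            d3 = writes[tx] & writes[ty]
            if (d2 or d3) and ty not in reach[tx] and tx not in reach[ty]:
                return False, (tx, ty)          # an offending pair without a dependence path
    return True, None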
5.2 The Execution Scheme with Active Memory Management
Maintaining and reusing data space during execution is not a new research topic,
but it is complicated by using low-overhead RMA-based communication, since remote
data addresses must be known in advance. We discuss two issues related to
executing a DAG schedule with RMA:
(1) Address consistency: An address for a data object at a processor can become
stale if the value for this data object is no longer used and the space for this
object is released. Using a classical cache coherence protocol to maintain address
consistency can introduce a substantial amount of overhead. We have
taken a simple approach in which a volatile object is considered dead only if
the object with the same name will not be accessed any more on that processor.
In this way, a volatile object with the same name will only be allocated once at
each processor. This strategy can lead to a slightly larger memory requirement,
but it reduces the complexity of maintaining address consistency. The memory
requirement estimated in Section 3 follows this design strategy.
(2) Address buffering: We also use RMA to transfer addresses. Since address packages
are sent infrequently, we do not use address buffering, so a processor
cannot send new address information unless the destination processor has read
the previous address package. This reduces management overhead.
The execution model using the active memory management scheme is presented
below. A MAP (memory allocation point) is inserted dynamically between two
consecutive tasks executed on a processor. The first MAP is always at the beginning
of execution on each processor. Each MAP does the following:
-Deallocate space for dead volatile objects. The dead information can be statically
calculated by performing a data flow analysis on a given DAG, with a complexity
proportional to the size of the graph.
Fig. 8. (a) MAPs in executing the schedule of Figure 1(c). (b) The control flow on each processor.
-Allocate volatile space for tasks to be executed after the current point in the execution
chain. Assuming that T 1 , T 2 , . . . , T n are the remaining tasks on this processor,
the allocation will stop after T k if there is not enough space for executing T k+1 .
The next MAP will be right before T k+1 .
-Assemble address packages for other processors. Address packages may differ
depending on what objects are to be accessed at other processors.
Figure
8(a) illustrates MAPs and address notification when executing the schedule
in
Figure
1(c). If the available amount of memory is 8 for each processor, then
there are 2 units of memory for volatile objects on P 1 . In addition to the MAPs
at the beginning of each task chain, there is another MAP right after task T 5;10 on
P 1 , at which space for d 3 and d 5 will be freed and space for d 7 will be allocated.
The address for d 7 on P 1 is then sent to P 0 . P 0 will send the content of d 7 to P 1
after it receives the address of d 7 .
Figure
8(b) shows the control flow in our execution scheme. The system has five
different states of execution:
(1) REC. Waiting to receive desired data objects. If a processor is in the REC
state, it cannot proceed until all the objects the current task needs are available
locally.
(2) EXE. Executing a task.
(3) SND. Sending messages. If the remote address of a message is not available,
this message is enqueued.
(4) MAP. A processor could be blocked in the MAP state when it attempts to
send out address packages to other processors but a previous address package
has not been consumed by a destination processor.
(5) END. At this state, the processor has executed all tasks, but it still needs
to clear the send queue, which might be blocked if addresses for suspended
messages are still unavailable.
For the three blocking states (i.e., the states with self-cycles in Figure 8(b)), the
following two operations must be conducted frequently in order to avoid deadlock
and make the execution evolve quickly: RA (Read any new address package) and
CQ (Deliver suspended messages when addresses are available).
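The per-processor control flow of Figure 8(b) can be summarized by the following Python skeleton. All helpers on the runtime object (read_addresses, deliver_suspended, and so on) are placeholders introduced here for illustration; they are not the RAPID API.

def executor(my_tasks, runtime):
    for task in my_tasks:
        if runtime.map_point_before(task):       # MAP: free dead volatiles, allocate space,
            while not runtime.send_addresses():  #      and publish the new addresses; blocks
                runtime.read_addresses()         # RA   while a destination address buffer
                runtime.deliver_suspended()      # CQ   is still occupied
        while not runtime.inputs_ready(task):    # REC: wait for the objects this task needs
            runtime.read_addresses()
            runtime.deliver_suspended()
        runtime.execute(task)                    # EXE
        runtime.send_results(task)               # SND: messages without a remote address are suspended
    while runtime.has_suspended_messages():      # END: drain the send queue
        runtime.read_addresses()
        runtime.deliver_suspended()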
5.3 An Analysis on Deadlock and Consistency
Theorem 3. Given a DAG G and a legal schedule, execution with the active
memory management is deadlock free. Namely, the system eventually executes all
tasks.
PROOF: We assume that communication between processors is reliable, and we
prove this theorem by induction. We will use the following two facts in our proof.
-Fact 1. If a deadlock situation happens, there are a few processors blocked in a
waiting cycle (e.g., a circular chain) in state REC, MAP, or END. Eventually,
all processors in this circular chain only do two things (RA and CQ). The space
allocation and releasing activities should complete if there is any.
-Fact 2. If a processor is waiting to receive a data object, the local address for this
data object must have already been notified to other processors. This is because
each processor always allocates space and sends out addresses of objects before
using those objects.
Let G 0 be the scheduled graph of G; G 0 is acyclic because the given schedule is
legal. Without loss of generality, we assume that a topological sort of G 0 produces
a linear task order T 1 , T 2 , . . . , T n ; our induction follows this order.
Induction Base. T 1 must be an entry task in G 0 , i.e., a task without parents.
Before the execution of T 1 on some processor (called P x ), a MAP is executed. After
completing space allocation, P x only has to send out the newly created addresses.
If a deadlock occurs, there are a few processors involved in a circular waiting chain.
When P x is blocked in state MAP, awaiting the availability of address buffers of
some destination processors, it will just do RA and CQ. Since the destination processors
must be in the circular chain and they are also doing RA and CQ (according
to Fact 1), their address buffers should eventually be free. Then the newly created
addresses on P x can be sent out, and P x should be able to leave the MAP state.
Since T 1 does not have any parent, all data that T 1 needs is available locally. Hence,
T 1 can complete successfully.
Induction Assumption. Assume that all tasks T x for 1 ≤ x < k complete their
execution. Then if T k has parents in G 0 , all of them have completed execution. We
show that T k will be executed successfully.
Suppose not, i.e., a deadlock occurs. Let P x be T k 's processor. The state of P x
can be either MAP or REC; it cannot be state END. We discuss the following two
cases.
Case 1. If P x is in state REC, under the induction assumption, the only reason
that P x cannot receive a data object for T k is that this object has not been sent
out from a remote processor P y . Since all T k 's parents are finished, the only cause
for P y not to send a data object out is the unavailability of its remote address
on P x . According to Fact 2, the address must be already sent out to P y if P x is
waiting to receive the object. Hence P y will eventually read that address through
operation RA (Fact 1) and deliver the message to P x . Therefore, P x can execute T k .
Case 2. If P x is in state MAP, the situation is the same as the one discussed for
the induction base. This processor should be able to leave the MAP state. 2
Theorem 4. Given a dependence-complete DAG G and a legal schedule, execution
with the active memory management is consistent. Namely, each task reads
data objects produced by its parents specified by G.
PROOF: By Theorem 3, all tasks are executed by our run-time scheme. For each
dependence edge (T x ; T y ), we check if T y indeed reads object m produced by T x
during execution. As illustrated in Figure 9, there are two cases in which T y could
read an inconsistent copy of m, and we prove by contradiction that both of them
are impossible. We assume that T y is scheduled on processor P y and T x is scheduled
on P x .
(Figure 9 sketches two timelines with dependence paths/edges and write/send events,
one for Case 1 and one for Case 2.)
Fig. 9. An illustration of the two cases in proving Theorem 4.
-Case 1 (Sender-side inconsistency): If P y 6= P x , after execution of T x , P x tries to
send m to P y . This message may be suspended because the destination address
may not be available. Since buffering is not used, content of m on P x may be
modified before it is actually sent out to P y at time t.
Assume that this case is true. Let T u be the task that overwrites m on processor
P x before m is sent out to P y . Then T u must intend to produce m for another
task T v on P x . Since execution of T u and T x happens before T v and T y , according
to Property D2, there must exist a dependence path from T x to T v and from T u
to T y . According to Property D3, there must exist a dependence path between
T u and T x .
If there exists a dependence path from T x to T u , the order among T x ; T u , and T y
during sequential execution must be T x first, then T u , then T y . Hence T y
would not be able
to read the copy of m produced by T x , which contradicts Property D4.
If there exists a dependence path from T u to T x , similarly we can show that T v
would not be able to read m produced by T u during sequential execution, which
contradicts Property D4.
-Case 2 (Receiver-side inconsistency): After object m produced by T x is successfully
delivered to the local memory of P y , the content of m on P y may be
overwritten by another task (called T u ) before T y reads it at time t.
Assume that this case is true. Let T v be the task assigned to P y , and this task
is supposed to read m produced by T u . According to Property D1, T v 6= T y .
As illustrated in Figure 9, according to Properties D2 and D3, the dependence
structure among T x , T u , T v , and T y is the same as in Case 1. Similarly, we can find
a contradiction. 2
6. EXPERIMENTAL RESULTS
We have implemented the proposed scheduling heuristics and active memory management
scheme in RAPID on Cray-T3D/T3E and Meiko CS-2. In this section we
report the performance of our approach on T3E for three irregular programs:
-Sparse Cholesky factorization with 2D block data mapping [Rothberg 1992; Rothberg
and Schreiber 1994; Fu and Yang 1997]. The task graph has a static dependence
structure as long as the nonzero pattern of a sparse matrix is given at the
run-time preprocessing stage.
-Sparse Gaussian Elimination (LU factorization) with partial pivoting. This problem
has unpredictable dependence and storage structures due to dynamic piv-
oting. Its parallelization on shared-memory platforms is addressed in Li [1996].
However, its efficient parallelization on distributed-memory machines still remains
an open problem in the scientific computing literature. We have used a static
factorization approach to estimate the worst-case dependence structure
and storage need. In Fu and Yang [1996b], we show that this approach does not
overestimate the space too much for most of the tested matrices, and the RAPID
code can deliver breakthrough performance on T3D/T3E. 1
-Fast Multipole Method (FMM) for simulating the movement of nonuniformly
distributed particles. Given an irregular particle distribution, the spatial domain
is divided into boxes in different levels which can be represented as a hierarchical
tree. Each leaf contains a number of particles. At each iteration of a particle
simulation, the FMM computation consists of upward and downward passes in
this tree. At the end of an iteration, a particle may move from one leaf to another,
and the computation and communication weights of the DAG which represents
the FMM computation may change slightly. Since the particle movement is
normally slow, the DAG representing the FMM computation can be reused for
many iterations. It has been found [Jiao 1996] that static scheduling can be reused
for approximately 100 iterations without too much performance degradation. A
detailed description of FMM parallelization using RAPID can be found in Fu
[1997]. (Footnote 1: We have recently further optimized the code by using a special
scheduling mechanism and eliminating RAPID control overhead, and set a new
performance record [Shen et al. 1998].)
We first examine how the memory-managing scheme impacts parallel performance
when space is limited. We then study the effectiveness of scheduling heuristics
in reducing memory requirements. The reason for this presentation order is
that the proposed scheduling algorithms will not be effective without proper run-time
memory management. This presentation order also allows us to separate the
impact of run-time memory management and new scheduling algorithms.
The T3E machine we use has 128MB memory per node, and the BLAS-3 GEMM
routine [Dongarra et al. 1988] can achieve 388 megaflops. The RMA primitive
SHMEM PUT can achieve 0.5-2 μs overhead with 500MB/sec. peak bandwidth.
The test matrices used in this article are Harwell-Boeing matrix BCSSTK29 arising
from structural engineering analysis for sparse Cholesky, and the "goodwin" matrix
from a fluid mechanics problem for sparse LU. These matrices are of medium size 2
and solvable with any one of the three scheduling heuristics so that we can compare
their performance. For FMM, we have used a distribution of 64K particles.
Experiments with other test cases reach similar conclusions.
In reporting parallel time under different memory constraints, we manually control
the available memory space on each processor to be 75%, 50%, 40%, or 25%
of TOT , where TOT is the total memory space needed for a given task schedule
without any space recycling. To obtain TOT , we calculate the sum of the space
for permanent and volatile objects accessed on each processor and let TOT be the
maximum value among all processors.
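In symbols (our notation, not the article's), writing O j for the set of permanent and
volatile objects accessed on processor j and size(o) for the space taken by object o:
\[
  \mathrm{TOT} \;=\; \max_{1 \le j \le p} \; \sum_{o \in O_j} \mathrm{size}(o).
\]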
6.1 RAPID with and without Active Memory Management
(Two plots of speedup against the number of processors: (a) performance for sparse Cholesky
factorization on T3E; (b) performance for the fast multipole method on T3E.)
Fig. 10. Speedups without memory optimization. (a) Sparse Cholesky. (b) FMM.
2 BCSSTK29 is of dimension 13,992 and has 1.8 million nonzeros including fill-ins; goodwin is of
dimension 7320 and has 3.5 million nonzeros including fill-ins.
Table II. Absolute Performance (megaflops) for Sparse LU with Partial Pivoting
Matrix P=2 P=4 P=8 P=16 P=32 P=64
goodwin 73.6 135.7 238.0 373.7 522.6 655.8
RAPID without Memory Management. Figure 10 and Table II show the overall
performance of RAPID on T3E without using any memory optimization for the
three test programs. This version of RAPID does not recycle space at the executor
stage. The results serve as a comparison basis when assessing the performance
of our memory management scheme. Note that the speedups for Cholesky and
FMM are compared with high-quality sequential code, and the results are consistent
with the previous work [Rothberg 1992; Jiao 1996]. The speedup for Cholesky
is reasonable, since we deal with sparse matrices. The speedup for FMM is high, because
leaf nodes of an FMM hierarchical tree are normally computation-intensive
and have sufficient parallelism. For sparse LU, since our approach uses a static
symbolic factorization which overestimates computation, we only list the megaflops
performance. In calculating megaflops, we use more accurate operation counts from
SuperLU [Li 1996] and divide them by corresponding numerical factorization time.
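In other words (our notation), if W denotes the SuperLU operation count for a matrix and
T p denotes the numerical factorization time on p processors, the reported figure is
\[
  \mathrm{megaflops}(p) \;=\; \frac{W}{10^{6}\, T_p}.
\]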
RAPID with Active Memory Management. Table III examines performance
degradation after using active memory management. RCP is still used for task
ordering, and we show later on how much improvement on space efficiency can be
obtained by using DTS and MPO. The results in this table are for sparse Cholesky
(BCSSTK29), sparse LU (goodwin), and FMM under different space constraints.
Column "PT inc." is the ratio of parallel time increase after using our memory management
scheme. The comparison base is the parallel time of a RCP schedule with
100% memory available and without any memory management overhead. Entries
marked with "∞" imply that the corresponding schedule is nonexecutable under
that memory constraint. The results basically show the trend that performance
degradation increases as the number of processors increases and the available memory
space decreases, because more overhead is contributed by address notification
and space recycling. However, degradation is reasonable considering the amount
of memory saved. For example, the memory management scheme can save 60% of
space for Cholesky, while the parallel time is degraded by 64-93%. Observe that
a schedule is more likely to be executable under reduced memory capacity when
the number of processors increases. This is because more processors lead to more
volatile objects on each processor, which gives the memory management scheme
more flexibility to allocate and deallocate space. That is why even with 40% of
the maximum memory requirement, schedules with active memory management
are still executable on 16 and 32 processors, while RAPID without such support
fails to execute. In Table III, we also list the average number of MAPs required
on each processor. The more processors are used, the fewer MAPs are required,
since less space is needed to store permanent objects on each processor.
Note that for FMM, execution time with active memory management is sometimes
even shorter than without memory management. An explanation for this
Table III. Performance Degradation after Using Active Memory Management

Cholesky   75%               50%               40%
           #MAP   PT inc.    #MAP   PT inc.    #MAP   PT inc.
P=8        2.00   38.1%      3.00   42.1%      5.00   64.1%
P=32       2.00   49.2%      2.94   72.7%      3.22   94.3%

LU         75%               50%               40%
           #MAP   PT inc.    #MAP   PT inc.    #MAP   PT inc.

FMM        75%               50%               40%
           #MAP   PT inc.    #MAP   PT inc.    #MAP   PT inc.
P=32       2.00   -11.5%     3.00   11.5%      5.00   18.5%
is that although computation associated with leaf nodes of a particle partitioning
tree is intensive, it does not mix much with intensive communication incurred in
the downward and upward passes. Compared with Cholesky and LU, there are
more interprocessor messages in FMM during the downward and upward passes,
and the insertion of memory-managing activities enlarges gaps between consecutive
communication messages, which leads to less network contention.
Overhead of MAP. There are three types of memory management activities that
result in time increase: RA, CQ, and MAP. Through the experiments we have
found that the delivery of address packages by an MAP has never been hindered
by waiting for the previous content of address buffers to be consumed. Table IV
reports the overhead imposed by MAPs. It is clear that the overhead is insignificant
compared with the total time increase studied in Table III. However, this activity
and frequent address checking/delivering operations prolong message sending and
cause the execution delay of tasks on critical paths.
6.2 Effectiveness and Comparisons of Memory-Scheduling Heuristics
In this subsection, we compare the memory and time efficiency of RCP, MPO, and
DTS.
Memory Scalability. First we examine how much memory can be saved by using
MPO and DTS. We define memory scalability (or memory reduction ratio) as the
quotient S 1 /S A p , where S 1 is the sequential space requirement, and S A p is the space
requirement per processor for a schedule produced by algorithm A on p processors.
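In display form (our notation; the original formula did not survive extraction, so this is a
reconstruction), the ratio for algorithm A on p processors, and its value under perfect
memory scalability, are
\[
  \mathrm{ratio}_A(p) \;=\; \frac{S_1}{S^A_p}, \qquad
  \mathrm{ratio}_A(p) \;=\; p \quad \text{when } S^A_p = S_1/p.
\]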
Table IV. MAP Overhead in Terms of Percentage of the Total Execution Time

        Cholesky                LU                      FMM
        75%    50%    40%       75%    50%    40%       75%    50%    40%
P=32    15.4%  12.0%  10.0%     10.2%  6.8%   5.0%      4.3%   3.2%   3.3%
(Figure 11 shows three plots of the memory requirement reduction ratio against the number
of processors, one per application: (a) comparison of memory requirements for sparse
Cholesky, (b) for sparse LU, and (c) for FMM; the legend marks 'x' for MPO.)
Fig. 11. Memory scalability comparison of the three scheduling heuristics. (a) Sparse Cholesky.
(b) Sparse LU. (c) FMM.
Figure
11 shows the memory reduction ratios of the three scheduling algorithms for
Cholesky, LU, and FMM. The uppermost curve in each graph is for S 1 /p, which is
the perfect memory scalability. The figure shows that both MPO and DTS significantly
reduce the memory requirement while DTS has a memory requirement close
to the optimum in the Cholesky and LU cases. This is consistent with Corollaries 1
and 2. On the other hand, RCP is very time efficient, but not memory scalable,
particularly for sparse LU. For FMM, we find that the DTS algorithm results in a
single slice, i.e., all tasks belong to the same slice. The reason is that there are a lot
of dependencies among tasks, so DTS is actually reduced to RCP. Thus, this experiment
shows that if we allow the complexity to increase from O(e(log(vm))+v log v)
to O(ev +m), MPO can be applied to scheduling tasks within each slice instead of
RCP, which further improves space efficiency.
Time Difference between RCP, MPO, and DTS. We have also compared the
parallel time difference among three heuristics in Tables V and VI under different
memory constraints. In these two tables, if algorithm A is compared with B (i.e.,
A vs. B), an entry marked by "*" indicates that the corresponding B schedule is
executable under that memory constraint while the A schedule is not. Mark "-"
indicates that both A and B schedules are nonexecutable.
Table V. Increase of Parallel Time from RCP to MPO (RCP vs. MPO). The ratio is the
relative increase in parallel time.

Cholesky   100%    75%     50%     40%    25%
P=4        9.6%    11.0%   11.1%   *      -

LU         100%    75%     50%     40%    25%

FMM        100%    75%     50%     40%    25%
Table V shows actual parallel time increase when switching from RCP to MPO.
The average increase is reasonable. Sometimes MPO schedules outperform RCP
schedules even though the predicted parallel time of RCP is shorter. This is because
although MPO does not use as much critical-path information as RCP does, it
reduces the number of MAPs needed, and this can improve execution efficiency.
Furthermore, reusing an object as soon as possible potentially improves caching
performance. These factors are mixed together, making actual execution time of
MPO schedules competitive to RCP.
DTS is aggressive in memory saving, but it does not utilize the critical-path information
in computation slicing. Table VI shows time slowdown using DTS instead
of MPO. It is clear that MPO substantially outperforms DTS in terms of execution
time, even though DTS is more efficient in memory usage. The difference is especially
significant for a large number of processors. This is because MPO optimizes
both memory usage and parallel time. However, there are times when we need
DTS. For instance, in the LU case with 25% available memory, the DTS schedule
Table VI. Increase of Parallel Time from MPO to DTS (MPO vs. DTS). The ratio is the
relative increase in parallel time.

LU      100%    75%     50%     40%    25%
P=8     43.5%   37.4%   32.2%   5.5%   -
is executable on 16 processors, while the MPO schedule is too space costly to run.
Note that DTS space efficiency can be further improved by using MPO to schedule
each slice.
Slice Merging in DTS. If the available amount of memory space is known, DTS
schedules can be further optimized by the slice-merging process (called DTSM )
discussed in Section 4.3. We list the time reduction ratio by using slice merging in
Table VII (DTS vs. DTSM ), and the results are very encouraging. For most cases,
substantial improvement is obtained. As a result, parallel time of DTS schedules
with slice merging can get very close to RCP schedules. This is because merged
slices give the scheduler more flexibility in utilizing critical-path information, and
DTS is also effectively improving cache performance. Thus, the DTS algorithm
with slice merging is very valuable when the problem size is big and the available
amount of space is known.
Table VII. Reduction of Parallel Time from DTS to DTSM. The ratio is the relative
reduction in parallel time obtained by slice merging.

Cholesky   100%     75%      50%      40%      25%
P=4        6.13%    4.85%    -2.90%   7.29%    -

LU         100%     75%      50%      40%      25%
P=32       50.55%   39.96%   38.85%   34.56%   23.95%
Impact on Solvable Problem Sizes. The new scheduling algorithms can help solve
problems which are unsolvable with the original RAPID system which does not optimize
space usage. For example, previously the biggest matrix that could be solved
using the RAPID LU code was e40r0100, which contains 9.58 million nonzeros
with fill-ins. Using the run-time active memory management and DTS scheduling
algorithm, RAPID is able to solve a larger matrix called ex11 with 26.8 million
nonzeros, and it achieves 978.5 megaflops on 64 T3E nodes. In terms of single-node
performance, we get 38.7 megaflops per node on 16 nodes and 13.7 megaflops per
node on 64 nodes. Considering that the code has been parallelized by a software
tool, these numbers are very good for T3E.
7. CONCLUSIONS
Optimizing memory usage is important to solve large parallel scientific problems,
and software support becomes more complex when applications have irregular computation
and data access patterns. The main contribution of our work is the development
of scheduling optimization techniques and an efficient memory managing
scheme that supports the use of fast communication primitives available on modern
processor architectures. The proposed techniques integrated with the RAPID
run-time system achieve good time and space efficiency. The theoretical analysis
on correctness and memory performance corroborates the design of our techniques.
Experiments with sparse matrix and FMM code show that the overhead introduced
by memory management activities is reasonable. The MPO heuristic is competitive
to the critical-path scheduling algorithm, and it delivers good memory and time
efficiency. The DTS is more aggressive in memory saving; it achieves competitive
time efficiency when slice merging is conducted, and its space efficiency can be
further improved by incorporating MPO for slice scheduling.
It should be noted that the proposed techniques are useful for semiautomatic programming
tools such as RAPID. It is still challenging to develop a fully automatic
system. In the future, it is interesting to study automatic generation of coarse-grained
DAGs from sequential code [Cosnard and Loi 1995], extend our results
for more complicated dependence structures [Chakrabarti et al. 1995; Girkar and
Polychronopoulos 1992; Ramaswamy et al. 1994], and investigate use of the proposed
techniques in performance engineered parallel systems [DARPA 1998]. While
massively parallel distributed-memory machines will still be valuable for high-end
large-scale application problems in the future (e.g., the DOE ASCI program), an
extension for SMP clusters will be useful. DTS scheduling actually also improves
caching performance, and the use of this result for data placement in SMPs with
memory hierarchies needs further study.
ACKNOWLEDGEMENTS
We would like to thank Apostolos Gerasoulis, Keshav Pingali, Ed Rothberg, Vivek
Sarkar, Rob Schreiber, and Kathy Yelick for their comments on this work, the
anonymous referees, Siddhartha Chatterjee, and Vegard Holmedahl for their
valuable feedback to improve the presentation. Theorem 2 was pointed out by one
of the referees. We also thank Xiangmin Jiao for his help in implementing RAPID,
Jia Jiao for providing us the FMM code and test cases, and Xiaoye Li for providing
the LU test matrices.
REFERENCES
Provably Efficient Scheduling for
Cilk: An Efficient Multithreaded Runtime System.
Modeling the Benefits of Mixed Data and Task Parallelism.
Multiprocessor Runtime Support for Fine-Grained Irregular DAGs
Automatic Task Graph Generation Techniques.
Efficiently computing static single assignment form and the control dependence graph.
http://www.
Communication Optimizations for Irregular Scientific Computations on Distributed Memory Architectures
An Extended Set of Basic Linear Algebra Subroutines.
Flexible Communication Mechanismsfor Dynamic Structured Applications
Scheduling and Run-time Support for Parallel Irregular Computations
Sparse LU Factorization with Partial Pivoting on Distributed Memory Machines.
Also as UCSB technical report TRCS97-03
Computer Solution of Large Sparse Positive Definite Systems.
Scheduling of Structured and Unstructured Computation
Automatic Extraction of Functinal Parallelism from Ordinary Programs.
Implementing Active Messages and Split-C for SCI Clusters and Some Architectural Implications
Software Support for Parallel Processing of Irregular and Dynamic Computations.
A New Parallel Architecture for Sparse Matrix Computation Based on Finite Project Geometries
High Performance Fortran: Problems and Progress.
Sparse Gaussian Elimination on High Performance Computers.
Parallel Programming and Compilers.
Exploiting the Memory Hierarchy in Sequential and Parallel Sparse Cholesky Factorization
Improved Load Distribution in Parallel Sparse Cholesky Factorization.
Partitioning and Scheduling Parallel Programs for Execution on Multiproces- sors
Experience with Active Messages on the Meiko CS-2
Elimination Forest Guided 2D Sparse LU Factorization.
Decoupling Synchronization and Data Transfer in Message Passing Systems of Parallel Computers.
Personal Communication.
Active Messages: a Mechanism for Integrated Communication and Computation.
Runtime Support for Portable Distributed Data Structures.
Program Parititoning for NUMA Multiprocessor Computer Sys- tems
Scheduling and Code Generation for Parallel Architectures.
of Computer Science
List Scheduling with and without Communication Delays.
Parallel Computing
DSC: Scheduling Parallel Tasks on An Unbounded Number of Processors.
revised July
295662 | Equality-based flow analysis versus recursive types. | Equality-based control-flow analysis has been studied by Henglein, Bondorf and Jrgensen, DeFouw, Grove, and Chambers, and others. It is faster than the subset-based-0-CFA, but also more approximate. Heintze asserted in 1995 that a program can be safety checked with an equality-based control-flow analysis if and only if it can be typed with recursive types. In this article we falsify Heintze's assertion, and we present a type system equivalent to equality-based control-flow analysis. The new type system contains both recursive types and an unusual notion of subtyping. We have s t if s and t unfold to the same regular tree, and we have ⊥t⊤ where t is a function type. In particular, there is no nontrivial subtyping between function types. | Introduction
Control-flow analysis is done to determine approximate sets of functions that may be called
from the call sites in a program. In this paper we address an instance of the question:
Question: How does flow analysis relate to type systems?
Our focus is on:
1. equality-based control-flow analysis which has been studied by Henglein [9], Bondorf
and Jørgensen [3], DeFouw, Grove, and Chambers [5], and others, and
2. recursive types which, for example, are present in a restricted form in Java [6], in the
form of recursive interfaces where equality and subtyping is based on names rather
than structure.
Equality-based control-flow analysis is a simplification of subset-based control-flow analysis
[16, 11, 8]. We will use the abbreviations:
0-CFA' for subset-based control-flow analysis, and
0-CFA= for equality-based control-flow analysis.
0-CFA' is also known as, simply, 0-CFA. We can illustrate the difference between 0-CFA'
and 0-CFA= by considering how they analyze a call site e 1 e 2 in a functional program.
Suppose -x:e is a function in that program. We want a flow analysis to express that:
if -x:e becomes the result of evaluating e 1 , then flow relations are established 1)
between the actual argument e 2 and the formal argument x, and 2) between
the body e and the call site e 1 e 2 .
With a subset-based analysis, the flow relations are subset inclusions. This models that
values flow from the actual argument to the formal argument, and from the body of the
function back to the call site. With an equality-based analysis, the flow relations are
equations. Thus, the flow information for the actual and formal argument are forced to be
the same, and the flow information for the body and the call site are also forced to be the
same. Intuitively, the equations establish a bidirectional flow of information.
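The operational difference can be sketched with a few lines of code (our own illustrative code and names, not taken from any of the cited implementations): an equality-based analysis may simply merge the flow variables of actual and formal argument, and of body and call site, with a union-find structure, whereas a subset-based analysis only records one-directional edges between them.

    class UnionFind:
        # Minimal union-find; merging two flow variables makes their flow information identical.
        def __init__(self):
            self.parent = {}
        def find(self, x):
            self.parent.setdefault(x, x)
            while self.parent[x] != x:
                self.parent[x] = self.parent[self.parent[x]]   # path halving
                x = self.parent[x]
            return x
        def union(self, a, b):
            ra, rb = self.find(a), self.find(b)
            if ra != rb:
                self.parent[ra] = rb

    flow = UnionFind()

    def connect_call_to_callee(actual_arg, formal_param, body, call_site):
        # Equality-based style: both relations are bidirectional by construction.
        flow.union(actual_arg, formal_param)   # flow of e2 and of x become the same
        flow.union(body, call_site)            # flow of e and of e1 e2 become the same

    # A subset-based analysis would instead add the directed edges
    # actual_arg -> formal_param and body -> call_site and propagate along them.
    connect_call_to_callee("[[e2]]", "<x>", "[[e]]", "[[e1 e2]]")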
0-CFA= is more approximate than 0-CFA' . Both have been implemented many times
for various purposes. In general, for functional and object-oriented languages, 0-CFA'
can be executed in cubic time. For programs with finite types, 0-CFA' can be executed
in quadratic time [8], and specific flow-oriented questions such as "identify all functions
called from only one call site" can be answered in linear time [8]. For comparison, 0-
CFA= can always be executed in almost-linear time [9]. Which one of 0-CFA' and 0-
CFA= is the better choice in practice? For a language like ML [10] where functions have
finite polymorphic types and data may have recursive types, experiments by Heintze and
McAllester [8] indicate that it is a good choice to use 0-CFA' . They implemented a variant
of the quadratic-time algorithm for 0-CFA' which treated data in a much simplified way.
For the problem of pointer analysis, there are algorithms which are close cousins of 0-CFA'
and 0-CFA= [17]. For this problem, the condition of finite types does not hold in general.
Shapiro and Horwitz [15] presented an experimental comparison of the two algorithms,
and it confirms the theoretical conclusion that 0-CFA= is faster and more approximate
than 0-CFA' . For an object-oriented language like Java, the condition of finite types
is seldomly satisfied because of, for example, binary methods [4]. DeFouw, Grove, and
Chambers [5] experimentally compared a family of flow-analysis algorithms whose time
complexities are at most cubic time. Both 0-CFA= and some of its variants do well in
that comparison. Ashley [2] has also presented a flow analysis with time complexity less
than cubic time. It remains open how it relates to 0-CFA= . Bondorf and Jørgensen [3]
implemented both 0-CFA' and 0-CFA= for Scheme as part of the partial evaluator Similix.
For Scheme, the condition of finite types does not hold in general. They concluded that
the two analyses have comparable precision for their application, and that 0-CFA= is much
faster. In summary, 0-CFA= has in experiments proved to be a preferable alternative to
0-CFA' for many applications.
Flow analyses such as 0-CFA can be formulated using constraints, see for example
[11, 14]. This approach proceeds in two steps: 1) derive flow constraints from the program
text, and 2) compute the least solution of the constraints. The least solution is the desired
flow information. The precision of the analysis stems from the choice of constraints. For
example, one choice leads to 0-CFA' , and another choice leads to 0-CFA= . The kind of
flow constraints used in, for example, the paper [11] always admits a least solution.
We can turn a flow analysis into a predicate which accepts and rejects programs, by
extending it with safety constraints. For example, for a call site e 1 e 2 in a functional
program, a safety constraint might express: "does the flow information for e 1 denote only
functions?" Safety constraints do not always have a solution. They can be derived from
the program text, just like flow constraints. This means that we can do a flow-based safety
analysis of a program in two steps: 1) derive flow and safety constraints from the program
text, and 2) decide if the constraints are satisfiable. Such a safety analysis performs a task
akin to type inference, in the sense that "safe" is like "typable."
Palsberg and O'Keefe [12] showed that a program can be safety checked with 0-CFA'
if and only if it can be typed in Amadio and Cardelli's type system with subtyping and
recursive types [1]. The proof of this connection makes explicit the close relationship
between flow and subtyping.
Heintze asserted in 1995 [7] that a program can be safety checked with 0-CFA= if and
only if it can be typed with recursive types. This assertion is reasonable because it says
that, intuitively, if we replace subset inclusions by equalities, then the need for subtyping
disappears. Heintze's assertion is also consistent with the observation that both 0-CFA=
and type inference with recursive types can be executed in almost-linear time. Perhaps
surprisingly, Heintze's assertion is false. For example, consider the -term:
The variable f is applied to both the number 0 and the function -x:x. Thus, the -term
does not have a type in a type system with recursive types but no subtyping. Still, a
0-CFA= -based safety analysis accepts this program, by assigning both f and g the empty
flow set, see Section 2 for details.
For another example, consider the -term:
It reminds a bit of the previous example, but now f is applied to (-a:0) and (-b:-x:x).
Again, the -term e 2 does not have a type in a type system with recursive types but no
subtyping. For conservative flow analysis cannot assign the empty flow set to f
because that flow set should at least contain (-y:0). Still, a 0-CFA= -based safety analysis
accepts this program, by assigning y a flow set which contains both (-a:0) and (-b:-x:x).
Given that Heintze's assertion is false, we are left with two questions:
1. which type system corresponds to 0-CFA= ?, and
2. which control-flow analysis corresponds to recursive types?
Palsberg and O'Keefe's result [12] implies that E 1 and E 2 can be typed if we have both
recursive types and Amadio/Cardelli subtyping. Their result also seem to indicate that
adding both recursive types and all of the Amadio/Cardelli subtyping to match 0-CFA=
would be overkill. Thus, to answer the first question, it makes sense to ask: how much
subtyping is necessary and sufficient to match 0-CFA= ? To answer the second question we
must ask: what restrictions on 0-CFA= must we impose to match recursive types?
In this paper we answer the first question and we give a partial answer to the second
question. We show that a program can be safety checked with 0-CFA= if and only if it can
be typed with recursive types and an unusual restriction of Amadio/Cardelli subtyping.
We have s - t if s and t unfold to the same regular tree, and we have
is a function type. In particular, there is no non-trivial subtyping between function types.
To see why non-trivial subtyping between function types is not required to match 0-CFA= ,
consider the program (-x:e)e 0 . Let hxi be a flow variable for the binding occurrence of x,
and let [[(-x:e)e 0 ]], [[-x:e]], flow variables for the occurrences (-x:e)e 0 , -x:e, e,
e 0 , respectively. If ' is a map from flow variables to flow sets, which satisfies the 0-CFA=
constraints, then in particular it satisfies
We can also use hxi, variables, and for a type system
such as simple types where there is no non-trivial subtyping between function types, we
get, among others, the following constraints on type correctness:
Unification gives that a typing must satisfy the constraints:
Thus, we get the same form of relationships between the types as there are between the
flow sets. If we allow non-trivial subtyping between function types, then the constraints
on type correctness become [12]:
In particular, this opens the possibility for a non-trivial relationship:
and hence
These constraints are closely related to the flow constraints used in 0-CFA' [12].
We also show that if a program can be safety checked with a certain restriction of
0-CFA= , then it can be typed with recursive types. Our restriction of 0-CFA= is that all
flow sets must be nonempty and consistent. Consistency means that if two functions -x:e
and -y:e 0 occur in the same flow set, then the flow sets for x and y are equal, and also the
flow sets for e and e 0 are equal.
In slogan-form, our results read:
tiny drop of subtyping.
Recursive types '
The key to understanding the second result is that both empty flow sets and flow sets
with two or more inconsistent functions have no counterparts in a type system with just
recursive types. The restricted version of 0-CFA= does not fully match recursive types,
because a program may have a type for which no flow set exists.
In the next section, we present Heintze's definition of 0-CFA= , in Section 3 we present
the new type system, and in Section 4 and 5 we prove our results. Our example language
is a -calculus, defined by the grammar:
where succ denotes the successor function on integers.
Equality-Based Control-Flow Analysis
Given a -term P , assume that P has been ff-converted such that all bound variables are
distinct and different from the free variables. Let Var(P ) be the set of -bound variables
in P . Let X P be the set of variables consisting of one variable hxi for each x 2 Var(P ). Let
Y P be a set of variables disjoint from X P consisting of one variable each occurrence
of a subterm e of P . (The notation ambiguous because there may be more than one
occurrence of e in P . However, it will always be clear from context which occurrence is
meant.) The set Abs(P ) is the set of occurrences of subterms -x:e of P . The set CL(P ) is
Flow-based safety analysis of a -term P can be phrased in
terms of a constraint system over the variables range over CL(P
ffl For every occurrence in P of a subterm of the form 0, the constraint
ffl for every occurrence in P of a subterm of the form succ e, the two constraints
ffl for every occurrence in P of a subterm of the form -x:e, the constraint
ffl for every occurrence in P of a subterm of the form e 1 e 2 , the constraint
ffl for every occurrence in P of a -variable x, the constraint
ffl for every occurrence in P of a subterm of the form -x:e, and for every occurrence in
P of a subterm of the form e 1 e 2 , the constraints
The last two constraints create a connection between a call site e 1 e 2 and a potential callee
-x:e. Notice that two of the constraints are not equalities, but subset inclusions. This is
the key reason why subtyping is needed to match this safety analysis.
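To convey the flavor of such constraint generation, here is a rough sketch in code; it is our own illustration, and since several constraint formulas in the list above did not survive extraction, the exact constraint forms below should be read as assumptions rather than as the paper's definition.

    def gen_constraints(program):
        # Terms: ('num',), ('succ', e), ('var', 'x'), ('lam', 'x', e), ('app', e1, e2).
        # Occurrence i stands for the variable [[e]]; ('binder', x) stands for <x>.
        occ, constraints, lambdas, apps = [], [], [], []

        def walk(t):
            i = len(occ)
            occ.append(t)
            kind = t[0]
            if kind == 'num':
                constraints.append(('eq', ('flow', i), 'Int'))
            elif kind == 'succ':
                j = walk(t[1])
                constraints.append(('eq', ('flow', j), 'Int'))
                constraints.append(('eq', ('flow', i), 'Int'))
            elif kind == 'var':
                constraints.append(('eq', ('flow', i), ('binder', t[1])))
            elif kind == 'lam':
                j = walk(t[2])
                lambdas.append((i, t[1], j))
                constraints.append(('includes', ('flow', i), ('abs', i)))  # an inclusion
            elif kind == 'app':
                j1, j2 = walk(t[1]), walk(t[2])
                apps.append((i, j1, j2))
                constraints.append(('only-functions', ('flow', j1)))       # safety, an inclusion
            return i

        walk(program)
        # Connect every abstraction occurrence with every call-site occurrence.
        for (lam, x, body) in lambdas:
            for (call, op, arg) in apps:
                constraints.append(('if-flows-to', lam, op, ('eq', ('flow', arg), ('binder', x))))
                constraints.append(('if-flows-to', lam, op, ('eq', ('flow', body), ('flow', call))))
        return constraints

    print(gen_constraints(('app', ('lam', 'x', ('var', 'x')), ('num',))))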
This constraint system mixes flow constraints and safety constraints. The safety constraints
are:
ffl for succ e: [[succ
ffl for e 1
and the rest are flow constraints. Notice that because Int and functions cannot occur in
the same flow set we have that a constraint such as has the same effect as
fIntg.
Denote by C(P ) the system of constraint generated from P in this fashion. Let Cmap(P )
be the set of total functions from
all constraints in C(P ). We say that P is 0-CFA= safe if C(P ) is
For example, consider again
where we have labeled the two occurrences of f as f 1 and f 2 , for notational convenience.
We have:
The constraint system C(E 1 ) has the point-wise '-least solution
Next, consider again:
where we have labeled the occurrences of f as f 1 and f 2 , for notational convenience. The
constraint system C(E 2 ) has the point-wise '-least solution
etc.
3 The Type System
We use v to range over type variables drawn from a countably infinite set Tv. Types are
defined by the grammar:
with the restriction that a type is not allowed to contain anything of the form
We identify types with their infinite unfoldings under the rule:
Such infinite unfolding eliminates all uses of - in types. It follows that types are a class of
regular trees over the alphabet
There is a subtype relation - on types:
It is straightforward to show that - is a partial order. Notice that ? is a lower bound and
? is an upper bound for only the function types but not Int. A more suggestive notation
might be ?! for ?, and ?! for ?.
A type environment is a partial function with finite domain which maps -variables to
types. We use A to range over type environments. We use the notation A[x : t] to denote
an environment which maps x to t, and maps y, where y 6= x, to A(y). A type judgment
has the form A ' e : t, and it means that in the type environment A, the expression e has
type t. Formally, this holds when it is derivable using the rules below.
Notice that there is no subsumption rule; instead subtyping can only be used in a
restricted way in rules 2 and 3. We say that e is RS-typable if A ' e : t is derivable
for some A; t. (RS stands for "restricted subtyping.") The type system has the subject
reduction property, that is, if A ' e : t is derivable, and e beta-reduces to e 0 , then A ' e
is derivable. This can be proved by straightforward induction on the structure of the
derivation of A ' e : t.
Here follow type derivations for the two -terms Section 1. The first type
derivation uses the abbreviation:
Notice the four uses of subtyping. Notice also that the only possible type for f is ?.
The second derivation uses the abbreviation: A
Notice that the only possible common type for both (-a:0) and (-b:-x:x) is ?.
The reason why there is no subsumption rule of the form
is that we want to disallow the use of subsumption immediately after a use of the rule for
variables. If we add a subsumption rule, then more -terms become typable. For example,
consider:
If we have a subsumption rule, then we can give -y:y the type ? ! ?, we can give both
-x:0 and the last occurrence of f the type ?, and it is then straightforward to complete
a type derivation for E 3 . Notice that the fragment of the type derivation for the last
occurrence of f is of the form:
Without a subsumption rule, this type derivation is not possible. Indeed, no type derivation
using rules (1)-(5) is possible. To see that, let s 1 be the type of -y:y, let s 2 be the type of
f . From -y:y we have is the type of x. Moreover, from (ff) we have
where u is the type of (ff ). We have
hence ff). Consider now
(f(-x:0)). The type of -x:0 is of the form s or ?. In both cases, it cannot be an
argument of a function of type -ff:(ff ! ff). We conclude that E 3 is not RS-typable.
4 The Equivalence Result
Theorem 4.1 A -term P is 0-CFA= safe if and only if P is RS-typable.
We prove this theorem in two steps. Lemma 4.3 shows that if P is 0-CFA= safe, then
P is RS-typable. To prove that lemma we use the technique from [13]. Lemma 4.4 shows
that if P is RS-typable, then P is 0-CFA= safe. To prove that lemma we use a technique
which is more direct than the one used to show a similar result, for 0-CFA' , in [12].
From Flows to Types
First we consider the mapping of flows to types. Given a program P , a map ' 2 Cmap(P ),
and S ' Abs(P ), we say that S is '-consistent if for all -x 1
Given a program P and ' 2 Cmap(P ), define
the equation system \Gamma(P; '):
ffl For each S 2 range('), let v S be a type variable, and
contains the equation
contains the equation
there are two cases: either S is '-
consistent and then \Gamma(P; ') contains the equation
otherwise \Gamma(P; ') contains the equation
Every equation system \Gamma(P; ') has a unique solution. To see this, notice that for every
type variable, there is exactly one equation with that variable as left-hand side. Thus,
intuitively, we obtain the solution by using each equation as an unfolding rule, possibly
infinitely often.
Lemma 4.2 If ' 2 Cmap(P is the unique solution of
Proof. Support first that '(w 1
Suppose then that '(w 1 ) is '-inconsistent. From '(w 1 ) ` '(w 2 ) we then have that
also
Suppose finally that '(w 1
consistent. There are two cases. If '(w 2 ) is '-inconsistent, then /(v '(w 1
Lemma 4.3 If ' satisfies C(P ), is the unique solution
of \Gamma(P; '), and e is a subterm of P , then we can derive A ` e : /(v '([[e]]) ).
Proof. We proceed by induction on the structure of e. In the base case, consider first
We have so we can derive A ` x : /(v '(hxi) ). This is the desired
derivation because
Consider then e j 0. We have and we can derive
In the induction step, consider first We have
so From the induction hypothesis we have that we can
derive A ' e and we can then also derive A ' succ e
Consider next e j -x:e 0 . We have f-x:e 0 g ' '([[-x:e 0 ]]), and from Lemma 4.2 we get
From the induction hypothesis, we have that we can derive
Thus, we can also derive A ' -x:e
Finally, consider e We have '([[e 1 ]]) ` Abs(P ), and for every -x:e 0 2 '([[e 1 ]])
we have '([[e 2 From the induction hypothesis, we
have that we can derive A ' There are two
cases. If '([[e 1 and we can
derive A ' e 1 then we use '([[e 1 ]]) ' Abs(P ) to conclude
that '([[e 1
]]) for all
is '-consistent. Thus, /(v '([[e 1 ]])
we can derive A ' e 1
For example, consider again the -term:
and recall the function ' 1 from Section 2 which satisfies C(E 1 ). The constraint system
When we plug this into the construction in the proof of Lemma 4.3, we get the type
derivation for shown in Section 3. We leave it to the reader to carry out the construction
It will lead to the type derivation for shown in Section 3.
From Types to Flows
Next we consider the mapping of types to flows. If \Delta is the type derivation A '
define f \Delta to map types to elements of CL(P
the set of occurrences -x:e of P where \Delta contains
a judgment of the form A
for an occurrence -x:e of P where \Delta contains
a judgment of the form A
for an occurrence e of P where \Delta contains
a judgment of the form A 0
Lemma 4.4 If \Delta is the type derivation A '
Proof. We consider in turn each of the constraints in C(P ). For an occurrence of 0 and
the constraint we have that \Delta contains a judgment of the form A 0
fIntg.
For an occurrence of succ e and the constraints
we have that \Delta contains judgments of the forms A
fIntg.
For an occurrence x and the constraint we have that \Delta contains a judgment
of the form A[x
For an occurrence -x:e and the constraint f-x:eg ' [[-x:e]], we have that \Delta contains
judgments of the forms A There are
two cases. If f-x:eg. If
For an occurrence e 1 e 2 and the constraint also the constraints, for
every occurrence -x:e in Abs(P ),
we have that \Delta contains judgments of the forms A 0 '
where There are two cases. If
and the other constraints are vacuously satisfied. If
From the definition of f
Concluding Remarks
If we remove from Section 3 the types ?, ? and the notion of subtyping, then we get a
traditional system of recursive types. Given a program P and a map ' 2 Cmap(P ), we
say that ' is consistent if for all S 2 range(') we have that S is '-consistent. If we add
to Section 2 the conditions:
does not contain ;, and
does not contain inconsistent maps,
then we get a notion of flow-based safety analysis which we here will refer to as restricted-
0-CFA= safety. It is easy to modify the proof of Lemma 4.3 to show the following result.
Theorem 5.1 If a -term P is restricted-0-CFA = safe, then P is typable with recursive
types.
Intuitively, the theorem says that if we want a flow analysis weaker than recursive types,
then we can start with 0-CFA= , outlaw ;, and insist on internal consistency in all flow sets.
The converse of Theorem 5.1 is false. For example, if we attempt to modify the proof of
Lemma 4.4, then we run into trouble in the case e 1 e 2 , because there is no guarantee that
is the type of e 1 . Such a situation arises with the program
With recursive types but not subtyping, there is just one type derivation for E 4 , using the
abbreviation
We have
It it straightforward to show that '([[x]]) 6= fIntg
and '([[x]]) 6= f-x:succ(x0)g, so
E 4 is therefore a counterexample to the converse of Theorem 5.1.
We leave it as an open problem to find a flow analysis equivalent to recursive types.
An unusual aspect of Heintze's definition of 0-CFA= is that Int and functions cannot
occur in the same flow set. To allow that we might define
and change the constraints from Section 2 such that the constraints for 0 and succ e become:
There is a systematic way of obtaining this modified flow analysis: begin with the constraints
for 0-CFA' [12] and
ffl change
to
All other constraints remain the same.
The type system that matches the modified flow analysis can be obtained by changing
the type system from Section 3 such that - is the smallest reflexive and transitive relation
on types where ? - t, and such that the type rules for 0 and succ e
become:
Notice that in this modified type system, ? is the least type and ? is the greatest type.
References
Subtyping recursive types.
A practical and flexible flow analysis for higher-order languages
Efficient analyses for realistic off-line partial evaluation
On binary methods.
Fast interprocedural class analysis.
The Java Language Specification.
Dynamic typing.
The Definition of Standard ML.
Closure analysis in constraint form.
A type system equivalent to flow analysis.
From polyvariant flow information to intersection and union types.
Fast and accurate flow-insensitive points-to analysis
295663 | A new, simpler linear-time dominators algorithm. | We present a new linear-time algorithm to find the immediate dominators of all vertices in a flowgraph. Our algorithm is simpler than previous linear-time algorithms: rather than employ complicated data structures, we combine the use of microtrees and memoization with new observations on a restricted class of path compressions. We have implemented our algorithm, and we report experimental results that show that the constant factors are low. Compared to the standard, slightly superlinear algorithm of Lengauer and Tarjan, which has much less overhead, our algorithm runs 10-20% slower on real flowgraphs of reasonable size and only a few percent slower on very large flowgraphs. | INTRODUCTION
We consider the problem of nding the immediate dominators of vertices in a
graph. A
owgraph is a directed graph r) with a distinguished start
vertex , such that there is a path from r to each vertex in V . Vertex
w dominates vertex v if every path from r to v includes w; w is the immediate
dominator (idom) of v, denoted dominates v and (2) every
other vertex x that dominates v also dominates w. Every vertex in a
owgraph has
a unique immediate dominator [Aho and Ullman 1972; Lorry and Medlock 1969].
Finding immediate dominators in a
owgraph is an elegant problem in graph the-
ory, with applications in global
ow analysis and program optimization [Aho and
Ullman 1972; Cytron et al. 1991; Ferrante et al. 1987; Lorry and Medlock 1969].
Lorry and Medlock [1969] introduced an O(n 4 )-time algorithm, where
to nd all the immediate dominators in a
owgraph. Successive improve-
Some of this material was presented at the Thirtieth ACM Symposium on the Theory of Com-
puting, 1998.
Authors' address: AT&T Labs, Shannon Laboratory, 180 Park Ave., Florham Park, NJ 07932;
Permission to make digital/hard copy of all or part of this material without fee is granted
provided that the copies are not made or distributed for prot or commercial advantage, the
ACM copyright/server notice, the title of the publication, and its date appear, and notice is given
that copying is by permission of the Association for Computing Machinery, Inc. (ACM). To copy
otherwise, to republish, to post on servers, or to redistribute to lists requires prior specic
permission and/or a fee.
c
2 Adam L. Buchsbaum et al.
ments to this time bound were achieved [Aho and Ullman 1972; Purdom and Moore
1972; Tarjan 1974], culminating in Lengauer and Tarjan's [1979] O(m(m;n))-time
algorithm; is the standard functional inverse of the Ackermann function and grows
extremely slowly with m and n [Tarjan and van Leeuwen 1984]. Lengauer and Tarjan
[1979] report experimental results showing that their algorithm outperforms all
previous dominators algorithms for
owgraph sizes that appear in practice.
Reducing the asymptotic time complexity of nding dominators to O(n +m) is
an interesting theoretical exercise. Furthermore, various results in compiler theory
rely on the existence of a linear-time dominators algorithm; Pingali and Bilardi
[1997] give an example and further references. Harel [1985] claimed a linear-time
dominators algorithm, but careful examination of his abstract reveals problems
with his arguments. Alstrup et al. [1997] detail some of the problems with Harel's
approach and oer a linear-time algorithm that employs powerful data structures
based on bit manipulation to resolve these problems. While they achieve a linear-time
dominators algorithm, their reliance on sophisticated data structures adds
su-cient overhead to make any implementation impractical.
We present a new linear-time dominators algorithm, which is simpler than that
of Alstrup et al. [1997]. Our algorithm requires no complicated data structures:
we use only depth-rst search, the fast union-nd data structure [Tarjan and van
Leeuwen 1984], topological sort, and memoization. We have implemented our al-
gorithm, and we report experimental results, which show that, even with the extra
overhead needed to achieve linear time, our constant factors are low. Ours is the
rst implementation of a linear-time dominators algorithm.
The rest of this article is organized as follows. Section 2 outlines Lengauer and
Tarjan's approach. Section 3 gives a broad overview of our algorithm and dieren-
tiates it from previous work. Section 4 presents our algorithm in detail, and Section
5 analyzes its running time. Section 6 presents our new path-compression result,
on which the analysis relies. Section 7 describes our implementation, and Section
8 reports experimental results. We conclude in Section 9.
2. THE LENGAUER-TARJAN ALGORITHM
Here we outline the Lengauer and Tarjan (LT) approach [Lengauer and Tarjan
1979] at a high level, to provide some details needed by our algorithm. Appel
[1998] provides a thorough description of the LT algorithm.
r) be an input
owgraph with n vertices and m arcs. Let D be a
depth-rst search (DFS) tree of G, rooted at r. We sometimes refer to a vertex x
by its DFS number; in particular, x < y means that x's DFS number is less than
y's. Let w
that w is an ancestor (not necessarily proper) of v in D;
can also denote the actual tree path. Similarly, w
that w is a
proper ancestor of v in D and can represent the corresponding path. For any tree
(v) be the parent of v in T , and let nca T (u; v) be the nearest common
ancestor of u and v in T . We will drop the subscripts and write p(v) and nca(u; v)
when the context resolves any ambiguity.
v) be a path in G. Lengauer and Tarjan
dene P to be a semidominator path (abbreviated sdom path) if x i > v; 1
1. An sdom path from u to v thus avoids all tree vertices between u and
A New, Simpler Linear-Time Dominators Algorithm 3
a,13
d,14
(a)
a,13
d,14
(b)
Fig. 1. (a) A
owgraph G with root r. Vertex labels are augmented with their DFS numbers.
(b) A DFS tree D of G. Solid arcs are tree arcs; dotted arcs are nontree arcs. Breaking D into
microtrees of size no more than 3 results in four nontrivial microtrees, rooted at k, j, g, and a;
the vertices of each nontrivial microtree are encircled.
v. The semidominator (semi) of vertex v is
there is an sdom path from u to vg:
For example, consider vertex g in DFS tree D in Figure 1(a). The DFS number of
g is 10. Paths (e; g), (f; g), (d; f; g), (a; d; f; g), (b; d; f; g), and (b; a; d; f; g) are all
the sdom paths to g. Since b has the least DFS number over the initial vertices on
these paths,
To compute semidominators, Lengauer and Tarjan use an auxiliary link-eval data
structure, which operates as follows. Let T be a tree with a real value associated
with each vertex. We wish to maintain a forest F contained in the tree, subject to
the following operations. (Initially F contains no arcs.)
to F .
Let r be the root of the tree containing u in F . If return r.
Otherwise, return any vertex x 6= r of minimum value on the path
r
u.
Tarjan [1979a] shows how to implement link and eval using the standard disjoint
set union data structure [Tarjan and van Leeuwen 1984]. Using linking by size and
path compression, n 1 links and m evals on an n-vertex tree T can be performed
in O(m(m;n) + n) time.
The LT algorithm traverses D in reverse DFS order, computing semidominators
as follows. (Initially, semi(v) v for all
For in reverse DFS order do
For (w; v) 2 A do
4 Adam L. Buchsbaum et al.
done
link(v)
done
It then computes the immediate dominator for each vertex, using semidominators
and the following facts, which we will also use to design our algorithm.
Lemma 2.1 (LT Lem. 1). If v w then any path from v to w in G must
contain a common ancestor of v and w in D.
Lemma 2.2 (LT Lem. 4). For any vertex v 6= r, idom(v)
semi(v).
Lemma 2.3 (LT Lem. 5). Let vertices w; v satisfy w
v. Then w
or idom(v)
idom(w).
Lemma 2.4 (LT Thm. 2). Let w 6= r. Suppose every u for which semi(w)
Lemma 2.5 (LT Thm. 3). Let w 6= r, and let u be a vertex for which semi(u) is
minimum among vertices u satisfying semi(w)
and
3. OUTLINE OF A LINEAR-TIME ALGORITHM
The links and evals used by the LT algorithm make it run in O(m(m;n)) time.
We can eliminate the (m; n) term by exploiting the sensitivity of to relative
dierences in m and n. In particular, when m is slightly superlinear in n, e.g.,
m=n
becomes a constant [Tarjan and van Leeuwen 1984]. 1
Our dominators algorithm proceeds roughly as follows:
(1) Compute a DFS tree D of G, and partition D into regions. We discuss the
partitioning in detail in Section 3.2. For now, it su-ces to consider that D is
partitioned into a collection of small, vertex-disjoint regions, called microtrees.
We consider separately the microtrees at the bottom of D|those that contain
the leaves of D|from the microtrees comprising the interior, D 0 , of D.
(2) For each vertex, determine whether its idom is in its microtree and, if so,
determine the actual idom.
(3) For each vertex v such that idom(v) is not in v's microtree
(a) compute idom(v) by applying the LT algorithm only to vertices in D 0 or
(b) nd an ancestor u of v such that and idom(u) can be
computed by applying the LT algorithm only to vertices in D 0 .
Partitioning D into microtrees serves two purposes. First, the subgraph induced
by the microtree roots will achieve the ratio m=n necessary to reduce (m; n) to
a constant. Second, when the microtrees are small enough, the number of distinct
microtrees will be small compared to n +m. We can thus perform simple computations
on each microtree in O(n using precomputed tables or
memoization to eliminate redundant computations.
log (i) n is the iterated log function: log (0)
A New, Simpler Linear-Time Dominators Algorithm 5
3.1 Comparison to Previous Approaches
We contrast our use of these facts to previous approaches. Harel [1985] and Alstrup
et al. [1997] apply the LT algorithm to all of D, using the microtree partitioning to
speed links and evals. Harel [1985] divides the entire tree D into microtrees, all of
which can contain more than one vertex, and performs links and evals as described
by Lengauer and Tarjan [1979] on the tree D 0 induced by the microtree roots.
Alstrup et al. [1997] simplify Harel's approach [Harel 1985] by restricting nonsin-
gleton microtrees to the bottom of D, leaving an upper subtree, D 0 , of singleton
microtrees as we do. They then perform links and evals on D 0 using two novel
data structures, as well as the Gabow-Tarjan linear-time disjoint set union result
[Gabow and Tarjan 1985] and transformations to D 0 . Both algorithms use pre-computed
tables to process evals on the internal microtree vertices. This approach
requires information regarding which vertices outside microtrees might dominate
vertices inside microtrees, to derive e-cient encodings needed by the table lookup
technique. Harel [1985] presents a method to restrict the set of such outside dominator
candidates. Alstrup et al. [1997] demonstrate deciencies in Harel's arguments
and correct these problems, using Fredman and Willard's Q-heaps [Fredman and
Willard 1994] to manage the microtrees.
We apply the LT algorithm just to the upper portion, D 0 , of D. We combine
our partitioning scheme with a new path compression result to show that the LT
algorithm runs in linear time on D 0 . Instead of processing links and evals on internal
microtree vertices, we determine, using any simple dominators algorithm, whether
the dominators of such vertices are internal to their microtrees, and if so we compute
them directly, using memoization to eliminate redundant computation. We process
those vertices with dominators outside their microtrees without performing evals on
internal microtree vertices. Our approach obviates the need to determine outside
dominator candidates for internal microtree vertices, eliminating the additional
complexity Alstrup et al. require to manage this information.
We can thus summarize the key dierences in the various approaches as follows.
Harel [1985] and Alstrup et al. [1997] partition D into microtrees and apply the
standard LT algorithm to all of D, using precomputed tables to speed the computation
of the link-eval data structure in the microtrees. We also partition D
into microtrees, but we apply the LT algorithm, with the link-eval data structure
unchanged, only to one, big region of D and use memoization to speed the computation
of dominators in the microtrees. In other words, Harel [1985] and Alstrup
et al. [1997] take a purely data structures approach, leaving the LT algorithm unchanged
but employing sophisticated new data structures to improve its running
time. We modify the LT algorithm so that, although it becomes slightly more
complicated, simple and standard data structures su-ce to implement it.
A minor dierence in the two approaches regards the use of tables. Harel [1985]
and Alstrup et al. [1997] precompute the answers to all possible queries on mi-
crotrees and then use table lookup to answer the queries during the actual dominators
computation. We build the corresponding table incrementally using memo-
ization, computing only the entries actually needed by the given instance. The two
approaches have identical asymptotic time complexities, but memoization tends to
outperform a priori tabulation in practice, because the former does not compute
6 Adam L. Buchsbaum et al.
answers to queries that will never be needed.
3.2 Microtrees
Consider the following procedure that marks certain vertices in D. The parameter
g is given, and initially all vertices are unmarked.
For x in D in reverse DFS order do
y child of x S(y)
If S(x) > g then
Mark all children of x
endif
done
Mark root(D)
For any vertex v, let nma(v) be the nearest marked ancestor (not necessarily
proper) of v. The nma function partitions the vertices of D into microtrees as
follows. Let v be a marked vertex; T vg is the microtree
containing all vertices x such that v is the nearest marked ancestor of x. We say
is the root of microtree T v . For any vertex x, micro(x) is the microtree
containing x. See Figure 1.
For any v, if v has more than g descendents, all children of v are marked. There-
fore, each microtree has size at most g. We call a microtree nontrivial if it contains
a leaf of D. Only nontrivial microtrees can contain more than one vertex; these
are the subtrees we process using memoization. The remaining microtrees, which
we call trivial, are each composed of singleton, internal vertices of D; these vertices
comprise the upper subtree, D 0 , of D. Additionally, all the children of a vertex that
forms a trivial microtree are themselves microtree roots. Call a vertex v that forms
a trivial microtree special if each child of v is the root of a nontrivial microtree.
(In
Figure
1(b), c and e are special vertices.) If we were to remove the nontrivial
microtrees from D, these special vertices would be the leaves of the resulting tree.
Since each special vertex has more than g descendents, and the descendents of any
two special vertices form disjoint sets, there are O(n=g) special vertices.
We note that Alstrup et al. [1997] dene microtrees only where they include
leaves of D (our nontrivial microtrees), whereas our denition makes every vertex
a member of some microtree. We could adopt the Alstrup et al. [1997] denition,
but dening a microtree for each vertex allows more uniformity in our discussion,
particularly in the statements and proofs of our lemmas and theorems.
Gabow and Tarjan [1985] pioneered the use of microtrees to produce a linear-time
disjoint set union algorithm for the special case when the unions are known in
advance. In that work, microtrees are combined into microsets, and precomputed
tables are generated for the microsets. Dixon and Tarjan [1997] introduce the idea
of processing microtrees only at the bottom of a tree.
3.3 Path Denitions
v) be a path in G. We dene P to be
an external dominator path (abbreviated xdom path) if P is an sdom path and
dominator path is simply a semidominator
A New, Simpler Linear-Time Dominators Algorithm 7
path that resides wholly outside the microtree of the target vertex (until it hits the
target vertex). The external dominator of vertex v is
there is an xdom path from u to vgg :
In particular, for any vertex v that forms a singleton microtree,
We dene P to be a pushed external dominator path (abbreviated pxdom path)
nontrivial microtrees occur only at the
bottom of D, a pxdom path to v cannot exit and reenter micro(v): to do so would
require traversing a back arc to a proper ancestor of root(micro(v)). Therefore, a
pxdom path to v is (a) an xdom path to some vertex x 2 micro(v) catenated with
(b) an x-to-v path inside micro(v). Either (a) or (b) may be the null path. The
pushed external dominator of vertex v is
there is a pxdom path from u to vg:
Note that pxdom(v) 62 micro(v): since the arc (p D (root(micro(v))); root(micro(v)))
catenated with the tree path root(micro(v))
pxdom path to v, we have
that pxdom(v) pD (root(micro(v))).
For example, consider vertices l and h in DFS tree D in Figure 1(b). The DFS
number of l is 4. The path is an sdom path from r to l, and so
is not an xdom path. Path (c; j; l) is an xdom
path from c to l, and no xdom path exists from r to l, so 2. P is a
pxdom path, however: (r; b; e; n) is an xdom path from r to n 2 micro(l), and
(n; l) is a path internal to micro(l). Thus, semi(l). Continuing, the
DFS number of h is 12. The only sdom path to h is (g; h), and so
Path (b; d; f; g; h) is a pxdom path to h, however, so In
general, for any vertex, its semi, xdom, and pxdom values need not match.
We use the following lemmas. Note the similarity of Lemma 3.2 to Lemma 2.2.
Lemma 3.1. For any vertex v that forms a singleton microtree,
semi(v).
Proof. Let v) be a
pxdom path from u to v. If v forms a singleton microtree, then
and so, by denition of pxdom, x i v for 1 i < k. Without loss of generality,
however, since u is the minimum vertex from which there is a pxdom path to v,
we can assume that x i 6= v for 1 i < k. Therefore P is a semidominator path,
so semi(v) u. Any semidominator path, however, is a pxdom path, so in fact
Lemma 3.2. idom(v) 62 micro(v) =) idom(v)
pxdom(v).
Proof. Let As observed above, u 62 micro(v). By denition
of pxdom, there is a path from u to v that avoids all vertices (other than u) on
the tree path u
Therefore, if idom(v) 62 micro(v), idom(v)
cannot lie on that tree path.
In the next section, we give the details of our algorithm.
4. DETAILS OF OUR LINEAR-TIME ALGORITHM
At a high level, we can abstract our algorithm as follows:
8 Adam L. Buchsbaum et al.
a
d
f
(a)
a
d
f
(b)
Fig. 2. (a) The microtree T , consisting of vertices a, d, and f from Figure 1(b), as well as incident
arcs external to T . (b) The induced graph aug(T ).
(1) Using memoization to reduce running time, determine for each vertex v if
so, the actual value idom(v).
(2) Use the LT algorithm to compute idoms for all v such that idom(v) 62 micro(v).
The remainder of this section provides the details behind our approach. For clarity,
we describe as separate phases the resolution of the idom(v) 2 micro(v) question,
the computation of pxdom(v), and the overall algorithm to compute idom(v). We
discuss in Section 7 how to unite these phases into one traversal of D.
4.1 Computing Internal Dominators
We begin by showing how to determine whether idom(v) 2 micro(v) and, if it is, how
to nd the actual value idom(v). For vertex v that comprises a singleton microtree,
our decision is trivial: idom(v) 62 micro(v). For a nonsingleton microtree T , we
dene the following augmented graph. Let G(T ) be the subgraph of G induced by
vertices of T . Let aug(T ) be the graph G(T ) plus the following:
(1) A vertex t, which we call the root of aug(T ), or root(aug(T )).
(2) An arc (t; v) for each v 2 T such that there exists an arc (u; v) 2 A for some
We call these blue arcs.
Note that there is a blue arc (t; root(T )). Vertex t represents the contraction of
ignoring all arcs that exit T . See Figure 2. We use the augmented graphs
to capture the intuition that removing arcs that exit a microtree 2 does not change
the dominator relationship.
We dene the internal immediate dominator (iidom) of vertex x, iidom(x), to
be the immediate dominator of x in aug(micro(x)). We show that if iidom(x) 2
conversely, that if
Computing
iidoms using memoization on aug(micro(v)) thus yields a fast procedure to deter-
exits micro(u) if v 62 micro(u).
A New, Simpler Linear-Time Dominators Algorithm 9
y
x
(a)
y
z
x
(b)
Fig. 3. aug(micro(x)), plus incident external arcs/paths from G. Solid lines are arcs; dotted lines
are paths. (a) The case t, and z < y. If z 62 micro(x), e.g.,
in the gure, then there is a path from t to x in aug(T ), using blue arc (t; v), that avoids y. If
in the gure, then there is a path internal to micro(T ) from z to x that
avoids y. Either case contradicts the assumption that y = iidom(x). (b) Similar case, but y < z.
There is a path P in aug(T ) from y to x, avoiding z. P contains no blue arcs, so it is a path in
G, contradicting that z = idom(x).
mine whether or not idom(v) 2 micro(v) for any v. We give the details of the
memoization procedure below.
Lemma 4.1. iidom(x) 6= root(aug(micro(x))) =)
Proof. Let
idom(x) such that y 6= t and y 6= z. If z < y, then in the full graph G, there
exists a path P from z to x that avoids y. We use P to demonstrate a path P 0 in
aug(T ) from some z 0 2 ft; zg to x that avoids y, contradicting the assumption that
v) be the last arc on P such that u 62 aug(T ). If there is
no such arc then P yields an immediate contradiction. Otherwise, arc (u; v)
induces blue arc (t; v) 2 aug(T ). This arc together with a subpath of P from v to
x provides path P 0 . See Figure 3(a).
On the other hand, if y < z, then there is a path P in aug(T ) from y to x that
avoids z. By hypothesis, y 6= t, so P contains no blue arcs. (There are no arcs into
t, and so t 62 P .) Therefore, P is also a path in G, contradicting that z = idom(x).
Figure
3(b).
Lemma 4.2.
Proof. Let
t. Then there is a path P in aug(T ) from t to x that avoids
z. If P contains no blue arcs, then it is a path in the original graph, contradicting
the claim that z = idom(x). If P contains blue arc (t; v) for some v, then in G
there is an arc (u; v) for some u 62 T . The tree path root(G)
catenated with
the arc (u; v) and the subpath in P from v to x gives a path in G to x that avoids
a
z
x
Fig. 4. aug(micro(x)), plus incident external arcs/paths from G. Solid lines are arcs; dotted lines
are paths. Case in which t. There is a path P in aug(T )
from t to x, avoiding z. If P contains no blue arcs (follows the path from a to b around z), this
contradicts that z = idom(x), for P exists in G. If P contains blue arc (t; v), then there is a path
in G to x, using the arc (u; v), where u 62 T . Again, P avoids z, contradicting
z, again giving a contradiction. See Figure 4.
We memoize the computation of iidom(v) as follows. The rst time we compute
the internal immediate dominators for some augmented graph aug(T ), we store the
results in a table, I , indexed by graph aug(T ) and vertex v. We encode aug(T ) by
a bit string corresponding to its adjacency matrix represented in row-major order.
To compute this bit string, we traverse aug(T ) in DFS order, assigning DFS value
one to the root of aug(T ) and using the DFS values as vertex identiers; we refer
to this as the canonical encoding of aug(T ).
If a subsequent microtree T 0 has an augmented graph that is isomorphic to
encodings will be identical, so we can simply look up the
iidom values for aug(T 0 ) in table I . This obviates having to recompute the iidoms
for aug(T 0 simply map the iidom values stored in table I , which are relative to
the canonical encoding of aug(T 0 ), to the current instantiation of aug(T 0 ). (Vertex
x in aug(T 0 ) corresponds to vertex x root(aug(T in the canonical encoding
of aug(T 0 ).)
4.2 Computing Pushed External Dominators
We now prove that the following procedure labels vertices with their pxdoms. As
we will show, this process allows us to avoid performing links and evals within
nontrivial microtrees.
Initially, D, and we use a link-eval data structure with
label(v) as the value for vertex v. As we will see, by Theorem 4.4,
pxdom(v) when v becomes linked. The link-eval values are thus pxdoms.
micro(v)g, the external neighbors of v, be the
vertices outside micro(v) with arcs to v. The procedure processes the microtrees T
A New, Simpler Linear-Time Dominators Algorithm 11
in reverse DFS order.
(1) For
6
(c) label(v) min(fvg
Lemma 4.3 proves that this labels v with xdom(v).
(2) For (v) be the set of all vertices in T from which there is a path to
consisting only of arcs in G(T ). Set label(v) min y2Y (v) flabel(y)g. We call
this pushing to v. (Pushing can be done by computing the strongly connected
components of G(T ) and processing them in topological order.) Theorem 4.4
proves that pushing labels v with pxdom(v).
If T is a trivial microtree, then link(v).
Due to the pushing in Step (2), pxdom values are nonincreasing along paths from
the microtree root. This allows us to perform evals only on parents of microtree
roots: the pxdom pushing eectively substitutes for the evals on vertices inside the
microtrees.
To prove that the above procedure correctly labels vertices in a microtree T , we
assume by induction that the procedure has already labeled by their pxdoms all
vertices in all trees preceding T in reverse DFS order. The base case is vacuously
true.
Lemma 4.3. After Step (1),
Proof. Let We show that (1) label(x) w and (2) label(x) w.
(1) Consider the xdom path P from w to x. Let y 62 micro(x) be the last vertex
on P before x. Let z be the least vertex excluding w that P touches on the
tree path w
else P is not an xdom path. The prex P 0
of P from w to z is a semidominator path. Otherwise, there exists some u 6= w
on P 0 such that u < z; by Lemma 2.1, P 0 contains a common ancestor of u and
z, contradicting the assertion that z is the least vertex in P on the tree path
y. Therefore, pxdom(z) semi(z) w. By induction, label(z) w. If
z 2 micro(y), then label(z) got pushed to y, and thus label(y) w. (Note that
in Step (1).) If z 62 micro(y), then C in Step (1) contains some
value no greater than label(z), due to the previous links via Step (3). In either
case the label considered for x via the (y; x) arc is no greater than label(z) w.
Figure
5.
(2) Consider arc (y; x) such that y 62 micro(x). Let
so there is a pxdom path P from w 0 to y. P catenated with
the arc (y; x) is an xdom path. Similarly, for
there is a pxdom path P from w label(z) to z. P catenated with the tree
path z
y and arc (y; x) forms an xdom path from w 0 to x. In either case,
Figure
5 again demonstrates the potential paths.
Theorem 4.4. After Step (2),
12 Adam L. Buchsbaum et al.
y
x
Fig. 5. Microtrees containing x and y, with incident external paths. Solid lines are arcs; dotted
lines are paths. There is an sdom path from w to some z > nca(y; x); thus
in the gure), then label(y) w. If z 62 micro(y)
in the gure), then label(eval(root(micro(y)))) w.
Proof. We argue analogously to the proof of Lemma 4.3. Let
we show that (1) w is considered as a label for x via an internal pushing path and
(2) for any w 0 so considered, there is a valid pxdom path from w 0 to x.
(1) Consider the pxdom path P from w to x. Let v be the rst vertex on P inside
During Step (2),
w is pushed to x via the path from v to x.
(2) Consider any w 0 pushed to x. w 0 is an xdom or pxdom for some vertex y 2 T .
there is a valid pxdom path from w 0 to x.
4.3 Computing Dominators
Using the information we computed in Sections 4.1 and 4.2, we now give an algorithm
to compute immediate dominators. The algorithm proceeds like the LT
algorithm; in fact, on the subtree of D induced by the trivial microtrees, it is exactly
the LT algorithm. The algorithm relies on the following two lemmas:
Lemma 4.5. For any v, there exists a w 2 micro(v) such that
(3
(4
Proof. The proof proceeds as follows. We rst nd an appropriate vertex w
on the tree path root(micro(v))
v. We show that
A New, Simpler Linear-Time Dominators Algorithm 13
x
(a)
x
(b)
Fig. 6. The graph induced by micro(v), plus incident external arcs/paths from G. Solid lines are
arcs; dotted lines are paths. be the pxdom path from x to v. w is the least
vertex in P on the path from root(micro(v)) to v. (a) The prex P 0 of P from x to w includes
only vertices that are greater than v (except w). (b) P 0 includes descendents of w that are less
than v and so must take a back arc to w. In either case, P 0 is an sdom path from x to w, since
w is the least vertex in P on the path from root(micro(v)) to v.
argue that This resolves postulates (1){(3). Finally, we
prove that idom(w) 62 micro(x), which implies postulate (4).
consider the pxdom path P from x to v. Let w be
the least vertex in P on the tree path root(micro(v))
v. We argue that the
prex P 0 of P from x to w is a semidominator path. If not, then there is some
vertex y 6= x on P 0 such that y < w. Since w v, it must be that y 2 micro(v);
otherwise, y violates the pxdom path denition, since we only allow a y < v on P
if y 2 micro(v). By Lemma 2.1, the subpath of P 0 from y to w contains a common
ancestor z of y and w. Since y < w, it must be that z < w. As with y, it must also
be that z 2 micro(v), or else z violates the pxdom path denition. This implies
that z is on the tree path root(micro(v))
v, contradicting the assertion that w is
the least such vertex on P . Therefore semi(w) x. See Figure 6.
Now we argue that semi(w) x. If not, there is a semidominator path P from
some y < x to w. P catenated with the tree path from w to v, however, forms a
pxdom path from y to v, contradicting the assumption that
Similarly, we argue that any semidominator path is also
a pxdom path, pxdom(w) x. If there is a pxdom path P from some y < x
to however, P catenated with the tree path from w to v is a pxdom path
that contradicts the assumption that x. Thus we have shown that
By denition of pxdom, pxdom(w) < root(micro(w)). Therefore,
semi(w) implies that semi(w) 62 micro(w). By Lemma 2.2, therefore, idom(w) 62
micro(w), and thus by Lemma 4.1,
Lemma 4.6. Let w; v be vertices in a microtree T such that
14 Adam L. Buchsbaum et al.
y
Fig. 7. The graph induced by micro(v), plus incident external arcs/paths from G. Dotted lines
are paths. If idom(v) < idom(w), then there is an sdom path from some y < idom(w) to some
x > idom(w) such that there is a tree path from x to v. If x lies on the tree path from idom(w)
to w in the gure), however, this contradicts the denition of idom(w), and if x lies on
the tree path from w to v in the gure), this contradicts that
(3
Proof. Condition (3) and Lemma 4.2 imply that idom(v); idom(w) 62 T . In
particular, idom(v) < w, so Lemma 2.3 implies that idom(v) idom(w). If
idom(v) < idom(w), then there is a path P from idom(v) to v that avoids idom(w).
must contain a semidominator subpath P 0 from some y < idom(w) to some
x > idom(w) such that x
v. x cannot lie on tree path idom(w)
would contradict the denition of idom(w). x cannot lie on tree path w
v, for this
would imply pxdom(v) y < pxdom(w). (By Lemma 3.2, idom(w) pxdom(w).)
So no such P 0 can exist. See Figure 7.
Lemmas 4.5 and 4.6 imply the following, which is formalized in the proof of Theorem
4.7. Consider a path in a microtree, from root to leaf. The vertices on the
path are partitioned by pxdom, with pxdom values monotonically nonincreasing.
Each vertex w at the top of a partition is such that
thermore, idom(w) 62 micro(w). For another vertex v in the same partition as w,
either idom(v) is actually in the partition, or else outside the
microtree. See Figure 8. That implies that our algorithm
devolves into the LT algorithm on the upper subtree, D 0 , of D consisting of trivial
microtrees.
We can now compute immediate dominators by Algorithm IDOM, given in Figure
9. For each v 2 D, IDOM either computes idom(v) or determines a proper ancestor
A New, Simpler Linear-Time Dominators Algorithm 15
x
y
z
a
c
Fig. 8. A microtree with incident external paths. Dotted lines are paths. y, and
All vertices on the tree path from w to p(v) have pxdom y; the path from x
to b, a prex of which is an xdom path, does not aect the pxdom values on the w-p(v) part
of the partition. The vertices in the partition need not share idoms, however. In this picture,
y, but
u of v such that description of the straightforward
postprocessing phase that resolves the latter identities. IDOM uses a second link-eval
data structure, with pxdom(v) as the value for vertex v; at the beginning of IDOM,
no links have been done.
Theorem 4.7. Algorithm IDOM correctly assigns immediate dominators.
Proof. Lemma 4.1 shows that assigning idom(v) to be iidom(v) if iidom(v) 2
micro(v) is correct. Assume then that iidom(v) 62 micro(v), and thus idom(v) 62
micro(v) by Lemma 4.2.
Consider the processing of vertex v in bucket(u). Assume rst that
be the child of u on tree path u
v. We claim that z is
the vertex on tree path u 0
v with minimum semi and that
Assuming that this claim is true, if
Observe that for any w 2 micro(v) such that w
pxdom(w) semi(w). Thus, if
v, and the claim holds. On the other hand, if
then is the vertex on the tree path
pD (root(micro(v))) of minimum pxdom. The claim holds, since (1) pxdom(u 0 )
Consider the remaining case, when pxdom(v) 6= semi(v). Lemma 4.5 shows that
there exists a w 2 micro(v) such that w
root(aug(micro(v))), and so
pxdom(w) and and w are both placed in the same bucket
Algorithm IDOM
For in reverse DFS order do
Process(v)
done
For such that fug is a trivial microtree, in reverse DFS order do
link(u)
done
Process(v)
If iidom(v) 2 micro(v) then
else
add v to bucket(pxdom(v))
endif
For do
If
z v
else
z eval(p D (root(micro(v))))
endif
idom(v) u
else
endif
done
Fig. 9. Algorithm IDOM.
by IDOM. Therefore, IDOM does compute the same value for idom(v) as for idom(w),
and by the previous argument, it computes the correct value for idom(w).
5. ANALYSIS
Here we analyze the running time of our algorithm. It should be clear that the
generation of the initial DFS tree D and the division of D into microtrees can be
performed in linear time, by the discussion in Section 3.2.
5.1 Computation of iidoms
Recall the memoized computation of iidoms described in Section 4.1. So that all
the iidom computations run in linear time overall, the augmented graphs must be
small enough so that (1) a unique description of each possible graph aug(T ) can be
computed in O(jaug(T )j) time and (2) all the immediate dominators for all possible
augmented graphs are computable in linear time. (After computing immediate
dominators for an augmented graph, future table lookups take constant time each.)
We require a description of aug(T ) to t in one computer word, which we assume
holds log n bits. Recall each microtree has no more than g vertices, for some
parameter g. Thus, each augmented graph has no more than g+1 vertices. Without
aecting the time bounds (we can use g 1 in place of g), we can assume that any
aug(T ) has no more than g vertices. Therefore, aug(T ) has no more than g 2 arcs
A New, Simpler Linear-Time Dominators Algorithm 17
and can be uniquely described by a string of at most g 2 bits. To t in one computer
word,
We can traverse aug(T ) and compute its bitstring identier in O(jaug(T )j) time,
assuming that we can (1) initialize a computer word to 0, and (2) set a bit in
a computer word, both in O(1) time. This further assumes that vertices in T are
numbered from 1 to jT j, where jT j is the number of vertices in T . As part of the DFS
of G, we can assign secondary DFS numbers to each v, relative to root(micro(v)),
satisfying this labeling constraint. The total time to generate bitstring identiers
is thus
O
microtree T
Since each vertex (respectively, arc) in G can be attributed to one vertex (respec-
tively, arc) in exactly one augmented graph, and there is one extra root vertex for
each augmented graph, Expression
When rst encountering a particular aug(T ), we can use any naive dominators
algorithm to compute its immediate dominators in poly(g) time. Then we can store
the values for iidom(v), for each v 2 aug(T ), in table I in time O(jaug(T )j). In the
worst case, we would have to memoize all the iidom values for all possible distinct
graphs on g or fewer vertices. There are about 2 g 2
such graphs, so the total time
is O(2
poly(g)), inducing the constraint
poly(g) n:
A simple analysis shows that if using memoization, we can
compute all needed iidom values in O(n +m) time.
5.2 Computation of pxdoms
Step (1) in the computation, the initial labeling of a vertex v, processes each vertex
and arc in G once throughout the labelings of all vertices v. Additionally, Step (1)
performs at most one eval operation, on a trivial microtree root, per arc in G.
Step (2) can be implemented by computing the strongly connected components
(SCCs) of the subgraph of G induced by the microtree T , initially assigning each
vertex in each SCC the minimum label among all the vertices in the SCC, and then
pushing the labels through the SCCs in topological order. Computing SCCs can be
done in linear time [Tarjan 1972], as can the topological processing of the SCCs.
Step (3) links root(T ) once for each trivial microtree T .
Thus, the time to compute the pxdoms, summed over all the microtrees, is
n) plus the time to perform at most n 1 link and m eval operations. We
analyze the link-eval time in Section 6.
5.3 Computation of idoms
We implement the bucket associated with each vertex by a linked list. For each
takes constant time to look up iidom(v) and either assign idom(v)
or place v into bucket(pxdom(v)).
To process a vertex v in bucket(pxdom(v)) requires constant time plus the time
to perform eval on pD (root(micro(v))). Each vertex appears in at most one bucket,
so processing the buckets takes time O(n) plus the time to do at most n evals on
trivial microtree roots. (Since pxdom(v) 62 micro(v), only trivial microtree roots
have buckets.)
Again, we perform link(v) only on trivial microtree roots, so the total time taken
by IDOM is O(m n) plus the link-eval time.
5.4
Summary
By the above analysis, the total time required to compute immediate dominators in
a
owgraph G with n vertices and m arcs is O(m+n) plus the time to perform the
links and evals on D. We next prove that since we do links and evals only on trivial
microtree roots, the total link-eval time is O(m n) for an appropriate choice of
the parameter g.
6. DISJOINT SET UNION WITH BOTTOM-UP LINKING
Recall that link and eval are based on disjoint set union, yielding the (m; n) term
in the LT time bound. Here we show that restricting the tree to which we apply
links and evals to have few leaves results in the corresponding set union operations
requiring only linear time.
Let U be a set of n vertices, initially partitioned into singleton sets. The sets are
subject to the standard disjoint set union operations.
and C are the names of sets; the operation unites sets A
and B and names the result C.
nd(u). Returns the name of the set containing u.
It is well known [Tarjan and van Leeuwen 1984] that n 1 unions intermixed with
m nds can be performed in O(m(m;n) + n) time. The sets are represented by
trees in a forest. A union operation links the root of one tree to the root of another.
Operation nd(u) traces the path from u to the root of the tree containing u. By
linking the smaller tree as a child of the root of the larger tree during a union and
compressing the path from u to the root of the tree containing u during nd(u),
the above time bound is achieved.
We show that given su-cient restrictions on the order of the unions, we can
improve the above time bound. We know of no previous result based on this type
of restriction. Previously, Gabow and Tarjan [1985] used a priori knowledge of the
unordered set of unions to implement the union and nd operations in O(m
time. We do not require advance knowledge of the unions themselves, only that
their order be constrained. Other results on improved bounds for path compression
[Buchsbaum et al. 1995; Loebl and Nesetril 1997; Lucas 1990] generally restrict the
order in which nds, not unions, are performed.
Of the n vertices, designate l to be special and the remainder n l to be ordinary.
The following theorem shows that by requiring the unions to \favor" a small set of
vertices, the time bound becomes linear.
Theorem 6.1. Consider n vertices such that l are special and the remaining
n l are ordinary. Let be a sequence of n 1 unions and m nds such that each
A New, Simpler Linear-Time Dominators Algorithm 19
union involves at least one set that contains at least one special vertex. Then the
operations can be performed in O(m(m; l)
Proof. The restriction on the unions ensures that at all times while the sequence
is being processed, each set either contains at least one special vertex or is
a singleton set containing an ordinary vertex. This observation can be proved by
an induction on the number of unions.
The following algorithm can be used to maintain the sets. A standard union-nd
data structure is created containing all the special vertices as singleton sets. Recall
that such a data structure consists of a forest of rooted trees built on the vertices,
one tree per set. The root of a tree contains the name of the set. There is also
an array, indexed by name, that maps a set name to the root of the corresponding
tree. We will call this smaller data structure U 0 and denote unions and nds on it
by union 0 and nd 0 .
The ordinary vertices are kept separate. Each ordinary vertex contains a pointer
that is initially null. The operations are performed as follows.
If each of x and y names a set that contains at least one special
vertex, perform union 0 (x; Suppose one of x and y, say y, is a singleton set
containing an ordinary vertex. Set the pointer of the ordinary vertex to point to
the root of set x. Relabel that root z.
nd(x). If x is a special vertex, execute nd 0 (x). If x is ordinary and has a null
pointer, return x. (It is in a singleton set.) If x is ordinary with a nonnull pointer
to special vertex y, return nd 0 (y).
The intuition is simple: unless an ordinary vertex x forms a singleton set, it can
be equated to a special vertex y such that
Each operation involves O(1) steps plus, possibly, an operation on a union-nd
data structure U 0 containing l vertices. Let k be the total number of operations done
on U 0 . Then the total running time is O(k(k; l)+m+n), which is O(m(m; l)+n)
It is convenient to implement the above algorithm completely within the frame-work
of a single standard union-nd forest data structure, using path compression
and union by size, as follows. Initially all special vertices are given weight one, and
all ordinary vertices are given weight zero. Recall that the size of a vertex is the
sum of the weights of its descendents, including itself.
To see that this implementation is essentially equivalent to that described in
Theorem 6.1, observe the following points. First, by induction on the number of
operations, an ordinary vertex is always a leaf in the union-nd forest. The union-
by-size rule ensures that whenever a singleton ordinary set is united with a set
containing special elements, the ordinary vertex is made a child of the root of the
other set. The standard nd operation is done by following parent pointers to the
root and then resetting all vertices on the path to point to the root. Hence any leaf
vertex, and in particular any ordinary vertex, remains a leaf in the forest.
Each ordinary vertex is thus either a singleton root or contains a pointer to a
special vertex, as in the proof of Theorem 6.1. Furthermore, since the ordinary
vertices have weight zero they do not aect the size decisions made when uniting
sets containing special vertices. A nd on an ordinary vertex is equivalent to a
20 Adam L. Buchsbaum et al.
nd on its parent, which is a special vertex, just as in the proof of Theorem 6.1.
The only dierence is that the pointer in the ordinary vertex is possibly changed
to point to a dierent special vertex, the root. This only adds O(1) to the running
time.
6.1 Bottom-Up linking
Let a sequence of unions on U be described by a rooted, undirected union tree, T ,
each vertex of which corresponds to an element of U . The edges in T are labeled
zero or one; initially, they are all labeled zero. Vertices connected by a path in T of
edges labeled one are in the same set. Labeling an edge fv; p(v)g one corresponds
to uniting the sets containing v and p(v). The union sequence has the bottom-up
linking property if no edge fv; p(v)g is labeled one until all edges in the subtree
rooted at v are labeled one.
Corollary 6.2. Let T be a union tree with l leaves and the bottom-up linking
property. Then n 1 unions and m nds can be performed in O(m(m; l)
time.
Proof. Let the leaves of T be classed as special and all internal vertices classed
as ordinary. When the union indicated by edge fx; p(x)g occurs, all descendants of
x, and in particular at least one leaf, are in the same set as x. Therefore the union
sequence has the property in the hypothesis of Theorem 6.1.
Alstrup et al. [1997] prove a variant of Corollary 6.2, with the m(m; l) term
replaced by (l log l +m), which su-ces for their purposes. They derive the weaker
result by processing long paths of unary vertices in T outside the standard set union
data structure. We apply the standard set union data structure directly to T ; we
need only weight the leaves of T one and the internal vertices of T zero.
6.2 Application to Dominators
Recall the denition of special vertices from Section 3.2: a vertex is special if all of
its children are roots of nontrivial microtrees.
Theorem 6.3. The (n) links and (m) evals performed during the computation
of pxdoms and by the algorithm IDOM require O(n +m) time.
Proof. Consider the subtree T of D induced by the trivial microtree roots. All
the links and evals are performed on vertices of T . The special vertices of D are
precisely the leaves of T . We can view T as the union tree induced by the links.
The links are performed bottom-up, due to the reverse DFS processing order.
As shown in Section 3.2, there are O(n=g) special vertices in D and thus O(n=g)
leaves of T . We choose log 1=3 n, which su-ces to compute iidoms in linear
time. By Corollary 6.2, the link-eval time is thus O(m(m; n= log 1=3 n) n). The
theorem follows, since m n and (m;
Our algorithm is completely general: it runs in linear time for any input
owgraph
G. Corollary 6.2, however, implies that, implementing union-nd as described
above, the standard LT algorithm [Lengauer and Tarjan 1979] actually runs in
linear time for all classes of graphs in which the corresponding DFS trees D have
the following property: the number l of leaves of D is su-ciently sublinear in m so
that (m;
A New, Simpler Linear-Time Dominators Algorithm 21
7. IMPLEMENTATION
This section describes our implementation, which diers somewhat from our earlier
description of the algorithm for e-ciency reasons. The input is a
owgraph in
adjacency list format, i.e., each vertex v is associated with a list of its successors.
Figure
presents the top-level routines, which initialize the computation, perform
a depth-rst search and partition the DFS tree into microtrees, and compute
dominators. The initialization code creates and initializes the memoization tables.
The partitioning code assigns DFS numbers, initializes the vertices and stores
them in an array, vertices, in DFS order, computes the size of the subtree rooted at
each vertex, and identies microtrees using the subtree sizes. Each vertex is marked
Plain, MTRoot, or TrivMTRoot, depending on whether it is a nonroot vertex in a
microtree, the root of a nontrivial microtree, or the root of a trivial microtree.
Also, each vertex is assigned a weight to be used by the link-eval computation:
special vertices (recall that a vertex is special if all of its children are roots of
nontrivial microtrees) have weight one, and ordinary vertices have weight zero.
(See Lengauer and Tarjan [1979] for the implementation of link and eval.) Finally,
we initialize an array, pmtroot, to contain parent(v) for each v. This array will
eventually store parent(root(micro(v))) for each v. By initializing it to the vertex
parent, we only have to update it for vertices in nontrivial microtrees, which we
will do in ProcessMT below.
The code to compute dominators given the partitioned DFS tree diers from our
earlier presentation in two ways. First, we combine the processing of vertices and
buckets into a single pass, to eliminate a pass over the vertex set, as do Lengauer
and Tarjan [1979]. Second, we separate the code for processing trivial microtrees
from the code for processing nontrivial microtrees, which allows us to specialize
the algorithm to each situation, resulting in simpler and more e-cient code. These
changes, which are simple rearrangements of the code, do not alter the time complexity
of the algorithm.
Computedom calls ProcessV, to handle trivial microtrees, and ProcessMT, to
handle nontrivial microtrees. ProcessV, shown in Figure 11, computes the xdom
and pxdom of v, stores v in the appropriate bucket, links v to its parent, and then
processes the bucket of v's parent. This code exhibits both of our changes. First,
we follow the LT approach to combining the processing of vertices and buckets:
we link v to p, its parent, and then process p's bucket. Immediately following the
processing of v, only vertices from the subtree rooted at v are in p's bucket. Adding
the link from v to p completes the path from any such vertex to p, which allows us
to process the bucket. Second, we exploit that idom(v) is guaranteed to be outside
v's microtree, thereby eliminating a conditional expression.
ProcessMT (Figure 12) performs similar steps but is more complex, because it
processes an entire microtree at once. The rst step is to nd the microtree's root.
Since the vertices in a microtree have contiguous DFS numbers, we can nd the
root by searching backward from v in the vertices array for the rst vertex that is
marked as a nontrivial microtree root. Once we have the microtree root, we update
pmtroot(v) appropriately for each v in the microtree. Then we (1) compute the
xdom of each vertex in the microtree and an encoding for the augmented graph
that corresponds to the microtree, (2) compute iidoms, (3) compute pxdoms, and
22 Adam L. Buchsbaum et al.
Initialize computation
Partition(root)
status(root) TrivMTRoot
Computedom(root)
Partition(Vertex v)
Assign DFS number to v
Mark v as visited
bucket(v) NULL
link(v) NULL
label(v) dfsnum(v)
status(v) Plain
For s 2 successors(v) do
If s has not been visited then
endif
Add v to predecessors(s)
done
If size(v) > g then
If all v's children are Plain then
Mark the Plain children of v in DFS tree with MTRoot
status(v) TrivMTRoot
endif
Computedom(Vertex root)
For in reverse DFS order do
ProcessV(v)
elseif v has not been processed
ProcessMT(v)
endif
done
For in DFS order do
If samedom(v) 6= NULL then
endif
done
Fig. 10. Pseudocode for computing dominators.
A New, Simpler Linear-Time Dominators Algorithm 23
ProcessV(Vertex v)
label(v) dfsnum(v)
For do
If label(p) < label(v) then
label(v) label(p)
endif
If dfsnum(p) > dfsnum(v) then
evalnode
If label(evalnode) < label(v) then
label(v) label(evalnode)
endif
endif
done
Add v to bucket(vertices[label(v)])
For do
z
else
samedom(w) z
endif
delete w from bucket(parent(v))
done
Fig. 11. Pseudocode for processing trivial microtrees.
(4) process the bucket of the parent of the microtree root.
We compute xdoms and the microtree encoding together, because both computations
examine predecessor arcs. The microtree encoding is simple: two bits for
each pair of microtree vertices, plus one bit for each blue arc. During this computa-
tion, we also identify a special class of microtrees: a microtree is isolated if the only
target of a blue arc is the microtree root. We will use this information to speed the
computation of pxdoms.
The iidom computation uses memoization to maintain the linear time bound.
To increase its eectiveness, we remove unnecessary bits and eliminate unnecessary
information from the microtree encoding used to index the memoization tables.
First, we remove the bits for self-loops. Second, we exploit that a blue arc to
v implies that iidom(v) 62 micro(v) and that none of the information about v's
internal arcs is useful. In particular, since we know that the root of the microtree
is always the target of some blue arc, we eliminate from the encoding the bits for
arcs into the root. These changes reduce the size of the iidom encoding from g 2 +g
to bits (from 12 bits to six, for In addition to reducing the size
of the encoding, we can reduce the number of populated slots in the memoization
table, using the same observation. If there is a blue arc into a nonroot vertex, w,
we zero the remaining bits for arcs into it, because they are irrelevant. We do not
ProcessMT(Vertex v)
Find mtroot in vertices starting from v
initialize encoding
isolated true
For do
label(v) dfsnum(v)
For do
Include (p; v) in encoding
else
Include blue arc to v in encoding
If v 6= mtroot then
isolated false
endif
If label(p) < label(v) then
label(v) label(p)
endif
If dfsnum(p) > dfsnum(v) then
evalnode
If label(evalnode) < label(v) then
label(v) label(evalnode)
endif
endif
endif
done
done
iidomencoding reduced encoding
If iidommemo[iidomencoding] is not dened then
iidommemo[iidomencoding] Computeiidom(encoding)
endif
iidoms iidommemo[iidomencoding]
If (isolated) then
else
endif
For do
delete w from bucket(parent(mtroot))
done
Fig. 12. Pseudocode for processing nontrivial microtrees.
A New, Simpler Linear-Time Dominators Algorithm 25
IsolatedPush(microtree MT; int iidoms[ ])
mtroot MT [0]
for do
label(v) label(mtroot)
done
Add mtroot to bucket(label(mtroot))
Fig. 13. Pseudocode for pushing in isolated microtrees.
remove these bits, because we want a xed-length encoding. (The bits are extra
only when there is a blue arc into w.) Once we compute the reduced encoding, we
look it up in the memoization table to determine if futher computation is necessary
to determine the iidoms. We use the O(n 2 )-time bit-vector algorithm [Aho et al.
1986] augmented to exploit the blue arcs, when necessary.
The iidoms are expressed in terms of a DFS numbering of the augmented graph.
We translate the augmented graph vertex into the corresponding vertex in the
current microtree by adding its secondary DFS number to the primary DFS number
of the root of the microtree.
We have implemented two forms of Push. The rst, shown in Figure 13, is
a simplied form that can be used for isolated microtrees. The absence of blue
arcs into nonroot vertices implies that (1) the xdom of the microtree's root is
the pxdom of all vertices in the microtree and (2) the immediate dominators of
nonroot vertices will be local to the microtree (that is,
root(micro(v))). The root vertex has a nonlocal idom, so we simply add it to
the bucket of its pxdom.
The second, shown in Figure 14, handles the general case. First, we compute
strongly connected components (SCCs) using memoization. In this case, the memoization
is used only for e-ciency. As with the iidom calculation, we use a reduced
encoding for SCCs. The SCC encoding, which uses g 2 g bits, does not include
self-loops or blue arcs, since neither aects the computation. We compute SCCs
using the linear-time two-pass algorithm from Cormen et al. [1991]. Given the
SCCs (either from the memoization table or by computing them), we process them
in topological order: we nd the minimum of the xdoms of the vertices within
the SCC and the incoming pxdoms, and then assign this value to each vertex
as its pxdom. Given v's pxdom and iidom, we either assign idom(v) directly, if
or we put v into the appropriate bucket.
After pushing, we nish by processing the bucket of pmt, the parent of the mi-
crotree's root. Any vertex in the bucket must have pmt as its immediate dominator.
By denition, the pxdom of a vertex, v, in the microtree, is the minimum pxdom
along the path from pmt to v. As a result, we can skip the eval on any vertex in
pmt's bucket and assign pmt as the immediate dominator directly.
8. RESULTS
This section describes our experimental results. It would be interesting to compare
our algorithm (BKRW) with that of Alstrup et al. (AHLT) [1997], to judge relative
constant factors, but AHLT relies on the atomic heaps of Fredman and Willard.
26 Adam L. Buchsbaum et al.
mtroot MT [0]
sccencoding reduced encoding
If sccmemo[sccencoding] is not dened then
sccmemo[sccencoding] Computescc(mtroot; encoding)
endif
For in topological order do
For do
If iidoms[secdfsnum(v)] 62 MT then
Add v to bucket(label(v))
else
endif
done
done
Fig. 14. Pseudocode for pushing in the general case.
Atomic heaps, in turn, are composed of Q-heaps, which can store only log 1=4 n
elements given O(n) preprocessing time. The atomic heap construction requires
Q-heaps that store 12 log 1=5 n elements. For atomic heaps, and thus the AHLT
algorithm, to run in linear time, therefore, n must exceed 2 12 20
[Fredman and
(Alternatively, one can consider AHLT to run in linear time, but
with an impractically high additive constant term.) Alstrup et al. [1997] provide
variants of their algorithm that do not use atomic heaps, but none of these runs
in linear time. Ours is thus the only implementable linear-time algorithm, and we
therefore compare our implementation of BKRW with an implementation of the LT
algorithm derived from their paper [Lengauer and Tarjan 1979].
We performed two sets of experiments. The rst set used
owgraphs collected
from the SPEC 95 benchmark suite [SPEC 1995], using the CFG library from the
Machine SUIF compiler [Holloway and Young 1997] from Harvard. 3 (Six les from
the integer suite could not be compiled by Machine SUIF v. 1.1.2 and are omitted
from the data.) The second set used some large graphs collected from our Lab.
We performed our experiments on one processor of an eight-processor SGI Origin
2000 with 2048MB of memory. Each processing node has an R10000 processor
with 32KB data and instruction caches and a 4MB unied secondary cache. Both
implementations were compiled with the Mongoose C compiler version 7.0.
We report aggregate numbers for the SPEC test set, because it contains a large
number of
owgraphs. Table I reports the sizes of the
owgraphs, averaged by
benchmark. Table II contains average running times for LT and for BKRW with
microtree sizes of two and three. Figure 15 displays a scatter plot in which each
3 Machine SUIF is an extension of the SUIF compiler [Amarasinghe et al. 1995] from Stanford.
We used Machine SUIF version 1.1.2.
A New, Simpler Linear-Time Dominators Algorithm 27
Table
I. Graph Sizes, Averaged Over the Flowgraphs
in Each Benchmark, for the SPEC 95 Flowgraphs
Benchmark Number of Average Average
Flowgraphs Vertices Arcs
CINT95 Suite
129.compress
132.ijpeg 524 14 20
147.vortex 923 23 34
134.perl 215
CFP95 Suite
145.fpppp 37 19 26
103.su2cor 37
104.hydro2d 43 35 46
125.turb3d 24 52 71
Table
II. Running Times on the SPEC 95 Flowgraphs, Averaged Over the Flow-
graphs in Each Benchmark. The Numbers in Parentheses Measure the Dier-
ence Between the Two Algorithms, as Computed by the Following Formula:
LT 100:0. Positive Numbers Indicate That LT is Better; Negative Numbers
Indicate That BKRW is Better.
Benchmark LT BKRW
CINT95 Suite
130.li 20.01 us 33.91 us (69.49%) 36.99 us (84.90%)
129.compress 22.61 us 37.62 us (66.42%) 43.84 us (93.92%)
132.ijpeg 25.46 us 40.43 us (58.78%) 45.86 us (80.11%)
147.vortex 36.70 us 53.59 us (46.02%) 61.29 us (67.00%)
us 56.61 us (43.93%) 63.73 us (62.04%)
099.go 50.39 us 69.87 us (38.66%) 79.37 us (57.51%)
126.gcc 66.56 us 87.87 us (32.01%) 95.61 us (43.63%)
134.perl 89.54 us 112.23 us (25.34%) 121.13 us (35.28%)
CFP95 Suite
145.fpppp 32.75 us 46.63 us (42.37%) 49.33 us (50.61%)
us 53.14 us (37.93%) 59.90 us (55.47%)
107.mgrid 38.36 us 53.72 us (40.02%) 60.32 us (57.23%)
103.su2cor 46.74 us 62.06 us (32.78%) 67.28 us (43.95%)
104.hydro2d 49.99 us 66.71 us (33.45%) 72.82 us (45.68%)
146.wave5 51.71 us 68.45 us (32.38%) 74.46 us (44.00%)
125.turb3d 73.66 us 103.56 us (40.60%) 110.36 us (49.82%)
us 99.01 us (26.60%) 106.72 us (36.46%)
101.tomcatv 174.60 us 210.20 us (20.39%) 215.20 us (23.25%)
28 Adam L. Buchsbaum et al.
Number of vertices1.52.5
Number of vertices101000
Number
of
flowgraphs
Fig. 15. Relative dierences in running times of BKRW and LT (for 3). There is a
point in the top plot for each
owgraph generated from the SPEC 95 benchmarks. The bottom
plot displays the number of
owgraphs for each respective number of vertices. (Note that the
y-axis on the bottom plot represents a logarithmic scale.)
point represents the running time of BKRW (with microtrees of size two or three)
relative to LT on a single
owgraph. The plot shows that the overhead of BKRW is
larger than that of LT on small graphs, but that the dierence tails o to about 10%
quickly. For this gure, we combined the data from the integer and
oating-point
suites; separating the two, as in Table II, would yield two similar plots.
Table
III lists our large test graphs, which come from a variety of sources, along
A New, Simpler Linear-Time Dominators Algorithm 29
Table
III. Graph Sizes for the
Large Test Graphs
Graph Vertices Arcs
ATIS 4950 515080
Phone
2048 4095 7166
with their sizes. The ATIS, NAB, and PW graphs are derived from weighted
nite-state automata used in automatic speech recognition [Pereira and Riley 1997;
Pereira et al. 1994] by removing weights, labels, and multiple arcs. The Phone
graph represents telephone calling patterns. The augmented binary graphs (AB1
and AB2) were generated synthetically by building a binary tree of a given size
(shown in the table as the graph's label) and then replacing each leaf by a sub-
graph. See Figure 16. The AB1 graphs use the subgraph shown in Figure 16(b),
and the AB2 graphs use the subgraph shown in Figure 16(c). These graphs were
designed to distinguish BKRW from LT. The subgraphs will be treated as isolated
microtrees in BKRW, which means that all the nonroot vertices in a microtree will
have dominators within the microtree and that the back and cross arcs will be handled
cheaply (without evals) by BKRW. In particular, calls to eval related to these
arcs will be avoided by BKRW, and as a result, no links in the link-eval forest will
be compressed by BKRW.
We observed that, as expected, BKRW performs fewer links and evals than does
LT. Running time is a more telling metric, however, and we present the running
times for our experiments in Table IV. For the speech and Phone graphs, the
overhead of processing the microtrees, which includes initializing the memoization
tables, computing iidoms, computing microtree encodings, and pushing, outweighs
the savings on calls to link and eval. BKRW does outperform LT on the larger
augmented binary graphs. This is to be expected, since BKRW has substantially
fewer calls to eval and compresses zero links for these graphs. In addition, the
overhead of processing the microtrees is low, because they are all isolated. Note
that the improvement of BKRW over LT decreases as the graphs get larger. The
benet gained by our algorithm is small relative to the cost due to paging, which
increases as the graphs get larger.
(a)1(b)13
(c)
Fig. 16. (a) T k is a k-depth binary tree. Augmented binary graph AB1 (respectively, AB2) is
generated by replacing each leaf M i with the subgraph shown in (b) (respectively, (c)).
Table
IV. Running Times on the Large Test Graphs. The Numbers in Parentheses
Measure the Dierence Between the Two Algorithms, as Computed by the Following
LT 100:0. Positive Numbers Indicate That LT is Better; Negative
Numbers Indicate That BKRW is Better.
Graph LT BKRW
ATIS 384.50 ms 423.38 ms (10.11%) 427.25 ms (11.12%)
ms 2836.25 ms (3.55%) 2844.75 ms (3.86%)
ms 3195.98 ms (2.20%) 3189.15 ms (1.99%)
Phone 8313.62 ms 8594.75 ms (3.38%) 8616.38 ms (3.64%)
1024 2.00 ms 2.00 ms (0.00%) 2.50 ms (25.00%)
2048 5.00 ms 5.00 ms (0.00%) 5.00 ms (0.00%)
ms 10.00 ms (-9.09%) 12.00 ms (9.09%)
ms 22.00 ms (-9.74%) 23.00 ms (-5.64%)
ms 48.38 ms (-8.08%) 48.38 ms (-8.08%)
ms 117.50 ms (-7.75%) 117.88 ms (-7.46%)
2097152 20188.00 ms 19499.00 ms (-3.41%) 19498.38 ms (-3.42%)
1024 4.00 ms 3.12 ms (-21.88%) 4.00 ms (0.00%)
2048 8.00 ms 7.00 ms (-12.50%) 7.00 ms (-12.50%)
ms 15.62 ms (-2.34%) 16.00 ms (0.00%)
ms 34.25 ms (-4.86%) 32.75 ms (-9.03%)
ms 74.25 ms (-5.11%) 70.38 ms (-10.06%)
ms 182.12 ms (-4.14%) 175.38 ms (-7.70%)
2097152 51920.62 ms 51461.25 ms (-0.88%) 51029.88 ms (-1.72%)
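Reading Table IV with the relative-difference formula stated in its caption, the ATIS row, for example, works out as (423.38 - 384.50) / 384.50 x 100.0 ≈ 10.11, matching the first parenthesized value; the negative entries for the larger augmented binary graphs arise in the same way whenever the BKRW time is the smaller of the two.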
Given the overhead that BKRW pays for computing microtree encodings and
pushing and that is very small, BKRW is surprisingly competitive, even for small
flowgraphs, but these experiments suggest that LT is the algorithm of choice for
most current practical applications. LT is simpler than BKRW and performs better
on most graphs. BKRW performs better only on graphs that have a high percentage
of isolated microtrees.
9. CONCLUSION
We have presented a new linear-time dominators algorithm that is simpler than
previous such algorithms. We have implemented our algorithm, and experimental
results show that the constant factors are low.
Rather than decompose an entire graph into microtrees, as in Harel's approach
to dominators, our path-compression result allows microtree processing to be restricted
to the "bottom" of a tree traversal of the graph. We have applied this
technique [Buchsbaum et al. 1998] to simplify previous linear-time algorithms for
least common ancestors, minimum spanning tree (MST) verification, and randomized
MST construction. We also show [Buchsbaum et al. 1998] how to apply our
techniques on pointer machines [Tarjan 1979b], which allows them to be implemented
in pure functional languages.
ACKNOWLEDGMENTS
We thank Bob Tarjan, Mikkel Thorup, and Phong Vo for helpful discussions, Glenn
Holloway for his help with Machine SUIF, and James Abello for providing the phone
graph.
--R
Compilers: Principles
The Theory of Parsing
Dominators in linear time.
Manuscript available at ftp://ftp.
The SUIF compiler for scalable parallel machines.
Modern Compiler Implementation in C.
Introduction to Algorithms.
Optimal parallel veri
The program dependency graph and its uses in optimization.
A linear-time algorithm for a special case of disjoint set union
A linear time algorithm for
The flow analysis and transformation libraries of Machine SUIF.
A fast algorithm for
Linearity of strong postorder.
Object code optimization.
Postorder disjoint set union is linear.
Speech recognition by composition of weighted
In Finite-State Language Processing
Weighted rational transductions and their application to human language processing.
Optimal control dependence computation and the Roman chariots problem.
Algorithm 430: Immediate predominators in a directed graph.
Finding dominators in directed graphs.
Applications of path compression on balanced trees.
A class of algorithms which require nonlinear time to maintain disjoint sets.
--TR
Worst-case Analysis of Set Union Algorithms
Compilers: principles, techniques, and tools
A linear algorithm for finding dominators in flow graphs and related problems
The program dependence graph and its use in optimization
Introduction to algorithms
Postorder disjoint set union is linear
Efficiently computing static single assignment form and the control dependence graph
Trans-dichotomous algorithms for minimum spanning trees and shortest paths
Data-Structural Bootstrapping, Linear Path Compression, and Catenable Heap-Ordered Double-Ended Queues
Linearity and unprovability of set union problem strategies I
Optimal control dependence computation and the Roman chariots problem
Modern compiler implementation in Java
Linear-time pointer-machine algorithms for least common ancestors, MST verification, and dominators
Applications of Path Compression on Balanced Trees
A fast algorithm for finding dominators in a flowgraph
Immediate predominators in a directed graph [H]
Object code optimization
The Theory of Parsing, Translation, and Compiling
--CTR
Adam L. Buchsbaum , Haim Kaplan , Anne Rogers , Jeffery R. Westbrook, Corrigendum: a new, simpler linear-time dominators algorithm, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.3, p.383-387, May 2005
G. Ramalingam, On loops, dominators, and dominance frontier, ACM SIGPLAN Notices, v.35 n.5, p.233-241, May 2000
On loops, dominators, and dominance frontiers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.5, p.455-490, September 2002
Andrzej S. Murawski , C.-H. Luke Ong, Fast verification of MLL proof nets via IMLL, ACM Transactions on Computational Logic (TOCL), v.7 n.3, p.473-498, July 2006
Loukas Georgiadis , Robert E. Tarjan, Finding dominators revisited: extended abstract, Proceedings of the fifteenth annual ACM-SIAM symposium on Discrete algorithms, January 11-14, 2004, New Orleans, Louisiana
Loukas Georgiadis , Robert E. Tarjan, Dominator tree verification and vertex-disjoint paths, Proceedings of the sixteenth annual ACM-SIAM symposium on Discrete algorithms, January 23-25, 2005, Vancouver, British Columbia
Efficient Algorithm for Finding Double-Vertex Dominators in Circuit Graphs, Proceedings of the conference on Design, Automation and Test in Europe, p.406-411, March 07-11, 2005
Adam Buchsbaum , Yih-Farn Chen , Huale Huang , Eleftherios Koutsofios , John Mocenigo , Anne Rogers , Michael Jankowsky , Spiros Mancoridis, Visualizing and Analyzing Software Infrastructures, IEEE Software, v.18 n.5, p.62-70, September 2001
Ren Krenz , Elena Dubrova, A fast algorithm for finding common multiple-vertex dominators in circuit graphs, Proceedings of the 2005 conference on Asia South Pacific design automation, January 18-21, 2005, Shanghai, China
Elena Dubrova, Structural Testing Based on Minimum Kernels, Proceedings of the conference on Design, Automation and Test in Europe, p.1168-1173, March 07-11, 2005 | compilers;dominators;microtrees;path compression;flowgraphs |
295681 | Coordinating agent activities in knowledge discovery processes. | Knowledge discovery in databases (KDD) is an increasingly widespread activity. KDD processes may entail the use of a large number of data manipulation and analysis techniques, and new techniques are being developed on an ongoing basis. A challenge for the effective use of KDD is coordinating the use of these techniques, which may be highly specialized, conditional and contingent. Additionally, the understanding and validity of KDD results can depend critically on the processes by which they were derived. We propose to use process programming to address the coordination of agents in the use of KDD techniques. We illustrate this approach using the process language Little-JIL to program a representative bivariate regression process. With Little-JIL programs we can clearly capture the coordination of KDD activities, including control flow, pre- and post-requisites, exception handling, and resource usage. | INTRODUCTION
KDD-knowledge discovery in databases-has become a
widespread activity undertaken by an increasing number
and variety of industrial, governmental, and research
organizations. KDD is used to address diverse and often
unprecedented questions on issues ranging from marketing,
to fraud detection, to Web analysis, to command and
control. To support these diverse needs, researchers have
devised scores of techniques for data preparation,
transformation, mining, and postprocessing. Moreover,
dozens of new techniques are added each year. While the
growing collection of techniques and tools helps address
the growing set of needs, the size and rapid growth of the
collection is becoming something of a problem itself.
Many of the techniques will yield incorrect results unless
they are used correctly with other techniques. In addition,
KDD is often done by teams whose activities must be
correctly coordinated.
Thus, one of the chief challenges facing an organization that
wishes to conduct KDD is in assuring that data analysis
and processing techniques are used appropriately and
correctly and that the activities of teams assembled to do
KDD are properly controlled and coordinated. The
applicability of techniques can depend on a number of
factors, including the question to be addressed, the
characteristics of the data being studied, and the history of
processing of those data. This problem can be
compounded if the organization lacks experience with the
(possibly new) techniques, or if individual analysts on a
team differ with respect to their general level of expertise,
specialized knowledge about the data (e.g., biases and
assumptions), or familiarity with particular analysis
techniques (pitfalls and tricks). The problem can be further
exacerbated if multiple analysts must be orchestrated in a
KDD effort, or if the resources required to support the KDD
effort are scarce or subject to competitive access.
We view these problems as issues of coordination, with the
general goal being to assure that the right team member
applies the right technique to the right data at the right
time. Similar problems of coordination come up in
software development, for example, in the application of
software tools to software artifacts, the assignment of
developers to development tasks, and the organization of
tasks in the execution of software methods. We have
applied process programming to solve coordination
problems in software development [17, 18], and we believe
that process programming is also suited to representing and
supporting coordination in KDD processes. The
applicability of approaches based on software process
programming is further suggested by other similarities
between KDD processes and software processes. For
example, both sorts of problems entail the involvement of
both human and automated agents, the combination of
algorithmic and non-algorithmic techniques, the reliance on
external resources, and the need to react to contingencies
and handle exceptions. Additionally, issues of process are
important in understanding and assuring the validity of
KDD results.
In this paper we argue that a process orientation is
important for KDD and that process programming is an
appropriate technique for effecting good coordination in the
use of KDD techniques. We support this argument with
examples programmed in Little-JIL, a process language that
emphasizes coordination of activities, agents, and the use of
resources and artifacts. We believe that Little-JIL provides
a basis for orchestrating coordination that assures
correctness and consistency in the specification and
execution of KDD processes, and assures that agents will
have the ability to communicate, analyze, and generally
reason about the coordination of KDD techniques.
2. KDD PROCESSES
A process can be thought of as a multi-step plan for
completing a given task. A process specification defines a
class of process instances. Each instance conforms to the
specification, but carries out its work in ways that are
molded by the mix of agents and data that are available
when the process is executed. Instances differ from each
other in ways that include the selection of agents that
execute particular steps, the order in which steps are
executed, and the choice of which substeps are used to
complete a given step.
For example, a single KDD process specification for
bivariate regression might allow choice among multiple
methods for handling outliers (e.g., manual removal,
automatic removal, non-removal), for constructing a
regression model (simple least-squares regression, locally-
weighted regression, and three group resistant line), and for
estimating statistical significance (parametric estimates,
randomization tests). Naively assuming no interstep
constraints and only these three steps, this very simple
process can be instantiated in 18 (3 x 3 x 2) different ways - a
potentially confusing number for an unaided user.
Some of these possible configurations of process steps are
clearly more desirable and effective than others in different
situations. Thus researchers and practitioners have begun
to provide this sort of guidance. Presently this takes the
form of technical papers that specify desirable processes in
informal ways. We believe that there is considerable value
in augmenting these informal descriptions with the more
precise, complete, and formal specifications that are
achievable through process programming. Capturing and
representing processes precisely, completely, and clearly is
notoriously difficult, but our preliminary work indicates
that carefully designed process specification languages can
greatly facilitate this task.
Processes are Particularly Important to KDD
Explicit representation of processes is particularly important
in KDD. First, effective KDD requires managing
dependencies between steps. Some steps may require,
disallow, or enable other steps. For example, using most
neural network training algorithms requires a preceding step
to recode missing values. Non-parametric regression
techniques disallow any subsequent step to construct
parametric confidence intervals. Constructing a decision
tree enables a future step of pruning that tree. Explicit
representations of these dependencies can assure that they
are appropriately handled.
Second, the details of processes are essential to determining
the statistical validity of inductive inferences. One example
of this is the well-known error of testing on training data
[24]. KDD processes that do not enforce separation
between training and testing data (e.g., through simple
disjoint sets or cross-validation) will produce biased
estimates of model accuracy. The underlying cause of this
phenomenon - referred to as "multiple comparisons" in
statistics - has far more general effects. It has been
causally linked to several pathologies of data mining
algorithms, including attribute selection errors, overfitting,
and oversearching [14] and pathological growth in the size
of decision trees [15]. It has also been causally linked to
errors in evaluating several types of modeling algorithms
[8, 11, 12]. KDD systems that employ multiple analysts
distributed in time and space are particularly susceptible to
pathologies stemming from multiple comparisons [16].
Explicit representation of KDD processes supports analyses
that can determine when these pathologies can and cannot
occur. In addition, the ability to reinvoke an identical
process is a necessary prerequisite to solutions such as
randomization tests, cross-validation, and bootstrap
estimates [20]. Explicit representation of processes
provides a vehicle for assuring that reinvocations are indeed
identical.
Third, process details are vital to establishing the validity
of KDD results in more general ways. The literature of
KDD, statistics, and machine learning is filled with
discoveries of implicit assumptions underlying particular
techniques. In most cases, the only way to verify whether
these assumptions are met is to examine the process used
to apply a particular technique. Only by knowing the
process used to derive a result can potential errors be traced
back to their source. Explicit KDD process descriptions
capture these details.
Fourth, explicit representation of KDD processes can help
balance multiple performance goals. Several approaches to
a given analysis task may produce results of differing
statistical validity, comprehensibility, and ultimate utility.
In addition, those techniques may require different amounts
of computation effort and human attention. By explicitly
representing these characteristics as part of the specification
of individual steps, the process specification can be created
that meets particular objectives (e.g., "give me a fast
approximate result" or "give me a highly accurate result,
but take all night if you need it").
Combining Human Analysts and Automated
Agents
Research on KDD processes represents a return to one of the
central issues of early work in KDD: how best to combine
the goals and expertise of human users with powerful
automated data analysis tools. While this topic was
identified as a central one by early work in the field (e.g.,
[9]), it can be overlooked in our rush to develop more
sophisticated automated techniques. Recent work has
returned to this theme, including general descriptions of
KDD processes (e.g., [7]), analysis and integration of steps
[6, 26], formulation of exploratory data analysis as an AI
planning activity [21], and a nascent industry effort to
formulate standard KDD processes (CRISP-DM (see
http://www.ncr.dk/CRISP/)). More broadly, we believe
that the effective integration of the work of human and
automated agents is a problem that is at the core of a
growing number of critical problems. We believe that we
can advance work on this problem by studying it in the
more specific context of mixed-agent coordination in KDD
process specification.
One important note: our work explores how to coordinate
the activities of multiple KDD agents, be they automated or
human. Our work does not concern programming
individual automated agents for such tasks as training a
neural network or calculating a chi-square statistic. These
tasks are best done using conventional programming
languages and software engineering techniques. Our work
also does not attempt to tell human analysts how to do
their job. Human analysts have knowledge and expertise
that is essential to the KDD process. Instead, we are
exploring flexible languages that can be used to coordinate
the actions of experienced human analysts with those of
automated agents and to build processes that enable less
experienced analysts to achieve high-quality results. The
next section provides an extended example of one such
language.
3. AN EXAMPLE: BIVARIATE REGRESSION
In this section we present an example of a KDD process for
bivariate regression. Regression appears to be a relatively
simple process, but it is an appropriate example
nevertheless. First, it is a common data analysis activity,
regression tools are included in several KDD workbenches,
and it is a basic task in deployed KDD applications.
Second, the process is not actually as simple as it may
appear. It involves a combination of human and automated
agents, it may draw on a variety of analytical techniques,
the use of these techniques may be conditional and
contingent, interdependencies exist between certain
techniques, and the whole process may entail sequential,
parallel, alternative, and recursive activities. Thus,
although bivariate regression is a relatively "small" process,
it still suffers many of the coordination problems that
process programming is intended to address.
The basic bivariate regression problem can be described
simply (see Figure 1a). We have a continuously-valued
variable X (e.g., advertising spending), and we wish to
determine whether it can help us predict another
continuously-valued variable Y (e.g., net sales). To assess
this relationship between X and Y, we have a data sample
of N (x, y) tuples.
In this section, we present a process that coordinates agents
and techniques in the performance of bivariate regression.
We begin with basic linear regression, and then expand the
example to incorporate further functionality in the form of
non-linear regression and accommodation of
inhomogeneous data sets (i.e., data reflecting two or more
independent phenomena). The process is defined using the
Little-JIL process language [25], which is described with
reference to the examples.
This process should not be taken as a complete or
comprehensive specification. It contains both intentional
and unintentional simplifications. That said, we believe
that it illustrates many of the necessary features of a more
complete specification, and that the Little-JIL language
could be used to represent many of the necessary details in
a more complete specification.
3.1 Linear Regression
The most common approach to the task of bivariate
regression is linear regression. Linear regression constructs
a model of the form Y = β0 + β1 X, along with an
assessment of the statistical significance of the slope β1.
We can conclude that X and Y are dependent if we can
reject the null hypothesis that β1 is zero with high
confidence.
Least squares regression (LSR) is the most commonly used
form of linear regression. The advantages of LSR include
relatively high statistical power and computational
efficiency. However, LSR's desirable characteristics rest on
several assumptions, including homoskedasticity (the
variance of Y is independent of X) and the absence of
outliers- (x, y) tuples that lie far from all other points.
Outliers often represent errors or highly unusual conditions
that produce extreme values.
Figure 1: Simple bivariate regression and two common problems (a: bivariate regression; b: an outlier; c: inhomogeneity)
Consider the assumption about outliers in more detail.
Outliers strongly affect LSR models-a single outlier can
sharply shift an LSR model, causing it to accurately predict
neither the outlier, nor the other data points (Figure 1b).
An alternative modeling technique - three group regression (TGR)
[5] - is robust to the presence of outliers. TGR
divides the range of X into three groups with equal
numbers of points, finds the median X and Y value of each
group, and constructs a line from those three points.
Because the median is a measure of central tendency that is
resistant to outliers, TGR is much less strongly affected by
outliers than LSR.
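As an illustration only (the paper specifies the process, not the code), a minimal Java sketch of the TGR idea described above might look as follows; the class and method names are invented, and the intercept rule used here (the median of the residuals y - slope*x) is one common convention that the text above does not pin down.

```java
import java.util.Arrays;
import java.util.Comparator;

// Hypothetical sketch of three group regression (TGR) as described in the text:
// split the points into three groups by x, take the median x and y of the outer
// groups to get a slope, then pick an intercept that is also resistant to outliers.
public final class ThreeGroupRegression {

    // Returns {intercept, slope}.
    public static double[] fit(double[] x, double[] y) {
        int n = x.length;
        Integer[] order = new Integer[n];
        for (int i = 0; i < n; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(i -> x[i]));

        int third = n / 3;                       // left and right groups get n/3 points each
        double xL = median(pick(x, order, 0, third));
        double yL = median(pick(y, order, 0, third));
        double xR = median(pick(x, order, n - third, n));
        double yR = median(pick(y, order, n - third, n));

        double slope = (yR - yL) / (xR - xL);

        // Resistant intercept: median of the residuals y - slope * x (one common choice).
        double[] resid = new double[n];
        for (int i = 0; i < n; i++) resid[i] = y[i] - slope * x[i];
        double intercept = median(resid);

        return new double[] { intercept, slope };
    }

    private static double[] pick(double[] v, Integer[] order, int from, int to) {
        double[] out = new double[to - from];
        for (int i = from; i < to; i++) out[i - from] = v[order[i]];
        return out;
    }

    private static double median(double[] v) {
        double[] s = v.clone();
        Arrays.sort(s);
        int m = s.length / 2;
        return (s.length % 2 == 1) ? s[m] : (s[m - 1] + s[m]) / 2.0;
    }
}
```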
TGR addresses the problem of outliers, but the parametric
significance test of β1 used for LSR does not apply to TGR.
Instead, a computationally-intensive technique - randomization
test [1, 4] - should be used to test significance for
the slope of the line built with TGR. Incidentally, a
randomization test can also be used for LSR (although, due
to its computational cost, we chose to exclude this from
our example process).
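For illustration, a hypothetical Java sketch of such a randomization test is given below: the y values are repeatedly shuffled to destroy any X-Y association, the slope is recomputed for each shuffle, and the p-value is the fraction of shuffles whose slope is at least as extreme as the observed one. The two-sided test, the slope-function parameter, and all names are assumptions rather than details from the paper.

```java
import java.util.Random;
import java.util.function.BiFunction;

// Hypothetical sketch of a randomization (permutation) test for a regression slope.
// The slope statistic is supplied as a function, so the same test can be used with
// either the LSR or the TGR slope.
public final class SlopeRandomizationTest {

    public static double pValue(double[] x, double[] y,
                                BiFunction<double[], double[], Double> slopeOf,
                                int trials, long seed) {
        double observed = Math.abs(slopeOf.apply(x, y));
        Random rng = new Random(seed);
        double[] shuffled = y.clone();
        int atLeastAsExtreme = 0;

        for (int t = 0; t < trials; t++) {
            // Fisher-Yates shuffle of the y values: under the null hypothesis of no
            // relationship, any pairing of x and y values is equally likely.
            for (int i = shuffled.length - 1; i > 0; i--) {
                int j = rng.nextInt(i + 1);
                double tmp = shuffled[i]; shuffled[i] = shuffled[j]; shuffled[j] = tmp;
            }
            if (Math.abs(slopeOf.apply(x, shuffled)) >= observed) {
                atLeastAsExtreme++;
            }
        }
        // Common "+1" correction so the estimated p-value is never exactly zero.
        return (atLeastAsExtreme + 1.0) / (trials + 1.0);
    }
}
```

With the earlier sketch, a call such as SlopeRandomizationTest.pValue(x, y, (a, b) -> ThreeGroupRegression.fit(a, b)[1], 999, 42L) would test the TGR slope.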
How the varied activities of linear regression should be
coordinated, in light of the relevant dependencies,
conditions, alternatives, and contingencies, is precisely
what a cogent process definition should make clear. Such
process definitions require a process language that enables
coordination semantics to be expressed clearly and
concisely, that allows rigor and flexibility to be combined
as appropriate, and that supports effective process
enforcement while admitting dynamic adaptation.
3.2 Representing a Linear Regression Process
In this section we illustrate the linear regression process
using the Little-JIL process language. Little-JIL is a visual
language derived from a subset of JIL, a process language
originally developed for software development processes
[22]. Little-JIL focuses on coordination of agents in the
performance of process activities in a wide range of
processes.
Little-JIL represents the activities of a process as steps,
where each step can be decomposed into substeps.
Substeps within a step can be invoked either proactively or
reactively. A step may also have a prerequisite to guard
entry into the step, a postrequisite to guard exit from the
step, and exception handlers to handle exceptions thrown
during the step. The requisites and exception handlers in
turn are steps that may also have substeps, etc. In
addition, steps may include resource specifications.
Runtime management of resource allocation provides
another means of dynamically constraining, adapting, and
controlling process execution. Each step also has, as a
distinguished resource, an execution agent, which is
responsible for initiating and carrying out the work of the
step. Execution agents may be human or automated, and
both types may be transparently combined in a Little-JIL
process. These features and others are illustrated and
discussed below with respect to the examples.
Figure
2 shows a Little-JIL specification of a linear
regression process. Process steps in Little-JIL are
represented visually by a step name surrounded by several
graphical badges that represent aspects of step semantics.
The bar below the step represents control of substeps. The
leftmost element in the control bar is a sequencing badge
that indicates how substeps should be executed. For
example, the Linear Regression step in Figure 2 contains a
circle-with-slash badge that represents a "choice" control
construct; this indicates that Linear Regression is executed
by executing one of the alternatives Least Squares
Regression or Three Group Regression. The agent, an
analyst to whom the step is assigned, makes this choice.
Least Squares Regression and Three Group Regression, in
turn, are executed by executing a sequence of substeps, as
indicated by the arrow control badge. (Two other proactive
control badges, "try" and "parallel", are discussed with
respect to later figures.)
The rightmost element of a step control bar represents
exception handlers. Exception handlers may be simple
actions or more complex subprocesses, represented by
additional substeps.
Figure 2: Little-JIL specification for linear regression
The simple actions include completing
the step, continuing the step, restarting the step, and
rethrowing the exception. In Figure 2, the exception
handler for the Outliers exception (thrown by step
Construct Linear Model) has no substep; rather, this
handler simply traps the exception and continues the
Linear Regression step, as indicated by the arrow badge
associated with the exception handler. (A handler with a
substep is shown in Figure 4.) In the context of a choice
step, continuing after an exception means that the agent is
offered a choice of the remaining alternatives. A step may
also include reactions, which are attached as substeps to a
badge in the center of the control bar (however, reactions are
omitted here for the sake of simplicity).
In the visual representation of Little-JIL steps, a circular
badge above a step name represents the interface to the step.
The interface includes resources needed by the step, as well
as parameters sent into and out of the step, local data, and
events and exceptions that may be thrown by the step.
Execution agents are represented as a type of resource. Each
step has an execution agent; if none is specified for a step,
the execution agent is inherited from the step's parent. In
Figures
2 and 3 the agents include both humans and
automated tools. Data sets can also be modeled as
resources. Several steps in the example throw exceptions
(designated in the interface by an X). While much of the
data flow between steps is shown in a simplified form,
most of the data declarations have been omitted from the
interfaces in the figures for the sake of brevity.
A Little-JIL step may also have a prerequisite and/or a
postrequisite. A prerequisite is indicated by a downward-
pointing triangle on the left of the step name and a
postrequisite is indicated by an upward-pointing triangle on
the right. An empty triangle indicates no requisite; a filled
triangle with text indicates the name of the specified
requisite. The body of the requisite is a separately specified
step (not shown in our figures) possibly containing
multiple substeps. A requisite is successful if it terminates
normally; if it fails, it throws an exception. For example,
the step Construct LSR Model has the postrequisite No
Outliers. If outliers exist, then the postrequisite throws the
Outliers exception, which causes Construct LSR Model to
fail. The parent step Least Squares Regression propagates
the exception, which is handled by its parent Linear
Regression.
Clearly, there are many ways to add to the process specified
in
Figure
2. Additional pre- and post- requisites could be
added to the LSR and TGR steps, data preprocessing steps
could be added to improve the robustness of the process,
and other approaches to regression could be added. The
next section discusses one of the most important
elaborations to the process: how to deal with non-linearity.
3.3 Coping with Non-linearity
A common diagnostic technique for any form of linear
regression is to examine a plot of residuals. Ideally, the
residuals-the errors in Y left unexplained by a
model-should not vary with X. A non-linear relationship
between X and the residuals indicates a non-linear
relationship between X and Y, one that is not adequately
captured by the linear model. Checking for linear residuals
can be represented in Little-JIL as a postrequisite for the
Linear Regression step. What if this postrequisite fails?
One solution would be to try a non-linear modeling
technique such as locally-weighted regression or lowess [2].
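To make the idea of an automated postrequisite concrete, the hypothetical sketch below applies a simple runs test to the residuals ordered by x: unusually few sign changes suggest curvature that a linear model has missed. The two-standard-deviation threshold and all names are arbitrary choices for this sketch, not part of the process defined in the paper; in practice the check could equally be a residual plot inspected by the analyst.

```java
// Hypothetical sketch of an automated postrequisite check for non-linear residuals,
// based on a runs test: residuals are ordered by x, and far fewer sign runs than
// expected by chance suggest curvature that the linear model has not captured.
public final class LinearResidualsCheck {

    public static boolean looksLinear(double[] xSortedResiduals) {
        int n = xSortedResiduals.length;
        int pos = 0, neg = 0, runs = 1;
        for (int i = 0; i < n; i++) {
            if (xSortedResiduals[i] >= 0) pos++; else neg++;
            if (i > 0 && (xSortedResiduals[i] >= 0) != (xSortedResiduals[i - 1] >= 0)) runs++;
        }
        double expected = 2.0 * pos * neg / n + 1.0;
        double variance = (expected - 1.0) * (expected - 2.0) / (n - 1.0);
        // Returning false here would correspond to the postrequisite failing and
        // throwing an exception in the Little-JIL process.
        return runs >= expected - 2.0 * Math.sqrt(Math.max(variance, 0.0));
    }
}
```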
Figure 3: Regression with substeps for linear and non-linear regression
Figure 3 shows a process that includes both the original
linear regression step and a new step for non-linear
regression. The "try" sequencing badge on the root
regression step indicates that non-linear regression is
invoked only if linear regression fails. Given the current
specification of linear regression, the principal reason the
step might fail is the presence of non-linear residuals.
Linear regression and non-linear regression are partitioned
as separate alternatives because different processes are
required to determine if linear and non-linear models
indicate a relationship between X and Y. Linear regression
tests a relatively simple statistical hypothesis (whether the slope β1 is zero); non-linear
regression relies on a step Evaluate Relationship in
which a human analyst makes a qualitative judgement. To
assist in that judgment, a step to construct confidence
intervals has been added to non-linear regression, although
analysts should be cautious to distinguish between
confidence intervals and significance tests [1].
Note that the overall Regression process coordinates the
work of human and non-human agents who participate at
various levels in the process. As with linear regression,
many additions to the regression process are possible.
These include additional approaches to non-linear
regression, more quantitative substitutes for the evaluate
relationship step, and prerequisites for the regression step.
The next section describes one particularly important
prerequisite for regression-homogeneity.
3.4 Coping with Inhomogeneity
A frequently overlooked assumption of regression is that
the data sample is homogeneous-that it represents a single
uniform phenomenon rather than two or more phenomena
with fundamentally different behavior (Figure 1c). For
example, inhomogeneity can occur when men and women
have different physiological responses to some
phenomenon, yet data from men and women are mixed
together for purposes of analysis. In contrast to outliers,
which often represent errors that cannot be explicitly
modeled, inhomogeneity represents two or more distinct
data regimes that require independent modeling.
Figure
4 shows a Model Relationships process that handles
inhomogeneous data. The process first attempts to apply
regression testing to a given bivariate data set. However,
the regression step is guarded by a prerequisite that tests
the homogeneity of the data. This prerequisite assures that
a single regression is not performed on heterogeneous data.
If the prerequisite is violated, the exception
NonHomogeneity is thrown, which is caught by an
exception handler for Model Relationships. The recursive
process Model Subsets handles the exception. At the top
level Model Subsets is a sequence. The first substep,
Choose a Subset, chooses a data subset from the
inhomogeneous data set. The second substep is a parallel
step, Use and Choose Next. This substep, in parallel,
applies regression to the selected subset and recursively
calls Model Subsets on the remaining part of the data set.
By this recursion, Model Subsets iteratively models subsets
of the original data set, completing normally when no more
subsets are available (as indicated by the "check" badge on
the exception handler for the exception NoSubsetAvailable).
Figure 4: Handling inhomogeneous data
By combining the parallel step with recursion, multiple
data subsets may be modeled concurrently. Note that, in
this formulation of the process, a chosen data subset is not
guaranteed to be homogeneous. In that case, when the
process Regression is called on the subset, the
homogeneity prerequisite will again throw the
NonHomogeneity exception, which will take control back
to the exception handler for inhomogeneous data (i.e.,
Model Subsets). As an alternative, we could have put a test
for homogeneity as a postrequisite on the Choose a Subset
step.
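The control structure just described can be paraphrased informally in ordinary code, as in the hypothetical Java gloss below; it ignores the agenda management machinery through which the real steps execute, and all type and method names are invented.

```java
import java.util.concurrent.CompletableFuture;

// Hypothetical gloss of the Model Relationships / Model Subsets control structure:
// regression guarded by a homogeneity prerequisite, with inhomogeneous data handled
// by recursively peeling off subsets and modeling them in parallel.
public final class ModelRelationships {

    public static void modelRelationships(DataSet data) {
        try {
            regression(data);                       // prerequisite may throw
        } catch (NonHomogeneityException e) {
            modelSubsets(data);                     // exception handler
        }
    }

    static void regression(DataSet data) throws NonHomogeneityException {
        if (!data.isHomogeneous()) throw new NonHomogeneityException();
        // ... linear regression, then non-linear regression if the linear step fails ...
    }

    static void modelSubsets(DataSet data) {
        DataSet subset;
        try {
            subset = data.chooseSubset();           // may throw NoSubsetAvailable
        } catch (NoSubsetAvailableException done) {
            return;                                 // normal completion: nothing left to model
        }
        // "Use and Choose Next": model the chosen subset and recurse on the rest, in parallel.
        CompletableFuture<Void> use = CompletableFuture.runAsync(() -> modelRelationships(subset));
        modelSubsets(data.without(subset));
        use.join();
    }

    // Minimal hypothetical supporting types for the sketch.
    interface DataSet {
        boolean isHomogeneous();
        DataSet chooseSubset() throws NoSubsetAvailableException;
        DataSet without(DataSet subset);
    }
    static class NonHomogeneityException extends Exception {}
    static class NoSubsetAvailableException extends Exception {}
}
```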
4. Coordinating Agents at Process Execution Time
In the preceding sections we have shown how Little-JIL can
be used to flexibly specify a process that manages inter-step
process dependencies for multiple execution agents. In this
section, we describe how the activities of these agents are
coordinated when a process is instantiated and executed.
The vehicle for agent coordination during process execution
is an agenda management system (AMS). An agenda
management system is a software system that is based on
the metaphor of using agendas, or to-do lists, to coordinate
the activities of various human and automated agents. In
such a system, task execution assignments are made by
placing agenda items on an agenda that is monitored by
one or more execution agents. Different types of agenda
items may be used to represent different kinds of tasks that
an agent is asked to perform.
Our agenda management system [19] is composed of a
substrate that provides global access to AMS data, a set of
root object types (agendas, agenda items, etc.), application-specific
object types that extend the root types, and
application-specific agent interfaces (e.g., GUIs for human
agents).
We have designed and implemented an AMS specifically to
support the execution of Little-JIL processes. This AMS
has five types of agenda items: one item type corresponds
to each of the four Little-JIL step kinds, and one item type
corresponds to a process step at its lowest level of
decomposition. Each Little-JIL agenda item has many
attributes, including step name, execution agent, current
status, log, step instance parameters, throwable exceptions,
and interpreter. The last attribute is provided because, as
we illustrate below, the Little-JIL interpretation architecture
allows each step to have its own interpreter instance.
When a step of a process program is first instantiated, an
agenda item of the appropriate type is created and its
attribute values are set accordingly (e.g., status is set to
"Posted," input parameters are given the correct values).
As the process executes, the attribute values change
accordingly (e.g., the execution agent sets output parameter
values, status is changed). Thus, process program
execution state is stored within the AMS. This approach to
storing process state is similar to that used in the
ProcessWall [13].
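A minimal, hypothetical Java sketch of the kind of agenda item described above is shown below; the field and status names are illustrative only, and the real AMS object types carry more attributes (throwable exceptions, the interpreter reference, and so on) than this fragment shows.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Hypothetical sketch of an agenda item carrying process-execution state.
// Listeners (e.g., a step interpreter or an agent GUI) are notified whenever
// the status attribute changes.
public class AgendaItem {

    public enum Status { POSTED, STARTING, STARTED, COMPLETING, TERMINATING, RETRACTED, COMPLETED }

    public interface StatusListener { void statusChanged(AgendaItem item, Status oldStatus); }

    private final String stepName;
    private final String executionAgent;
    private final Map<String, Object> parameters;
    private final List<String> log = new ArrayList<>();
    private final List<StatusListener> listeners = new ArrayList<>();
    private Status status = Status.POSTED;

    public AgendaItem(String stepName, String executionAgent, Map<String, Object> parameters) {
        this.stepName = stepName;
        this.executionAgent = executionAgent;
        this.parameters = parameters;
    }

    public synchronized void setStatus(Status newStatus) {
        Status old = status;
        status = newStatus;
        log.add(stepName + ": " + old + " -> " + newStatus);
        for (StatusListener l : listeners) l.statusChanged(this, old);
    }

    public void addListener(StatusListener l) { listeners.add(l); }
    public Status getStatus() { return status; }
    public String getStepName() { return stepName; }
    public String getExecutionAgent() { return executionAgent; }
    public Map<String, Object> getParameters() { return parameters; }
}
```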
An agent typically monitors one or more agendas to receive
tasks to perform. Multiple agendas are used because an
agent may frequently be involved in several disjoint
processes (or acting in roles that are logically disjoint).
When an item is posted to an agenda that an agent is
monitoring, the agent is notified that the agenda has
changed. In the case of a human agent, for example, this
could result in a new item appearing in the person's agenda
view window. The agent is then responsible for
interpreting the item and performing the appropriate task.
Agents may also monitor items individually; this gives
them the ability to post an item to an agenda and to
observe the item so they can react to changes in the item's
status, for example.
These mechanics are sufficient for the Little-JIL interpreter
to instantiate and execute multi-agent Little-JIL process
programs. By examining the state of an agenda item
corresponding to a step of the process program, the
interpreter can execute the process. When a new step is to
be executed, the interpreter identifies the appropriate
execution agent (with the help of a resource management
system), creates an appropriately typed agenda item for that
step, and posts it on the agent's agenda. As the agent
executes the step, its updated status is reflected in the
agenda item's status attribute value, which is monitored by
the interpreter. As the status changes, the interpreter
accordingly creates and posts substeps, returns output
parameters on successful completion of the step, propagates
exceptions on unsuccessful completion, and so on. Thus,
an AMS provides language-independent facilities that allow
coordination to take place, while the interpreter encodes key
coordination semantics of the Little-JIL language itself.
This design decouples concerns about why and when
coordination should occur from concerns about how
coordination should occur.
For example, consider how the process program fragment in
Figure
2 would be executed, supposing an interpreter had
created an item to correspond to an instance of a Linear
Regression step (a Little-JIL choice step). Assume the
interpreter has identified a HumanAnalyst for this task
(named Herman), posted it to Herman's agenda, and started
the item's interpreter (which is stored in the interpreter
attribute of the item). At this point the human analyst
would use the GUI to change the status attribute of the
Linear Regression item to "Starting." Its interpreter
would be notified of this change and would create two new
agenda items that correspond to the Least Squares
Regression and the Three Group Regression sub-steps,
then set the parent item's status to "Started." Because new
agents are not specified for these steps, the interpreter would
post them to Herman's agenda and would start interpreters
for the new items. Herman's agenda GUI would render the
agenda to clearly depict the subitems of the choice item as
alternatives.
Suppose Herman then chooses to start the Least Squares
Regression step, changing its status to "Starting." At this
point both the Linear Regression item's interpreter and the
Least Squares Regression item's interpreter would be
notified of the change. The Linear Regression item's
interpreter would react by setting the status of the other
sub-item (Three Group Regression) to "Retracted." This
would cause the item to disappear from Herman's agenda.
Meanwhile, the Least Squares Regression item's interpreter
would create a new item for the first substep, Construct
LSR Model. Because the process specifies an LSRTool for
that step, the new item would be posted to a particular
LSRTool's agenda, then the Least Squares Regression
item's status would be set to "Started." Whatever agent
was monitoring the LSRTool's agenda would then be
notified that the tool's agenda has changed. This agent
would extract whatever information was needed by the tool
from the agenda item, set the item's status to "Starting,"
and would invoke the LSRTool agent. Because Construct
LSR Model is a leaf step, the item's interpreter immediately
changes its state to "Started." When the LSRTool finishes,
the tool's agent would set the status of the leaf step
appropriately ("Completing" if successful or "Terminating"
if not), and that step's interpreter would complete the
transition of the leaf step to a final state. The interpreter for
the Least Squares Regression item would be notified that
the step has changed, and, depending on its status, would
start the next sequential sub-step or would terminate the
parent.
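The retraction of the unchosen alternative in this walk-through is an example of the coordination rules an item's interpreter encodes. Continuing the hypothetical AgendaItem sketch from above (and therefore sharing its invented names), one way such a rule could be written is:

```java
import java.util.List;

// Hypothetical sketch of how an interpreter for a Little-JIL choice step might react
// to status changes of its sub-items: once one alternative starts, the remaining
// posted alternatives are retracted from the agent's agenda.
public class ChoiceStepInterpreter implements AgendaItem.StatusListener {

    private final AgendaItem choiceItem;         // e.g., "Linear Regression"
    private final List<AgendaItem> alternatives; // e.g., the LSR and TGR sub-items

    public ChoiceStepInterpreter(AgendaItem choiceItem, List<AgendaItem> alternatives) {
        this.choiceItem = choiceItem;
        this.alternatives = alternatives;
        for (AgendaItem alt : alternatives) alt.addListener(this);
    }

    @Override
    public void statusChanged(AgendaItem item, AgendaItem.Status oldStatus) {
        if (item.getStatus() == AgendaItem.Status.STARTING) {
            // The agent chose this alternative: retract the others and mark the parent started.
            for (AgendaItem alt : alternatives) {
                if (alt != item && alt.getStatus() == AgendaItem.Status.POSTED) {
                    alt.setStatus(AgendaItem.Status.RETRACTED);
                }
            }
            choiceItem.setStatus(AgendaItem.Status.STARTED);
        }
        // Completion or termination of the chosen alternative would similarly drive
        // the parent item toward its final state (omitted in this sketch).
    }
}
```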
As previously mentioned, Little-JIL makes no distinction
between human and tool agents. Similarly, neither does
the AMS. As seen in the above example, different agents
interact with the AMS, and consequently with the running
Little-JIL process, via customized agent interfaces. For
humans, this interface may be a GUI that is used to change
an item's status, signal exceptions, change parameters, etc.
For COTS tools (such as the LSRTool, perhaps), this
interface may be a wrapper agent that integrates the tool
with the AMS, spawning the tool to perform tasks in
response to agenda items being posted to the tool's agenda
and reporting the results of tool execution by setting agenda
item attributes (e.g., parameters, status) as required.
Our early experiences support our belief that an agenda
management system provides an appropriate metaphor for
coordinating interaction in mixed-agent processes such as
KDD. We intend to continue experimenting with the use
of Little-JIL and the AMS to facilitate coordination in such
processes.
5. LESSONS LEARNED
Our experience using Little-JIL to specify KDD processes
has been instructive. Many coordination aspects of KDD
processes (including examples not described here) have
been easily expressed using Little-JIL. For example, one
aspect well handled by Little-JIL is the highly variable
control requirements of KDD processes. Conversely, KDD
processes have drawn on the full range of Little-JIL control
constructs. In some cases, processes require extremely
strict control, and Little-JIL allows us to indicate this (e.g.,
by executing substeps in a specified order). In other cases,
only very loose control is needed, and the language allowed
us to specify this as well (e.g., by allowing user choice or
parallel execution). We believe that successful process
languages for KDD must allow flexibility to program
processes both strictly and loosely.
Little-JIL's pre- and postrequisites are essential to effective
coordination in processes. Prerequisites make explicit the
assumptions that underlie a sampling or analysis technique;
postrequisites make explicit the acceptance criteria for the
successful completion of a technique. The ability to make
assumptions and acceptance criteria explicit is important for
making a process understandable, evaluating its correctness,
assuring its consistent execution, and validating its results.
Similarly, the ability to represent exceptions and exception
handling is critical for process robustness, reliability, and
safety. In our KDD examples, exception management is
also crucial in specifying process control structures. While
many descriptions of KDD techniques use nearly ideal data,
most practitioners who attempt to apply these techniques
quickly uncover hidden assumptions, leading to exceptions
in idealized process models. The ability to indicate
possible exceptions, specify how they are to be handled,
and direct subsequent execution, is essential to
coordinating KDD efforts in real-world applications.
Resource management provides another dimension of
coordination in processes. Flexibility in agent
coordination is afforded because Little-JIL process can be
written independently of the specific execution agents to
which they will be bound at run time. Additionally, the
control model of the language, in conjunction with the
agenda management system, allows processes to be written
transparently with respect to the issue of human versus
automated agents. However, runtime allocation of agents
allows dynamic orchestration of agent activities and enables
the dynamic adaptation of process behaviors to agent
availability. Similar degrees of flexibility and
opportunities for dynamic control apply to resources in
general.
6. FUTURE WORK
Our work to date with Little-JIL has convinced us of the
general utility of process specification. However, at least
three important areas of work remain. First, additional
experience with specifying processes is needed. We intend
to increase the level of sophistication of the existing
processes and to develop processes in other application
domains. In particular, we have begun development of
processes in the areas of coordination of robots and processes
used in electronic commerce.
Second, while we believe that Little-JIL specifications are
easy to read and write compared to more algorithmic
languages, we would like the KDD process to be extended
by non-programmers. We imagine providing a more
sophisticated process editor that would assist a KDD
researcher by assisting with the insertion of appropriate
steps with the necessary prerequisites, postrequisites, data
flow and exception handling.
Finally, the Little-JIL language itself is still under
development and there are a number of issues we intend to
address. We are investigating integrating an AI planner
[10] and resource-based scheduler [23] with Little-JIL.
Such mechanisms would allow us to schedule agents and
other resources based on cost, availability for a specific time
and duration, and expected quality of their results. The
results from planning would help guide agents in their
decision making at choice and parallel steps by identifying
which substeps are most likely to satisfy the time, cost,
and quality constraints for process instances.
We are also investigating the use of static analysis
techniques [3] on Little-JIL processes. Specifically, we
wish to prove properties of Little-JIL processes such as
ordering rules (Step A always executes before Step B) and
non-local dependencies (if Step A is performed, Step B is
eventually performed).
There are also some extensions to Little-JIL itself that we
want to consider further. In particular, KDD processes
appear to need a more explicit means of representing non-local
control flow dependencies. For example, in our
example regression process, a parametric significance test is
only applicable when least squares regression is used.
Currently, this control flow dependency is captured via data
flow in Little-JIL. That is, least squares regression results
in the computation of intermediate values that are used in
the parametric test. The precondition test for the parametric
test checks whether those data exist, and, if they do not,
prevents the parametric test from being used. A more direct
means of expressing this control flow dependency would be
preferable to hiding it within the data flow, as is currently
done. A more direct means would also enable our static
analysis techniques to reason more effectively about the
behavior of the program.
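A hypothetical gloss makes the point concrete: under the current encoding, the parametric test's precondition inspects the data flow for intermediate LSR artifacts rather than consulting an explicit record that least squares regression was chosen. The artifact names below are invented for illustration.

```java
import java.util.Map;

// Hypothetical gloss: the control dependency "parametric test only after LSR" is
// hidden inside a data-flow check rather than stated directly.
public final class ParametricTestPrecondition {
    public static boolean applicable(Map<String, Object> processData) {
        return processData.containsKey("lsrResiduals")
            && processData.containsKey("lsrSlopeStandardError");
    }
}
```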
7. CONCLUSIONS
Knowledge discovery research is developing and exploiting
a diverse and expanding set of data manipulation and
analysis techniques. Not all analysts, or even all
organizations, can have a thorough knowledge of how to
correctly and effectively combine and deploy these
techniques. Process programming provides an effective
means for specifying the coordinated use of KDD techniques
by agents in potentially complex KDD processes. As
demonstrated in this paper, KDD process specifications
written in Little-JIL express requirements on individual
techniques and capture dependencies among techniques.
Little-JIL is a high-level process language designed to
support the specification of coordination in processes;
it offers appropriate control flow constructs, pre- and
post-requisites, reactions, exception handling, agent
specifications, and dynamic resource bindings. Little-JIL
enables explicit representation of KDD processes, allows
reasoning about those processes, and supports correct
execution of the processes. In turn, this enables KDD
applications to produce reliable and repeatable results,
which is necessary for the effective use of data mining across
a wide range of organizations.
ACKNOWLEDGMENTS
This work was supported in part by the Air Force Materiel
Command, Air Force Research Laboratory, and the Defense
Advanced Research Projects Agency under Contracts
F30602-94-C-0137, F30602-97-2-0032, and F30602-93-C-
0100.
--R
Empirical Methods for Artificial Intelligence.
Smoothing by local regression: principles and methods (with discussion).
Data Flow Analysis for Verifying Properties of Concurrent Programs
Randomization Tests.
Resistant lines for x versus y.
A guided tour through the data mining jungle
From data mining to knowledge discovery in databases.
Which method learns the most from data?
A Negotiation-Based Interface between a Real-Time Scheduler and a Decision-Maker
Statistical significance in inductive learning
Presenting and analyzing the results of AI experiments: Data averaging and data snooping
The ProcessWall: A Process State Server Approach to Process Programming
Multiple comparisons in induction algorithms.
Building Simple Models: A Case Study with Decision Trees.
Unique Challenges of Managing Inductive Knowledge
An Adaptable Generation Approach to Agenda Management
Computer Intensive Methods for Testing Hypotheses: An Introduction.
Complex Goal Criteria and its Application in Design-to-Criteria Scheduling
Computer Systems That Learn.
KDD process planning
--TR
Randomization tests
The ProcessWall
Statistical significance in inductive learning
Data flow analysis for verifying properties of concurrent programs
Empirical methods for artificial intelligence
The design of a next-generation process language
An adaptable generation approach to agenda management
Multiple Comparisons in Induction Algorithms
Programming Process Coordination in Little-JIL
Building Simple Models
Enhancing Design Methods to Support Real Design Processes | knowledge discovery process;process programming;agent coordination;knowledge representation;agenda management |
295692 | The design of an interactive online help desk in the Alexandria Digital Library. | In large software systems such as digital libraries, electronic commerce applications, and customer support systems, the user interface and system are often complex and difficult to navigate. It is necessary to provide users with interactive online support to help users learn how to effectively use these applications. Such online help facilities can include providing tutorials and animated demonstrations, synchronized activities between users and system supporting staff for real time instruction and guidance, multimedia communication with support staff such as chat, voice, and shared whiteboards, and tools for quick identification of user problems. In this paper, we investigate how such interactive online help support can be developed and provided in the context of a working system, the Alexandria Digital Library (ADL) for geospatially-referenced data. We developed an online help system, AlexHelp!. AlexHelp! supports collaborative sessions between the user and the librarian (support staff) that include activities such as map browsing and region selection, recorded demonstration sessions for the user, primitive tools for analyzing user sessions, and channels for voice and text based communications. The design of AlexHelp! is based on user activity logs, and the system is a light-weight software component that can be easily integrated into the ADL user interface client. A prototype of AlexHelp! is developed and integrated into the ADL client; both the ADL client and AlexHelp! are written in Java. | INTRODUCTION
Online customer service systems such as "Call Centers" or
"Customer Care Centers" have been widely used, e.g., home-
banking, telephone registration, online shopping, airline ticket
booking, and digital libraries, etc. However, most of
those systems lack sufficient capabilities of interactive communication
mechanisms necessary for providing online customers
with more sophisticated support and help. The emergence
of the World Wide Web provides some new options for
the user support and help problem because multi-media information
such as images, audio/video, and animation can
be easily presented in addition to traditional text. On the
other hand, because of this and rapid advances in technol-
ogy, the software systems for existing applications are becoming
more and more complicated and at the same time
new applications are being quickly developed. Such applications
include distance learning, computer-based training,
electronic commerce applications, and more. We believe
that a good package of online help facilities can not only
compensate for deficiencies of user interfaces, but also will
make very complex services easier to learn and use. At the
core of the help facilities lie the cooperating software systems
or modules enabling communication between the user
and service (human) agents. The communication may be
based on paradigms of specific software systems, in addition
to the ASCII and voice channels. For example, in the context
of a digital library for georeferenced information both
the user and the information specialist at the help desk (at
a distant site) should be able to have a view of the same interface
session so that the user can see what the information
specialist does exactly. Similar situations can also occur in
sessions for seeking technical support for a software systems,
making online reservations and orders, etc. In this paper,
we present the design and implementation of an online interactive
help system in the context of a digital library for
geospatially referenced information.
In particular, we report our experiences in the development
and integration of an online help desk system Alex-
Help!, for the Alexandria Digital Library (ADL) [FFL + 95].
In ADL, the user starts a session by initiating an HTTP-based
connection to the ADL server. During one session, the
ADL client software has a relatively complex user interface
that allows users to browse maps, zoom in/out, construct
queries (searches), and manipulate the results of queries.
The potential users of ADL do not necessarily have much
experience in accessing computers, and dealing with spatial
information; their knowledge about software systems may
be very limited. Our objective is to provide "just in time"
online (collaborative) help facilities for the users of the ADL
system.
The AlexHelp! system provides the following functional-
ity. The user can request to establish a connection to a help
desk. Once the connection is made, both the user and the
information specialist will share a common view of the user's
ADL session. The information specialist is now able to perform
various actions on the ADL client interface, including
those mentioned above, and all actions happen simultaneously
at both sides. In addition to such synchronized session
support, AlexHelp! also allows the presentation of pre-recorded
demonstration sessions to the user, and showing or
replaying user sessions to the help desk.
AlexHelp! is designed using user activity logs (i.e., session
logs), and is a light-weight software component that can be
easily integrated into or layered on top of the ADL user
interface client. Our design is different from many other
collaborative systems that are developed from scratch with
collaborative environment tools. This is because the ADL
system is already operational, and using tools like Habanero
would require rewriting
much of the ADL client code and could significantly increase
the cost of communication. Moreover, the advantage of our
approach is that it saves time by not repeating the work
already done.
As pointed out in [BM94], a straightforward approach to
constructing collaborative environments that combines different
communication mechanisms together will not necessarily
result in a good collaborative environment. Bergmann
and Mudge found in their experiment that the successful use
of their system requires much logistics and support. How-
ever, what we find is that for a certain class of collaborative
applications, careful design makes it possible to significantly
reduce such costs by automating most repetitive tasks.
Most of our work on AlexHelp! is focused on the ADL
client. Generally speaking, AlexHelp! is quite different from
typical groupware [Gru94b]. The collaborative model that
AlexHelp! supports is much simpler, since collaboration only
occurs between users of the system and the help desk. In a
collaborative session, one participant is designated the master
and the others are slaves. The master controls all the
slaves' views of the client interface (essentially). During the
session, either the user or the help desk can be the master
and the master "token" can be passed among participants.
Such transferring must be under the control of the help desk.
The collaborative actions supported during the session are
not very complex, and the nature of the connection between
master and slaves is almost stateless. But clearly such an
application in the context of ADL is also very representative
of various other online assistance services.
In the CSCW research community, several typologies
have been proposed. In terms of Grudin's typology
[Gru94a], CSCW models are categorized in two dimensions,
space and time, as follows.
PLACE \ TIME                  Same                            Different but Predictable     Different and Unpredictable
Same                          Meeting Facilitation            Work Shifts                   Team Rooms
Different but Predictable     Tele-, Video-, Desktop          Electronic Mail               Collaborative Writing
                              Conferencing
Different but Unpredictable   Interactive Multicast Seminar   Computer Boards               Workflow
The AlexHelp! system supports a combination of these mod-
els: in the space dimension, the collaboration can occur at
different locations, either predictable or unpredictable; in
the time dimension, it can be at the same time or at a different
but predictable time.
Our work is also related to synchronized web browsers
such as [WR94, FLF94]. However, AlexHelp! is based on
and integrated into the ADL client software rather than relying
on web browsers; the design issues and implementation
techniques are thus quite different.
This paper is organized as follows. Section 2 presents
the motivation of AlexHelp!. Section 3 outlines the overall
design and system architecture. Section 4 focuses on the session
log file, the key component used in AlexHelp!. Section
5 briefly summarizes and discusses some implementation is-
sues. Conclusions and future work are included in Section
6.
2 MOTIVATION
In this section, we first give a motivating example in the
context of the Alexandria Digital Library system. We then
discuss the general concept of online interactive help systems
or help wizards and their applications.
The Alexandria Digital Library (ADL) [FFL + 95] is a digital
library for geospatially referenced data including maps,
images, and text. ADL's graphical user interface is a client
application implemented in Java; the user runs it locally (on
the user's workstation), connecting to the ADL via the In-
ternet. The ADL client includes the following components
(see Figure 1).
1. Map Browser Window (window 1 in Figure 1).
The Map Browser window allows users to interactively
pan and zoom a two-dimensional map of the globe to
locate their area(s) of interest. In addition, the user can
select one or more areas on the map that may be used
to constrain queries. The map is also used to display the
spatial extent of the items retrieved from a query.
2. Search Window (window 2 in Figure 1).
The Search window allows users to set query parameters;
parameters include choosing the collection(s) to search,
location (i.e., coordinates from the Map Browser), type
of information (maps or aerial photographs from the
catalog or hydrologic features from the gazetteer), date
options.
3. Workspace Window (window 3 in Figure 1).
The Workspace window displays and allows manipulation
of query results. A query history is also maintained
here, allowing the ADL client to be returned to the state
of a prior query. A Scan Display or metadata browser
(bottom part of the workspace window) displays brief
object metadata. Brief metadata includes title, format,
access information, spatial/temporal references, and a
small scale version of the original image if available.
From this window, full metadata and access information
can be requested.
4. Help Window (window 4 in Figure 1).
The ADL client's Help Window is a typical example of
today's software help systems. It shows, depending on
what component in the graphical user interface is se-
lected, help topics specific to that component. ADL has
a wide range of potential users, many of whom may not
have experience interacting with complex software sys-
tems. The Help Window is of some assistance. A tutorial
and walkthrough are also made available through the
ADL homepage. Clearly something more is required,
however, since it is still difficult for inexperienced users
to know how to use the system for their particular purposes.
Figure 1: The ADL client/User Interface
Consider the following example. Suppose M is a student
majoring in political science. M is working on a research
paper about the civil population in the Santa Barbara area,
and one of her friends suggests that she can get some useful
data from ADL. She follows her friend's suggestion and
launches an ADL client. Unfortunately, she quickly finds
herself frustrated. She tries the help system, but is still
confused and unable to learn how to effectively utilize ADL.
She needs a quick answer. If she were in the university
library, she could ask one of the librarians to help her find
the information she's looking for.
With AlexHelp!'s extension to the ADL help system, she
has more choices than blindly searching for answers from the
static help system. The following are possible alternatives.
- She could try the online tutorial (Demo Sessions). She
could download some pre-recorded sessions and replay
them (the Demo Session player is discussed in Section
4). These sessions could demonstrate to her the basic
operation of the ADL client.
- If the online tutorial doesn't help, she could try the Help
Wizard. She would be guided through a process where
she answers questions based on the nature of her prob-
lem. The Help Wizard would then direct her to several
Demo Sessions that would hopefully help solve her problem.
- If the first two methods don't solve her problem, she
could connect to the help desk at ADL for an interactive
session. M and the help desk could communicate
through text-based chat, phone, or online audio channels
while the information specialist guides M through the
process of constructing a query and evaluating the information
returned from the query. M would actually see,
on her own ADL client, exactly what she would need to
do because the help desk controls M 's client while they
talk.
Clearly, these help desk services can help the student in
this example. It is conceivable that different kinds of users may
prefer some of these services to others. To provide the help desk
services in the above example, the following capabilities need
to be developed.
- Collaborative sessions
By collaborative session, we mean that the ADL clients
on both sides (the user and the help desk) are "con-
nected"; that is, one client is in control of the session and
sends its actions to the other client, which mirrors them
on its graphical user interface. The clients can switch
roles during the session, allowing the user and help desk
to participate in a rich exchange of information.
- Session replay (demo sessions)
Support staff are not always immediately available. Instead
of forcing users to wait until they can contact an information
specialist, AlexHelp! uses the concept of demo
sessions. Demo sessions are examples of using the ADL
client that may be replayed on the client itself, showing
the user successful ways to use the client.
To support this concept, we developed a Session Player.
This gives the user VCR-style control over a session; the
user may pause, rewind, start and stop sessions. Different
sessions show the user how to perform tasks of
varying complexity, and for those that are more com-
plex, it is very useful to be able to pause a session and
replay subtle or difficult portions.
In addition, the Session Player allows users to record
their own sessions, which may be sent to support staff
for offline analysis or detection of usage patterns.
- Multimedia communication
Interaction between the user and the help desk should
not be restricted to manipulating or observing the ADL
client. It seems reasonable to assume that a network connection
of some type has been established between the
help desk's and user's ADL clients; it could be utilized for
more than just synchronizing the clients. Collaborative
help systems should provide additional methods for communication:
text-based chat, voice, and even video could
be used to allow participants to communicate. Shared
whiteboards fit the paradigm as well, but in this case
the "whiteboard" is the application itself - the reason
the participants are connected is to explain the use of
the application.
In addition to the above capabilities, it is also very useful in
such a context to provide:
- Call waiting and forwarding
In order to serve the users fairly and efficiently, our design
calls for a way to queue help requests when there is
more than one user that wants to participate in a collaborative
session.
Call waiting means that when the user requests to connect
with the help desk for a collaborative session, if
the help desk is not immediately available, the user is
notified that there will be some period of waiting, and
information such as expected wait time and the reasons
for the current delay might be made available to the user.
Another method to support interactive help is call for-
warding: if the current information specialist is not proficient
in the particular area in which the user needs help,
the user's connection may be "task switched" to another
information specialist that has the proper expertise.
- Multicast collaborative sessions
With multicast support, more than one slave mode client
can be synchronized with a single master mode client -
a desirable feature to support scheduled instructional/
demonstration sessions or distance learning.
Based on the analysis of the requirements for online assistance
in ADL and the design of the ADL client, we decided
that the interactive online help facilities in AlexHelp! should
be designed as an independent component that is easily integrated
with the ADL client. Even though the AlexHelp!
system is relatively simple and does not include everything
listed at the end of Section 2, it succeeds as a good example
of adding collaborative functions on top of the single-user
version of a software package. The feedback from the development
and user evaluation groups within the ADL project
shows that our system provides a feasible and efficient way
to support interactive online help in ADL.
3 DESIGN AND ARCHITECTURE
In this section we discuss the main issues in developing
the AlexHelp! system, including functionality, design, and
architecture.
3.1 Functionality
Our prototype AlexHelp! provides the following facilities:
- Collaborative session establishment and operation
A simple dialog window is used to initiate the connection
between two ADL clients. Once the connection is made,
AlexHelp! runs in one of two modes:
Slave (receive) mode
In this mode, the ADL client uses AlexHelp! to listen
for incoming messages from the other client; when
a message (a remote event) is received, the remote
graphical user interface event is "replayed" or duplicated
on the Slave client. The user of the Slave client
is unable to change the client's state.
Master (send) mode
In this mode, the AlexHelp! system sends local graphical
user interface events to the Slave ADL client, where
they are replayed. The state of the Master ADL client
is thus reflected in the Slave ADL client.
In particular, the ADL client Map Browser window is
able to replay operations such as "Zoom In", "Zoom
Out", "Pan", etc.
- Session replay
We also developed a mechanism for replaying recorded
sessions. As we discuss below and in the next section,
every ADL client records its operations in a log file. Alex-
Help! utilizes the logs in both Collaborative Sessions and
Replay.
3.2 Design
Consideration of an appropriate collaborative model for the
AlexHelp! system was driven in part by our goal of quickly
developing a prototype that could be used to demonstrate
the possibilities of extending and developing online help sys-
tems. While we had access to the ADL client's code base, we
also had the constraint that we could not change the basic
functionality of the client; in other words, the client had to
perform in all other respects exactly as it had before. Thus
we were faced with adding multi-user ability to an existing
single-user application that was not originally designed with
multi-user access in mind, a task Grudin recommends as a
reasonable way to approach building groupware [Gru94b].
Not being able to fundamentally alter the client prevented
us from using frameworks or toolkits for building
collaborative environments like NCSA's Habanero [CGJ
or Sun's JSDT (Java Shared Data Toolkit) [JSDT]. Using
a framework such as JSDT would entail in essence rebuilding
the application. On the other hand, including the ADL
client in an application that comprises a collaborative environment
like Habanero would require the user to run the
ADL client as a "component application" from within the
new environment, making the ability to participate in a collaborative
help session dependent on this new environment.
This was deemed unacceptable for two reasons. First, the
ADL client is meant to be a stand-alone application; it is
the library user's interface to the ADL. Second, the ADL
client has a fairly small footprint in terms of memory and
network resources; including it in a larger framework would
impose additional resource requirements.
Our specification calls for a model of collaboration best
described as turn taking or token passing: semantically, only
one of the user or the help desk should have control of both
clients at a given time. During the period of collaboration,
one of the clients is in the "slave" or receive mode, and the
other is in the "master" or send mode. Considering this
simple model and our desired features within the context
of a collaborative help system for ADL, there is no need to
use complex concurrency control or shared object models
[GM94, MR91]. The user interface of the client, say A, that
is currently in receive mode is simply "locked;" it may receive
and interpret events or messages from the other client,
say B (i.e., the remote client), but the user is unable to
change client A's state. When the clients trade roles, client
A takes charge and the events generated at A are sent and
duplicated at client B.
We realized early on that the ADL client log facilities
already in place were rich enough to duplicate the actions of
one client at another client separated by space and/or time.
This not only simplified design of the client-to-client commu-
nication, but also makes it easy to generalize the client-to-
client model into a one-to-many "multicast" model. Rather
than capturing and packaging system or user interface events
for transmission, which is a potentially complex endeavor
even in a syntactically sweet language like Java [JAVA], the
log entries can simply be sent to the remote client as they are
generated. The use of session logs in AlexHelp! is discussed
in detail in Section 4.
3.3 Architecture
Initially, it was intended that the help desk and the ADL
user would run the same program (on their respective work-
stations). It was thought that the symmetry of both participants
using the same application (i.e., one set of code) was
good design. As the project progressed, it was decided that
the help desk should be responsible for controlling a help ses-
sion, and the necessary features for control should be built
into the help desk's client. During a help session, the information
specialist may turn control over to the remote user
(i.e., the help desk's client becomes the slave); the information
specialist should have the ability to recover control of
the session if desired. However, because AlexHelp! will be
rewritten along with the next version of the ADL client, development
of a single application continued. The help desk's
session management features were not implemented in our
current prototype.
AlexHelp! consists of three layers: the ADL Client layer,
the Event Handler layer, and the Communication layer (Fig-
ure 2). These three layers are described below.
3.3.1 ADL Client
The ADL client is the primary interface to the ADL for
users of the library. In its first incarnation, the ADL client
was a Java applet suitable for running within a Java-capable
browser such as Netscape's Navigator. The current version
is a stand-alone application, built entirely in Java. Subsequent
development is intended to produce both stand-alone
and applet versions.
As mentioned in Section 3.2, we were not at liberty to
fundamentally change the ADL client's design or function-
ality. We chose therefore to layer the additional functionality
we required on top of the ADL client (see Figure 2).
Note that in our prototype, the ADL client communicates
directly with both the Event Handler layer and the Communication
Layer. Figure 2 reflects our design rather than
our current implementation. In an iterative software development
model, we would choose in our next iteration (post
prototype) to restrict the ADL client to interacting with a
single layer, the Event Handler layer, in order to keep the
module dependencies clean.
3.3.2 AlexHelp! Event Handler Layer
The Event Handler is layered directly on top of the ADL
client. Its primary function is to receive incoming remote
events from the Communication Layer, and replay or reproduce
them on the local ADL client.
We noted in Section 3.2 that the existing logging mechanism
in the ADL client is rich enough in content to allow us
to duplicate or replay remote events on a local ADL client.
This allowed us a very simple design for the Event Handler
layer: when a remote event is received in the form of a
log entry (a string of ASCII characters), it is parsed to determine
what event should be triggered locally in the ADL
client. The Event Handler then directs the local ADL client
to perform the event. For example, if the remote event is a
pan (horizontal or vertical movement) in the remote ADL
client's Map Browser window, the Event Handler notifies
the local ADL client, providing the direction to pan (north,
south, east, west), and also the distance.
This design for handling remote events is easily generalized
for a distance learning or "multicasting" scenario: a
single information specialist or instructor runs an ADL client
in master mode, and the client's events are broadcast to a
group of ADL client users, all of whose clients are operating
in the slave mode. Each ADL client in slave mode receives
and handles remote events in the manner described above.
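To make this concrete, here is a minimal sketch of such a dispatch. It is ours, not the AlexHelp! source: the AdlClient interface and its methods are hypothetical stand-ins for the ADL client's public API, and only two of the entry types from Section 4.1 are handled.

// Hypothetical facade for the ADL client's public API.
interface AdlClient {
    void setMapExtent(double[] corners);   // lower-left and upper-right corner coordinates
    void panMap(String direction);         // "north", "south", "east" or "west"
}

class RemoteEventHandler {
    private final AdlClient client;

    RemoteEventHandler(AdlClient client) {
        this.client = client;
    }

    // A remote event arrives from the Communication Layer as one ASCII log entry.
    void handleRemoteEntry(String entry) {
        if (entry.startsWith("client action - New Extent:")) {
            String[] parts = entry.substring("client action - New Extent:".length()).trim().split("\\s+");
            double[] corners = new double[parts.length];
            for (int i = 0; i < parts.length; i++) corners[i] = Double.parseDouble(parts[i]);
            client.setMapExtent(corners);
        } else if (entry.startsWith("client action - Map: Button Pressed:")) {
            String direction = entry.substring(entry.lastIndexOf(':') + 1).trim();
            client.panMap(direction);
        }
        // Entries that are not of interest are simply ignored.
    }
}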
3.3.3 AlexHelp! Communication Layer
In the discussion of the collaborative model chosen for the
AlexHelp! system we mentioned JSDT, an example of a
framework for building collaborative applications that provides
abstractions for typical components of such systems
such as "channel," "client," and "server." Had we the opportunity
of building AlexHelp! (and also the ADL client) from
scratch, it is likely that we'd have chosen such a framework
to ease the typically troublesome task of properly designing
and building a system in which distributed communication
is central to operation. However, it was decided that given
the time and limited flexibility in terms of altering the ADL
client, it would be easier and faster to use simple TCP/IP
(sockets) to implement client-to-client communication. The
choice was made easy because of Java's inclusion of networking
abstractions as part of its core libraries [JAVA].
The java.net.* library provides objects that encapsulate
network connections that are either connection-oriented or
connectionless, and also provides a model for sending the
same message to multiple recipients (useful for our distance-learning
"multicast" model). The current implementation of
the AlexHelp! system provides client-to-client connections
using connection-oriented Java sockets and a turn-taking
slave/master model; the next version of AlexHelp! will include
the distance learning mode as well.
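A minimal sketch of such a client-to-client connection, using only standard java.net and java.io classes; class and method names are ours, not the ADL code's. The help desk side would accept the connection with a ServerSocket and then use the same wrapper.

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.Socket;

class LogEntryConnection {
    private final Socket socket;
    private final PrintWriter out;
    private final BufferedReader in;

    LogEntryConnection(String host, int port) throws IOException {
        socket = new Socket(host, port);
        out = new PrintWriter(socket.getOutputStream(), true); // autoflush each entry
        in = new BufferedReader(new InputStreamReader(socket.getInputStream()));
    }

    // Master mode: forward a freshly written log entry to the peer.
    void send(String logEntry) {
        out.println(logEntry);
    }

    // Slave mode: block until the next remote entry arrives (null on disconnect).
    String receive() throws IOException {
        return in.readLine();
    }

    void close() throws IOException {
        socket.close();
    }
}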
AlexHelp!'s architecture is structured in layers so that it
would be relatively easy to replace one of the layers with an
alternative implementation - hopefully, it would be easy to
the point that replacing one layer would require no modifications
to the adjacent layers. The Communication Layer is
meant to be no exception. In addition, we foresee utilizing
the Communication Layer to apply additional communication
mechanisms: text-based chat, voice, video, etc.
Figure 2: The Architecture of AlexHelp!
4 SESSION LOGS
In this section we discuss the central technique of using session
logs in developing synchronized sessions in AlexHelp!.
We also illustrate that session logs facilitate the design of
session replay and give a general discussion on applying this
technique in other contexts.
Several of our early design meetings included personnel
from the ADL development team. Among the many invaluable
things we learned from them was the fact that they
built a simple user activity log system into the client. As
part of the event handling in the ADL client, certain user-
and system-generated events are logged into an ASCII text
file, which may then be analyzed after the fact.
The original intent of the ADL development team was
to use the activity logs for analysis of user activity and usage
pattern discovery. We saw that if the logs were augmented
to include more information about each event, we
could then use the log entries to implement coordination of
two physically separate ADL clients. Further, the logs could
be viewed as a persistence mechanism, giving us a method
for "session replay". We felt that this (persistence of an entire
user's session) would be crucial to integrating AlexHelp!
into the ADL's existing help system.
Our experience developing AlexHelp! using session logs
as both a way to forward local events to a remote client and
as a persistence mechanism showed us that for a certain class
of applications, log files can be instrumental in rapidly and
simply augmenting existing help systems (or building new
ones), and also adding multi-user capability to single-user
applications.
In Section 4.1, we discuss the format of the ADL client's
log files, and explain some example log entries. In Section
4.2, we discuss the use of logs in controlling collaborative
sessions in AlexHelp!. Section 4.3 examines how log files are
used to implement VCR-style control of Demo Sessions (log
replay). Section 4.4 discusses general application of the
log-based technique.
4.1 The ADL Client Log File
The ADL client log consists of event summaries, each of
which makes up a single log entry. The order in which entries
appear in the log maps loosely to the order in which
they occur; there may be variation between two logs recording
identical sessions due to timing differences that can be
traced to the effects of multithreading; i.e., a system event
may occur before a user-generated event, but the user event
is logged first due to scheduling. We did not experience unexpected
behavior related to such scheduling discrepancies.
In addition to information recording an event (see examples
below), a log entry also includes information identifying
the particular session and client that generated the event.
In a log entry, the information of interest (information that
allows an event to be reconstructed from the log entry) actually
makes up very little of the entry. In many cases,
less than about one third of a log entry is of interest. For
brevity, the other information (such as session or client iden-
tification) has been removed from the log entries used in the
discussion below.
The following are examples of the ADL client's log en-
tries, and comments related to using the logs with AlexHelp!.
- client action - Map Mode: ZOOM IN
This log entry is among the simplest types. It is generated
when the user clicks with the mouse on the button
labeled "ZOOM IN" on the Map Browser window.
Similar log entries are made when the user clicks on
the buttons labeled "ZOOM OUT," "SELECT," and
"ERASE."
- client action - New Extent:
-205.3 -40.56 -25.29 49.44
This log entry occurs when the user clicks the mouse in
the map on the Map Browser window when the selected
mode is either ZOOM IN or ZOOM OUT. "New Extent"
identifies this as a map resizing event. The parameters
correspond to the lower left and upper right corners of
a rectangular geographic region which defines the new
extent of the map in the Map Browser window. The
first two numbers are the latitude and longitude of the
lower left corner; the second two are the latitude and
longitude of the upper right corner.
- client action - Query Region(s) Modified:
This log entry occurs when the selected mode is SELECT
and the user clicks and drags the mouse in the map on
the Map Browser window. This action creates a selection
box, which defines a rectangular geographic region,
as in the previous log entry. The parameters are again
the latitudes and longitudes of the lower left and upper
right corners of the geospatial region, which may be used
to constrain queries. Selection boxes may overlap each
other.
- client action - Query Region(s) Modified:
In this log entry, a selection box is again drawn; in this
case, however, several selection boxes already existed.
Log entries for modifications to Query Regions (selection
boxes), like the log entry above, simply enumerate all of
the selection boxes, whether the change was adding or
removing a selection box. The log entry above shows the
five currently existing selection boxes.
- client action - Map: Button Pressed: west
This log entry shows a "pan" event (horizontally or vertically
repositioning of the map in the Map Browser win-
dow).
4.2 Logs in Collaborative/Synchronized Sessions
The ADL client's log facility was central to our design and
implementation of synchronized help sessions. Our specification
calls for the ability to capture, distribute, and reproduce
graphical user interface events. Capturing such events
as they occur can be troublesome at best, and at worst im-
possible. Although Java aids the programmer in this type
of endeavor (see Section 3.2), it was clear to us that in light
of the fact that the session log mechanism was already in
place, utilizing it would permit implementation to proceed
much faster than dealing with (intercepting) events at the
graphical user interface level.
Augmenting the ADL client in order to send log entries
to a remote client was a simple matter of hooking into the
module responsible for writing log entries to a file. Acting on
the log entries that have been received from a remote client
entails using the ADL client's public application programming
interface (API) to set its internal state, which triggers
any corresponding changes in the graphical user interface.
As an example, consider again two ADL clients, A and
B, connected in a collaborative session. Client A is being
operated by the Alexandria help desk, and B is being operated
by a user of the library. A is currently in the master
(send) mode; B is in the slave (receive) mode. When the information
specialist draws a selection box on client A's Map
Browser, the log module in client A appends to the log file
an entry corresponding to drawing the selection box. Since
A is in the master mode, the log entry is also handed up
to AlexHelp!'s Event Handler Layer. There, the log entry is
forwarded to AlexHelp!'s Communication Layer, where it is
sent over its network connection to client B. At client B, the
log entry is received at the Communication Layer, which immediately
hands it down to the Event Handler Layer. The
Event Handler parses the log entry to determine if it is of
interest; in this case, it finds that the log entry consists of
parameters making up a list of selection boxes. The Event
Handler uses ADL client B's public API to set its collection
of selection boxes; client B then redraws its display in the
Map Browser window to reflect the change.
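A sketch of the hook described above, with invented names: the logger keeps writing every entry to the local session log and, only while the client is in master mode, also hands it to the Communication Layer.

import java.io.PrintWriter;
import java.util.function.Consumer;

class ForwardingLogger {
    private final PrintWriter localLog;
    private final Consumer<String> forwardToPeer; // e.g. the Communication Layer's send
    private volatile boolean masterMode = false;

    ForwardingLogger(PrintWriter localLog, Consumer<String> forwardToPeer) {
        this.localLog = localLog;
        this.forwardToPeer = forwardToPeer;
    }

    void setMasterMode(boolean master) { masterMode = master; }

    void log(String entry) {
        localLog.println(entry);          // original behaviour: local session log
        if (masterMode) {
            forwardToPeer.accept(entry);  // added behaviour: mirror on the remote client
        }
    }
}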
When events are received at an ADL client in the slave
(receive) mode, the graphical user interface gives the user
some kind of visual cues to alert him or her that something
has changed: when the state of the Selection Box/Map Zoom
mode buttons are changed, for example, the button that is
now selected flashes red and white several times.
Given a priori knowledge of the possible forms log entries
can take, the task of parsing a log entry and deciding what
to do in terms of reproducing the event is rather simple.
Consequently, the Event Handler Layer, possibly the most
complex module in AlexHelp!, is essentially a parser for a
rather restricted language (see example log entries in Section
4.1).
4.3 VCR-Style Control of Log Replay (Demo Sessions)
As in AlexHelp!'s collaborative sessions, the ADL client's
session logs were important in the design and development
of AlexHelp!'s Demo Sessions.
Giving a user the ability to play and replay a series of
graphical user interface events as if the user were operating
a VCR is a powerful learning tool that goes beyond help
systems that are text-based or rely on short animations.
Using AlexHelp!'s model for Demo Sessions in conjunction
with detailed explanations (easily supplied along with the
log files as text or audio files), help systems can be developed
that show usage of complex software systems in a way that
users would otherwise only experience by using the system,
perhaps in a trial-and-error fashion.
Demo Sessions in AlexHelp! use log files in the same way
that client B in Section 4.2 uses incoming log entries. An
ADL client playing a Demo Session uses AlexHelp!'s Event
Handler Layer to read a log file, parsing each log entry and
carrying out the event described in the entry. AlexHelp! uses
an additional window in the graphical user interface to give
the user control over replay. The user may set the rate of
playback, start and pause playback, single-step the session
(i.e., play a single event and pause), rewind a single event,
or rewind to the start of the session.
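A stripped-down sketch of such a replay loop (ours, not the AlexHelp! code): log entries are read from a file and handed one at a time to an event handler, with a pause flag and an adjustable delay standing in for the VCR controls.

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

class SessionPlayer implements Runnable {
    private final List<String> entries = new ArrayList<>();
    private final Consumer<String> handler;        // e.g. the Event Handler layer
    private volatile boolean paused = false;
    private volatile long delayMillis = 500;       // playback rate
    private int position = 0;                      // supports single-step and rewind

    SessionPlayer(String logFile, Consumer<String> handler) throws IOException {
        this.handler = handler;
        try (BufferedReader r = new BufferedReader(new FileReader(logFile))) {
            String line;
            while ((line = r.readLine()) != null) entries.add(line);
        }
    }

    void pause()  { paused = true; }
    void resume() { paused = false; }
    void rewind() { position = 0; }
    void setDelay(long millis) { delayMillis = millis; }

    @Override
    public void run() {
        try {
            while (position < entries.size()) {
                if (paused) { Thread.sleep(50); continue; }
                handler.accept(entries.get(position++)); // replay one event completely
                Thread.sleep(delayMillis);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}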
4.4 The Use and Utility of Log Files
We envision (for ADL as well as other complex software
systems) help systems that utilize log files in concert with
a "Help Wizard"; the Help Wizard would present the user
with a series of questions that help to focus the user towards
one or several log files that show examples of sessions that
closely resemble what the user will need to do
to achieve his or her goals with the application.
This is in stark contrast to many help systems in current
applications, which seem to consist mainly of duplicating
the functional descriptions of each user interface component
in the help files. Help systems built with session logs can
be viewed as an extension of support staff; even if the support
staff and the end users operate in different time zones,
making synchronized activities difficult to schedule, custom
session logs can be created and forwarded to the user in lieu
of an online meeting. For example, one of our recommendations
for application of AlexHelp! in ADL is to maintain a
page on the Alexandria World Wide Web site dedicated to
listing demo sessions, from which users may download demo
sessions showing queries similar to those the users hope to
carry out and thus learn how to use the ADL interface from
the example sessions.
Users can, in turn, record their own session log for support
staff analysis; what better summary of a problem using
an application than a complete reproduction of the user's
activity?
5 IMPLEMENTATION OF A RAPID PROTOTYPE
In this section we briefly discuss issues related to the implementation
of our current prototype.
5.1 Module Architecture
The relationship of the AlexHelp! modules with the original
ADL system is shown in Figure 3. The implementation
details of the extension modules will be discussed in the following
sections.
Of interest in Figure 3 are the various working modes of
the extended ADL client:
1. Stand-alone Mode
This is the same as the sole working mode supported
by the original ADL system. Native user interface (UI)
events generated by the local windowing environment are
sent to the User Event Handler and then get recorded
locally by the event logging module.
2. Master Mode
When the client is in the master mode, the native UI
events are still processed by the User Event Handler.
However, now they are also sent to the remote client via
the network interface.
3. Slave Mode
When the client is in slave mode, the user interface
should not respond to local UI events but remote events
instead. Remote events go through the Event Parser and
then the UI highlighting module, which presents visual
cues to the user that remote events are occurring.
4. VCR Mode
When replaying session logs (retrieved from the network
or locally), the Event Parser passes each event to the UI
highlighting module, which passes the events to the User
Event Handler after performing the necessary highlighting; the mode-dependent routing is sketched below.
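The following sketch (with invented names) illustrates that routing; it is not the AlexHelp! implementation, only the shape implied by the four modes listed above.

import java.util.function.Consumer;

enum ClientMode { STANDALONE, MASTER, SLAVE, VCR }

class ModeDispatcher {
    private volatile ClientMode mode = ClientMode.STANDALONE;
    private final Consumer<String> localHandler;   // User Event Handler
    private final Consumer<String> remoteSender;   // network interface

    ModeDispatcher(Consumer<String> localHandler, Consumer<String> remoteSender) {
        this.localHandler = localHandler;
        this.remoteSender = remoteSender;
    }

    void setMode(ClientMode m) { mode = m; }

    // Events generated by the local user interface.
    void onLocalEvent(String logEntry) {
        if (mode == ClientMode.SLAVE) return;      // local UI is "locked"
        localHandler.accept(logEntry);             // always processed locally
        if (mode == ClientMode.MASTER) remoteSender.accept(logEntry);
    }

    // Entries received from the network or read from a session log.
    void onRemoteEvent(String logEntry) {
        if (mode == ClientMode.SLAVE || mode == ClientMode.VCR) {
            localHandler.accept(logEntry);         // replay with UI highlighting
        }
    }
}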
5.2 Network Interface
The network sublayer is encapsulated in an object which
is responsible for maintaining the TCP link and some state
information related to the connection. The reason for choosing
TCP (as opposed to UDP) is that a reliable link that
guarantees delivery of synchronization messages is desired.
A reliable link greatly simplifies the mode switching proto-
col. In the current prototype, we have not added the mode
switching control to the help desk's ADL client (since we decided
to keep only one code base). When implemented, the
information specialist will control mode switches. The reliable
connection allows us to use a simple handshaking
protocol to ensure state consistency.
Event Parser
The kernel of the session player is an Event Parser which
breaks down session logs into a sequence of event records
and retrieves the type and parameter of each event. There
are a finite number of event types. As shown in Section 4.1,
for each type, the format of the events is a simple regular
language. This made it simple to hand-code the Event Parser,
which we chose to do rather than use a lexical
analyzer generator such as lex.
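For illustration, a hand-coded parse of a single entry might look as follows; the EventRecord class is hypothetical and assumes every entry of interest has the form "client action - <type>: <parameters>".

final class EventRecord {
    final String type;
    final String parameters;

    EventRecord(String type, String parameters) {
        this.type = type;
        this.parameters = parameters;
    }

    // Returns null for entries that do not describe a client action.
    static EventRecord parse(String entry) {
        String prefix = "client action - ";
        int start = entry.indexOf(prefix);
        if (start < 0) return null;
        String rest = entry.substring(start + prefix.length());
        int colon = rest.indexOf(':');
        if (colon < 0) return new EventRecord(rest.trim(), "");
        return new EventRecord(rest.substring(0, colon).trim(),
                               rest.substring(colon + 1).trim());
    }
}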
5.3 Session Player
Our multithreaded Session Player performs well; however,
careful attention must be paid to the "atomicity" of replaying
remote events. Aborting an event in the middle will
result in an inconsistent state. All events we are interested
in (i.e., those that we capture and relay to the ADL client
that's in the slave Mode) finish in a finite and brief amount
of time, so we simply use a protocol that ensures the completion
of the current event.
5.4 User Interface (Receiver Event Notification)
Not all user interface events are obvious if not generated by
the user. We found that in replaying events received from
the remote Master client, major changes were obvious (such
things as zooming in or out in the Map Browser window),
but minor changes resulting from selecting a button were
easily missed by the user. Ideally, every user interface event
could be duplicated, down to tracking the movement of the
mouse. This way it is easier to follow subtle changes in
the user interface state. It's possible to simulate user interface
events at this level of granularity, but it requires a
formidable amount of user interface resources. In addition,
the time required for implementing such a scheme was not
available to us in developing a prototype. We instead added
a fairly simple layer between the User Event Handler and
the Event Parser that, depending on the user interface component
in question, gives the user obvious visual clues. For
example, when changing mode from Select to Erase in the
Map Browser window, the Select button changes its high-lighting
to the default color for non-selected components,
and the Erase button flashes red and white several times,
finally settling on the default color for selected components.
This light-weight and modular approach makes it easy to
replace or extend the event notification system.
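For illustration, a flashing highlight of this kind can be obtained with a few lines of Swing (a sketch; the ADL client's actual widgets and colours may differ).

import java.awt.Color;
import java.awt.event.ActionEvent;
import java.awt.event.ActionListener;
import javax.swing.JButton;
import javax.swing.Timer;

// Flashes a button red and white a few times, then restores its previous colour.
final class ButtonFlasher {
    static void flash(JButton button, int times) {
        Color original = button.getBackground();
        Timer timer = new Timer(250, null);
        timer.addActionListener(new ActionListener() {
            private int ticks = 0;
            @Override
            public void actionPerformed(ActionEvent e) {
                if (ticks >= 2 * times) {
                    button.setBackground(original);
                    timer.stop();
                } else {
                    button.setBackground(ticks % 2 == 0 ? Color.RED : Color.WHITE);
                    ticks++;
                }
            }
        });
        timer.start();
    }
}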
Synchronized Web Browser
As mentioned in Section 2, the current ADL client also uses
a World Wide Web browser to display images (the results of
queries). In a separate project, we implemented some primitive
synchronization between two World Wide Web browsers
using JavaScript and Java; the functionality is
similar to but much simpler than [WR94, FLF94].
Figure 3: The relationship between the extension modules and the original system
6 CONCLUSIONS AND FUTURE WORK
Our experience in the design and development of AlexHelp!
shows that adding collaborative functionality to an existing
system as an independent module can be a viable and fast
approach.
Our approach can be summarized in the following steps:
1. Examine the existing system and based on the analysis,
carefully design the extension system. The design work
includes but is not limited to the extension architecture
and function specification.
2. Identify one or several likely hook points between the
original system and the extension. In our case, one of the
hook points was the session logging; we augmented the
logging facilities to enable distribution of user interface
events.
3. Perform detailed design work, including communication
interface, state transition protocols and user events re-
production, etc.
4. Implement and link the new module to the existing system
As mentioned previously, what we have done is a throw-away
prototype; some promising features have been left out
due to time constraints, which include:
- Help Wizard
The design of a good help wizard requires much analysis
of the problems that users may encounter. It might be
interesting to add some logic to the help wizard that
allowed it to learn from users' requests.
- Call Forwarding
Call forwarding is not a trivial feature. Since TCP/IP
sockets don't allow the migration of connection information
from system to system, transparent call forwarding
(meaning the user should not be aware that the connection
has been broken and re-established) involves another
layer upon the TCP/IP protocol.
- Wait Time Estimation
An accurate estimation of the expected waiting time is
necessary for a practical system. However, finding a suitable
model for such estimation could itself be an interesting
research area. One such model would involve using
priority-based scheduling for the incoming help call
requests. Many factors must be considered when determining
priority: user class ("premium user", "common
user", etc.) accumulated wait time, the difficulty or urgency
of the problem (if reasonable measures can be es-
tablished), etc. Scheduling in this fashion is similar to
process scheduling in an operating system.
- Multicast Support
Multicast support will become more important as the
user group grows larger. The incorporation of multi-cast
capability involves modification of the collaboration
model and mode control protocol.
- Client Side Action Analysis
Users' session logs reflect the actions they've performed.
This data is valuable for the analysis of how the digital
library is being used. Data mining may be added to ease
the task of analyzing such a potentially large volume of
data.
- Security
Currently ADL is only accessed by a small testing community;
however, security issues will become important
when the service is open to the public.
- Multimedia Support
Multimedia support is not difficult to plug in as several
separate channels within the Communication Layer.
However, since users can access the library via different
links, ranging from high speed network connections to
low speed modem connections, users should be able to
control the bandwidth by switching on only the multi-media
features their connection can accommodate.
ACKNOWLEDGMENTS
The authors thank Vinod Anupam for his stimulating discussions
which lead to the idea of online help studied in
this paper; Linda Hill, Mary Larsgaard, and the ADL implementation
team, in particular, Nathan Freitas and Kevin
Lovette, for their comments and help in the specification and
implementation of AlexHelp!; Linda Hill and Mary-Anna Rae
for their comments on an earlier version of this paper.
--R
Automated assistance for the telemeeting lifecycle.
Alexandria digital library: Rapid prototype and metadata schema.
Extending www for synchronous collaboration.
Real time groupware as a distributed system: Concurrency control and its effect on the interface.
Eight challenges for developers.
JDK 1.1.
Java Shared Data Toolkit
The impact of CSCW on database technology.
A synchronous collaboration tool for World-Wide Web
--TR
Groupware and social dynamics
Computer-Supported Cooperative Work
Real time groupware as a distributed system
Automated assistance for the telemeeting lifecycle
Java object-sharing in Habanero
Alexandria Digital Library
--CTR
Marcos Andr Gonalves , Edward A. Fox , Layne T. Watson , Neill A. Kipp, Streams, structures, spaces, scenarios, societies (5s): A formal model for digital libraries, ACM Transactions on Information Systems (TOIS), v.22 n.2, p.270-312, April 2004 | online support;digital library;user interface;online help desk;collaboration |
296252 | Time and Cost Trade-Offs in Gossiping. | Each of n processors has a value which should be transmitted to all other processors. This fundamental communication task is called gossiping. In a unit of time every processor can communicate with at most one other processor and during such a transmission each member of a communicating pair learns all values currently known to the other.Two important criteria of efficiency of a gossiping algorithm are its running time and the total number of transmissions. Another measure of quality of a gossiping algorithm is the total number of links used for transmissions. This is the minimum cost of a network which can support the gossiping algorithm. We establish trade-offs between the time T of gossiping and the number C of transmissions and between the time of gossiping and the number L of links used by the algorithm. For a given T we construct gossiping algorithms working in time T, with parameters C and L close to optimal. | Introduction
Gossiping (also called all-to-all broadcasting) is one of the fundamental tasks in network
communication. Every node of a network (processor) has a piece of information (value)
which has to be transmitted to all other nodes, by exchanging messages along the links
of the network. Gossiping algorithms have been extensively studied, especially in the last
twenty years; see the comprehensive surveys [5, 8] of the domain.
The classical communication model, already used in the early papers on gossiping [1, 2, 3,
7, 14], is called the 1-port full-duplex model. Communication is synchronous. In a single
round (lasting one unit of time) every node can communicate with at most one neighbor and
during such a transmission communicating nodes exchange all values they currently know.
Two important criteria of efficiency of a gossiping algorithm are its running time (the number
of communication rounds) and the total number of transmissions (calls). The latter is a
measure of cost of the algorithm, assuming unit charge per call. The minimum time of
gossiping in a complete n-node network was the first problem in this domain, studied in the
fifties [2, 14]. It was proved to be dlog ne for even n and dlog ne + 1 for odd n. On the other
hand, the minimum number of calls in gossiping is 2n - 4, for any n > 3 (cf. [1, 7]).
Another measure of quality of a gossiping algorithm is the total number of links used for
communication. This is the minimum cost of a network which can support the algorithm,
measured by the number of links in the network. This can be also viewed as a measure of
the cost of implementing the algorithm, a fixed cost associated with network design, rather
than the cost associated with each run. Clearly, the sparsest network supporting gossiping
is a tree and thus the minimum number of links is n \Gamma 1.
It turns out that the above criteria of efficiency are incompatible: it is impossible to minimize
time and the number of calls or to minimize time and the number of links used by the
algorithm, simultaneously. If n = 2^r , every gossiping algorithm working in time r must have
both the number of calls and the number of links used for communication equal to r2^(r-1) ,
as every node has to communicate in every round with a different node, in order to double
its knowledge. On the other hand, Labahn [11] proved that the minimum running time of
a gossiping algorithm with the number of calls 2n \Gamma 4 is 2dlog ne \Gamma 3, almost a double of
the absolute minimum time. (An earlier proof of this fact, published in [15], was incorrect.)
Likewise, in order to minimize the number of links used for communication, we must allow
larger gossiping time. Labahn [10] proved that the minimum gossiping time in a tree is at
least 2dlog ne \Gamma 1, again almost a double of the absolute minimum time.
These results indicate the existence of time vs. cost trade-offs in gossiping, where cost is
measured either by the number C of calls or by the number L of links used for communication.
Establishing these trade-offs is the main goal of the present paper. For a given T (ranging
from log n to 2log n) we show upper and lower bounds on the minimum cost of gossiping in
time T . The algorithms yielding our upper bounds are generalizations of known gossiping
schemes that minimized separately either the running time or the cost (cf. [1, 2, 3, 7, 11,
14, 15]). While these classical algorithms were either fast but costly or cheap but slow, it
turns out that they can be combined to yield almost optimal cost for any given running
time. However, the main contribution of this paper are lower bounds on the minimum cost
of gossiping for a given running time, that closely match the performance of our respective
algorithms. This is for the first time that the full spectrum of relations between the time
and the cost of gossiping is investigated.
Each of our bounds is useful for a different range of values of the running time and cost.
If the running time is dlog ne + t(n), we show an upper bound 2n + O(n log n/2^t(n) ) on the
number of calls, which closely matches the lower bound following from [12]. These
bounds are useful for small t(n), i.e., when the running time is small. If the running time
is 2dlog ne - r(n), we show an upper bound 2n + O(r(n)2^r(n) ) and a lower bound that
closely matches it (see Theorem 4.2).
These bounds are useful for small r(n), i.e., when the running time is larger.
Here are a few consequences of the above results. Let the running time T of gossiping be
equal to dlog ne+ t(n). Let C denote the minimum number of calls in time T . The following
sequence of bounds shows how C gradually decreases from \Theta(nlog n) to the asymptotically
optimal range 2n + o(n), as restrictions on T are being relaxed.
If t(n) is constant then C 2 \Theta(nlog n).
If
If t(n) - log log n \Gamma d, for a constant d, then C 2 O(n).
If
For medium range values of the running time T we obtain the following bounds on the
minimum number of calls:
If
log
Finally, if we want to keep the number of calls very small, time has to increase significantly:
If
We also establish trade-offs between the time T of gossiping and the minimum number L of
links used for communication. For medium and large values of T the optimum values of L are
roughly one-half of the values of C for the same time. In this range we get bounds that are
even tighter than in the case of the number C of calls. For example, if
log n) and L
log n ). For small values
of t(n), we obtain the upper bound n + O(n log n/2^t(n) ) on L, but our lower bound
leaves a larger gap than before: we show that if t(n) - c log log log n, for a constant c < 1,
then L 2 !(n(log log n)^d ) for some constant d > 0 (see Theorem 5.1).
It remains open, for example, if L 2 \Omega(n log n) for
constant t(n).
The latter bound should be contrasted with a result of Grigni and Peleg [6], concerning
broadcasting. They showed that the minimum number of links in an n-node network supporting
broadcasting from any node in a given time T is extremely sensitive to the value of T : if n
is a power of 2, broadcasting in time log n requires \Omega(n log n) links, while broadcasting
in time log n + 1 can be performed in a network with O(n) links. Our bound shows that
this is not the case for gossiping: in particular, gossiping in time log n + const cannot be
performed in a network with a linear number of links.
It turns out that the problem of minimizing the cost of gossiping with a given running time
has a different flavor in the case of the number of calls and of the number of links. While
the same algorithms provide upper bounds in both cases, the techniques used to prove lower
bounds are different and results concerning one of these performance measures do not seem
to imply meaningful bounds for the other, in any straightforward way.
The paper is organized as follows. In section 2 we introduce the terminology and state some
preliminary results used in the sequel. Section 3 is devoted to the description of a class of
gossiping algorithms and computing their running time, number of calls and number of links
used for communication. These results yield upper bounds on the minimum cost of gossiping
with a given running time. In section 4 we establish lower bounds on the number of calls
in gossiping with a given running time. In section 5 we give lower bounds on the number
of links used in gossiping with a given running time. In section 6 we derive consequences
of previous results by applying them with appropriate parameter values. Finally, section 7
contains conclusions and open problems.
2 Preliminaries
The set of communicating nodes is denoted by X and its size is denoted by n. A calling
scheme S on the set X is a multigraph on X whose edges are labeled with natural numbers
1; :::; t, so that edges sharing a common node have different labels. Edges with label i
represent calls made in the ith time unit. The number of labels is called the running time
of the scheme and the number of edges is called the number of calls of the scheme. The
corresponding multigraph is called the graph of calls of the scheme S. The underlying graph
of a calling scheme S is the simple graph on the set X of nodes in which adjacent nodes
are those joined by at least one edge in S. This is the minimal network that supports the
scheme S. The number of edges in the underlying graph of S is called the number of links
used by S.
Upon completion of S the node v knows the value of the node w if there exists an ascending
path from w to v in S, i.e., a path with increasing labels on edges. The set of nodes who know
the value of v upon completion of S is denoted by K(v) and the set of nodes whose value v
knows upon completion of S is denoted by K \Gamma (v). If K(v) = X for every node v, the
calling scheme S is called a gossiping scheme or gossiping algorithm. The (total) knowledge
upon completion of the calling scheme S is the sum of jK \Gamma (v)j over all nodes v. The knowledge
jK \Gamma (v)j after i rounds is at most 2^i , for every node v. The knowledge at the
end of a gossiping scheme is n^2 .
Lemma 2.1 1. If the calling scheme has k calls then jK(v)j and jK \Gamma (v)j are at most k + 1,
for every node v.
2. If the running time of a calling scheme is t then jK(v)j and jK \Gamma (v)j are at most 2^t , for every
node v.
Lemma 2.2 If jK(v)j = k, then the time required for the remaining nodes to learn the
value of v is at least log n - log k.
Proof: One of the k informed nodes has to inform at least (n - k)/k
other nodes, which requires
time at least log((n - k)/k + 1) = log n - log k. 2
All logarithms are with base 2. The notation
O,\Omega and \Theta is standard. We use o(f(n)) (resp.
!(f(n))) to denote the class of functions g(n) such that g(n)/f(n)
converges to 0 (respectively, to infinity), as n
grows.
3 Gossiping algorithms and upper bounds
In this section we present a class of gossiping algorithms that provide good time and cost
trade-offs, both in the case when cost is measured by the number of calls and when it is
measured by the number of links used for communication. Two important graphs will be
used in the construction of our schemes. The first is the k-dimensional hypercube H k . This
is the graph on 2 k nodes labeled with all binary sequences of length k. Nodes are adjacent
iff their labels differ in exactly one position. Nodes whose labels differ in the jth position
are called j-neighbors.
The second graph is the k-broadcasting tree B k . It is defined by induction on k. B 0 is a
single node v. B k+1 is obtained from B k by attaching a different new node to every node of
B k . The set of all new edges is called the (k + 1)th layer in B k+1 . The initial node v is called
the root of the broadcasting tree.
Hypercubes and broadcasting trees are important for gossiping. Giving label j to edges of
the hypercube H k joining j-neighbors yields a gossiping scheme with the smallest running
time k. The cost of this scheme, however, is very large: it uses k2^(k-1) calls and k2^(k-1) links.
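As a quick illustrative check of these counts (ours, not part of the paper's argument), the following simulation runs the hypercube scheme for n = 2^k nodes and verifies that k rounds and k2^(k-1) calls suffice for every node to learn every value.

import java.util.BitSet;

public class HypercubeGossip {
    public static void main(String[] args) {
        int k = 4;
        int n = 1 << k;
        BitSet[] known = new BitSet[n];
        for (int v = 0; v < n; v++) {
            known[v] = new BitSet(n);
            known[v].set(v);                 // initially each node knows only its own value
        }
        int calls = 0;
        for (int j = 0; j < k; j++) {        // round j+1: j-neighbors exchange everything
            for (int v = 0; v < n; v++) {
                int w = v ^ (1 << j);
                if (v < w) {                  // each pair communicates once per round
                    known[v].or(known[w]);
                    known[w].or(known[v]);
                    calls++;
                }
            }
        }
        boolean complete = true;
        for (int v = 0; v < n; v++) complete &= (known[v].cardinality() == n);
        System.out.println("rounds=" + k + " calls=" + calls + " complete=" + complete);
        // Expected: calls = k * 2^(k-1) and complete = true.
    }
}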
On the other hand, broadcasting trees yield gossiping schemes with small cost but large time.
Replace every edge in layers 2; :::; k by two edges: one with label k - i + 1 and
the other with label k + i - 1 (for an edge in layer i). Give label k to the edge in layer 1. The obtained gossiping
scheme first gathers all values in the root and then broadcasts them all to all nodes. Its
running time is 2k - 1 and its cost is very low: if n = 2^k is the number of nodes, it has the
optimal number n - 1 of links and it uses 2n - 3 calls, only one call more than the absolute
minimum.
In order to save gossiping time at a given cost or to lower cost with a given running time,
it is advantageous to use a combination of the two above schemes. Let k = dlog ne - 1 and x =
n - 2^k . Let r <= k and s = k - r. We describe the gossiping
algorithm COT(n; r). (COT stands for Cube of Trees.) Consider the hypercube H r and let
each of its nodes be the root of a broadcasting tree B s . Trees rooted at distinct nodes of
H r are disjoint. There are 2 k nodes in all trees. Attach each of the remaining x nodes to a
distinct node in one of the trees. Define the set of edges incident to these nodes to be the
(s + 1)th layer. Replace each edge of layers 1; :::; s + 1 by two edges with labels s + 2 - i and
s + r + 1 + i (for an edge in layer i). Finally give label s + 1 + i to edges of the hypercube H r
joining i-neighbors.
The above described gossiping scheme works as follows: first information from all nodes of
the tree rooted at a given node of the hypercube is gathered in this node. Then gossiping
is executed inside the hypercube H r among all its nodes. At this point all nodes of the
hypercube know all values. Finally each node of the hypercube broadcasts the complete
information to all nodes of the tree rooted at it. The underlying graph of the scheme
COT(n; r) is the undirected version of the graph H r;s used in [6] for broadcasting.
Theorem 3.1 The gossiping algorithm COT(n; r) has running time 2(k - r) + r + 2, uses
2(n - 2^r) + r2^(r-1) calls and n - 2^r + r2^(r-1) links.
Proof: Gathering information in nodes of H r takes time s + 1 = k - r + 1, gossiping in H r takes time
r and broadcasting complete information in trees takes time s + 1, for a total of 2(k - r) + r + 2.
Gathering information in nodes of H r uses 2^r (2^s - 1) + x = n - 2^r calls, gossiping in H r uses r2^(r-1)
calls and broadcasting complete information in trees uses again n - 2^r calls, for a
total of 2(n - 2^r) + r2^(r-1).
The number of links in the hypercube H r is r2^(r-1) , the total number of links in all trees is
n - 2^r , for a total of n - 2^r + r2^(r-1). 2
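As an illustration of the resulting trade-off (ours, not from the paper), the following snippet tabulates T, C and L for n = 1024 and several values of r, using the counts stated in Theorem 3.1 above with k = dlog ne - 1.

public class CotCosts {
    public static void main(String[] args) {
        long n = 1024;
        int ceilLog = 64 - Long.numberOfLeadingZeros(n - 1);  // = ceil(log2 n) for n > 1
        int k = ceilLog - 1;
        for (int r = 1; r <= k; r += 2) {
            long pow = 1L << r;                                // 2^r
            long time = 2L * (k - r) + r + 2;
            long calls = 2L * (n - pow) + r * (pow / 2);
            long links = (n - pow) + r * (pow / 2);
            System.out.printf("r=%d  T=%d  C=%d  L=%d%n", r, time, calls, links);
        }
    }
}

For growing r the running time decreases from nearly 2 log n towards log n, while the numbers of calls and links grow from roughly 2n and n towards order n log n, which is the trade-off quantified in the corollary below.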
The above theorem yields upper bounds on the cost of gossiping with a given running time.
It will be convenient for our purposes to formulate them in two versions:
Corollary 3.1 For any functions t; r : N ! N , such that t(n); r(n) - dlog ne, there exists
a gossiping algorithm
1. with running time 2dlog ne - r(n), number of calls C 2 2n + O(r(n)2^r(n) ) and using
L 2 n + O(r(n)2^r(n) ) links;
2. with running time dlog ne + t(n), number of calls C 2 2n + O(n log n/2^t(n) ) and
using L 2 n + O(n log n/2^t(n) )
links.
Proof: 1. Straightforward.
2. Use part 1. for r(n) = dlog ne - t(n). 2
The above corollary shows that there exists a gossiping algorithm whose time and cost are
both asymptotically optimal, i.e., whose running time is (1 + o(1)) log n and which uses
2n + o(n) calls and n + o(n) links. To this end it suffices to take, e.g., t(n) = 2 log log n.
However, the results of the following sections will enable us to establish time and cost trade-offs
more precisely.
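For completeness, here is the routine substitution behind part 2, written out as a sketch in LaTeX notation, assuming the choice r(n) = dlog ne - t(n) used in the proof.

% With r(n) = \lceil\log n\rceil - t(n), part 1 gives
\[
  r(n)\,2^{\,r(n)}
  \;\le\; \lceil\log n\rceil \cdot 2^{\,\lceil\log n\rceil - t(n)}
  \;\le\; \frac{2n\,(\log n + 1)}{2^{\,t(n)}}
  \;=\; O\!\Bigl(\frac{n\log n}{2^{\,t(n)}}\Bigr),
\]
% hence C is in 2n + O(n log n / 2^{t(n)}) and L is in n + O(n log n / 2^{t(n)}).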
4 Lower bounds on the number of calls
In this section we give two lower bounds on the number of calls in gossiping with a given
running time. Each of them provides meaningful consequences for a different range of time
and cost values. The first bound follows directly from a result of Labahn [12] and is useful
for small values of the running time.
Theorem 4.1 Every gossiping algorithm with running time
calls.
The next theorem yields lower bounds on the number of calls in gossiping that are useful
when the running time is larger. We first prove two lemmas.
Lemma 4.1 If a calling scheme has the running time at most t and its graph of calls is a
tree then:
1. there exists a node v such that jK(v)j is at most t + 1;
2. there exists a node v such that jK \Gamma (v)j is at most t + 1.
Proof: We prove only the first part of the lemma: the second part is analogous. Call a node
terminal if there is no ascending path of length larger than 1, starting from v. It suffices
to prove that there exists a terminal node v. Indeed, for such a node, K(v) consists of v
itself and of its neighbors in the tree of calls. The desired inequality follows from the fact
that the number of neighbors of a node in the graph of calls cannot exceed the running time
of the calling scheme.
Choose any node w 0 and suppose that it is not terminal. Choose any ascending path
(w 0 ; w 1 ; w 2 ) of length 2. If w 2 is terminal, we are done, if not, choose any ascending path
(w 2 ; w 3 ; w 4 ) of length 2, and so on. Since labels in each path are strictly increasing and the
graph of calls is a tree, at every step at least one new node is visited. Thus the process must
terminate at some node w k which has to be terminal. 2
Lemma 4.2 If a calling scheme on n nodes has the running time at most t and uses fewer
than n calls, then:
1. there exists a node v such that jK(v)j - t + 1;
2. there exists a node v such that
Proof: Again we prove only the first part of the lemma. Suppose that S is a calling scheme
satisfying the assumptions but violating assertion 1. Let a 1 ; :::; a k be the numbers of nodes in
components of the graph of calls of S. No component has a node v such that jK(v)j - t + 1.
It follows from lemma 4.1 that none of the components can be a tree hence the ith component
must have at least a i edges. Hence the total number of edges in the graph of calls is at least
n, contradicting the assumption on the number of calls in S. 2
Theorem 4.2 Every gossiping algorithm with running time
log
calls.
Proof: Let t be the largest integer such that less than n calls are placed before round t.
Let S be the calling scheme consisting of all calls of S with labels at most t \Gamma 1. Lemma
4.2 implies that after time there is a node v such that jK(v)j - 2log n. (Here K(v) is
taken with respect to the calling scheme S .) By lemma 2.2 the additional time required for
all nodes to learn the value of v is at least log n
Hence
and consequently
be the calling scheme consisting of all calls of S with labels at most t. (The number
of calls in S 1 is at least n.) Lemma 2.1 implies that after the first t rounds,
for every node v 2 X. (Here sets K \Gamma (v) are taken with respect to the calling scheme S 1 .)
consider the calling scheme S 2 consisting of the first a(n) calls
placed after round t (order calls in the same round arbitrarily). Lemma 2.1 implies that,
for every node v 2 X, taken with respect to S 2 . Thus,
upon completion of all calls in schemes S 1 and S 2 ,
for every node v 2 X.
Now at most remain to be placed. Denote by S 3 the scheme consisting of these
remaining calls. By lemma 4.2 there exists a node w, such that
now taken with respect to the scheme S 3 . It follows that upon completion of all
calls in schemes S 1 , S 2 and S 3 , i.e., at the end of the scheme S, node w knows the values of
at most
nodes. Since S is a gossiping scheme, we must have
whence
log
This concludes the proof. 2
5 Lower bounds on the number of links
In this section we establish two lower bounds on the number of links used by a gossiping
scheme with a given running time. The first bound concerns the case when the running time
is small.
Theorem 5.1 Every gossiping algorithm with running time T - log n+c log log log n, where
uses L 2 !(n(log log n) d ) links, where d
Before proving the theorem we fix some additional terminology and prove several technical
lemmas. Consider a calling scheme with running time T . Let
log log log n, c ! 1. Suppose that the number of links used by this scheme is
log log log n, for sufficiently large n. We will prove that the
considered calling scheme is not a gossiping scheme. Suppose it is.
A node v is called weak after round i of the scheme if jK \Gamma (v)j is at most 1
a node that
is not weak is called strong. A call between nodes v and w in round i is said to be ff-increasing
round i is at most ff times larger than
this round. Let
In every round i consider the following classes of calls:
A - Calls between weak nodes,
ffl)-increasing calls not belonging to the class A,
remaining calls.
The idea of the proof is to show that in many rounds there are few nodes that are either
or participate in calls of class C and consequently the increase of knowledge in these
rounds is too slow to enable achieving knowledge n 2 upon completion of the scheme. Among
our arguments many hold only for sufficiently large n. This does not cause any problems,
since the result is of asymptotic nature. We skip the phrase "for sufficiently large n" for the
sake of brevity.
We start with a lower bound on the number of strong nodes.
Lemma 5.1 In every round there are at least 3
strong nodes.
Proof: After every round i the knowledge K is at least because in the remaining
log rounds knowledge can increase at most 2 log n+f(n)\Gammai times and the final
knowledge must be n 2 . Let p be the number of strong nodes and the number of weak
nodes after the ith round. After the ith round the knowledge K is at most p2 i
p)2 i\Gammaf (n)\Gamma2 , hence
which implies
:The aim of the next two lemmas is to give an upper bound on the size of the class C.
Define the forbidden distance to be the maximum number k such that if a call of class C has
been placed on a link in round i then no call of this class is placed on this link in rounds
Lemma 5.2 The forbidden distance is at least 2 f(n)+b(n)+8 .
Proof: Suppose that a call of class C has been placed on link
)j be the amount of information in each of these nodes after this
round. Let l be the minimum positive integer such that a call of class C is placed on link e
in round i l. We will show that l ? 2 f(n)+b(n)+8 . Since the call on link e in round i was in
the class C, at least one of the nodes v 1 or v 2 was strong after round i \Gamma 1. Thus
Consider the increase of the number
after round 2. We have
the upper bound requiring that v j communicate in every round
having maximum and mutually disjoint information. On the other hand, jK
w after round inequality was already true after round i.
After round l we have
hence the increase of the number l is at most
In view of inequality 1 the right hand side of the above is at most
Since the call in round i+l on link e is in the class C, it is not in the class B and consequently
the number must increase in round times. Hence
we get
which implies
and finally
l
Lemma 5.3 jCj - nlog n
Proof: Since the total number of rounds is less than 2log n, there are at most 2log n
calls of class C on every link. The total number of links is at most
:The next two lemmas show that in many rounds there are many strong nodes that do not
participate in calls of class C.
Call a round essential if there are at most n
calls of class C in this round.
Lemma 5.4 At least Trounds are essential.
Proof: Otherwise more than Trounds would have more than n
calls of class C, for a
total of more than2 log n \Delta n
which contradicts lemma 5.3. 2
Lemma 5.5 In every essential round there are at least n
strong nodes that do not
participate in calls of class C.
Proof: By lemma 5.1 there are at most n(1 \Gamma 3
nodes in every round. By
definition there are at most n
calls of class C in every essential round. At most n
nodes can participate in these calls. Hence the total number of nodes that are either weak
or participate in a call of class C is at most
in every essential round. 2
The next two lemmas show that in many rounds the rate of knowledge increase can be
bounded strictly below 2.
Call a pair of nodes fv; wg red in round i if (w)j is at least 1
round sum increases at most 2 \Gamma ffl times in round i; otherwise call the pair
fv; wg white in round i.
Lemma 5.6 In every essential round there are at least n
red pairs of
nodes.
Proof: Fix an essential round i. A strong node v that does not participate in a call of class
C, either participates in a call of class B or does not communicate at all in round i. By
lemma 5.5, there are either at least n
nodes of the first type or of the second type. In
the first case there are at least n
calls in the class B because every such call involves at
least one strong node (otherwise it would be in the class A). All pairs of nodes in these calls
are red, which proves the lemma in this case. In the second case partition nodes that do not
communicate in the ith round into disjoint pairs arbitrarily. does
not increase at all in such pairs in the ith round and at least n
pairs contain a strong
node in this case, hence they are red. 2
Lemma 5.7 In every essential round the total knowledge K increases at most
times; where
Proof: For simplicity assume that the number of nodes is even, it will be clear how to
modify the argument otherwise. Fix an essential round i. By lemma 5.6 there are at least
disjoint red pairs in round i. For every such pair fv; wg,
after round and the increase of (w)j in this round is at most 2 \Gamma ffl times.
For pairs fv; wg that are white in round i, and the
increase of (w)j is at most 2 times.
We want to establish an upper bound on the rate of knowledge increase in round i. We will
compute this rate as a fraction R whose numerator is the sum of
disjoint pairs of nodes after round i and the denominator is the corresponding sum before
round i.
The value of R cannot decrease if the number of red pairs is decreased to n
and the sum
lowered to 1
in every red pair, while the
number of white pairs is increased to n\Gamma n
and the sum
increased to 2 i in every white pair. Also R cannot decrease if we assume that the
increase of times in red pairs and 2 times in white pairs. Hence
we get
Denoting simplifying we get
4x
4x
and finally
. 2
Proof of theorem 5.1: Denote, as before,
. By lemmas 5.4 and
5.7, knowledge increases at most
times in at least 1log n rounds. In all remaining
rounds it increases at most 2 times. Hence in order to show that our scheme is not a gossiping
scheme it suffices to show
i.e.,
log log log n, for log log log n, for d
log n:
g(n) . The latter inequality implies h(n) - 2f(n). Since
e , we have
2:5
for sufficiently large n and thus
In view of h(n) - 2f(n) we have
which implies inequality 2. 2
The last result of this section gives a meaningful lower bound on the number of links when
the running time is in the medium or large range.
Theorem 5.2 Every gossiping algorithm with running time T - 2log
uses
log n
links.
Proof: We may assume that the
conclusion is trivial. Suppose that L -
16log n . Take a spanning tree of the underlying
graph, with root k, diameter at most 2log n and maximum degree at most 2log n. Such a
tree must exist for the gossiping to be completed in time less than 2log n. Color all links
of this tree black and all other links (at most 2 r(n)
red links to the
tree one by one, each time recoloring red those black links which appear in a newly created
cycle. If the link fv; wg is added, this causes recoloring red links on the paths joining v with
k and w with k in the tree. (Some of them may have been recolored already previously.)
Hence adding a new red link causes recoloring at most 2log n black links. After adding at
most 2 r(n)
red links, the total number of red links at the end of the recoloring process
is at most
which is less than 2 r(n)\Gamma2 for sufficiently large n, in view of
Since links that are red at the end of the recoloring process are exactly those situated in
cycles in the underlying graph, this graph has z ! 2 r(n)\Gamma2 nodes situated in cycles. Hence
there exists a tree D attached to only one node d in some cycle, such that
z
Case 1. 2n
nThe value of some node v in D reaches the node d after time larger than log 2n
1. Broadcasting the value of v from d to all nodes outside of D requires time at least
log n= log n \Gamma 1. Hence the total time of gossiping exceeds 2log
Case 2. jDj ? nSince the maximum degree of D is at most 2log n, the tree D contains a subtree Y , such
that 2n
. The rest of the argument is as in Case 1, with D replaced
by Y . 2
6 Discussion
We have two pairs of bounds on the minimum number of calls C in gossiping with a given
running time T . If ne
log
). The first pair of
bounds is useful for small t(n), e.g., when t(n) 2 O(log log n), i.e., when gossiping time is
small. They yield the following corollary showing how C gradually decreases from \Theta(nlog n)
to the asymptotically optimal range 2n + o(n), as restrictions on T are being relaxed.
Corollary 6.1 If ne
1. If t(n) is constant then C 2 \Theta(nlog n).
2. If t(n) 2 log log
3. If t(n) - log log n \Gamma d, for a constant d, then C 2 O(n).
4. If t(n) 2 log log n
The lower bound C 2 \Omega\Gamma nlog n
), following from [12], becomes trivial when t(n) ? log log n.
For even larger values of gossiping time our second pair of bounds can be applied. For
example, it gives a fairly precise estimate of the minimum number of calls when the running
time is in the medium range fflog n, 2.
Corollary 6.2 If the running time of a gossiping algorithm is
then C 2 2n +O(n 2\Gammaff log n) and C
log
The next corollary corresponds to the situation when the gossiping time is fairly large. In
this case it is more natural to reverse the problem: what is the minimum running time of
gossiping when the number of calls has to be kept very small?
Corollary 6.3 If the number of calls in a gossiping algorithm is
is polylogarithmic in n, then its running time T is 2log
Proof: Suppose not, and let
2\Omega\Gamma367 n). Then r(n) - dlog n, for
some constant d and C
log
We next turn attention to the trade-off between the time T and the number of links L. For
small values of T the gap between our upper and lower bounds is larger than in the previous
case. Corollary 3.1 and theorem 5.1 imply, for example, that if
a constant, then L 2 O(nlog n) and L 2 !(n(log log n) d ), for d ! 1. It remains open if
n) in this case.
The last pair of bounds, applicable for larger values of gossiping time
follows from corollary 3.1 and theorem 5.2. In this case L 2
log n ). For the medium range of gossiping time fflog n, gives an
even more precise estimate of L than previously of C.
Corollary 6.4 If the running time of a gossiping algorithm is
then log n) and L
log n
Finally, a result similar to corollary 6.3 holds for the number of links.
Corollary 6.5 If the number of links used by a gossiping algorithm is
c(n) is polylogarithmic in n, then its running time T is 2log
7 Conclusion
We established upper and lower bounds on the minimum number of calls and the minimum
number of links used by a gossiping scheme with a given running time. Our algorithms,
which turned out to be cost-efficient for the whole range of running time values, follow the
same simple pattern: gather information in nodes of a hypercube of appropriately chosen size
using a separate broadcasting tree for each node, then gossip in the hypercube in minimal
time and finally broadcast complete information to all remaining nodes, using again the same
broadcasting trees. The tree part of the scheme uses few calls and few links but a lot of
time, as it is executed twice, while the hypercube part is fast but uses many calls and many
links. Thus a suitable balance between these parts must be maintained to get low cost for a
given running time.
Our bounds leave very small gaps. For example, if our upper bound on C is
log n) and the lower bound is 2n
log leaving a gap within a factor of
O(log 3 n) in the part of the number of calls exceeding the absolute minimum 4. In the
case of the number of links L, our bounds are even tighter for this range of running time.
For the same value our upper bound on L is
log n) and
the lower bound is n
log n ), leaving a gap within a factor of O(log 2 n) in the part of the
number of links exceeding the absolute minimum n \Gamma 1.
Further tightening of these bounds, for all values of running time, remains a natural open
problem yielded by our results. We do not know, for example, if it is possible to gossip in
time 3log n using 2n
n) calls and/or n
n) links. It also remains open what is
the minimum value of L when const. We conjecture that L 2 \Theta(nlog n) in this
case.
Another interesting problem is to evaluate the complexity of finding the exact value of the
minimum cost of gossiping with a given running time. Given n and T , can the minimum
number of calls C or the minimum number of links L be found in polynomial time?
In many papers (cf. [4, 9, 10]) gossiping was studied for specific important networks, such as
trees, grids or hypercubes, and the time or the number of calls were minimized separately. It
would be interesting to extend our study by investigating time vs. number of calls trade-offs
in gossiping for these networks as well. Also communication models other than the classical
1-port full-duplex model (cf., e.g., [9]), could be considered in this context.
--R
Gossips and telephones
Communication patterns in task-oriented groups
A problem with telephones
Gossiping in grid graphs
Methods and problems of communication in usual net- works
Tight bounds on minimum broadcast networks
A cure for the telephone disease
A survey of gossiping and broadcasting in communication networks
Fast gossiping for the hypercube
The telephone problem for trees
Kernels of minimum size gossip schemes
Some minimum gossip graphs
The distribution of completion times for random communication in a task-oriented group
Time and call limited telephone problem
--TR
--CTR
Francis C.M. Lau , Shi-Heng Zhang, Fast Gossiping in Square Meshes/Tori with Bounded-Size Packets, IEEE Transactions on Parallel and Distributed Systems, v.13 n.4, p.349-358, April 2002
Francis C. M. Lau , S. H. Zhang, Optimal gossiping in paths and cycles, Journal of Discrete Algorithms, v.1 n.5-6, p.461-475, October | gossiping;lower bounds;algorithm |
296347 | High-level design verification of microprocessors via error modeling. | A design verification methodology for microprocessor hardware based on modeling design errors and generating simulation vectors for the modeled errors via physical fault testing techniques is presented. We have systematically collected design error data from a number of microprocessor design projects. The error data is used to derive error models suitable for design verification testing. A class of basic error models is identified and shown to yield tests that provide good coverage of common error types. To improve coverage for more complex errors, a new class of conditional error models is introduced. An experiment to evaluate the effectiveness of our methodology is presented. Single actual design errors are injected into a correct design, and it is determined if the methodology will generate a test that detects the actual errors. The experiment has been conducted for two microprocessor designs and the results indicate that very high coverage of actual design errors can be obtained with test sets that are complete for a small number of synthetic error models. | INTRODUCTION
It is well known that about a third of the cost of developing a new microprocessor is devoted
to hardware debugging and testing [25]. The inadequacy of existing hardware verification
methods is graphically illustrated by the Pentium's FDIV error, which cost its manufacturer
an estimated $500 million. The development of practical methodologies for hardware
verification has long been handicapped by two related problems: (1) the lack of published
data on the nature, frequency, and severity of the design errors occurring in large-scale
design projects; and (2) the absence of a verification methodology whose effectiveness can
readily be quantified. (A preliminary version of this paper was presented in [4] at the 1997
IEEE International High Level Design Validation and Test Workshop, Oakland, California,
November 14-15, 1997.)
There are two broad approaches to hardware design verification: formal and simula-
tion-based. Formal methods try to verify the correctness of a system by using mathematical
proofs [32]. Such methods implicitly consider all possible behavior of the models
representing the system and its specification, whereas simulation-based methods can only
consider a limited range of behaviors. The accuracy and completeness of the system and
specification models is a fundamental limitation for any formal method.
Simulation-based design verification tries to uncover design errors by detecting a cir-
cuit's faulty behavior when deterministic or pseudo-random tests (simulation vectors) are
applied. Microprocessors are usually verified by simulation-based methods, but require an
extremely large number of simulation vectors whose coverage is often uncertain.
Hand-written test cases form the first line of defense against bugs, focusing on basic
functionality and important corner (exceptional) cases. These tests are very effective in the
beginning of the debug phase, but lose their usefulness later. Recently, tools have been
developed to assist in the generation of focused tests [13,20]. Although these tools can significantly
increase design productivity, they are far from being fully automated.
The most widely used method to generate verification tests automatically is random test
generation. It provides a cheap way to take advantage of the billion-cycles-a-day simulation
capacity of networked workstations available in many big design organizations.
Sophisticated systems have been developed that are biased towards corner cases, thus
improving the quality of the tests significantly [2]. Advances in simulator and emulator
technology have enabled the use of very large sets as test stimuli such as existing application
and system software. Successfully booting the operating system has become a basic
quality requirement [17,25].
Common to all the test generation techniques mentioned above is that they are not targeted
at specific design errors. This poses the problem of quantifying the effectiveness of a
test set, such as the number of errors covered. Various coverage metrics have been proposed
to address this problem. These include code coverage metrics from software testing
[2,7,11], finite state machine coverage [20,22,28], architectural event coverage [22], and
observability-based metrics [16]. A shortcoming of all these metrics is that the relationship
between the metric and the detection of classes of design errors is not well understood
A different approach is to use synthetic design error models to guide test generation.
This exploits the similarity between hardware design verification and physical fault test-
ing, as illustrated by Figure 1. For example, Al-Asaad and Hayes [3] define a class of
design error models for gate-level combinational circuits. They describe how each of these
errors can be mapped onto single-stuck line (SSL) faults that can be targeted with standard
automated test pattern generation (ATPG) tools. This provides a method to generate tests
with a provably high coverage for certain classes of modeled errors.
A second method in this class stems from the area of software testing. Mutation testing
[15] considers programs, termed mutants, that differ from the program under test by a single
small error, such as changing the operator from add to subtract. The rationale for the
approach is supported by two hypotheses: 1) programmers write programs that are close to
correct ones, and 2) a test set that distinguishes a program from all its mutants is also sensitive
to more complex errors. Although considered too costly for wide-scale industrial
use, mutation testing is one of the few approaches that has yielded an automatic test generation
system for software testing, as well as a quantitative measure of error coverage
(mutation score) [24]. Recently, Al Hayek and Robach [5] have successfully applied mutation
testing to hardware design verification in the case of small VHDL modules.
This paper addresses design verification via error modeling and test generation for
complex high-level designs such as microprocessors. A block diagram summarizing our
methodology is shown in Figure 2. An implementation to be verified and its specification
are given. For microprocessors, the specification is typically the instruction set architecture
(ISA), and the implementation is a description of the new design in a hardware
description language (HDL) such as VHDL or Verilog. In this approach, synthetic error
models are used to guide test generation. The tests are applied to simulated models of both
the implementation and the specification. A discrepancy between the two simulation outcomes
indicates an error, either in the implementation or in the specification.
Section 2 describes our method for design error collection and presents some preliminary
design error statistics that we have collected. Section 3 discusses design error modeling
and illustrates test generation with these models. An experimental evaluation of our
methodology and of the error models is presented in Section 4. Section 5 discusses the
results and gives some concluding remarks.
2. DESIGN ERROR COLLECTION
Hardware design verification and physical fault testing are closely related at the conceptual
level [3]. The basic task of physical fault testing (hardware design verification) is to generate
tests that distinguish the correct circuit from faulty (erroneous) ones.
Figure 1. Correspondence between design verification and physical fault testing.
The class of faulty
circuits to be considered is defined by a logical fault model. Logical fault models represent
the effect of physical faults on the behavior of the system, and free us from having to deal
with the plethora of physical fault types directly. The most widely used logical fault model,
the SSL model, combines simplicity with the fact that it forces each line in the circuit to be
exercised. Typical hardware design methodologies employ hardware description languages
as their input medium and use previously designed high-level modules. To capture the richness
of this design environment, the SSL model needs to be supplemented with additional
error models.
The lack of published data on the nature, frequency, and severity of the design errors
occurring in large-scale projects is a serious obstacle to the development of error models
for hardware design verification. Although bug reports are collected and analyzed internally
in industrial design projects the results are rarely published. Examples of user-oriented
bug lists can be found in [21,26]. Some insight into what can go wrong in a large
processor design project is provided in [14].
The above considerations have led us to implement a systematic method for collecting
design errors. Our method uses the CVS revision management tool [12] and targets ongoing
design projects at the University of Michigan, including the PUMA high-performance
microprocessor project [9] and various class projects in computer architecture and VLSI
design, all of which employ Verilog as the hardware description medium.
Figure 2. Deployment of proposed design verification methodology.
Designers are
asked to archive a new revision via CVS whenever a design error is corrected or whenever
the design process is interrupted, making it possible to isolate single design errors. We
have augmented CVS so that each time a design change is entered, the designer is
prompted to fill out a standardized multiple-choice questionnaire, which attempts to gather
four key pieces of information: (1) the motivation for revising the design; (2) the method
by which a bug was detected; (3) a generic design-error class to which the bug belongs,
and (4) a short narrative description of the bug. A uniform reporting method such as this
greatly simplifies the analysis of the errors. A sample error report using our standard questionnaire
is shown in Figure 3. The error classification shown in the report form is the
result of the analysis of error data from several earlier design projects.
Design error data has been collected so far from four VLSI design class projects that
involve implementing the DLX microprocessor [19], from the implementation of the LC-2
microprocessor [29] which is described later, and from preliminary designs of PUMA's
fixed-point and floating-point units [9]. The distributions found for the various representative
design errors are summarized in Table 1. Error types that occurred with very low frequency
are combined in the "others" category in the table.
Figure 3. Sample error report. (The standardized questionnaire records the motivation for the
revision, how the bug was detected, and the generic design-error class, followed by a short
narrative description; in this sample the description reads "Used wrong field from instruction".)
3. ERROR MODELING
Standard simulation and logic synthesis tools have the side effect of detecting some design
error categories of Table 1, and hence there is no need to develop models for those particular
errors. For example a simulator such as Verilog-XL [10] flags all Verilog syntax errors
(category 9), declaration statement errors (category 12), and incorrect port lists of modules
(category 16). Also, logic synthesis tools, such as those of Synopsys, usually flag all wrong
bus width errors (category 10) and sensitivity-list errors in the always statement (category
13).
To be useful for design verification, error models should satisfy three requirements: (1)
tests (simulation vectors) that provide complete coverage of the modeled errors should
also provide very high coverage of actual design errors; (2) the modeled errors should be
amenable to automated test generation; (3) the number of modeled errors should be relatively
small. In practice, the third requirement means that error models that define a number
of error instances linear, or at most quadratic in the size of the circuit are preferred.
The error models need not mimic actual design bugs precisely, but the tests derived from
complete coverage of modeled errors should provide very good coverage of actual design
bugs.
Table 1. Actual error distributions from three groups of design projects.
Design error category                Relative frequency [%]
1. Wrong signal source 29.9 28.4 25.0
2. Conceptual error 39.0 19.1 0.0
3. Case statement
4. Gate or module input 11.2 9.8 0.0
5. Wrong gate/module type 12.1
6. Wrong constant 0.4 5.7 10.0
7. Logical expression wrong
8. Missing input(s) 0.0 5.2 0.0
9. Verilog syntax error
10. Bit width error 0.0 2.2 15.0
11. If statement 1.1 1.6 5.0
12. Declaration statement
13. Always statement 0.4 1.4 5.0
14. FSM error 3.1
15. Wrong operator 1.7 0.3 0.0
16. Others 1.1 5.8 25.0
3.1 Basic error models
A set of error models that satisfy the requirements for the restricted case of gate-level logic
circuits was developed in [3]. Several of these models appear useful for the higher-level
(RTL) designs found in Verilog descriptions as well. From the actual error data in Table 1,
we derive the following set of five basic error models:
. Bus SSL error (SSL): A bus of one or more lines is (totally) stuck-at-0 or stuck-at-
1 if all lines in the bus are stuck at logic level 0 or 1. This generalization of the
standard SSL model was introduced in [6] in the context of physical fault testing.
Many of the design errors listed in Table 1 can be modeled as SSL errors
(categories 4 and 6).
. Module substitution error (MSE): This refers to mistakenly replacing a module by
another module with the same number of inputs and outputs (category 5). This
class includes word gate substitution errors and extra/missing inversion errors.
. Bus order error (BOE): This refers to incorrectly ordering the bits in a bus
(category 16). Bus flipping appears to be the most common form of BOE.
. Bus source error (BSE): This error corresponds to connecting a module input to a
wrong source (category 1).
. Bus driver error (BDE): This refers to mistakenly driving a bus with two sources
(category 16).
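The following Python sketch shows how these basic error models can be injected into a toy word-level netlist for error simulation. The netlist encoding (a mapping from each signal to its driving operator and input list) and the operator names are invented for illustration and are not the representation used by the authors.

import copy

# A design is a dict: signal name -> (operator, [input signal names]).
GOOD = {
    "g": ("AND", ["a", "b"]),
    "p": ("XOR", ["a", "b"]),
    "s": ("XOR", ["p", "cin"]),
}

def inject_ssl(design, sig, value):
    d = copy.deepcopy(design)
    d[sig] = ("CONST%d" % value, [])          # bus SSL error: bus totally stuck at 0 or 1
    return d

def inject_mse(design, sig, new_op):
    d = copy.deepcopy(design)
    op, ins = d[sig]
    d[sig] = (new_op, ins)                    # module substitution error
    return d

def inject_boe(design, sig):
    d = copy.deepcopy(design)
    op, ins = d[sig]
    d[sig] = ("BITREV", [sig + "_pre"])       # bus order error: flip the bit order
    d[sig + "_pre"] = (op, ins)
    return d

def inject_bse(design, sig, port, wrong_src):
    d = copy.deepcopy(design)
    op, ins = d[sig]
    ins = list(ins)
    ins[port] = wrong_src                     # bus source error: wrong driver on one input
    d[sig] = (op, ins)
    return d

if __name__ == "__main__":
    print(inject_bse(GOOD, "s", 0, "g"))      # connect the first input of s to g instead of p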
Direct generation of tests for the basic error models is difficult, and is not supported by
currently available CAD tools. While the errors can be easily activated, propagation of
their effects can be difficult, especially when modules or behavioral constructs do not have
transparent operating modes. In the following we demonstrate manual test generation for
various basic error models.
3.2 Test generation examples
Because of their relative simplicity, the foregoing error models allow tests to be generated
and error coverage evaluated for RTL circuits of moderate size. We analyzed the test requirements
of two representative combinational circuits: a carry-lookahead adder and an
ALU. Since suitable RTL tools are not available, test generation was done manually, but in
a systematic manner that could readily be automated. Three basic error models are consid-
ered: BOEs, MSEs, and BSEs. Test generation for SSLs is discussed in [1,6] and no tests
are needed for BDEs, since the circuits under consideration do not have tristate buses.
Example 1: The 74283 adder
An RTL model [18] of the 74283 4-bit fast adder [30] appears in Figure 4. It consists of a
carry-lookahead generator (CLG) and a few word gates. We show how to generate tests for
some design error models in the adder and then we discuss the overall coverage of the targeted
error models.
BOE on A bus: A possible bus value that activates the error is A
an unknown value. The erroneous value of A is thus A Hence, we can represent
the error by represents the error signal which is 1 (0)
in the good circuit and 0 (1) in the erroneous circuit. One way to propagate this error
through the AND gate G 1 is to set Hence, we get G
and G Now for the module CLG we have
X. The resulting outputs are This implies that
hence the error is not detected at the primary outputs. We need to assign more input values
to propagate the error. If we set C
DXXD
DXXD
DXXD
Hence, the error is propagated to S and the complete test vector is (A, B, C
(0XX11XX10).
On generating tests for all BSEs in the adder we find that just 2 tests detect all 33
detectable BSEs, and a single BSE is redundant as shown above. We further targeted all
MSEs in the adder and we found that 3 tests detect all 27 detectable MSEs and proved that
a single MSE (G 3 /XNOR) is redundant. Finally, we found that all BOEs are detected by
the tests generated for BSEs and MSEs. Therefore, complete coverage of BOEs, BSEs,
and MSEs is achieved with only 5 tests.
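The detection of a BOE can also be illustrated at the functional level. The Python sketch below models a generic 4-bit adder rather than the internal CLG structure of the 74283, and exhaustively counts the input vectors that distinguish the correct adder from a copy whose A bus is bit-reversed; it is only a brute-force illustration of error activation and propagation, not the systematic procedure used above.

def adder(a, b, cin):                 # specification: 4-bit add with carry out
    s = a + b + cin
    return s & 0xF, (s >> 4) & 1

def rev4(x):                          # bit-reverse a 4-bit bus (the injected BOE)
    return int("{:04b}".format(x)[::-1], 2)

def adder_with_boe_on_a(a, b, cin):
    return adder(rev4(a), b, cin)

detecting = [(a, b, c)
             for a in range(16) for b in range(16) for c in range(2)
             if adder(a, b, c) != adder_with_boe_on_a(a, b, c)]
print(len(detecting), "of 512 vectors detect the BOE; e.g.", detecting[0])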
Example 2: The c880 ALU
In this example, we try to generate tests for some modeled design errors in the c880 ALU,
a member of the ISCAS-85 benchmark suite [8]. A high-level model based on a Verilog
description of the ALU [23] is shown in Figure 5; it is composed of six modules: an adder,
two multiplexers, a parity unit, and two control units. The circuit has 60 inputs and 26 out-
puts. The gate-level implementation of the ALU has 383 gates.
The design error models to be considered in the c880 are again BOEs, BSEs, and MSEs
(inversion errors on 1-bit signals). We next generate tests for these error models.
BOEs: In general, we attempt to determine a minimum set of assignments needed to detect
each error. Some BOEs are redundant such as the BOE on B (PARITY), but most BOEs
are easily detectable. Consider, for example, the BOE on D. One possible way to activate
the error is to set To propagate the error to a primary output, the path
through IN-MUX and then OUT-MUX is selected. The signal values needed to activate this
path are:
Solving the gate-level logic equations for G and C we get:
All signals not mentioned in the above test have don't care values. We found that just 10
tests detect all 22 detectable BOEs in the c880 and serve to prove that another 2 BOEs are
redundant.
Figure 4. High-level model of the 74283 carry-lookahead adder [18].
MSEs: Tests for BOEs detect most, but not all, inversion errors on multibit buses. In the
process of test generation for the c880 ALU, we noticed a case where a test for an inversion
error on bus A can be found even though the BOE on A is redundant. This is the case when
an n-bit bus (n odd) is fed into a parity function. Testing for inversion errors on 1-bit signals
needs to be considered explicitly, since a BOE on a 1-bit bus is not possible. Most inversion
errors on 1-bit signals in the c880 ALU are detected by the tests generated for BOEs and
BSEs. This is especially true for the control signals to the multiplexers.
3.3 Conditional error model
The preceding examples, as well as prior work on SSL error detection [1,6], show that the
basic error models can be used with RTL circuits, and that high, but not complete, error
coverage can be achieved with small test sets. These results are further reinforced by our
experiments on microprocessor verification (Section 4), which indicate that a large fraction
of actual design errors (67% in one case and 75% in the other) is detected by complete test
sets for the basic errors. To increase coverage of actual errors to the very high levels needed
for design verification, additional error models are required to guide test generation. Many
more complex error models can be derived directly from the actual data of Table 1 to supplement
the basic error types, the following set being representative:
. Bus count error (BCE): This corresponds to defining a module with more or fewer
input buses than required (categories 4 and 8).
Figure 5. High-level model of the c880 ALU.
. Module count error (MCE): This corresponds to incorrectly adding or removing
a module (category 16), which includes the extra/missing word gate errors and the
extra/missing registers.
. Label count error (LCE): This error corresponds to incorrectly adding or
removing the labels of a case statement (category 3).
. Expression structure error (ESE): This includes various deviations from the
correct expression (categories 3, 6, 7, 11, 15), such as extra/missing terms, extra/
missing inversions, wrong operator, and wrong constant.
. State count error (SCE): This error corresponds to an incorrect finite state
machine with an extra or missing state (category 14).
. Next state error (NSE): This error corresponds to incorrect next state function in
a finite state machine (FSM) (category 14).
Although, this extended set of error models increases the number of actual errors that
can be modeled directly, we have found them to be too complex for practical use in manual
or automated test generation. We observed that the more difficult actual errors are
often composed of multiple basic errors, and that the component basic errors interact in
such a way that a test to detect the actual error must be much more specific than a test to
detect any of the component basic errors. Modeling these difficult composite errors
directly is impractical as the number of error instances to be considered is too large and
such composite modeled errors are too complex for automated test generation. However,
as noted earlier, a good error model does not necessarily need to mimic actual errors accu-
rately. What is required is that the error model necessitates the generation of these more
specific tests. To be practical, the complexity of the new error models should be comparable
to that of the basic error models. Furthermore the (unavoidable) increase in the number
of error instances should be controlled to allow trade-offs between test generation effort
and verification confidence. We found that these requirements can be combined by augmenting
the basic error models with a condition.
A conditional error (C,E) consists of a condition C and a basic error E; its interpretation
is that E is only active when C is satisfied. In general, C is a predicate over the signals
in the circuit during some time period. To limit the number of error instances, we restrict C
to a conjunction of terms of the form (y i = w i ), where y i is a signal in the circuit and w i is a constant of
the same bit-width as y i and whose value is either all-0s or all-1s. The number of terms
(condition variables) appearing in C is said to be the order of (C,E). Specifically, we consider
the following conditional error types:
. Conditional single-stuck line (CSSLn) error of order n;
. Conditional bus order error (CBOEn) of order n;
. Conditional bus source error (CBSEn) of order n.
When n = 0, (C,E) reduces to the basic error E from which it is
derived. Higher-order conditional errors enable the generation of more specific tests, but
lead to a greater test generation cost due to the larger number of error instances. For exam-
ple, the number of CSSLn errors on a circuit with N signals is O(N^(n+1)). Although the
total set of all N signals we consider for each term in the condition can possibly be
reduced, CSSLn errors where n > 2 are probably not practical.
For gate-level circuits (where all signals are 1-bit), it can be shown that CSSL1 errors
cover the following basic error models: MSEs (excluding XOR and XNOR gates), missing
2-input gate errors, BSEs, single BCEs (excluding XOR and XNOR gates), and bus driver
errors. That CSSL1 errors cover missing two-input gate errors can be seen as follows.
Consider a two-input AND gate Y=AND(X1,X2) in the correct design; in the erroneous
design, this gate is missing and net Y is identical to net X1. To expose this error we have to
set X1 to 1, X2 to 0, and sensitize Y. Any test that detects the CSSL1 error, (X2=0, Y s-a-0)
in the erroneous design, will also detect the missing gate error. The proof for other gate
types is similar. Higher-order CSSLn errors improve coverage even further.
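The dominance argument for the missing AND gate can be checked exhaustively on a small sensitizing context. In the sketch below the gate output feeds an OR gate so that it can be observed; this surrounding logic is an assumption made only to obtain a self-contained example.

# Correct design: y = x1 & x2, observed through z = y | x3.
# Erroneous design: the AND gate is missing, so y = x1.
# Claim: every test that detects the CSSL1 error (condition x2 == 0, y stuck-at-0)
# in the erroneous design also detects the missing-gate error itself.

def z_correct(x1, x2, x3):      return (x1 & x2) | x3
def z_erroneous(x1, x2, x3):    return x1 | x3            # AND gate missing

def z_err_with_cssl1(x1, x2, x3):
    y = x1                                                # erroneous design ...
    if x2 == 0:                                           # ... with (x2 = 0, y s-a-0) injected
        y = 0
    return y | x3

for x1 in (0, 1):
    for x2 in (0, 1):
        for x3 in (0, 1):
            detects_cssl1 = z_erroneous(x1, x2, x3) != z_err_with_cssl1(x1, x2, x3)
            detects_actual = z_correct(x1, x2, x3) != z_erroneous(x1, x2, x3)
            assert not detects_cssl1 or detects_actual    # dominance holds on this vector
print("dominance verified on all 8 input vectors")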
4. COVERAGE EVALUATION
To show the effectiveness of a verification methodology, one could apply it and a competing
methodology to an unverified design. The methodology that uncovers more (and hard-
er) design errors in a fixed amount of time is more effective. However, for such a comparison
to be practical, fast and efficient high-level test generation tools for our error models
appear to be necessary. Although we have demonstrated such test generation in Section 3.2,
it has yet to be automated. We therefore designed a controlled experiment that approximates
the conditions of the original experiment, while avoiding the need for automated test
generation. The experiment evaluates the effectiveness of our verification methodology
when applied to two student-designed microprocessors. A block diagram of the experimental
set-up is show in Figure 6. As design error models are used to guide test generation, the
effectiveness is closely related to the synthetic error models used.
To evaluate our methodology, a circuit is chosen for which design errors are to be systematically
recorded during its design. Let D 0 be the final, presumably correct design.
From the CVS revision database, the actual errors are extracted and converted such that
they can be injected in the final design D 0 . In the evaluation phase, the design is restored to
an (artificial) erroneous state D 1 by injecting a single actual error into the final design D 0 .
This set-up approximates a realistic on-the-fly design verification scenario. The experiment
answers the question: given D 1 , can the proposed methodology produce a test that
determines D 1 to be erroneous? This is achieved by examining the actual error in D 1 , and
determining if a modeled design error exists that is dominated by the actual error. Let D 2
be the design constructed by injecting the dominated modeled error in D 1 , and let M be the
error model which defines the dominated modeled error. Such a dominated modeled error
has the property that any test that detects the modeled error in D 2 will also detect the
actual error in D 1 . Consequently, if we were to generate a complete test set for every error
defined on D 1 by error model M, D 1 would be found erroneous by that test set. Error detection
is determined as discussed earlier (see Section 1, Figure 2). Note that the concept of
dominance in the context of design verification is slightly different than in physical fault
testing. Unlike in the testing problem, we cannot remove the actual design error from D 1
before injecting the dominated modeled error. This distinction is important because generating
a test for an error of omission, which is generally very hard, becomes easy if given
instead of D 1 .
The erroneous design D 1 considered in this experiment is somewhat artificial. In reality
the design evolves over time as bugs are introduced and eliminated. Only at the very end
of the design process, is the target circuit in a state where it differs from the final design D 0
in just a single design error. Prior to that time, the design may contain more than one
design error. To the extent that the design errors are independent, it does not matter if we
consider a single or multiple design errors at a time. Furthermore, our results are independent
of the order in which one applies the generated tests.
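A minimal sketch of the evaluation loop is given below. The simulate, inject, and test-generation interfaces are placeholders standing in for the Verilog simulators and the (manual) test generation used in the experiment; they are not existing tools, and the code only fixes the structure of the flow.

def detects(test, design_a, design_b, simulate):
    # A test detects a difference if the two simulated output traces differ.
    return simulate(design_a, test) != simulate(design_b, test)

def evaluate_actual_error(d0, actual_error, modeled_errors, gen_test, inject, simulate):
    # d0: final (presumably correct) design.  Returns the first modeled error whose
    # test also exposes the injected actual error, or None if none is found.
    d1 = inject(d0, actual_error)                 # erroneous design under evaluation
    for m in modeled_errors:                      # candidate dominated modeled errors
        d2 = inject(d1, m)                        # modeled error injected on top of d1
        test = gen_test(d1, d2)                   # any test distinguishing d1 from d2
        if test is None:
            continue                              # modeled error redundant in d1
        if detects(test, d0, d1, simulate):       # same test exposes the actual error?
            return m, test
    return None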
We implemented the preceding coverage-evaluation experiment for two small but representative
designs: a simple microprocessor and a pipelined microprocessor. We present
our results in the remainder of this section.
4.1 A simple microprocessor
The Little Computer 2 (LC-2) [29] is a small microprocessor of conventional design used
for teaching purposes at the University of Michigan. It has a representative set of 16 instructions
which is a subset of the instruction sets of most current microprocessors. To serve as
a test case for design verification, one of us designed behavioral and RTL synthesizable
Verilog descriptions for the LC-2. The behavioral model (specification) of the LC-2 consists
of 235 lines of behavioral Verilog code. The RTL design (implementation) consists of
a datapath module described as an interconnection of library modules and a few custom
modules, and a control module described as an FSM with five states. It comprises 921 lines
of Verilog code, excluding the models for library modules such as adders, register files, etc.
A gate-level model of the LC-2 can thus be obtained using logic synthesis tools. The design
errors made during the design of the LC-2 were systematically recorded using our error collection
system (Section 2).
For each actual design error recorded, we derived the necessary conditions to detect it.
An error is detected by an instruction sequence s if the external output signals of the
behavioral and RTL models are distinguished by s. We found that some errors are undetectable
since they do not affect the functionality of the microprocessor. The detection
conditions are used to determine if a modeled error that is dominated by the actual error
Figure 6. Experiment to evaluate the proposed design verification methodology.
can be found. An example where we were able to do that is shown in Figure 7. The error is
a BSE on data input D 1 of the multiplexer attached to the program counter PC. Testing for
an SSL error on D 1 will detect the BSE since the outputs of PC and its incrementer are always
different, i.e., the error is always activated, so testing for this SSL error will propagate the
signal on D 1 to a primary output of the microprocessor. A case where we were not able to
find a modeled error dominated by the actual error is shown in Figure 8. The error occurs
where a signal is assigned a value independent of any condition. However, the correct
implementation requires an if-then-else construct to control the signal assignment. To activate
this error, we need to set ir_out[15:12] == 4'b1101, ir_out[8:6] - 3'b111, and
refers to the contents of the register i in the
register file. An instruction sequence that detects this error is shown in Figure 8.
We analyzed the actual design errors in both the behavioral and RTL designs of the LC-
2, and the results are summarized in Table 2. A total of 20 design errors were made during
the design, of which four errors are easily detected by the Verilog simulator and/or logic
synthesis tools and two are undetectable. The actual design errors are grouped by cate-
gory; the numbers in parentheses refer to the corresponding category in Table 1. The columns
in the table give the type of the simplest dominated modeled error corresponding to
each actual error. For example, among the 4 remaining wrong-signal-source errors, 2 dom-
Figure 7. An example of an actual design error that is dominated by an SSL error (a bus source
error on a data input of the multiplexer feeding the program counter PC).
Figure 8. An example of an actual design error for which no dominated modeled error was found
(a signal assignment missing its controlling if-then-else), and an instruction sequence that
detects the actual error.
inate an SSL error and 2 dominate a BSE error.
We can infer from Table 2 that most errors are detected by tests for SSL errors or BSEs.
About 75% of the actual errors in the LC-2 design can be detected after simulation with
tests for SSL errors and BSEs. The coverage increases to 90% if tests for CSSL1 are
added.
4.2 A pipelined microprocessor
Our second design case study considers the well-known DLX microprocessor [19], which
has more of the features found in contemporary microprocessors. The particular DLX version
considered is a student-written design that implements 44 instructions, has a five-stage
pipeline and branch prediction logic, and consists of 1552 lines of structural Verilog code,
excluding the models for library modules such as adders, registerfiles, etc. The design errors
committed by the student during the design process were systematically recorded using
our error collection system.
For each actual design error we painstakingly derived the requirements to detect it.
Error detection was determined with respect to one of two reference models (specifica-
tions). The first reference model is an ISA model, and as such is not cycle-accurate: only
the changes made to the ISA-visible part of the machine state, that is, to the register file
and memory, can be compared. The second reference model contains information about
the microarchitecture of the implementation and gives a cycle-accurate view of the ISA-
visible part of the machine state (including the program counter). We determined for each
actual error whether it is detectable or not with respect to each reference model. Errors
undetectable with respect to both reference models may arise from the following two rea-
sons: (1) Designers sometimes make changes to don't care features, and log them as
errors. This happens because designers can have a more detailed specification (design intent)
in mind than the one actually specified. (2) Inaccuracies can occur when fixing an error
requires multiple revisions.
We analyzed the detection requirements of each actual error and constructed a modeled
Table 2. Actual design errors and the corresponding dominated modeled errors for LC-2.
(Columns: actual-error category, with the corresponding Table 1 category in parentheses; total;
number easily detected; number undetectable; and the type of the simplest dominated modeled
error: SSL, BSE, CSSL1, or unknown. Rows: wrong signal source (1), expression error (7), bit
width error, missing assignment, wrong constant, unused signal, wrong module (5), always
statement, and a total row.)
error dominated by the actual error, wherever possible. One actual error involved multiple
signal source errors, and is shown in Figure 9. Also shown are the truth tables for the
immediately affected signals; differing entries are shaded. Error detection via fanout Y1
requires setting sensitizing Y1. However, the combination
not achievable and thus error detection via Y1 is not possible. Detection
via fanout Y2 or Y3 requires setting sensitizing Y2 or Y3.
However, blocks error propagation via Y2 further downstream. Hence, the error
detection requirements are: sensitizing Y3.
Now consider the modeled error E 1 = S0 stuck-at-0. Activation of E 1 in D1 requires
sensitizing Y1, Y2 or Y3. As mentioned
before, blocks error propagation via Y2. But as E 1 can be exposed via Y1
without sensitizing Y3, E 1 is not dominated by the given actual error. To ensure detection
of the actual error, we can condition S0 s-a-0 such that sensitization of Y3 is required. The
design contains a signal jump_to_reg_instr that, when set to 1, blocks sensitization of Y1,
but allows sensitization of Y3. Hence the CSSL1 error (jump_to_reg_instr = 1, S0 stuck-at-0) is
dominated by the actual error.
The results of this experiment are summarized in Table 3. A total of 39 design errors
were recorded by the designer. The actual design errors are grouped by category; the numbers
in parentheses refer again to Table 1. The correspondence between the categories is
imprecise, because of inconsistencies in the way in which different student designers classified
their errors. Also, some errors in Table 3 are assigned to a more specific category
than in Table 1, to highlight their correlation with the errors they dominate. 'Missing mod-
ule' and `wrong signal source' errors account for more than half of all errors. The column
headed 'ISA' indicates how many errors are detectable with respect to the ISA-model;
'ISAb' lists the number of errors only detectable with respect to the micro-architectural
reference model. The sum of 'ISA' and 'ISAb' does not always add up to the number given
in 'Total'; the difference corresponds to actual errors that are not detectable with respect
to either reference model.
Figure 9. Example of an actual design error in our DLX implementation.
The remaining columns give the type of the simplest dominated
modeled error corresponding to each actual error. Among the 10 detectable 'missing mod-
ule(s)' errors, 2 dominate an SSL error, 6 dominate a CSSL1 error, and one dominates a
CBOE; for the remaining one, we were not able to find a dominated modeled error.
A conservative measure of the overall effectiveness of our verification approach is
given by the coverage of actual design errors by complete test sets for modeled errors.
From Table 3 it can be concluded that for this experiment, any complete test set for the
inverter insertion errors (INV) also detects at least 21% of the (detectable) actual design
errors; any complete test set for the INV and SSL errors covers at least 52% of the actual
design errors; if a complete test set for all INV, SSL, BSE, CSSL1 and CBOE is used, at
least 94% of the actual design errors will be detected.
5. DISCUSSION
The preceding experiments indicate that a high coverage of actual design errors can be obtained
by complete test sets for a limited number of modeled error types, such as those defined
by our basic and conditional error models. Thus our methodology can be used to construct
focused test sets aimed at detecting a broad range of actual design bugs. More impor-
tantly, perhaps, it also supports an incremental design verification process that can be
implemented as follows: First, generate tests for SSL errors. Then generate tests for other
basic error types such as BSEs. Finally, generate tests for conditional errors. As the number
of SSL errors in a circuit is linear in the number of signals, complete test sets for SSL errors
can be relatively small. In our experiments such test sets already detect at least half of the
actual errors. To improve coverage of actual design errors and hence increase the confidence
in the design, an error model with a quadratic number of error instances, such as BSE
and CSSL1, can be used to guide test generation.
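As an illustration of this incremental flow, the Python sketch below generates tests model by model and keeps a new test only if it targets a modeled error not already covered. The error and test objects are toy stand-ins and generate_test abstracts whatever test generator is available, so this is only a schematic of the process, not the authors' tool.

    # Illustrative sketch only: incremental test generation ordered by error model.
    # Errors are abstract ids; generate_test(err) returns the set of error ids a new
    # test detects (or None if no test exists for that error).
    def incremental_test_generation(errors_by_model, generate_test, model_order):
        tests = []                                # each test is the set of error ids it detects
        covered = set()
        for model in model_order:                 # cheapest models first, e.g. SSL before BSE
            for err in errors_by_model.get(model, []):
                if err in covered:
                    continue                      # already detected by an earlier test
                detected = generate_test(err)
                if detected:
                    tests.append(detected)
                    covered |= detected
        return tests, covered

    # Toy usage: three SSL errors and one BSE error; some tests detect extra errors.
    errors = {"SSL": ["s1", "s2", "s3"], "BSE": ["b1"]}
    fortuitous = {"s1": {"s1", "s2"}, "s2": {"s2"}, "s3": {"s3", "b1"}, "b1": {"b1"}}
    tests, covered = incremental_test_generation(errors, fortuitous.get, ["SSL", "BSE"])
    print(len(tests), sorted(covered))            # 2 tests covering all four errors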
Table 3. Actual design errors and the corresponding dominated modeled errors for DLX. (Columns: actual errors by category, with ISA, ISAb, and Total counts; corresponding dominated modeled errors of type INV, SSL, BSE, CSSL1, CBOE, CSSL2, and Unknown. Row categories: Missing module (2), Wrong signal source (1), Complex, Inversion, Missing input, Unconnected input, Missing minterm (2), Extra input (2), Total.)
The conditional error models proved to be especially useful for detecting actual errors that involve missing logic. Most 'missing module(s)' and 'missing input(s)' errors in Table 3 cannot be covered when only the basic errors are targeted. However, all but one of them is
covered when CSSL1 and CBOE errors are targeted as well. The same observation applies
to the 'missing assignment(s)' errors in Table 2.
The designs used in the experiments are small, but appear representative of real industrial
designs. An important benefit of such small-scale designs is that they allow us to analyze
each actual design error in detail. The coverage results obtained strongly demonstrate
the effectiveness of our model-based verification methodology. Furthermore the analysis
and conclusions are independent of the manner of test generation. Nevertheless, further
validation of the methodology using industrial-size designs is desirable, and will become
more practical when CAD support for design error test generation becomes available.
Error models of the kind introduced here can also be used to compute metrics to assess the quality of a given verification test set. For example, full coverage of basic (unconditional) errors provides one level of confidence in the design; coverage of conditional errors of a given order provides another, higher confidence level. Such metrics can also be used to
compare test sets and to direct further test generation.
We envision the proposed methodology eventually being deployed as suggested in Figure 2. Given an unverified design and its specification, tests targeted at modeled design
errors are automatically generated and applied to the specification and the implementation.
When a discrepancy is encountered, the designer is informed and perhaps given guidance
on diagnosing and fixing the error.
ACKNOWLEDGMENTS
We thank Steve Raasch and Jonathan Hauke for their help in the design error collection
process. We further thank Matt Postiff for his helpful comments.
The research discussed in this paper is supported by DARPA under Contract No.
DABT63-96-C-0074. The results presented herein do not necessarily reflect the position
or the policy of the U.S. Government.
design errors;error modeling;design verification
296377
Extracting Hidden Context.
Concept drift due to hidden changes in context complicates learning in many domains including financial prediction, medical diagnosis, and communication network performance. Existing machine learning approaches to this problem use an incremental learning, on-line paradigm. Batch, off-line learners tend to be ineffective in domains with hidden changes in context as they assume that the training set is homogeneous. An off-line, meta-learning approach for the identification of hidden context is presented. The new approach uses an existing batch learner and the process of contextual clustering to identify stable hidden contexts and the associated context specific, locally stable concepts. The approach is broadly applicable to the extraction of context reflected in time and spatial attributes. Several algorithms for the approach are presented and evaluated. A successful application of the approach to a complex flight simulator control task is also presented.
1. Introduction
In real world machine learning problems, there can be important properties of the
domain that are hidden from view. Furthermore, these hidden properties may
change over time. Machine learning tools applied to such domains must not only
be able to produce classifiers from the available data, but must be able to detect the
effect of changes in the hidden properties. For example, in finance, a successful stock
buying strategy can change dramatically in response to interest rate changes, world
events, or with the season. As a result, concepts learnt at one time can subsequently
become inaccurate. Concept drift occurs with changes in the context surrounding
observations. Hidden changes in context cause problems for any machine learning
approach that assumes concept stability.
In many domains, hidden contexts can be expected to recur. These domains
include: financial prediction, dynamic control and other commercial data mining
applications. Recurring contexts may be due to cyclic phenomena, such as seasons
of the year or may be associated with irregular phenomena, such as inflation rates
or market mood.
Machine learning systems applied in domains with hidden changes in context
have tended to be incremental or on-line systems, where the concept definition is
updated as new labeled observations are processed (Schlimmer & Granger, 1986).
Adaptation to new domains is generally achieved by decaying the importance of
older instances. Widmer and Kubat's on-line system, Flora3 (Widmer & Kubat,
1993), exploits recurring hidden context. As the system traverses the sequence of
input data, it stores concepts that appear to be stable over some interval of time.
These stable concepts can be retrieved allowing the algorithm to adapt quickly
when the observed domain changes to a context previously encountered.
Stable concepts can also be identified off-line using batch learning algorithms.
Financial institutions, manufacturing facilities, government departments, etc., all
store large amounts of historical data. Such data can be analyzed, off-line, to
discover regularities. Patterns in the data, however, may be affected by context
changes without any records of the context being maintained. To handle these sit-
uations, the batch learner must be augmented to detect hidden changes in context.
Missing context is often reflected in the temporal proximity of events. For exam-
ple, there may be days in which all customers buy chocolates. The hidden context
in this case might be that a public holiday is due in the following week. Hidden
context can also be distributed over a non-temporal dimension, making it completely
transparent to on-line learners. For example, in remote sensing, the task of
learning to classify trees by species may be affected by the surrounding forest type.
If the forest type is not available to the learning system, it forms a hidden context,
distributed by geographic region rather than time. Off-line methods for finding
stable concepts can be applied to these domains. For simplicity, this article retains
the convention of organizing hidden context over time but the methods presented
generalize to properties other than time.
Each stable concept is associated with one or more intervals in time. The shift
from one stable concept to another represents a change in context. Thus each interval
can be identified with a particular context. This presents the opportunity to
build models of the hidden context. Such a model may be desirable for explanatory
purposes to understand a domain or it may be incorporated into an on-line predictive
model. A model may also be used to identify a new attribute that correlates
with the hidden context.
In this paper, we present Splice, an off-line meta-learning system for context-sensitive
learning. Splice is designed to identify stable concepts during supervised
learning in domains with hidden changes in context.
We begin by reviewing related work on machine learning in context-sensitive do-
mains. This is followed by a description of the Splice methodology. An initial
implementation of Splice, Splice-1, was previously shown to improve on a standard
induction method in simple domains. We briefly discuss this work before
presenting an improved algorithm, Splice-2. Splice-2 is shown to be superior to Splice-1 in more complex domains. The Splice-2 evaluation concludes with
application to a complex control task.
2. Background
On-line learning methods for domains with hidden changes in context adapt to
new contexts by decaying the importance of older instances. Stagger (Schlimmer & Granger, 1986) was the first reported machine learning system that dealt with
hidden changes in context. This system dealt with changes in context by discarding
any concepts that fell below a threshold accuracy.
Splice is most related to the Flora (Widmer & Kubat, 1996) family of on-line
learners. These adapt to hidden changes in context by updating the current concept
to match a window of recent instances. Rapid adaptation to changes in context
is assured by altering the window size in response to shifts in prediction accuracy
and concept complexity. One version, Flora3 (Widmer & Kubat, 1993), adapts to domains with recurring hidden context by storing stable concepts; these can be
re-used whenever context change is suspected. When a concept is re-used, it is first
updated to match examples in the current window. This allows Flora3 to deal
with discrepancies between the recalled concept and the actual situation. Rather
than an adjunct to on-line learning, Splice makes the strategy of storing stable
concepts the primary focus of an off-line learning approach.
Machine learning with an explicit window of recent instances, as used in Flora,
was first presented by Kubat (1989) and has been used in many other on-line
systems dealing with hidden changes in context. The approach has been used for
supervised learning (Kubat & Widmer, 1995), and unsupervised learning (Kilander
Jansson, 1993). It is also used to adapt batch learners for on-line learning tasks
by repeatedly learning from a window of recent instances (Harries & Horn, 1995;
1989). The use of a window can also be made sensitive
to changes in the distribution of instances. Salganicoff (1993) replaces the first in,
first out updating method by discarding older examples only when a new item
appears in a similar region of attribute space.
Most batch machine learning methods assume that the training items are independent
and unordered. They also assume that all available information is directly
represented in the attributes provided. As a result, batch learners generally treat
hidden changes in context as noise. For example, Sammut, Hurst, Kedzier and
Michie (1992), report on learning to pilot an aircraft in a flight simulator. They
note that a successful flight could not be achieved without explicitly dividing the
flight into stages. In this case, there were known changes of context and so, learning
could be broken into several sub-tasks. Within each stage of the flight, the control
strategy, that is, the concept to be learnt, was stable.
Splice applies the assumption that concepts are likely to be stable for some
period of time to the problem of detecting stable concepts and extracting hidden
context.
3. SPLICE
Splice's input is a sequence of training examples, each consisting of a feature vector
and a known classification. The data are ordered over time and may contain hidden
changes of context. From this data, Splice attempts to learn a set of stable
concepts, each associated with a different hidden context. Since contexts can recur,
several disjoint intervals of the data set may be associated with the same concept.
On-line learners for domains with hidden context assume that a concept will
be stable over some interval of time. Splice also uses this assumption for batch
learning. Hence, sequences of examples in the data set are combined into intervals
if they appear to belong to the same context. Splice then attempts to cluster
similar intervals by applying the notion that similarity of context is reflected by
the degree to which intervals are well classified by the same concept. This is called
contextual clustering.
Informally, a stable concept is an expression that holds true for some period of
time. The difficulty in finding a stable concept is in determining how long "some
period" should be. Clearly, many concepts may be true for very short periods.
Splice uses a heuristic to divide the data stream into a minimal number of partitions
(contextual clusters), which may contain disjoint intervals of the dataset,
so that a stable concept created from one contextual cluster will poorly classify
examples in all other contextual clusters. In a sense, the stable concept will be the
most specific concept describing the contextual cluster.
A more rigorous method for comparing different sets of contextual clusters might
use a Minimum Description Length (MDL) measure (Rissanen, 1983). The MDL
principle states that the best theory for a given concept should minimize the amount
of information that needs to be sent from a sender to a receiver so that the receiver can
correctly classify items in a shared dataset. In this case, the information to be sent
would include stable concepts, a context switching method and a list of exceptions.
A good set of contextual clusters should result in stable concepts that give a shorter
description length for describing the data than would a single concept. An optimal
set of contextual clusters should achieve the minimum description length possible.
A brute force approach to finding a set of clusters that satisfy the MDL measure
would be to consider all possible combinations of contextual clusters in the dataset,
then select the combination with the minimum description length. Clearly, this is
impractical. Manganaris (1996) applies the minimum description length heuristic to the creation of a piecewise polynomial function from a series of numbers (a method adapted from Pednault (1989)). With dynamic programming, the space of possible partitions can be searched in O(n^2) time. Adapting this method for Splice would give a time complexity of O(n^4). Therefore, Splice uses a heuristic approach to find
stable concepts that are "good enough".
The Splice algorithm is a meta-learning algorithm. Concepts are not induced
directly, but by application of an existing batch leaner. In this study, we use
Quinlan's C4.5 (Quinlan, 1993), but the Splice methodology could be implemented
using other propositional learning systems. C4.5 is used without modification.
Furthermore, since noise is dealt with by C4.5, Splice contains no explicit noise
handling mechanism. Unusual levels of noise can be dealt with by altering the C4.5
parameters.
The main purpose of this paper is to present the Splice-2 algorithm. However, we first briefly describe its predecessor, Splice-1 (Harries & Horn, in press), and its shortcomings to motivate the development of the Splice-2 algorithm.
3.1. SPLICE-1
Splice-1 first uses a heuristic to identify likely context boundaries. Once the data
has been partitioned on these boundaries, the partitions are combined according to
their similarity of context. Stable concepts are then induced from the resulting contextual
clusters. Details of the Splice-1 algorithm have previously been reported
by Harries and Horn (in press) so we only give a brief overview in this section.
To begin, each example is time-stamped to give its position in the sequence of
training data. Thus, time forms a continuous attribute in which changes of context
can be expressed. For example, the hidden context, interest rate, might change at
time=99.
C4.5 is then used to induce a decision tree from the whole training set. Each
node of the tree contains a test on an attribute. Any test on the special attribute,
time, is interpreted as indicating a possible change of context.
For example, Table 1 shows a simple decision tree that might be used for stock
market investment. This tree includes a test on time, which suggests that a change
in context may have occurred at Time=1995. Splice-1 then uses 1995 as a boundary
to partition the data set. We assume that each interval, as defined by all such
partitions, can be identified with a stable concept.
Table 1. Sample decision tree in a domain with hidden changes in context. (The tree tests several ordinary attributes and includes the test Time <= 1995.)
If a stable concept induced from one interval accurately classifies the examples in
another interval, we assume that both intervals have similar contexts. The degree
of accuracy provides a continuous measure of the degree of similarity. Intervals are
grouped by contextual similarity. When adjacent intervals are combined, a larger
context is identified. When disjunctive sub-sets are combined, a recurring context
is identified. C4.5 is applied again to the resulting contextual clusters to produce
the final stable concepts.
3.2. SPLICE-1 prediction
Figure 1. Splice on-line prediction.
Harries and Horn (in press) have shown that Splice-1 can build more accurate classifiers than a standard induction algorithm in sample domains with hidden changes in context. We summarize these results and provide a comparison with the
on-line method Flora3.
In this task, Splice-1 is used to produce a set of stable concepts, which are then
applied to an on-line prediction task. Figure 1 shows this process schematically.
On-line classification is achieved by switching between stable concepts according to
the current context.
3.2.1. STAGGER data set The data sets in the following experiments are based
on those used to evaluate Stagger (Schlimmer & Granger, 1986) and subsequently
used to evaluate Flora (Widmer & Kubat, 1996). While our approach is substantially
different, use of the same data set allows some comparison of results.
A program was used to generate data sets. This allows us to control the recurrence
of contexts and other factors such as noise and duration of contexts. The task
has four attributes, time, size, color and shape. Time is treated as a continuous
attribute. Size has three possible values: small, medium and large. Color has three
possible values: red, green and blue. Shape also has three possible values: circular,
triangular, and square.
The program randomly generates a sequence of examples from the above attribute
space. Each example is given a unique time stamp and a boolean classification based
upon one of three target concepts. The target concepts are:
1. size = small and color = red
2. color = green or shape = circular
3. size = medium or size = large
Artificial contexts were created by fixing the target concepts to one of the above
Stagger concepts for preset intervals of the data series.
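A minimal generator for such data is sketched below in Python; it assumes the standard Stagger concept definitions given above and a simple class-noise model that flips the label with the stated probability.

    import random

    SIZES = ["small", "medium", "large"]
    COLORS = ["red", "green", "blue"]
    SHAPES = ["circular", "triangular", "square"]

    CONCEPTS = [
        lambda e: e["size"] == "small" and e["color"] == "red",       # concept (1)
        lambda e: e["color"] == "green" or e["shape"] == "circular",  # concept (2)
        lambda e: e["size"] in ("medium", "large"),                   # concept (3)
    ]

    def generate(schedule, noise=0.0, seed=0):
        """schedule: list of (concept_index, duration) pairs defining the hidden contexts."""
        rng = random.Random(seed)
        data, t = [], 0
        for concept, duration in schedule:
            for _ in range(duration):
                e = {"time": t, "size": rng.choice(SIZES),
                     "color": rng.choice(COLORS), "shape": rng.choice(SHAPES)}
                label = CONCEPTS[concept](e)
                if rng.random() < noise:
                    label = not label             # class noise: flip the label
                e["class"] = label
                data.append(e)
                t += 1
        return data

    # For example, concepts (1), (2) and (3) for 50 instances each.
    train = generate([(0, 50), (1, 50), (2, 50)])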
3.2.2. On-line prediction This experiment compares the accuracy of Splice-1
to C4.5 when trained on a data set containing changes in a hidden context. In
order to demonstrate that the Splice approach is valid for on-line classification
tasks, we show a sample prediction problem in which Splice-1 was used off-line
to generate a set of stable concepts from training data. The same training data
was used to generate a single concept using C4.5, again off-line. After training,
the resulting concepts were applied to a simulated on-line prediction task. C4.5
provides a baseline performance for this task and was trained without the time
attribute. C4.5 benefits from the omission of this attribute as the values for time
in the training set do not repeat in the test set. Even so, this comparison is not
altogether fair on C4.5, as it was not designed for use in domains with hidden
changes in context.
The training set consisted of concept (1) for 50 instances, (2) for 50 instances,
and (3) for 50 instances. The test set consisted of concepts (1) for 50 instances, (2)
for 50 instances, (3) for 50 instances, and repeated (1) for 50 instances, (2) for 50
instances, and (3) for 50 instances.
To apply the stable concepts identified by Splice for prediction, it was necessary
to devise a method for selecting relevant stable concepts. This is not a trivial
problem. Hence, for the purposes of this experiment we chose a simple voting
method. With each new example, the classification accuracy of each stable concept
over the last five examples was calculated. The most accurate concept was then
used to classify the new example. Any ties in accuracy were resolved by random
selection. The first case was classified by a randomly selected stable concept.
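The voting rule just described can be sketched as follows; stable concepts are represented as callables from an example to a predicted class, and the five-example accuracy window with random tie-breaking follows the description above.

    import random

    def predict_online(stream, stable_concepts, window=5, seed=0):
        """stream: iterable of (example, true_class) pairs presented in order."""
        rng = random.Random(seed)
        history, predictions = [], []
        for example, true_class in stream:
            if history:
                scores = [sum(c(e) == y for e, y in history) for c in stable_concepts]
                best = max(scores)
                choice = rng.choice([c for c, s in zip(stable_concepts, scores) if s == best])
            else:
                choice = rng.choice(stable_concepts)   # first case: random stable concept
            predictions.append(choice(example))
            history = (history + [(example, true_class)])[-window:]   # last five labelled items
        return predictions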
Figure 2 shows the number of correct classifications for each item in the test set for both Splice-1 and C4.5 over 100 randomly generated training and test sets. It shows that Splice-1 successfully identified the stable concepts from the
training set and that the correct concept can be successfully selected for prediction
in better than 95% of cases. The extreme dips in accuracy when contexts change
are an effect of the method used to select stable concepts. C4.5 performs relatively
well on concept 2 with an accuracy of approximately 70% but on concepts 1 and 3,
it correctly classifies between 50% and 60% of cases.
As noise increases, the performance of Splice-1 gradually declines. At 30%
noise (Harries & Horn, in press), the worst result achieved by Splice-1 is an 85%
classification accuracy on concept 2. C4.5, on the other hand, still classifies with
approximately the same accuracy as it achieved in Figure 2.
This task is similar to the on-line learning task attempted using Flora (Widmer
Kubat, 1996) and Stagger (Schlimmer & Granger, 1986). The combination
of Splice-1 with a simple strategy for selection of the current stable concept is
effective on a simple context sensitive prediction task. As the selection mechanism
assumes that at least one of the stable concepts will be correct, Splice-1
almost immediately moves to its maximum accuracy on each new stable concept.
For a similar problem, the Flora family (Widmer & Kubat, 1996) (in particular
Flora3, the learner designed to exploit recurring context) appear to reach much
the same level of accuracy as Splice-1, although as an on-line learning method,
Flora requires some time to fully reflect changes in context.
Figure 2. On-line prediction task. Compares Splice-1 stable concepts with a single C4.5 concept.
This comparison is problematic for a number of reasons. Splice-1 has the advantage
of first seeing a training set containing 50 instances of each context before
beginning to classify. Furthermore, the assumption that all possible contexts have
been seen in the training set is correct for this task. On-line learners have the advantage
of continuous feedback with an unconstrained updating of concepts. Splice-1
does have feedback, but is constrained to select only from those stable concepts
learnt from the training data. When Splice-1 has not learnt a stable concept,
there is no second chance. For more complex domains, it could be beneficial to use
a combination of Splice-1 and an adaptive, on-line learner.
4. SPLICE-2
Splice-1 uses the assumption that splits on time resulting from a run of C4.5
will accurately reflect changes in context. This is not always true. Splice-2 was
devised to reduce the reliance on initial partitioning. Like Splice-1, Splice-2 is
a "meta-learner". In these experiments we again use C4.5 (Quinlan, 1993) as the
induction tool. The Splice-2 algorithm is detailed in Table 2. The two stages of
the algorithm are discussed in turn.
4.1. Stage 1: Partition dataset
Splice-2 begins by guessing an initial partitioning of the data and subsequent
stages refine the initial guess.
Any of three methods may be used for the initial guess:
- Random partitioning. Randomly divide the data set into a fixed number of partitions.
- Partitioning by C4.5. As used in Splice-1. Each test on time, found when C4.5 is run on the entire data set, is used as an initial partition.
Table 2. The Splice-2 Algorithm
Input: Ordered data set, Window size parameter.
- Stage 1: Partition Dataset
  - Partition the dataset over time using either:
    - A preset number of random splits.
    - C4.5 as per Splice-1.
    - Prior domain knowledge.
  - The identified partitions form the initial contextual clusters.
  - C4.5 is applied to the initial contextual clusters to produce the initial interim concepts.
- Stage 2: Contextual Clustering
  - Each combination of interim concept and item in the original data set is allocated a score based upon the total accuracy of that concept on items in a fixed size window over time surrounding the item.
  - Cluster the original data set items that share maximum scores with the same concept. These clusters form the new set of contextual clusters.
  - C4.5 is used to create a new set of interim concepts from the new contextual clusters.
  - Stage 2 is repeated until the interim concepts do not change or until a fixed number of iterations are completed.
  - The last iteration provides the set of stable concepts.
Figure 3. Splice-2: stage 1.
- Prior Domain Knowledge. In some domains, prior knowledge is available about likely stable concepts.
We denote the version of Splice-2 using random partitioning as Splice-2R, the
version using C4.5 partitioning as Splice-2C, and the version using prior domain
knowledge as Splice-2P.
Once the dataset has been partitioned, each interval of the dataset is stored as
an initial contextual cluster. C4.5 is applied to each cluster to produce a decision
tree known as an interim concept, see Figure 3.
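Stage 1 with random partitioning can be sketched as below; learn is a stand-in for a call to C4.5 (or any batch learner), and the other two variants differ only in how the cut points are chosen.

    import random

    def stage1_random(dataset, n_partitions, learn, seed=0):
        """dataset: labelled examples ordered over time; returns the initial
        contextual clusters and the initial interim concepts."""
        rng = random.Random(seed)
        cuts = sorted(rng.sample(range(1, len(dataset)), n_partitions - 1))
        bounds = [0] + cuts + [len(dataset)]
        clusters = [dataset[a:b] for a, b in zip(bounds, bounds[1:])]
        interim_concepts = [learn(c) for c in clusters if c]
        return clusters, interim_concepts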
4.2. Stage 2: Contextual clustering
Stage 2 iteratively refines the contextual clusters generated in stage 1. With each
iteration, a new set of contextual clusters is created in an attempt to better identify
stable concepts in the data set.
This stage proceeds by testing each interim concept for classification accuracy
against all items in the original data set. A score is computed for each pair of
concept and item number. This score is based upon the number of correct
classifications achieved in a window surrounding the item (see Figure 4). The
window is designed to capture the notion that a context is likely to be stable for
some period of time.
On-line learning systems apply this notion implicitly by using a window of recent
instances. Many of these systems use a fixed window size, generally chosen to be
the size of the minimum context duration expected. The window in Splice has the
same function as in an on-line method, namely, to identify a single context. Ideally,
the window size should be dynamic, as in Flora, allowing the window to adjust for different concepts. For computational efficiency and simplicity, we use a fixed sized window.
Figure 4. Splice-2: stage 2. Using interim concept accuracy over a window to capture context.
The context of an item can be represented by a concept that correctly classifies
all items within the window surrounding that item. At present the window size is
set to 20 by default. This window size was chosen so as to bias contexts identified
to be of more than 20 instances duration. This was considered to be the shortest
context that would be valuable during on-line classification. The window size can
be altered for different domains.
We define W_ij to be the score for concept j when applied to a window centered on example i:
    W_{ij} = \sum_{m \in W(i,w)} Correct_{jm}    (1)
where:
    W(i,w) is the set of examples in the window of size w centered on example i,
    Correct_{jm} = 1 if interim concept j correctly classifies example m (0 otherwise), and
    w is the window size.
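In code, this window score might be computed as follows; concept is any classifier callable, and the window is simply clipped at the ends of the data set.

    def window_score(concept, dataset, i, w=20):
        """W_ij of Equation (1): correct classifications by the given interim
        concept within the size-w window centered on item i."""
        lo, hi = max(0, i - w // 2), min(len(dataset), i + w // 2 + 1)
        return sum(concept(e) == e["class"] for e in dataset[lo:hi])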
The current contextual clusters, from stage one or a previous iteration of stage
two, are discarded. New contextual clusters are then created for each interim
concept j. Once all scores are computed, each item, i, is allocated to a contextual
cluster associated with the interim concept, j, that maximizes W_ij over all interim
concepts. These interim concepts are then discarded. C4.5 is applied to each new
contextual cluster to learn a new set of interim concepts (Figure 5).
The contextual clustering process iterates until either a fixed number of repetitions
is completed or until the interim concepts do not change from one iteration
to the next. The last iteration of this stage provides a set of stable concepts. The
final contextual clusters give the intervals of time for which different contexts are
active.
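Putting these steps together, the refinement loop can be sketched as follows; learn again stands in for C4.5 and window_score is the function sketched after Equation (1). This is only a schematic of the clustering stage, with a fixed number of iterations.

    def contextual_clustering(dataset, interim_concepts, learn, w=20, iterations=3):
        for _ in range(iterations):
            clusters = [[] for _ in interim_concepts]
            for i, example in enumerate(dataset):
                scores = [window_score(c, dataset, i, w) for c in interim_concepts]
                clusters[scores.index(max(scores))].append(example)  # join best concept's cluster
            interim_concepts = [learn(c) for c in clusters if c]     # re-learn from the new clusters
        return interim_concepts        # the final interim concepts become the stable concepts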
Figure 5. Splice-2: stage 2. Create interim concepts.
4.2.1. Alternate weight In domains with a strong bias toward a single class, such as 90% class A and 10% class B, well represented classes can dominate the contextual clustering process. This can lead to clusters that are formed by combining
contexts with similar classifications for well represented classes and quite dissimilar
classification over poorly represented classes. This is problematic in domains
where the correct classification of rare classes is important, as in "learning to fly"
(see Section 6). For such domains, the can be altered to give an
equal importance to accuracy on all classes in a window while ignoring the relative
representations of the different classes.
For a given item, i, and an interim concept, j, the new score, W'_ij, sums over all possible classifications the proportion of items with a given class correctly classified:
    W'_{ij} = \sum_{k=1}^{C} ( \sum_{m \in W(i,w), c_m = k} Correct_{jm} ) / |{ m \in W(i,w) : c_m = k }|    (2)
where:
    C is the number of classes,
    c_m is the class number of example m,
    Correct_{jm} = 1 if interim concept j correctly classifies example m (0 otherwise), and
    w is the window size.
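A class-balanced variant of the window score, matching the intent of Equation (2), might look as follows; every class present in the window contributes the fraction of its items that the concept classifies correctly.

    from collections import defaultdict

    def balanced_window_score(concept, dataset, i, w=20):
        lo, hi = max(0, i - w // 2), min(len(dataset), i + w // 2 + 1)
        totals, correct = defaultdict(int), defaultdict(int)
        for e in dataset[lo:hi]:
            totals[e["class"]] += 1
            correct[e["class"]] += (concept(e) == e["class"])
        return sum(correct[c] / totals[c] for c in totals)   # sum of per-class accuracies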
4.3. SPLICE-2 walk through
In this section we present a walk through of the Splice-2R algorithm, using a
simple dataset with recurring context. The dataset consists of randomly generated
Stagger instances classified according to the following pattern of context: 3 repetitions of the following structure (concept (1), concept (2), and concept (3), each for a fixed number of instances). Splice-2R was applied to the dataset.
Figure 6. Initial contextual clusters. Created by randomly partitioning the dataset.
Splice-2R begins by randomly partitioning the dataset into four periods. Each partition is then labeled as a contextual cluster. Figure 6 shows the instances
associated with each contextual cluster as drawn from the original dataset. C4.5
is applied to each of these clusters, CC i , to induce interim concepts, IC i . Table 3
shows the induced concepts. Of these, only IC 4 relates closely to a target concept.
Table 3. Initial interim concepts.
IC1: no (34/7.1)
IC2: Color in {red,green}: yes (70.0/27.3)
IC3: Color in {red,blue}:
    Size in {medium,large}: yes (14.0/7.8)
IC4: Size in {medium,large}:
    Shape in {square,triangular}:
        Color in {red,blue}: no (33.0/16.5)
W_ij scores are calculated for all combinations of item, i, and interim concept, j. Table 4
shows the calculated W ij scores for a fragment of the dataset. Each figure in this
table represents the number of items in a window surrounding the item number
classified correctly by a given interim concept, IC j . For instance, the interim
concept, IC_3, classifies 12 items correctly in the window surrounding a given item number.
Each item, i, is then allocated to a new contextual cluster based upon the interim
concept, IC j , that yielded the highest score, W ij . This is illustrated in Table 4
where the highest score in each column is italicized. At the base of the table we
note the new contextual cluster to which each item is to be allocated. Each new
contextual cluster is associated with a single interim concept. For example, interim
concept IC 1 gives the highest value for item 80, so that item is allocated to a new
contextual cluster, CC_1'. This table contains items allocated to more than one contextual cluster, and implies a change of context around item 182.
Figure 7. Contextual clusters'. Created by the first iteration of the contextual clustering process.
Figure 8. Contextual clusters'''. Created by the third and final iteration of contextual clustering.
Table 4. W_ij scores for the initial interim concepts on all items in the training set. (Rows: interim concepts IC_j; columns: item number i; the final row gives the contextual cluster to which each item is allocated.)
The new contextual clusters, CC_i', are expected to improve on the previous clusters by better approximating the hidden contexts. Figure 7 shows the distribution of the new contextual clusters, CC_i'. From these clusters, a new set of interim concepts, IC_i', is induced.
Contextual clustering iterates until either a fixed number of repetitions is completed
or until the interim concepts do not change from one iteration to the next.
Figure 8 shows the contextual clusters after two more iterations of the clustering
stage. Each remaining contextual cluster corresponds with a hidden context in the
original training set.
At this point, the final interim concepts, IC_i''', are renamed as stable concepts, SC_i. Stable concept one, SC_1''', is the first target concept. Stable concept three, SC_3''', is the third target concept. Stable concept four, SC_4''', is the second target concept. Contextual cluster two, CC_2''', did not contain any items so was not used to induce a stable concept.
5. Experimental comparison of SPLICE-1 and SPLICE-2
This section describes two experiments comparing the performance of Splice-2
with Splice-1. The first is a comparison of clustering performance across a range
of duration and noise levels with the number of hidden changes in context fixed.
The second compares the performance of the systems across a range of noise and
hidden context changes with duration fixed. In these experiments performance
is determined by checking if the concepts induced by a system agree with the
original concepts used in generating the data. Both these experiments use Stagger
data, as described in the prior experiment. However, unlike the prior experiment
only training data is required as we are assessing performance against the original
concepts.
5.1. Contextual clustering: SPLICE-1 vs SPLICE-2
This experiment was designed to compare the clustering performance of the two
systems. Splice-1 has been shown to correctly induce the three Stagger concepts
under a range of noise and context duration conditions (Harries & Horn, in press).
This experiment complicates the task of identifying concepts by including more
context changes. In order to compare the clustering stage alone, Splice-2C was
used to represent Splice-2 performance. This ensured that the initial partitioning
used in both versions of Splice was identical.
Both versions of Splice were trained on independently generated data sets. Each
data set consisted of examples classified according to the following pattern of context: 5 repetitions of the following structure (concept (1) for D instances; concept (2) for D instances; and concept (3) for D instances), where the duration D ranged
from 10 to 100 and noise ranged from 0% to 30%. This gave a total of 14 context
changes in each training set. Splice-2 was run with the default window size of
20. The number of iterations of contextual clustering was set at three. C4.5 was
run with default parameters and with sub-setting on. The stable concepts learnt
by each system were then assessed for correctness against the original concepts.
The results were averaged over 100 repetitions at each combination of noise and
duration. They show the proportion of correct stable concept identifications found
and the average number of incorrect stable concepts identified.
Figure 9 shows the number of concepts correctly identified by Splice-1 and Splice-2C for a range of context durations and noise levels. The accuracy of
both versions converges to the maximum number of concepts at the higher context
durations for all levels of noise. Both versions show a graceful degradation of accuracy
when noise is increased. Splice-2 recognizes more concepts at almost all
levels of noise and concept duration.
Figure 9. Concepts correctly identified by Splice-1 and Splice-2 when duration is changed.
Figure 10. Incorrect concepts identified by Splice-1 and Splice-2.
Figure 10 compares the number of incorrect concepts identified by Splice-1 and
Splice-2. Both versions show a relatively stable level of performance at each level
of noise. For all levels of noise, Splice-2 induces substantially fewer incorrect
concepts than Splice-1.
As both versions of Splice used the same initial partitioning, we conclude that
the iterative refinement of contextual clusters, used by Splice-2, is responsible for
the improvement on Splice-1. These results suggests that the Splice-2 clustering
mechanism is better able to overcome the effects of frequent context changes and
high levels of noise.
In the next experiment we further investigate this hypothesis by fixing context
duration and testing different levels of context change.
5.2. The effect of context repetition
The previous experiment demonstrated that Splice-2 performs better than Splice-
1 with a fixed number of context changes. However the experiment provided little
insight into the effect of different levels of context repetition. This experiment investigates
this effect by varying the number of hidden context changes in the data.
Once again we use Splice-2C to ensure the same partitioning used for Splice-1.
We also examine the results achieved by Splice-2R.
In this experiment each system was trained on an independent set of data that
consisted of the following pattern of contexts: R repetitions of the structure (con-
cept (1) for 50 instances; concept (2) for 50 instances; concept (3) for 50 instances)
where R varies from one to five. The effects of noise were also evaluated with a
range of noise from 0% to 40%. Splice-2 was run with the default window size
of 20 with three iterations of contextual clustering. Splice-2R used randomly chosen initial partitions. The parameters for C4.5 and the performance measure used were the
same as used in the prior experiment.
Figure 11 shows the number of concepts correctly induced by both Splice-1
and Splice-2C for each combination of context repetition and noise. The results
indicate that both systems achieve similar levels of performance with one context
repetition. Splice-2C performs far better than Splice-1 for almost all levels of
repetition greater than one.
Comparing the shapes of the performance graphs for each system is interesting.
Splice-2C shows an increasing level of performance across almost all levels of noise
with an increase in repetition (or context change). On the other hand, Splice-1 shows an initial rise and subsequent decline in performance as the number of
repetitions increases. The exception is at 0% noise, where both versions identify all
three concepts with repetition levels of three and more.
Figure 12 shows the number of correct Stagger concepts identified by Splice-2R. The results show a rise in recognition accuracy as repetitions increase (up to
the maximum of 3 concepts recognized) for all noise levels. The number of concepts
recognized is similar to those in Figure 11 for Splice-2C.
Figure 11. Concepts correctly identified by Splice-1 and Splice-2C across a range of context repetitions.
Figure 12. Splice-2R concept identification.
The similarity of results for Splice-2C and Splice-2R shows that, for this do-
main, C4.5 partitioning provides no benefit over the use of random partitioning for Splice-2.
The results of this experiment indicate that Splice-2C is an improvement on
Splice-1, as it improves concept recognition in response to increasing levels of
context repetition. The performance of Splice-1 degrades with increased levels
of context changes. The inability of Splice-1 to cope with high levels of context
change is probably due to a failure of the partitioning method. As the number of
partitions required on time increases, the task of inducing the correct global concept
becomes more difficult. As the information gain available for a given partition on
time is reduced, the likelihood of erroneously selecting another (possibly noisy)
attribute upon which to partition the data set is increased. As a result, context
changes on time are liable to be missed.
Splice-2C is not affected by poor initial partitioning as it re-builds context
boundaries at each iteration of contextual clustering. Hence, a poor initial partition
has a minimal effect and the system can take advantage of increases in context
examples. Splice-1 is still interesting, as it does substantially less work than
Splice-2, and can be effective in domains with relatively few context changes. We
anticipate that a stronger partitioning method would make Splice-1 more resilient
to frequent changes in context. The results also indicate that the C4.5 partitioning
method is not helpful in this domain.
6. Applying SPLICE to the "Learning to Fly" domain
To test the Splice-2 methodology, we wished to apply it to a substantially more
complex domain than the artificial data described above. We had available data
collected from flight simulation experiments used in behavioral cloning (Sammut
et al., 1992). Previous work on this domain found it necessary to explicitly divide
the domain into a series of individual learning tasks or stages. Splice-2 was able
to induce an effective pilot for a substantial proportion of the original flight plan
with no explicitly provided stages. In the following sections we briefly describe the
problem domain and the application of Splice-2.
6.1. Domain
The "Learning to Fly" experiments (Sammut et al., 1992) were intended to demonstrate
that it is possible to build controllers for complex dynamic systems by recording
the actions of a skilled operator in response to the current state of the system.
A flight simulator was chosen as the dynamic system because it was a complex
system requiring a high degree of skill to operate successfully and yet is well un-
derstood. The experimental setup was to collect data from several human subjects
flying a predetermined flight plan. These data would then be input to an induction
program, C4.5.
The flight plan provided to the human subjects was:
1. Take off and fly to an altitude of 2,000 feet.
2. Level out and fly to a distance of 32,000 feet from the starting point
3. Turn right to a compass heading of approximately 330 degrees.
4. At a North/South distance of 42,000 feet, turn left to head back towards the
runway. The turn is considered complete when the azimuth is between 140
degrees and 180 degrees.
5. Line up on the runway.
6. Descend to the runway, keeping in line.
7. Land on the runway.
The log includes 15 attributes showing position and motion, and 4 control at-
tributes. The position and motion attributes record the state of the plane, whereas
the control attributes record the actions of the pilot. The position and motion attributes
were: on ground, g limit, wing stall, twist, elevation, azimuth, roll speed,
elevation speed, azimuth speed, airspeed, climbspeed, E/W distance, altitude, N/S
distance, fuel. The control attributes were: rollers, elevator, thrust and flaps. (The
rudder was not used as its implementation was unrealistic.) The values for each of
the control attributes provide target classes for the induction of separate decision
trees for each control attribute. These decision trees are tested by compiling the
trees into the autopilot code of the simulator and then "flying" the simulator.
In the original experiments, three subjects each flew the above flight plan a number of times.
In all, a data set of about 90,000 records was produced. Originally, it was thought
that the combined data could be submitted to the learning program. However,
this proved too complex a task for the learning systems that were available. The
problems were largely due to mixing data from different contexts.
The first, and most critical type of context, was the pilot. Different pilots have
different flying styles, so their responses to the same situation may differ. Hence,
the flights were separated according to pilot. Furthermore, the actions of a particular
pilot differ according to the stage of the flight. That is, the pilot adopts
different strategies depending on whether he or she is turning the aircraft, climbing,
landing, etc. To succeed in inducing a competent control strategy, a learning algorithm
would have to be able to distinguish these different cases. Since the methods
available could not do this, manual separation of the data into flight stages was
required. Since the pilots were given intermediate flight goals, the division into
stages was not too onerous. Not all divisions were immediately obvious. For exam-
ple, initially, lining up and descending were not separated into two different stages.
However, without this separation, the decision trees generated by C4.5 would miss
the runway. It was not until the "line-up" stage was introduced that a successful
"behavioral clone" could be produced.
Until now, the stages used in behavioral cloning could only be found through
human intervention which often included quite a lot of trial-and-error experimen-
tation. The work described below suggests that flight stages can be treated as
different contexts and that the Splice-2 approach can automate the separation of
flight data into appropriate contexts for learning.
6.2. Flying with SPLICE-2
This domain introduces an additional difficulty for Splice. Previous behavioral
cloning experiments have built decisions trees for each of the four actions in each
of the seven stages, resulting in 28 decision trees. When flying the simulator these
decision trees are switched in depending on the current stage.
However, when Splice-2 is applied to the four learning tasks, viz, building a
controller for elevators, another for rollers, and others for thrust and flaps, there is no guarantee
that exactly the same context divisions will be found. This causes problems when
two or more actions must be coordinated. For example, to turn the aircraft, rollers
and elevators must be used together. If the contexts for these two actions do not
coincide then a new roller action, say, may be commenced, but the corresponding
elevator action may not start at the same time, thus causing a lack of coordination
and a failure to execute the correct manoeuvre. This problem was avoided by
combining rollers and elevators into a single attribute, corresponding to the stick
position. Since the rollers can take one of 15 discrete values and elevators can take
one of 11 discrete values, the combined attribute has 165 possible values. Of these,
only a subset are represented in the training set.
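The combination of the two controls into a single class attribute can be done with a simple pairing, sketched below; the field names and the integer encoding are illustrative assumptions rather than the representation used in the original flight logs.

    N_ELEVATOR_VALUES = 11   # assumed integer coding 0..10 for elevator, 0..14 for rollers

    def combine_stick(rollers, elevator):
        """Map a (rollers, elevator) pair to one of 165 'stick' class values."""
        return rollers * N_ELEVATOR_VALUES + elevator

    def split_stick(stick):
        return divmod(stick, N_ELEVATOR_VALUES)   # back to (rollers, elevator)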
A further problem is how to know when to switch between contexts. The original
behavioral clones included hand-crafted code to accomplish this. However, Splice
builds its own contexts, so an automatic means of switching is necessary. In the
on-line prediction experiment reported in Section 3.2.2, the context was selected by
using a voting mechanism. This mechanism relied upon immediate feedback about
classification accuracy. We do not have such feedback during a flight, so we chose
to learn when to switch. All examples of each stable concept were labelled with an
identifier for that concept. These were input to C4.5 again, this time, to predict
the context to which a state in the flight belongs, thus identifying the appropriate
stable concept, which is the controller for the associated context. In designing
this selection mechanism we remained with the "situation-action" paradigm that
previous cloning experiments adopted so that comparisons were meaningful.
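The switching mechanism can be sketched as follows; learn is again a stand-in for C4.5, the contexts are the final contextual clusters, and at flight time the predicted context index simply selects the corresponding stable concept.

    def build_context_switcher(contextual_clusters, learn):
        """Label each example with its cluster index and learn a classifier that
        predicts that index from the state attributes alone."""
        labelled = [(example, idx)
                    for idx, cluster in enumerate(contextual_clusters)
                    for example in cluster]
        return learn(labelled)                          # classifier: state -> context index

    def control_action(state, switcher, stable_concepts):
        return stable_concepts[switcher(state)](state)  # choose the concept, then act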
We found that the original W_ij score, which uses only classification
accuracy, did not perform well when class frequencies were wildly different. This was
due to well represented classes dominating the contextual clustering process, leading
to clusters with similar classification over well represented classes, and dissimilar
classification over poorly represented classes. This was problematic as successful
flights depend upon the correct classification of rare classes. The problem was
reduced by adopting the alternative scoring method defined by Equation 2. In
addition we adjusted the C4.5 parameters (pruning level) to ensure that these rare
classes were not treated as noise.
Splice-2 was also augmented to recognize domain discontinuities, such as the end of one flight and the beginning of the next, by altering W'_ij such that no predictions from a flight other than the flight of example i were incorporated in any W'_ij.
Figure 13. Flight comparison.
We have been able to successfully fly the first four stages of the flight, training only on data extracted from the first four stages. It should
be noted that even with the changes in the domain (combining rollers and elevator)
C4.5 is unable to make the first turn without the explicit division of the domain
into stages.
Figure 13 shows three flights:
- The successful Splice-2 flight on stages 1 to 4.
- The best C4.5 flight.
- A sample complete flight.
The settings used in Splice-2P were:
- C4.5 post pruning turned off (-c 100).
- Three iterations of the clustering stage.
- A window size of 50 instances.
- Initial partitioning was set to four equal divisions of the first flight.
Figure 14. Distribution of the local concepts used in the successful flight. The chart shows only one in sixty time steps; this was necessary for clarity but does remove some brief context changes.
We initially investigated the use of a larger window with a random partitioning.
This successfully created two contexts: one primarily concerning the first five stages
of the flight and the other concerning the last two. With this number of contexts,
a successful pilot could not be induced. Reducing the window size led to more
contexts, but they were less well defined and also could not fly the plane. The
solution was to bias the clustering by providing an initial partitioning based on
the first four stages of the flight. Further research is needed to determine if the
correspondence between the number of initial partitions and the number of flight
stages is accidental or if there is something more fundamental involved.
Splice-2P distinguished four contextual clusters with a rough correlation to flight
plan stages. Each contextual cluster contains items from multiple stages of the
training flights. Context 1 has instances from all four stages but has a better representation
of instances from the first half of stage 1. Context 2 roughly corresponds
with the second half of stage 1, stage 2, and part of 3. Instances in context 3 are
from stage 2 onward, but primarily from stage 4. Context 4 is particularly interesting
as it also corresponds primarily with stage 4, but contains fewer items than
context 3 for all parts of this stage.
It is not surprising that the correspondence of context to stage is not complete.
The original division of the flight plan into stages was for convenience of description,
and the same flight could well have been divided in many different ways. In short,
the Splice contexts are capturing something additional to the division of the flight
into stages.
Figure 14 shows the stable concept used at each instant of the Splice-2 flight.
To make context changes clearly visible, the number of points plotted in Figure 14
are fewer than were actually recorded. Because of the lower resolution, not all
details of the context changes are visible in this chart. Take off was handled by
context 1. Climb to 2000 feet and level out was handled by context 2. Context 3
also had occasional instances in the level out region of the flight. Context 1 again
took over for straight and level flight. The right hand turn was handled by a mix of
contexts 3 and 4. Subsequently flying to the North-West was handled by contexts 2
and 3 then by context 4. Initiating the left hand turn was done by context 1. The
rest of the left hand turn was handled by a combination of contexts 3 and 4.
When Sammut et al. (1992) divided the flight into stages, they did so by trial
and error. While the partitions found were successfully used to build behavioral
clones, there is no reason to believe that their partition is unique. In fact, we expect
similar kinds of behaviors to be used in different stages. For example, a behavior for
straight and level flight is needed in several points during the flight as is turning left
or turning right. Moreover, the same control settings might be used for different
manoeuvres, depending on the current state of the flight. Figure 14 shows that
Splice intermixed behaviors much more than was done with the manual division
of stages. Although this needs further investigation, a reasonable conjecture is that
an approach like Splice can find more finely tuned control strategies than can be
achieved manually.
In a subsequent experiment we attempted to hand craft a "perfect" switching
strategy, by associating stable concepts with each stage of the flight plan. This
switching method did not successfully fly the plane.
At present, the addition of further stages of the flight causes catastrophic interference
between the first two stages and the last 3 stages. Splice-2 is, as yet, unable
to completely distinguish these parts of the flight. However, the use of Splice-2
in synthesizing controllers for stages 1 - 4 is the first time that any automated
procedure has been successful for identifying contexts in this very complex domain.
The use of a decision tree to select the current context is reasonably effective. As
the decision tree uses only the same attributes as the stable concepts, it has no way
to refer to the past. In effect, it is flying with no short term memory. This was
not an issue for this work as it is a comparison with the original "Learning to Fly"
project (Sammut et al., 1992) which used situation-action control.
This experiment serves to demonstrate that off-line context-sensitive learning can
be applied to quite complex data sets with promising results.
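To make the switching scheme concrete, the following sketch shows one way such a context-selecting clone could be assembled. It is not the Splice implementation: it assumes the contextual clustering stage has already labelled every training instance with a cluster id (the arrays X, y and cluster are hypothetical), and it uses scikit-learn's DecisionTreeClassifier in place of C4.5.

```python
# Sketch of context-sensitive prediction with a decision-tree context selector.
# Assumes X (instances), y (control actions) and cluster (context id per instance)
# have already been produced by the contextual clustering stage.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def train_switching_clone(X, y, cluster):
    # One stable concept per contextual cluster.
    stable = {}
    for c in np.unique(cluster):
        idx = (cluster == c)
        stable[c] = DecisionTreeClassifier().fit(X[idx], y[idx])
    # The context selector sees only the same attributes as the stable concepts,
    # so it has no short-term memory ("situation-action" control).
    selector = DecisionTreeClassifier().fit(X, cluster)
    return selector, stable

def act(selector, stable, x):
    x = np.asarray(x).reshape(1, -1)
    context = selector.predict(x)[0]       # pick the current context
    return stable[context].predict(x)[0]   # predict the control action for it
```

As in the experiment above, the selector sees only the same attributes as the stable concepts, so the resulting controller remains a pure situation-action pilot.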
7. Related work
There has been a substantial amount of work on dealing with known changes in
context using batch learning methods. Much of this work is directly relevant to the
challenges faced in using stable concepts for on-line prediction.
Known context can be dealt with by dividing the domain by context, and inducing
different classifiers for each context. At classification time, a meta-classifier can then
be used to switch between classifiers according to the current context (Sammut et
al., 1992; Katz, Gately & Collins, 1990). The application of stable concepts to on-line
classification used in this paper (Sections 3.2.2 and 6.2) uses a similar switching
approach. Unfortunately, it is not always possible to guarantee that all hidden
contexts are known.
New contexts might be dealt with by adapting an existing stable concept. Kubat
(1996) demonstrates that knowledge embedded in a decision tree can be transferred
to a new context by augmenting the decision tree with a second tier, which is
then trained on the new context. The second tier provides soft matching and
weights for each leaf of the original decision tree. Use of a two tiered structure was
originally proposed by Michalski (1990) for dealing with flexible concepts. Pratt
(1993) shows that knowledge from an existing neural network can be re-used to
significantly increase the speed of learning in a new context. These methods for
the transfer of knowledge between known contexts could be used on-line to adapt
stable concepts in a manner analogous to that used by Flora3.
It may be possible to improve the accuracy of stable concepts by combining data
from a range of contexts. Turney (1993) (Turney & Halasz, 1993) applies contextual
normalization, contextual expansion and contextual weighting to a range of
domains. He demonstrates that these methods can improve classification accuracy
for both instance based learning (Aha, Kibler & Albert, 1991) and multivariate re-
gression. This could be particularly valuable for a version of Splice using instance
based methods instead of C4.5.
A somewhat different on-line method designed to detect and exploit contextual
attributes is MetaL(B) (Widmer, 1996). In this case, contextual attributes predict
the relevance of other attributes. MetaL(B) uses any contextual attributes detected
to trigger changes in the set of features presented to the classifier. While this
approach and definition of context is quite different to that used by Splice, the
overall philosophy is similar. Widmer concludes by stating that:
"... the identification of contextual features is a first step towards naming,
and thus being able to reason about, contexts."
This is one of the main goals of Splice. The result of such reasoning would be
a model of the hidden context. Models of hidden context could be used in on-line
classification systems to augment existing reactive concept switching with a pro-active
component. Models of hidden context might also be applied to improving
domain understanding.
Some first steps toward building models of hidden context have been taken in this
article. The "Learning to Fly" experiment (Section 6.2) used a model of hidden
context, based on the types of instances expected in each context, to switch between
stable concepts.
To summarize, Splice begins to build a bridge between on-line methods for
dealing with hidden changes in context and batch methods for dealing with known
change in context.
8. Conclusion
This article has presented a new off-line paradigm for recognizing and dealing with
hidden changes in context. Hidden changes in context can occur in any domain
where the prediction task is poorly understood or where context is difficult to
isolate as an attribute. Most previous work with hidden changes in context has
used an on-line learning approach.
The new approach, Splice, uses off-line, batch, meta-learning to extract hidden
context and induce the associated stable concepts. It incorporates existing machine
learning systems (in this paper, C4.5 (Quinlan, 1993)). The initial implementation, Splice-1, was briefly reviewed and a new version, Splice-2, presented in full. The
evaluation of the Splice approach included an on-line prediction task, a series of
hidden context recognition tasks, and a complex control task.
Splice-1 was the initial implementation of the Splice approach and used C4.5
to divide a data series by likely changes of context. A process called contextual
clustering then grouped intervals appearing to be from the same context. This
process used the semantics of concepts induced from each interval as a measure
of the similarity of context. The resulting contextual clusters were used to create
context specific concepts and to specify context boundaries.
Splice-2 addresses the limitations of Splice-1 by permitting refinement of partition
boundaries. Splice-2 clusters on the basis of individual members of the data
series. Hence, context boundaries are not restricted to the boundaries found in the
partitioning stage and context boundaries can be refined. Splice-2 is much more
robust to the quality of the initial partitioning.
Splice-2 successfully detected and dealt with hidden context in a complex control
task. "Learning to Fly" is a behavioral cloning domain based upon learning an
autopilot given a series of sample flights with a fixed flight plan. Previous work on
this domain required the user to specify stages of the flight. Splice-2 was able to
successfully fly a substantial fragment of the initial flight plan without these stages
(or contexts) being specified. This is the first time that any automated procedure
has been successful for identifying context in this very complex domain.
A number of improvements could be made to the Splice algorithms. The partitioning
method used was shown to be problematic for Splice-1 at high levels of
noise and hidden changes in context. While the use of an existing machine learning
system to provide partitioning is elegant, a better solution may be to implement a
specialized method designed to deal with additional complexity over time. One approach
to this is to augment a decision tree algorithm to allow many splits (Fayyad & Irani, 1993) on selected attributes.
Neither Splice-1 nor Splice-2 provide a direct comparison of the relative advantage
of dividing the domain into one set of contexts over another. One comparison
method that could be used is the minimum description length (MDL) heuristic
(Rissanen, 1983). The MDL principle is that the best theory for a given concept
will minimize the amount of information that need be sent from a sender to a
receiver so that the receiver can correctly classify items in a shared dataset. In
this case, the information to be sent must contain any stable concepts, a context
switching method and a list of exceptions. At the very least, this would allow the
direct comparison of a given context-sensitive global concept (using stable concepts
and context switching) with a context-insensitive global concept. Further, a contextual
clustering method could use an MDL heuristic to guide a search through
the possible context divisions.
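As an illustration of how such an MDL comparison might look, the sketch below scores a context-sensitive global concept against a context-insensitive one. The per-node, per-switch and per-exception bit costs, and the sample numbers, are invented for the example; a real implementation would derive them from an actual coding scheme.

```python
def description_length(concept_sizes, num_switches, num_exceptions,
                       node_cost_bits=8.0, switch_cost_bits=16.0,
                       exception_cost_bits=16.0):
    """Crude MDL score (in bits) for a global concept: the stable concepts,
    the context switching method, and a list of exceptions."""
    concepts = sum(concept_sizes) * node_cost_bits
    switching = num_switches * switch_cost_bits
    exceptions = num_exceptions * exception_cost_bits
    return concepts + switching + exceptions

# Compare a context-sensitive model (several small trees, few exceptions)
# with a context-insensitive one (one large tree, no switching, more exceptions).
context_sensitive = description_length([12, 9, 15, 10], num_switches=7, num_exceptions=40)
context_insensitive = description_length([55], num_switches=0, num_exceptions=220)
print(context_sensitive, context_insensitive)
```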
The approaches used here for selecting the current context were an on-line voting
method for domains with immediate feedback and a decision tree for a domain
without immediate feedback. More sophisticated approaches would use a model of
the hidden context. Such a model could use knowledge about the expected context
duration, order and stability. It could also incorporate other existing attributes and
domain feedback. The decision tree used for context switching in the learning to fly
task is a primitive implementation of such a model using only existing attributes
to select the context.
An exciting possibility is to use the characteristics of contexts identified by Splice
to guide a search of the external world for an attribute with similar characteristics.
Any such attributes could then be incorporated with the current attribute set allowing
a bootstrapping of the domain representation. This could be used within
the Knowledge Discovery in Databases (KDD) approach (Fayyad, Piatetsky-Shapiro & Smyth, 1996) which includes the notion that analysts can reiterate the data selection
and learning (data mining) tasks. Perhaps too, this method could provide a
way for an automated agent to select potentially useful attributes from the outside
world, with which to extend its existing domain knowledge.
Acknowledgments
We would like to thank the editors and the anonymous reviewers. Their suggestions
led to a much improved paper.
Michael Harries was partially supported by an Australian Postgraduate Award
(Industrial) sponsored by RMB Australia.
Notes
1. In the following experiments, n% noise implies that the class was randomly selected with a
probability of n%. This method for generating noise was chosen to be consistent with Widmer
and Kubat (1996).
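For completeness, the noise model of Note 1 can be written down directly; the sketch below assumes a list of class labels and a fixed set of classes, and reassigns the class uniformly at random with the given probability.

```python
import random

def add_class_noise(labels, noise_percent, classes=(0, 1), seed=0):
    # With probability noise_percent/100 the class of an example is replaced
    # by a class drawn uniformly at random (possibly the same class).
    rng = random.Random(seed)
    return [rng.choice(classes) if rng.random() < noise_percent / 100.0 else y
            for y in labels]
```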
--R
Incremental batch learning.
Detecting concept drift in financial time series prediction using symbolic machine learning.
Neural Computation
Floating approximation in time-varying knowledge bases
Second tier for decision trees.
Adapting to drift in continuous domains.
Classifying sensor data with CALCHAS.
Learning flexible concepts: Fundamental ideas and a method based on two-tiered representation
Some experiments in applying inductive inference principles to surface reconstruction.
A universal prior for integers and estimation by minimum description length. Annals of Statistics.
Density adaptive learning and forgetting.
Learning to fly.
Robust classification with context-sensitive features
Recognition and exploitation of contextual clues via incremental meta-learning
Effective learning in dynamic environments by explicit concept tracking.
Learning in the presence of concept drift and hidden contexts.
Bruce Edmonds, Learning and exploiting context in agents, Proceedings of the first international joint conference on Autonomous agents and multiagent systems: part 3, July 15-19, 2002, Bologna, Italy | concept drift;hidden context;contextual clustering;batch learning;context-sensitive learning |
296379 | Tracking the Best Disjunction. | Littlestone developed a simple deterministic on-line learning algorithm for learning k-literal disjunctions. This algorithm (called Winnow) keeps one weight for each of the n variables and does multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm for the case when the disjunction may change over time. In this case a possible target disjunction schedule T is a sequence of disjunctions (one per trial) and the shift size is the total number of literals that are added/removed from the disjunctions as one progresses through the sequence. We develop an algorithm that predicts nearly as well as the best disjunction schedule for an arbitrary sequence of examples. This algorithm, which allows us to track the predictions of the best disjunction, is hardly more complex than the original version. However, the amortized analysis needed for obtaining worst-case mistake bounds requires new techniques. In some cases our lower bounds show that the upper bounds of our algorithm have the right constant in front of the leading term in the mistake bound and almost the right constant in front of the second leading term. Computer experiments support our theoretical findings. | Introduction
One of the most significant successes of the Computational Learning Theory community
has been Littlestone's formalization of an on-line model of learning and
the development of his algorithm Winnow for learning disjunctions (Littlestone,
1989, 1988). The key feature of Winnow is that when learning disjunctions of
constant size, the number of mistakes of the algorithm grows only logarithmically
with the input dimension. For many other standard algorithms such as the Perceptron
Algorithm (Rosenblatt, 1958), the number of mistakes can grow linearly in
the dimension (Kivinen, Warmuth, & Auer, 1997). In the meantime a number of
algorithms similar to Winnow have been developed that also show the logarithmic
growth of the loss bounds in the dimension (Littlestone & Warmuth, 1994; Vovk,
1990; Cesa-Bianchi et al., 1997; Haussler, Kivinen, & Warmuth, 1994).
* An extended abstract appeared in (Auer & Warmuth, 1995).
* M. K. Warmuth acknowledges the support of the NSF grant CCR 9700201.
In this paper we give a refined analysis of Winnow, develop a randomized version
of the algorithm, give lower bounds that show that both the deterministic and the
randomized version are close to optimal, and adapt both versions so that they can
be used to track the predictions of the best disjunction.
Consider the following by now standard on-line learning model (Littlestone, 1989, 1988; Vovk, 1990; Cesa-Bianchi et al., 1997). Learning proceeds in trials. In trial t the algorithm is presented with an instance x_t (in our case an n-dimensional binary vector) that is used to produce a binary prediction ŷ_t. The algorithm then receives a binary classification y_t of the instance and incurs a mistake if ŷ_t ≠ y_t. The goal is to minimize the number of mistakes of the algorithm for an arbitrary sequence of examples ⟨(x_1, y_1), ..., (x_T, y_T)⟩. This is of course a hopeless scenario: for any deterministic algorithm an adversary can always choose the sequence so that the algorithm makes a mistake in each trial. A more reasonable goal is to minimize the number of mistakes of the algorithm compared to the minimum number of mistakes made by any concept from a comparison class.
1.1. The (non-shifting) basic setup
In this paper we use monotone¹ k-literal disjunctions as the comparison class. If the dimension (number of Boolean attributes/literals) is n then such disjunctions are Boolean formulas of the form x_{i_1} ∨ x_{i_2} ∨ ... ∨ x_{i_k}, where the (distinct) indices i_j lie in {1, ..., n}. The number of classification errors of such a disjunction with respect
to a sequence of examples is simply the total number of misclassifications that this
disjunction produces on the sequence. The goal is to develop algorithms whose
number of mistakes is not much larger than the number of classification errors of
the best disjunction, for any sequence of examples.
In this paper we consider the case where the mistakes of the best ("target") disjunction are caused by attribute errors. The number of attribute errors of an example (x, y) with respect to a target disjunction u is the minimum number of attributes/bits of x that have to be changed so that for the resulting x', u(x') = y. The number of attribute errors for a sequence of examples with respect to a target concept is simply the total number of such errors for all examples of the sequence. Note that if the target u is a k-literal monotone disjunction then the number of attribute errors is at most k times the number of classification errors with respect to u (i.e. k times the number of examples (x, y) in the sequence for which u(x) ≠ y).
Winnow can be tuned as a function of k so that it makes at most O(A + k ln(n/k)) mistakes on any sequence of examples where the best disjunction incurs at most A attribute errors (Littlestone, 1988). We give a randomized version of Winnow and give improved tunings of the original algorithm. The new algorithm can be tuned based on k and A so that its expected mistake bound is at most A plus a lower-order term (made precise in Theorem 6) on any sequence of examples for which there is a monotone k-literal disjunction with at most A attribute errors. We also show how the original deterministic algorithm can be tuned so that its number of mistakes is at most 2A plus a corresponding lower-order term for the same set of sequences.
Our lower bounds show that these bounds are very close to optimal. We show that for any algorithm the expected number of mistakes must be at least A plus a corresponding square-root term (see Theorem 10), so our upper bound has the correct constant on the leading term and almost the optimal constant on the second term. For deterministic algorithms our lower bounds show that the constant on the leading term is optimal.
Our lower bounds for both the deterministic and the randomized case cannot be improved
significantly because there are essentially matching upper bounds achieved
by non-efficient algorithms with the correct factors on the first and the second term.
These algorithms use (n choose k) experts (Cesa-Bianchi et al., 1997): each expert simply computes the value of a particular k-literal disjunction and one weight is kept per expert. This amounts to expanding the n-dimensional Boolean inputs into (n choose k)-dimensional Boolean inputs and then using single literals (=experts) (Littlestone & Warmuth, 1994; Vovk, 1990; Cesa-Bianchi et al., 1997) as the comparison class instead of k-literal disjunctions. The expected number of mistakes of the randomized algorithm is at most Q plus a square-root term of order √(Q ln (n choose k)), where Q is a bound on the number of classification errors of the best k-literal disjunction. The mistake bound of the deterministic algorithm is exactly twice as high. Observe that these algorithms have to use about n^k weights, and that they need that much time in each trial to calculate their prediction and update the weights. Thus their run time is exponential in k.
In contrast, our algorithm uses only n weights. On the other hand the noise in the
upper bounds of our efficient algorithm is measured in attribute errors rather than
classification errors. This arises since we are using just one weight per attribute.
Recall that a classification error with respect to a k-literal disjunction can equate to up to k attribute errors. To capture errors that affect up to k attributes efficiently, the expansion to (n choose k) experts seems to be unavoidable. Nevertheless, it is surprising that our version of Winnow is able to get the right factor before the number of attribute errors A and, for the randomized version, almost the right factor before the square root term. In some sense Winnow compresses (n choose k) weights to only n weights.
At this point we don't have a combinatorial interpretation of our weights. Such
an interpretation was only found for the single literal (expert) case (Cesa-Bianchi,
Freund, Helmbold, & Warmuth, 1996).
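To get a feel for the compression, the following lines compare the number of weights an expert-based algorithm would need, (n choose k), with the n weights kept by Winnow; the sample values of n and k are arbitrary.

```python
from math import comb

for n, k in [(100, 3), (1000, 5), (10000, 5)]:
    print(f"n={n}, k={k}: (n choose k) = {comb(n, k):,} expert weights vs. {n} Winnow weights")
```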
Like Littlestone (Littlestone, 1991), we use an amortized analysis with an entropic
potential function to obtain our worst-case loss bounds. However besides the more
careful tuning of the bounds we take the amortized analysis method a significant
step further by proving mistake bounds of our algorithm as compared to the best
shifting disjunction.
1.2. Shifting disjunctions
Assume that a disjunction u is specified by a n-dimensional binary vector, where
the components with value 1 correspond to the monotone literals of the disjunction.
For two disjunctions u and u' the Hamming distance measures how many literals have to be "shifted" to obtain u' from u. A disjunction schedule T for a sequence of examples of length T is simply a sequence of T disjunctions u_t. The (shift) size of the schedule T is Σ_{t=1}^T |u_t − u_{t−1}| (where u_0 is the all zero vector). In the original non-shifting case all u_t are equal to some k-literal disjunction u and according to the above definition the "shift size" is k.
At trial t the schedule T predicts with disjunction u_t. We define the number of attribute errors of an example sequence ⟨(x_t, y_t)⟩ with respect to a schedule T as the total number of attributes that have to be changed in the sequence of examples to make it consistent with the schedule T, i.e. such that the changed instances x'_t satisfy u_t(x'_t) = y_t for all t.
Note that the loss bounds for the non-shifting case can be written as cA + O(√(AB) + B), where B = log_2 (n choose k) is the number of bits it takes to describe a disjunction with k literals, and where c = 1 for the randomized and c = 2 for the deterministic algorithm. Surprisingly, we were able to prove bounds of the same form for the shifting disjunction case. B is now the number of bits it takes to describe the best schedule T and A is the number of attribute errors of this schedule. If Z is the shift size of schedule T then it takes log_2 ((A+Z choose Z) · n^Z) bits to describe a schedule T with respect to a given sequence of examples.²
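The description length B of a schedule can be evaluated directly from this counting argument (see also Note 2 at the end of the paper); the numbers in the example call are arbitrary.

```python
from math import comb, log2

def schedule_description_bits(A, Z, n):
    # Choose the Z shift trials among the A + Z candidate trials and one of
    # the n literals for each shift: log2( C(A+Z, Z) * n**Z ) bits.
    return log2(comb(A + Z, Z)) + Z * log2(n)

print(schedule_description_bits(A=50, Z=10, n=1000))
```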
Our worst-case mistake bounds are similar to bounds obtained for "competitive
algorithms" in that we compare the number of mistakes of our algorithm against
the number of attribute errors of the best off-line algorithm that is given the whole
sequence ahead of time. The off-line algorithm still incurs A attribute errors and
here we bound the additional loss of the on-line algorithm over the number of
attribute errors of the best schedule (as opposed to the coarser method of bounding
the ratio of on-line over off-line).
Winnow does multiplicative updates to its weights. Whenever the algorithm makes a mistake then the weights of all the literals for which the corresponding bit in the current input instance is one are multiplied by a factor. In the case of Winnow2, the version of Winnow this paper is based on (Littlestone, 1988), this factor is either α or 1/α, where α > 1 is a parameter of the algorithm. The multiplicative weight updates might cause the weights of the algorithm to decay rather rapidly. Since any literal might become part of the disjunction schedule even when it was misleading during the early part of the sequence of examples, any algorithm that is to predict well as compared to the best disjunction schedule must be able to recover weights quickly. Our extension of Winnow2 simply adds a step to the original algorithm that resets a weight to β/n whenever it drops below this boundary.
Similar methods for lower bounding the weights were used in the algorithm Wml
of (Littlestone & Warmuth, 1994) which was designed for predicting as well as the
best shifting single literal (which is called expert in (Cesa-Bianchi et al., 1997)).
In addition to generalizing the work of (Littlestone & Warmuth, 1994) to arbitrary
size disjunctions we were able to optimize the constant in the leading term of the
mistake bound of Winnow and develop a randomized version of the algorithm.
In (Herbster & Warmuth, 1998) the work of (Littlestone & Warmuth, 1994) was
generalized in a different direction. The focus there is to predict as well as the
best shifting expert, where "well" is measured in terms of other loss functions than
the discrete loss (counting mistakes) which is the loss function used in this paper.
Again the basic building block is a simple on-line algorithm that uses multiplicative
weight updates (Vovk, 1990; Haussler et al., 1994) but now the predictions and the
feedback in each trial are real-valued and lie in the interval [0; 1]. The class of loss
functions includes the natural loss functions of log loss, square loss and Hellinger
loss. Now the loss does not occur in "large" discrete units. Instead the loss in a
trial my be arbitrarily small and thus more sophisticated methods are needed for
recovering small weights quickly (Herbster & Warmuth, 1998) than simply lower
bounding the weights.
Why are disjunctions so important? Whenever a richer class is built by (small)
unions of a large number of simple basic concepts, our methods can be applied.
Simply expand the original input into as many inputs as there are basic concepts.
Since our mistake bounds only depend logarithmically on the number of basic con-
cepts, we can even allow exponentially many basic concepts and still have polynomial
mistake bounds. This method was previously used for developing noise robust
algorithms for predicting nearly as well as the best discretized d-dimensional axis-parallel
box (Maass & Warmuth, 1998; Auer, 1993) or as well as the best pruning
of a decision tree (Helmbold & Schapire, 1997). In these cases a multiplicative
algorithm maintains one weight for each of the exponentially many basic concepts.
However for the above examples, the multiplicative algorithms with the exponentially
many weights can still be simulated efficiently. Now, for example, the methods
of this paper immediately lead to an efficient algorithm for predicting as well as the
best shifting d-dimensional box. Thus by combining our methods with existing al-
gorithms, we can design efficient learning algorithms with provably good worst-case
loss bounds for more general shifting concepts than disjunctions.
Besides doing experiments on practical data that exemplify the merits of our worst-case
mistake bounds, this research also leaves a number of theoretical open prob-
lems. Winnow is an algorithm for learning arbitrary linear threshold functions
and our methods for tracking the best disjunction still need to be generalized to
learning this more general class of concepts.
We believe that the techniques developed here for learning how to predict as well
as the best shifting disjunction will be useful in other settings such as developing
algorithms that predict nearly as well as the best shifting linear combination. Now
the discrete loss has to be replaced by a continuous loss function such as the square
loss, which makes this problem more challenging.
1.3. Related work
There is a natural competitor to Winnow which is the well known Perceptron
algorithm (Rosenblatt, 1958) for learning linear threshold functions. This algorithm
does additive instead of multiplicative updates. The classical Perceptron
Convergence Theorem gives a mistake bound for this algorithm (Duda & Hart,
1973; Haykin, 1994), but this bound is linear in the number of attributes (Kivinen
et al., 1997) whereas the bounds for the Winnow-like algorithms are logarithmic
in the number of attributes. The proof of the Perceptron Convergence Theorem
can also be seen as an amortized analysis. However the potential function needed
for the perceptron algorithm is quite different from the potential function used for
the analysis of Winnow. If w_t is the weight vector of the algorithm in trial t and u is a target weight vector, then for the Perceptron algorithm ||u − w_t||_2^2 is the potential function, where ||·||_2 is the Euclidean length of a vector. In contrast, the potential function used for the analysis of Winnow (Littlestone, 1988, 1989) that is also used in this paper is the following generalization³ of relative entropy (Cover & Thomas, 1991):

D(u, w_t) = Σ_{i=1}^n ( u_i ln(u_i / w_{t,i}) + w_{t,i} − u_i ).
In the case of linear regression a framework was developed (Kivinen & Warmuth,
1997) for deriving updates from the potential function used in the amortized anal-
ysis. The same framework can be adapted to derive both the Perceptron algorithm
and Winnow. The different potential functions for the algorithms lead to the
additive and multiplicative algorithms, respectively. The Perceptron algorithm is
seeking a weight vector that is consistent with the examples but otherwise minimizes
some Euclidean length. Winnow instead minimizes a relative entropy and
is thus rooted in the Minimum Relative Entropy Principle of Kullback (Kapur &
Kesavan, 1992; Jumarie, 1990).
1.4. Organization of the paper
In the next section we formally define the notation we will use throughout the
paper. Most of them have already been discussed in the introduction. Section 3
presents our algorithm and Section 4 gives the theoretical results for this algorithm.
In Section 5 we consider some more practical aspects, namely how the parameters
of the algorithm can be tuned to achieve good performance. Section 6 reports some
experimental results. The analysis of our algorithm and the proofs for Section 4
are given in Section 7. Lower bounds on the number of mistakes made by any
algorithm are shown in Section 8 and we conclude in Section 9.
2. Notation
A target schedule T = ⟨u_1, ..., u_T⟩ is a sequence of disjunctions represented by n-ary bit vectors u_t ∈ {0,1}^n. The size of the shift from disjunction u_{t−1} to disjunction u_t is z_t = Σ_{i=1}^n |u_{t,i} − u_{t−1,i}|. The total shift size of schedule T is Z = Σ_{t=1}^T z_t, where we assume that u_0 = (0, ..., 0). To get more precise bounds for the case when there are shifts in the target schedule we will distinguish between shifts where a literal is added to the disjunction and shifts where a literal is removed from the disjunction. Thus we define z_t^+ = |{i : u_{t−1,i} = 0 and u_{t,i} = 1}| as the number of times a literal is switched on, and z_t^− = |{i : u_{t−1,i} = 1 and u_{t,i} = 0}| as the number of times a literal is switched off.
A sequence of examples S = ⟨(x_1, y_1), ..., (x_T, y_T)⟩ consists of attribute vectors x_t ∈ {0,1}^n and classifications y_t ∈ {0,1}. The prediction of disjunction u_t for attribute vector x_t is u_t(x_t) = 1 if u_t · x_t ≥ 1 and u_t(x_t) = 0 otherwise. The number of attribute errors a_t at trial t with respect to a target schedule T is the minimal number of attributes that have to be changed, resulting in x'_t, such that u_t(x'_t) = y_t. That is, a_t = min{ |x_t − x'_t| : x'_t ∈ {0,1}^n, u_t(x'_t) = y_t }. The total number of attribute errors of sequence S with respect to schedule T is A = Σ_{t=1}^T a_t. We denote by S(Z, A, n) the class of example sequences S with n attributes which are consistent with some target schedule T with shift size Z and with at most A attribute errors. If we wish to distinguish between positive and negative shifts we denote the corresponding class by S(Z^+, Z^−, A, n), where Z^+ and Z^− are the numbers of literals added and removed, respectively, in the target schedule. By S_0(k, A, n) we denote the class of example sequences S with n attributes which are consistent with some non-shifting target schedule (i.e. u_1 = ... = u_T = u for some k-literal disjunction u) and with at most A attribute errors. For the case that only upper bounds on Z, Z^+, Z^−, or k are known we denote the corresponding classes by S_≤(Z, A, n) = ∪_{z≤Z} S(z, A, n), S_≤(Z^+, Z^−, A, n), and S_0,≤(k, A, n) = ∪_{κ≤k} S_0(κ, A, n).
The loss of a learning algorithm L on an example sequence S is the number of misclassifications M(L, S) = |{ t : ŷ_t ≠ y_t }|, where ŷ_t is the binary prediction of the learning algorithm L in trial t.
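The quantities a_t and A translate directly into code. The sketch below follows the definitions above for monotone disjunctions over binary attributes; it is only meant to make the notation concrete.

```python
def attribute_errors(u, x, y):
    """Minimal number of bits of x to flip so that the monotone disjunction u
    (a 0/1 vector) classifies the example as y. Assumes u has at least one
    literal when y == 1."""
    active = sum(1 for ui, xi in zip(u, x) if ui and xi)  # relevant attributes set to 1
    if y == 1:
        return 0 if active >= 1 else 1   # switch one relevant attribute on
    return active                        # switch every firing relevant attribute off

def total_attribute_errors(schedule, examples):
    # A = sum of a_t over the whole sequence for a given disjunction schedule.
    return sum(attribute_errors(u, x, y) for u, (x, y) in zip(schedule, examples))
```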
3. The algorithm
We present algorithm Swin ("Shifting Winnow"), see Table 1, an extension of
Littlestone's Winnow2 algorithm (Littlestone, 1991). Our extension incorporates
a randomization of the algorithm, and it guarantees a lower bound on the weights
used by the algorithm. The algorithm maintains a vector of n weights for the n
attributes. By w_t we denote the weights at the end of trial t,
Table 1. Algorithm Swin
Parameters: The algorithm uses parameters α > 1, β ≥ 0, an initial weight w_0 > 0, and a prediction function p(·) with values in [0, 1].
Initialization: Set the weights to initial values w_{0,i} = w_0 for i = 1, ..., n.
In each trial t ≥ 1 set r_t = w_{t−1} · x_t and predict ŷ_t = 1 with probability p(r_t), ŷ_t = 0 with probability 1 − p(r_t).
Receive the binary classification y_t.
If y_t = p(r_t) then w_t = w_{t−1}.
Update: If y_t ≠ p(r_t) then for all i = 1, ..., n:
1. w'_{t,i} = w_{t−1,i} · α^{(2 y_t − 1) x_{t,i}}
2. w_{t,i} = max{ w'_{t,i}, β/n }
and w_0 denotes the initial value of the weight vector. In trial t the algorithm predicts using the weight vector w_{t−1}. The prediction of the algorithm depends on r_t = w_{t−1} · x_t = Σ_{i=1}^n w_{t−1,i} x_{t,i} and a function p : [0, ∞) → [0, 1]. The algorithm predicts 1 with probability p(r_t), and it predicts 0 with probability 1 − p(r_t). To obtain a deterministic algorithm one has to choose a function p with values in {0, 1}. After predicting, the algorithm receives the classification y_t. If y_t = p(r_t) then w_t = w_{t−1}, i.e. the weight vector is not modified. Since y_t ∈ {0,1} and p(r_t) ∈ [0,1] this can only occur when the prediction was deterministic, i.e. p(r_t) ∈ {0,1}, and correct. An update occurs in all other cases, when the prediction was wrong or p(r_t) ∈ (0, 1). The updates of the weights are performed in two steps. The first step is the original Winnow update, and the second step guarantees that no weight is smaller than β/n for some parameter β (a similar approach was taken in (Littlestone & Warmuth, 1994)). Observe that the weights are changed only if the probability of making a mistake was non-zero. For the deterministic algorithm this means that the weights are changed only if the algorithm made a mistake. Furthermore the i-th weight is modified only if x_{t,i} = 1. The weight is increased (multiplied by α) if y_t = 1, it is decreased (divided by α) if y_t = 0. The parameters α, β, w_0, and the function
p(·), have to be set appropriately. A good choice for the function p(·) is the following: for a randomized prediction let p(·) be the function given in (RAND), and for a deterministic version of the algorithm let p(·) be the threshold function given in (DET). For the randomized version one has to choose β < ln α. Observe that (DET) is obtained from (RAND) by choosing the threshold in (RAND) appropriately. This corresponds to the straightforward conversion from a randomized prediction algorithm into a deterministic prediction algorithm.
Theoretically good choices of the parameters α, β, and w_0 are given in the next section and practical issues for tuning the parameters are discussed in Section 5.
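The update rule of Table 1 is short enough to sketch in a few lines. The class below is a deterministic illustration only: the threshold theta and the default parameter values are placeholders standing in for the functions (RAND)/(DET) and the tunings of the next section; the multiplicative update by α on the attributes with x_{t,i} = 1 and the lower-bounding of every weight by β/n follow the description above.

```python
class SwinSketch:
    """Illustrative deterministic variant of Swin (not the tuned algorithm)."""

    def __init__(self, n, alpha=2.0, beta=0.5, w0=None, theta=1.0):
        self.alpha, self.beta, self.theta = alpha, beta, theta
        self.w = [w0 if w0 is not None else 1.0 / n] * n   # w_{0,i} = w_0

    def predict(self, x):
        r = sum(wi for wi, xi in zip(self.w, x) if xi)     # r_t = w_{t-1} . x_t
        return 1 if r >= self.theta else 0                 # stand-in for p(r_t)

    def update(self, x, y, y_hat):
        if y == y_hat:                                     # w_t = w_{t-1} on correct trials
            return
        factor = self.alpha if y == 1 else 1.0 / self.alpha
        floor = self.beta / len(self.w)                    # no weight drops below beta/n
        self.w = [max(wi * factor, floor) if xi else wi
                  for wi, xi in zip(self.w, x)]
```

For the randomized variant, predict would instead return 1 with probability p(r_t), and an update would be triggered whenever y_t ≠ p(r_t).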
4. Results
In this section we give rigorous bounds on the (expected) number of mistakes of Swin, first in general and then for specific choices of α, β, and w_0, all with p(·) chosen from (RAND) or (DET). These bounds can be shown to be close to optimal for adversarial example sequences; for details see Section 8.
Theorem 1 (randomized version) Let ff ? 1,
n , and p(\Delta)
as in (RAND). Then for all S
A
If fi - n
e then the bound holds for all S
Theorem 2 (deterministic version) Let ff ? 1,
n , and
p(\Delta) as in (DET). Then for all S
A
If fi - n
e then the bound holds for all S
Theorem 3 (non-shifting case) Let ff ? 1,
if Swin uses the function p(\Delta) given by (RAND), and
if Swin uses the function p(\Delta) given by (DET).
e then the bounds hold for all S 2 S -
Remark. The usual conversion of a bound M for the randomized algorithm into a
bound for the deterministic algorithm would give 2M as the deterministic bound. 4
But observe that our deterministic bound is just 1 times the randomized
bound.
Since at any time a disjunction cannot contain more than n literals we have Z
which gives the following corollary.
, and w
. If p(\Delta) as in
(RAND) then for all S 2 S - (Z; A; n)
A
j.
j.
If p(\Delta) as in (DET) then for all S 2 S - (Z; A; n)
A
ffn
j.
j.
At first we give results on the number of mistakes of Swin, if no information besides
n, the total number of attributes, is given.
Theorem 4 Let
n , and p(\Delta) be as in (RAND). Then
for all S 2 S - (Z; A; n)
n , and p(\Delta) be as in (DET). Then for all S 2
n , and p(\Delta) be as in (RAND). Then for all S 2
then the above bound holds for all S 2 S -
n). For n - 2 we have
, and p(\Delta) be as in (DET). Then for all S 2 S 0 (k; A; n)
then the above bound holds for all S 2 S -
n). For n - 2 we have
In Section 8 we will show that these bounds are optimal up to constants. If A
and Z are known in advance then the parameters of the algorithm can be tuned to
obtain even better results. If for example in the non-shifting case the number k of
attributes in the target concept is known we get
Theorem 5 Let
n , and p(\Delta) be as in (RAND). Then for
e
then the above bound holds for all S 2 S -
n). For k - n
e
we set
e
and get EM(Swin;S) - 1:44
e
for all S 2 S -
, and p(\Delta) be as in (DET). Then for all S 2 S 0 (k; A; n)
e then the above bound holds for all S 2 S -
n). For k - n
e we set
e and get M(Swin;S) - 2:75
e for all S 2 S -
Of particular interest is the case when A is the dominant term, i.e. A ≫ k ln n.
Theorem 6 Let A - k ln n
A
n , and p(\Delta) be as in
(RAND). Then for all S 2 S 0 (k; A; n)
r
e then the above bound holds for all S 2 S -
n). For k - n
e , A - n
e ,
e , we have EM(Swin;S) - A
e for all
If A - 2k
A
, and p(\Delta) be as in (DET), then
for all S 2 S 0 (k; A; n)
r
e
then the above bound holds for all S 2 S -
n). For k - n
e
e
e , we have M(Swin;S) - 2A
e for all
In the shifting case we get, for dominant A ≫ Z ln n:
Theorem 7 Let
Z+minfn;Zg
Z
, and A and Zsuch that ffl - 1
. Then for
n , and p(\Delta) as in (RAND), and for all S 2 S - (Z; A; n),
r
An
Z
Z+minfn;Zg
A
An
Z
n , and p(\Delta) as in
(DET), then for all S 2 S - (Z; A; n),
r
An
Z
In Section 8 we will show that in Theorems 6 and 7 the constants on A are optimal.
Furthermore we can show for the randomized algorithm that also the magnitude of
the second order term in Theorem 6 is optimal.
5. Practical tuning of the algorithm
In this section we give some thoughts on how the parameters α, β, and w_0 of Swin should be chosen for particular target schedules and sequences of examples.
Our recommendations are based on our mistake bounds which hold for any target
schedule and for any sequence of examples with appropriate bounds on the number
of shifts and attribute errors. Thus it has to be mentioned that, since many target
schedules and many example sequences are not worst case, our bounds usually
overestimate the number of mistakes made by Swin. Therefore parameter settings
different from our recommendations might result in a smaller number of errors for
a specific target schedule and example sequence. On the other hand Swin is quite
insensitive to small changes in the parameters (see Section 6) and the effect of such
changes should be benign.
If little is known about the target schedule or the example sequence then the parameter
settings of Theorems 4 or 5 are advisable since they balance well between
the effect of target shifts and attribute errors. If good estimates for the number of
target shifts and the number of attribute errors are known then good parameters
can be calculated by numerically minimizing the bounds in Theorems 1, 2, 3 or
Corollary 1, respectively.
If the average rate of target shifts and attribute errors is known, such that Z ≤ r_Z T and A ≤ r_A T with r_Z > 0 and r_A ≥ 0, then for large T the error rate r = M(Swin, S)/T is, by Corollary 1, approximately upper bounded by an expression in r_A, r_Z, α, β, and n for randomized predictions, and by a corresponding expression for deterministic predictions. Again, optimal choices for α and β can be obtained by numerical minimization.
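Such a minimization can be done with a simple grid search. In the sketch below the function bound() is only a stand-in so that the code runs; it should be replaced by the actual error-rate bound of Corollary 1 evaluated at the given α and β.

```python
import itertools
import math

def bound(alpha, beta, r_Z, r_A, n):
    # Placeholder objective -- NOT the bound from the paper. It merely mimics the
    # trade-off (larger alpha helps against attribute errors, smaller beta hurts
    # recovery from shifts) so that the search below is executable.
    return r_A * (alpha + 1) / math.log(alpha) + r_Z * math.log(n / beta) / math.log(alpha)

def tune(r_Z, r_A, n):
    alphas = [1.1 + 0.1 * i for i in range(90)]    # 1.1 .. 10.0
    betas = [0.01 * i for i in range(1, 100)]      # 0.01 .. 0.99
    return min(itertools.product(alphas, betas),
               key=lambda ab: bound(ab[0], ab[1], r_Z, r_A, n))

print(tune(r_Z=0.001, r_A=0.01, n=100))
```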
6. Experimental results
The experiment reported in this section is not meant to give a rigorous empirical
evaluation of algorithm Swin. Instead, it is intended as an illustration of the typical
behavior of Swin, compared with the theoretical bound and also with a version of
Winnow which was not modified to adapt to shifts in the target schedule.
In our experiment we used a target schedule T over n attributes which starts with 4 active literals. After 1000 trials one of the literals is switched off and after another 1000 trials another literal is switched on. This switching on and switching off of literals continues as depicted in Figure 1 (there are initially 4 active literals).
The example sequence ⟨(x_t, y_t)⟩ was chosen such that for half of the examples y_t = 1 and for the other half y_t = 0. The values of attributes not appearing in the target schedule were chosen at random such that x_{t,i} = 1 with probability 1/2. For examples with y_t = 1 exactly one of the active attributes (chosen at random) was set to 1. For examples with attribute errors all relevant attributes were either set to 1 (for the case y_t = 0) or set to 0 (for the case y_t = 1). Attribute errors occurred at certain trials with y_t = 1 and at certain other trials.

Figure 1. Shifts in the target schedule used in the experiment (number of active literals over the number of trials).
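The synthetic data of this experiment can be regenerated along these lines. The sketch below fixes arbitrary values for the number of attributes, the number of rounds and the error trials (the concrete values used for the figures are not reproduced here) and otherwise follows the description above.

```python
import random

def make_sequence(n=30, rounds=8, round_len=1000, error_trials=(), rng=None):
    rng = rng or random.Random(0)
    active = {0, 1, 2, 3}                     # start with 4 active literals
    next_literal = 4
    examples, schedule = [], []
    for t in range(rounds * round_len):
        if t > 0 and t % round_len == 0:      # alternately switch a literal off / on
            if (t // round_len) % 2 == 1:
                active.remove(next(iter(active)))   # which literal is arbitrary here
            else:
                active.add(next_literal); next_literal += 1
        u = [1 if i in active else 0 for i in range(n)]
        y = t % 2                             # half positive, half negative examples
        x = [rng.randint(0, 1) if u[i] == 0 else 0 for i in range(n)]
        if y == 1:
            x[rng.choice(sorted(active))] = 1 # exactly one active attribute set to 1
        if t in error_trials:                 # attribute error: set all relevant bits to 1-y
            for i in active:
                x[i] = 1 - y
        schedule.append(u); examples.append((x, y))
    return schedule, examples
```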
Figure 2 shows the performance of Swin compared with the theoretical bound, where the parameters were set by numerically minimizing the bound of Corollary 1 as described in the previous section. The theoretical bound at trial t is calculated from the actual number of shifts and attribute errors up to this trial. Thus an increase of the bound is due to a shift in the target schedule or an attribute error at this trial. In Figure 2 the reasons for these increases are indicated by z^+ for a literal switched on, z^− for a literal switched off, and a for attribute errors.
Figure 2 shows that the theoretical bound very accurately depicts the behavior of Swin, although it overestimates the actual number of mistakes by some amount.
It can be seen that switching off a literal causes far fewer mistakes than switching on
a literal, as predicted by the bound. Also the relation between attribute errors and
mistakes can be seen.
The performance of Swin for the whole sequence of examples is shown in Figure 3
and it is compared with the performance of a version of Winnow which was not
modified for target shifts. As can be seen Swin adapts very quickly to shifts in
the target schedule. On the other hand, the unmodified version of Winnow makes
more and more mistakes for each shift.
The unmodified version of Winnow we used is just Swin with β = 0. Thus the weights are not lower bounded and can become arbitrarily small, which causes a large number of mistakes if the corresponding literal becomes active.
Figure 2. Comparison of Swin with the theoretical bound for a particular target schedule and sequence of examples. Shifts and attribute errors are indicated by z^+, z^−, and a. (Axes: number of mistakes over the number of trials; curves: theoretical bound and performance of Swin.)
Figure 3. A version of Winnow which does not lower bound the weights makes many more mistakes than Swin. (Axes: number of mistakes over the number of trials; curves: theoretical bound, performance of Swin, and performance of Winnow.)
We used the same α for the unmodified version but we set w_0 to the value which is optimal for the initial part of the target schedule. Therefore the unmodified Winnow adapts very
quickly to this initial part, but then it makes an increasing number of mistakes
for each shift in the target schedule. For each shift the number of mistakes made
approximately doubles.
In the last plot, Figure 4, we compare the performance of Swin with tuned parameters
to the performance of Swin with the generic parameter setting given by
Theorem 4. Although the tuned parameters do perform better the difference is
Figure 4. Tuned parameters of Swin versus the generic parameters. (Axes: number of mistakes over the number of trials; curves: theoretical bound and performance of Swin.)
relatively small.
The overall conclusion of our experiment is that, first, the theoretical bounds capture
the actual performance of the algorithm quite well, second, that some mechanism
of lower bounding the weights of Winnow is necessary to make the algorithm
adaptive to target shifts, and third, that moderate changes in the parameters do
not change the qualitative behavior of the algorithm.
7. Amortized analysis
In this section we first prove the general bounds given in Theorems 1 and 2 for the randomized and for the deterministic version of Swin. Then from these bounds we calculate the bounds given in Theorems 4-7 for specific choices of the parameters. The analysis of the algorithm proceeds by showing that the distance between the weight vector of the algorithm w_t and the vector u_t representing the disjunction at trial t decreases if the algorithm makes a mistake. The potential/distance function used for the previous analysis of Winnow (Littlestone, 1988, 1989, 1991) is the following generalization of relative entropy to arbitrary non-negative weight vectors:

D(u, w) = Σ_{i=1}^n ( u_i ln(u_i / w_i) + w_i − u_i ).

(This distance function was also used for the analysis of the EGU regression algorithm (Kivinen & Warmuth, 1997), which shows that Winnow is related to the EGU algorithm.) By taking derivatives it is easy to see that the distance is minimal and equal to 0 if and only if w = u. With the convention that 0 ln 0 = 0 and the assumption that u ∈ {0,1}^n the distance function simplifies to

D(u, w) = Σ_{i=1}^n (w_i − u_i) − Σ_{i : u_i = 1} ln w_i.
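Both forms of the distance are easy to check numerically; the snippet below (a sanity check only) evaluates them for an arbitrary 0/1 vector u and confirms that they coincide and vanish at w = u.

```python
import math

def D_general(u, w):
    # sum_i ( u_i ln(u_i / w_i) + w_i - u_i ), with the convention 0 ln 0 = 0
    total = 0.0
    for ui, wi in zip(u, w):
        total += (ui * math.log(ui / wi) if ui > 0 else 0.0) + wi - ui
    return total

def D_binary(u, w):
    # simplified form for u in {0,1}^n
    return (sum(wi - ui for ui, wi in zip(u, w))
            - sum(math.log(wi) for ui, wi in zip(u, w) if ui == 1))

u = [1, 0, 1, 0]
w = [0.5, 0.1, 0.9, 0.2]
print(D_general(u, w), D_binary(u, w))   # identical values
print(D_general(u, u))                   # 0.0 at w = u
```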
We start with the analysis of the randomized algorithm with shifting target dis-
junctions. The other cases will be derived easily from this analysis. At first we
calculate how much the distance D(u t ; w t ) changes between trials:
Observe that term (1) might be non-zero in any trial, but that terms (2) and (3)
are non-zero only if the weights are updated in trial t. For any fl, ffi with
can lower bound term (1) by
If the weights are updated in trial t, term (2) is bounded by
The third equality holds since each x t;i 2 f0; 1g. Remember that x 0 t is obtained
from x t by removing the attribute errors from x t . The last inequality follows from
the fact that
At last observe that w t;i 6= w 0
only if y
. In this case w 0
ffn
and we get for term (3)
Summing over all trials we have to consider the trials where the weights are updated
and we have to distinguish between trials with y
denote these trials. Then by the above bounds on terms (1), (2), and (3) we have
\Theta r t
Now we want to lower bound the sum over M 0 and M 1 by the expected (or total)
number of mistakes of the algorithm. We can do this by choosing an appropriate
function p(\Delta; w t ). We denote by p t the probability that the algorithm makes a
mistake in trial t. Then the expected number of mistakes is
. Observe that
since in this case y
Thus it is sufficient to find a function p(\Delta)
and a constant C with
and
For such a function p(\Delta) satisfying (4) and (5) we get
assuming that S 2 S(Z
we can upper bound the expected number of mistakes by
Hence we want to choose p(\Delta) and C such that C is as big as possible. For that
fix p(\Delta) and let r be a value where p(r ) becomes 1. 5 Since the left hand sides of
equations (4) and (5) are continuous we get (r
combining these two we have
ff
This can be achieved by choosing p(\Delta) as in (RAND) which satisfies (4) and (5) for
course we have to choose fi ! ln ff
. Putting everything
together we have the following lemma.
and assume that
are the weights used by algorithm
Swin. Then for all S
if Swin uses the function p(\Delta) given by (RAND).
For the deterministic variant of our algorithm the function p(\Delta) has to take values in
f0; 1g. Thus we get from (4) and (5) that (r\Gammafi)(1\Gamma1=ff) - C and r(1\Gammaff)+ln ff - C
which yields
This we get by choosing p(\Delta) as in (DET) which satisfies (4) and (5) for
and assume that
are the weights used by algorithm
Swin. Then for all S
if Swin uses the function p(\Delta) given by (DET).
Now we are going to calculate bounds fl; ffi on 1 We get these bounds by
lower and upper bounding w t;i . Obviously w t;i - fi
for all t and i. The upper
bound on w t;i is derived from the observation that w t;i ? w t\Gamma1;i only if y
with the p(\Delta) as in (RAND) or
(DET), and r t - w t\Gamma1;i x t;i we find that w t;i - ff. Thus ln efi
Lemma 3 If fi
the weights w t;i
of algorithm Swin with function p(\Delta) as in (RAND) or (DET) satisfy
and
Proof of Theorems 1 and 2. By Lemmas 1, 2, and 3. 2
Proof of Theorem 3 In the non-shifting case where u
and it is
is the number of attributes in the target disjunction
u. Thus in the non-shifting case the term in the upper bounds of
Lemmas 1 and 2 can be replaced by k
, which gives the theorem. 2
7.1. Proofs for specific choices of the parameters
Proofs of Theorems 4 and 5. By Theorem 3 and Corollary 1 and simple calcu-
lations. 2
Proof of Theorem 6. For we get from Theorem 3 that
with
with 2. Then
In the second inequality we used that ffl - 1. Substituting the values for c and ffl
gives the statements of the theorem. 2
Proof of Theorem 7. For
we get from
Corollary 1 that
j.
j.
with
j.
j.
with 2. Then for ffl - 1=10
j.
j.
c
c
for
r
A
An
Z
This gives the bounds of the theorem. 2
8. Lower bounds
We start by proving a lower bound for the shifting case. We show that for any
learning algorithm L there are example sequences S for which the learning algorithm
makes "many" mistakes. Although not expressed explicitly in the following
theorems we will show that these sequences S can be generated by target schedules
each disjunction u t consists of exactly one literal, i.e.
is the jth unit vector.
Our first lower bound shows that for any deterministic algorithm there is an adversarial
example sequence in S(Z; A; n) such that it makes at least 2A
many mistakes. Related upper bounds are given in Theorems 4 and 7.
Theorem 8 For any deterministic learning algorithm L, any n - 2, any Z - 1,
and any A - 0, there is an example sequence S 2 S(Z; A; n) such that
Proof. For notational convenience we assume that n = 2^λ and Z = 2R − 1 for integers λ, R ≥ 1. We construct the example sequence S depending on the predictions of the learning algorithm such that the learning algorithm makes a mistake in each trial. We partition the trials into R rounds. The first R − 1 rounds have length λ, the last round has length λ + 2A; attribute errors will occur only within the last 2A + 1 trials. We choose the target schedule such that during each round the target disjunction does not change and is equal to some e_j.
At the beginning of each round there are 2^λ disjunctions consistent with the examples of this round. After l trials in this round there are still 2^{λ−l} consistent disjunctions: we construct the attribute vector by setting half of the attributes which correspond to consistent disjunctions to 1, and the other attributes to 0. Furthermore we set y_t = 1 − ŷ_t, where ŷ_t is the prediction of the algorithm for this attribute vector. Obviously half of the disjunctions are consistent with this example, and thus the number of consistent disjunctions is divided by 2 in each trial. Thus in each of the first R − 1 rounds there is a disjunction consistent with all λ examples of this round.
After λ − 1 trials in the last round there are two disjunctions consistent with the examples of this round. For the remaining 2A + 1 trials we fix some attribute vector for which these two disjunctions predict differently, and again we set y_t = 1 − ŷ_t. Thus one of these disjunctions disagrees at most A times with the classifications in these 2A + 1 trials. This disagreement can be seen as caused by A attribute errors, so that the disjunction is consistent with all the examples in the last round up to A attribute errors. □
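The halving argument inside a single round can be simulated directly; the sketch below plays one error-free round against an arbitrary prediction function and is meant only to illustrate the construction.

```python
def adversary_round(n, predict):
    """One round of the lower-bound construction: n = 2**lam one-literal
    disjunctions start out consistent; every example halves the consistent set
    and is labelled opposite to the learner's prediction, forcing a mistake."""
    consistent = set(range(n))
    examples = []
    while len(consistent) > 1:
        half = set(list(consistent)[: len(consistent) // 2])
        x = [1 if j in half else 0 for j in range(n)]   # half of the candidates fire
        y = 1 - predict(x)                              # learner is always wrong
        examples.append((x, y))
        consistent = half if y == 1 else consistent - half
    return examples, consistent.pop()                   # surviving target literal

# Example: against a learner that always predicts 0 the round has log2(n) trials.
examples, target = adversary_round(8, lambda x: 0)
print(len(examples), target)
```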
Remark. Observe that a lower bound for deterministic algorithms like
implies the following lower bound on randomized algorithms:
This follows from the fact that any randomized learning algorithm can be turned
into a deterministic learning algorithm which makes at most twice as many mistakes
as the randomized algorithm makes in the average. This means that Theorem 8
implies for any randomized algorithm L that there are sequences S 2 S(Z; A; n)
with
Remark. As an open problem it remains to show lower bounds that have the same
form as the upper bounds of Theorem 7 with the square root term.
Now we turn to the non-shifting case. For are already lower bounds
known.
Lemma 4 (Littlestone & Warmuth, 1994) For any deterministic learning algorithm
L, any n - 2, and any A - 0, there is an example sequence S 2 S 0 (1; A; n) such
that
A slight modification of results in (Cesa-Bianchi et al., 1997) gives
Lemma 5 (Cesa-Bianchi et al., 1997) There are functions n(j) and A(n; j) such
that for any j ? 0, any randomized learning algorithm L, any n - n(j), and any
A - A(n; j), there is an example sequence S 2 S 0 (1; A; n) such that
A ln n:
We extend these results and obtain the following theorems. The corresponding
upper bounds are given in Theorems 5 and 6.
Theorem 9 For any deterministic learning algorithm L, any k - 1, any n - 2k,
and any A - 0, there is an example sequence S 2 S 0 (k; A; n) such that
Theorem 10 There are functions n(j) and A(n; j) such that for any j ? 0, any
randomized learning algorithm L, any k - 1, any n - kn(j), and any A - kA(n; j),
there is an example sequence S 2 S 0 (k; A; n) such that
r
Proof of Theorems 9 and 10. The proof is by a reduction to the case k = 1. The n attributes are divided into k groups G_i, i = 1, ..., k, such that each group consists of ⌊n/k⌋ attributes. Furthermore we choose numbers a_i ≤ ⌈A/k⌉ with Σ_{i=1}^k a_i = A. For each group G_i we choose a sequence S_i according to Lemmas 4 and 5, respectively, such that for any learning algorithm the lower bounds (6) and (7), respectively, hold.
These sequences S_i can be extended to sequences S'_i with n attributes by setting all the attributes not in group i to 0. Concatenating the expanded sequences S'_i we get a sequence S. It is easy to see that S ∈ S_0(k, A, n). On the other hand any learning algorithm for sequences with n attributes can be transformed into a learning algorithm for sequences with a smaller number of attributes by setting the missing attributes to 0. Thus on each subsequence S'_i of S learning algorithm L makes at least as many mistakes as given in (6) and (7), respectively. Hence, summing these bounds over the k groups gives the claimed lower bounds, if the function A(n, j) is chosen appropriately. □
The last theorem shows that for randomized algorithms the constant of 1 before A
in Theorem 6 is optimal and that the best constant before the square root term is
in [1; 2].
9. Conclusion
We developed algorithm Swin which is a variant of Winnow for on-line learning
of disjunctions subject to target shift. We proved worst case mistake bounds for
Swin which hold for any sequence of examples and any kind of target drift (where
the amount of error in the examples and the amount of shifts is bounded). There
is a deterministic and a randomized version of Swin where the analysis of the randomized
version is more involved and interesting in its own right. Lower bounds
show that our worst case mistake bounds are close to optimal in some cases. Computer
experiments highlight that an explicit mechanism is necessary to make the
algorithm adaptive to target shifts.
Acknowledgments
We would like to thank Mark Herbster and Nick Littlestone for valuable discussions.
We also thank the anonymous referees for their helpful comments.
Notes
1. By expanding the dimension to 2n, learning non-monotone disjunctions reduces to the monotone
case.
2. Essentially one has to describe when a shift occurs and which literal is shifted. Obviously there
is no necessity to shift if the current disjunction is correct on the current example. Thus the
disjunction is shifted only in some of the trials in which the current disjunction would make a
mistake. Since the target schedule might make up to A mistakes due to attribute errors and
there are up to Z shifts, we get up to A + Z trials which are candidates for shifts. Choosing
Z of them and choosing one literal for each shift gives at most \binom{A+Z}{Z} n^Z possibilities.
3. For this potential function the weights must be positive. Negative weights are handled via a
reduction (Littlestone, 1988, 1989; Haussler et al., 1994).
4. In the worst case the randomized algorithm makes a mistake with probability 1/2 in each trial
and the deterministic algorithm always breaks the tie in the wrong way such that it makes a
mistake in each trial. Thus the number of mistakes of the deterministic algorithm is twice the
expected number of mistakes of the randomized algorithm.
5. Formally, let ⟨r_j⟩ be an appropriate sequence of values. Such a sequence ⟨r_j⟩ exists if p(·)
is not equal to 1 everywhere and if there is a value r with p(r) = 1. For functions p(·) not
satisfying these conditions, algorithm Swin can be forced to make an unbounded number of
mistakes even in the absence of attribute errors.
--R
Tracking the best disjunction.
How to use expert advice.
Pattern classification and scene analysis.
Tight worst-case loss bounds for predicting with expert advice (Technical report).
Predicting nearly as well as the best pruning of a decision tree.
Tracking the best expert.
Entropy optimization principles with applications.
Additive versus exponentiated gradient updates for linear prediction.
The perceptron algorithm vs. Winnow: linear vs. logarithmic mistake bounds when few input variables are relevant.
Mistake bounds and logarithmic linear-threshold learning algorithms.
Redundant noisy attributes, attribute errors, and linear-threshold learning using Winnow. In Proceedings of the Workshop on Computational Learning Theory.
The weighted majority algorithm. Information and Computation.
Efficient learning with virtual threshold gates. In Proceedings of the Workshop on Computational Learning Theory.
--TR
--CTR
Mark Herbster , Manfred K. Warmuth, Tracking the Best Expert, Machine Learning, v.32 n.2, p.151-178, Aug. 1998
D. P. Helmbold , S. Panizza , M. K. Warmuth, Direct and indirect algorithms for on-line learning of disjunctions, Theoretical Computer Science, v.284 n.1, p.109-142, 6 July 2002
Chris Mesterharm, Tracking linear-threshold concepts with Winnow, The Journal of Machine Learning Research, 4, 12/1/2003
Mark Herbster , Manfred K. Warmuth, Tracking the best regressor, Proceedings of the eleventh annual conference on Computational learning theory, p.24-31, July 24-26, 1998, Madison, Wisconsin, United States
Manfred K. Warmuth, Winnowing subspaces, Proceedings of the 24th international conference on Machine learning, p.999-1006, June 20-24, 2007, Corvalis, Oregon
Claudio Gentile , Nick Littlestone, The robustness of the p-norm algorithms,
Olivier Bousquet , Manfred K. Warmuth, Tracking a small set of experts by mixing past posteriors, The Journal of Machine Learning Research, 3, 3/1/2003
Claudio Gentile, The Robustness of the p-Norm Algorithms, Machine Learning, v.53 n.3, p.265-299, December
Giovanni Cavallanti , Nicol Cesa-Bianchi , Claudio Gentile, Tracking the best hyperplane with a simple budget Perceptron, Machine Learning, v.69 n.2-3, p.143-167, December 2007
Mark Herbster , Manfred K. Warmuth, Tracking the best linear predictor, The Journal of Machine Learning Research, 1, p.281-309, 9/1/2001
S. B. Kotsiantis , I. D. Zaharakis , P. E. Pintelas, Machine learning: a review of classification and combining techniques, Artificial Intelligence Review, v.26 n.3, p.159-190, November 2006
Peter Auer, Using confidence bounds for exploitation-exploration trade-offs, The Journal of Machine Learning Research, 3, 3/1/2003 | concept drift;prediction;computational learning theory;amortized analysis;on-line learning |
296382 | Tracking the Best Expert. | We generalize the recent relative loss bounds for on-line algorithms where the additional loss of the algorithm on the whole sequence of examples over the loss of the best expert is bounded. The generalization allows the sequence to be partitioned into segments, and the goal is to bound the additional loss of the algorithm over the sum of the losses of the best experts for each segment. This is to model situations in which the examples change and different experts are best for certain segments of the sequence of examples. In the single segment case, the additional loss is proportional to log n, where n is the number of experts and the constant of proportionality depends on the loss function. Our algorithms do not produce the best partition; however the loss bound shows that our predictions are close to those of the best partition. When the number of segments is k+1 and the sequence is of length ℓ, we can bound the additional loss of our algorithm over the best partition by O(k \log n+k \log(ℓ/k)). For the case when the loss per trial is bounded by one, we obtain an algorithm whose additional loss over the loss of the best partition is independent of the length of the sequence. The additional loss becomes O(k\log n+ k \log(L/k)), where L is the loss of the best partitionwith k+1 segments. Our algorithms for tracking the predictions of the best expert aresimple adaptations of Vovk's original algorithm for the single best expert case. As in the original algorithms, we keep one weight per expert, and spend O(1) time per weight in each trial. | Introduction
Consider the following on-line learning model. The learning occurs in a series of
trials labeled t = 1, ..., ℓ. In each trial t the goal is to predict the outcome y_t ∈ [0, 1],
which is received at the end of the trial. At the beginning of trial t, the algorithm
receives an n-tuple x t . The element x t;i 2 [0; 1] of the n-tuple x t represents the
prediction of an expert E i of the value of the outcome y t on trial t. The algorithm
then produces a prediction -
based on the current expert prediction tuple x t , and
on past predictions and outcomes. At the end of the trial, the algorithm receives the
outcome y t . The algorithm then incurs a loss measuring the discrepancy between
the prediction -
y t and the outcome y t . Similarly, each expert incurs a loss as well.
A possible goal is to minimize the total loss of the algorithm over all ' trials on
an arbitrary sequence of instance outcome pairs (such pairs are called examples).
Since we make no assumptions about the relationship between the prediction of
experts and the outcome y_t, there is always some sequence of outcomes y_t that is
"far away" from the predictions ŷ_t of any particular algorithm. (The authors were
supported by NSF grant CCR-9700201. An extended abstract appeared in Herbster
& Warmuth, 1995.) Thus, minimizing
the total loss over an arbitrary sequence of examples is an unreasonable goal. A
refined relativized goal is to minimize the additional loss of the algorithm over the
loss of the best expert on the whole sequence. If all experts have large loss then
this goal might actually be easy to achieve, since for all algorithms the additional
loss over the loss of the best expert may then be small. However, if at least one
expert predicts well, then the algorithm must "learn" this quickly and produce
predictions which are "close" to the predictions of the best expert in the sense that
the additional loss of the algorithm over the loss of the best expert is bounded.
This expert framework might be used in various settings. For example, the experts
might predict the chance of rain or the likelihood that the stock market will rise or
fall. Another setting is that the experts might themselves be various sub-algorithms
for recognizing particular patterns. The "master" algorithm that combines the
experts' predictions does not need to know the particular problem domain. It simply
keeps one weight per expert, representing the belief in the expert's prediction, and
then decreases the weight as a function of the loss of the expert.
Previous work of Vovk (Vovk, 1998) and others (Littlestone & Warmuth, 1994;
Haussler, Kivinen & Warmuth, 1998) has produced an algorithm for which there
is an upper bound on the additional loss of the algorithm over the loss of the best
expert. Algorithms that compare against the loss of the best expert are called
Static-expert algorithms in this paper. The additional loss bounds for these
algorithms have the form c ln n for a large class of loss functions, where c is a
constant which only depends on the loss function L, and n is the number of experts.
This class of loss functions contains essentially all common loss functions except for
the absolute loss and the discrete loss 1 (counting prediction mistakes), which are
treated as special cases (Littlestone & Warmuth, 1994; Vovk, 1995; Cesa-Bianchi,
Freund, Haussler, Helmbold, Schapire & Warmuth, 1997). For example, if the loss
function is the square or relative entropy loss, then c = 1/2 or c = 1, respectively
(see Section 2 for definitions of the loss functions).
In the paper we consider a modification of the above goal introduced by Littlestone
and Warmuth (Littlestone & Warmuth, 1994), in which the sequence of examples is
subdivided into k segments of arbitrary length and distribution. Each segment
has an associated expert. The sequence of segments and its associated sequence of
experts is called a partition. The loss of a partition is the sum of the total losses
of the experts associated with each segment. The best partition of size k is the
partition with k segments that has the smallest loss. The modified goal is to
perform well relative to the best partition of size k. This goal is to model real life
situations where the "nature" of the examples might change and a different expert
produces better predictions. For example, the patterns might change and different
sub-algorithms may predict better for different segments of the on-line sequence
of patterns. We seek to design master algorithms that "track" the performance of
the best sequence of experts in the sense that they incur small additional loss over
the best partition of size k. If the whole sequence of examples was given ahead of
time, then one could compute the best partition of a certain size and the associated
experts using dynamic programming. Our algorithms get the examples on-line and
never produce the best partition. Even so, we are able to bound the additional loss
over the best off-line partition for an arbitrary sequence of examples.
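
As a concrete illustration of the off-line computation just mentioned, here is a minimal sketch (not taken from the paper) of a dynamic program for the loss of the best partition; it assumes a precomputed array losses[t][i] of per-trial expert losses and at least two experts.

# Offline dynamic program for the loss of the best partition with at most k shifts.
# losses[t][i] = loss of expert i on trial t (an l x n array, assumed given, n >= 2).
def best_partition_loss(losses, k):
    l, n = len(losses), len(losses[0])
    INF = float("inf")
    # best[s][i] = smallest loss over partitions of the trials seen so far
    # that use exactly s shifts and end with expert i.
    best = [[INF] * n for _ in range(k + 1)]
    for i in range(n):
        best[0][i] = 0.0
    for t in range(l):
        new = [[INF] * n for _ in range(k + 1)]
        for s in range(k + 1):
            for i in range(n):
                stay = best[s][i]
                switch = min(best[s - 1][j] for j in range(n) if j != i) if s > 0 else INF
                new[s][i] = min(stay, switch) + losses[t][i]
        best = new
    return min(best[s][i] for s in range(k + 1) for i in range(n))

The sketch runs in O(ℓkn²) time and only certifies the benchmark; the on-line algorithms of this paper never need to compute it.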
When there are ℓ trials, k shifts, and n experts, there are \binom{ℓ-1}{k} n (n-1)^k
distinct partitions. We can immediately get a good bound for this problem by
expanding the set of n experts into \binom{ℓ-1}{k} n (n-1)^k "partition-
experts." Each partition-expert represents a single partition of the trial sequence,
and predicts on each trial as the expert associated with the segment which contains
the current trial. Thus, using the Static-expert algorithm we obtain a bound
of c ln(\binom{ℓ-1}{k} n (n-1)^k) on the additional loss of the
algorithm over the loss of the best partition. There are two problems: first, the
algorithm is inefficient, since the number of partition-experts is exponential in the
number of partitions; and second, the bound on the additional loss grows with the
sequence length.
We were able to overcome both problems. Instead of keeping one weight for
the exponentially many partitions, we can get away with keeping only one weight
per expert, as done in the Static-expert algorithm. So the "tracking" of the
predictions of the best partition is essentially for free. If there are n sub-algorithms
or experts whose predictions we want to combine, then as in the Static-expert
algorithm the new master algorithm takes only O(n) additional time per trial over
the time required for simulating the n sub-algorithms.
We develop two main algorithms: the Fixed-share Algorithm, and the Variable-
share Algorithm. Both of these are based on the Static-expert algorithms which
maintain a weight of the form e \GammajT i for each expert (cf. Littlestone & Warmuth,
1994; Vovk, 1995), where T i is total past loss of the expert i in past trials. In
each trial the master algorithm combines the experts' predictions using the current
weights of the experts. When the outcome of the trial is received, we multiply the
weight of every expert i by e \GammajL i , where L i is the loss of expert i in the current
trial. We call this update of the weights the Loss Update.
We modify the Static-expert Algorithm by adding an additional update to
obtain our algorithms. Since in our model the best expert may shift over a series of
trials, we cannot simply use weights of the form e \GammajT i , because before an expert is
optimal for a segment its loss in prior segments may be arbitrarily large, and thus
its weight may become arbitrarily small. So we need to modify the Static-expert
Algorithm so that small weights can be recovered quickly.
For this reason, each expert "shares" a portion of its weight with the other experts
after the Loss Update; we call this the Share Update. Both the Fixed-share and
Variable-share Algorithm first do the Loss Update followed by a Share Update,
which differs for each algorithm. In a Share Update, a fraction of each experts'
weight is added to the weight of each other expert. In the Fixed-share Algorithm
the experts share a fixed fraction of their weights with each other. This guarantees
that the ratio of the weight of any expert to the total weight of all the experts
may be bounded from below. Different forms of lower bounding the weights have
been used by the Wml algorithm and in the companion paper for learning shifting
disjunctions (Auer & Warmuth, 1998) that appears in this journal issue. The latter
two methods have been applied to learning problems where the loss is the discrete
loss (i.e. counting mistakes). In contrast our methods work for the same general
class of continuous loss functions that the Static-expert algorithms can handle
(Vovk, 1998; Haussler et al., 1998). This class includes all common loss functions
such as the square loss, the relative entropy loss, and the hellinger loss. For this
class there are tight bounds on the additional loss (Haussler et al., 1998) of the
algorithm over the loss of the best expert (i.e., the non-shifting case). The Fixed-
share Algorithm obtains the additional loss of O(c[(k+1) log n+k log '
k +k]), which
is essentially the same as the sketched algorithm that uses the Static-expert
algorithm with exponentially many partition-experts. The salient feature of the
Fixed-share Algorithm is that it still uses O(1) time per expert per trial. However,
this algorithm's additional loss still depends on the length of the sequence. Our
lower bounds give some partial evidence that this seems to be unavoidable for loss
functions for which the loss in a single trial can be unbounded (such as for the
relative entropy loss). For the case when the loss in a particular trial is at most one
(such as for the square loss), we develop a second algorithm called the Variable-
share Algorithm. This algorithm obtains bounds on the additional loss that are
independent of the length of the sequence. It also shares weights after the Loss
Update; however, the amount each expert shares now is commensurate with the
loss of the expert in the current trial. In particular, when an expert has no loss, it
does not share any weight.
Both versions of our Share Update are trivial to implement and cost a constant
amount of time for each of the n weights. Although the algorithms are easy to
describe, proving the additional loss bounds takes some care. We believe that our
techniques constitute a practical method for tracking the predictions of the best
expert with provable worst-case additional loss bounds. The essential ingredient for
our success in a non-stationary setting, seems to be an algorithm for the stationary
setting with a multiplicative weight update whose loss bound grows logarithmically
with the dimension of the problem. Besides Vovk's Aggregating Algorithm
(Vovk, 1998) and the Weighted Majority Algorithm (Littlestone & Warmuth,
1994), which only use the Loss Update, and are the basis of this work, a number
of such algorithms have been developed. Examples are algorithms for learning
linear threshold functions (Littlestone, 1988; Littlestone, 1989), and algorithms
whose additional loss bound over the loss of the best linear combination of experts
or sigmoided linear combination of experts is bounded (Kivinen & Warmuth,
1997; Helmbold, Kivinen & Warmuth, 1995). Significant progress has recently been
achieved for other non-stationary settings building on the techniques developed in
this paper (see discussion in the Conclusion Section).
The paper is outlined as follows. After some preliminaries (Section 2), we present
the algorithms (Section 3), and give the basic proof techniques (Section 4). Sections
5 and 6 contain the detailed proofs for the Fixed-share and Variable-
share algorithms, respectively. The absolute loss is treated as a special case in
Section 7. Section 8 discusses a subtle but powerful generalization of the Variable-
share Algorithm, called the Proximity-variable-share Algorithm. The generalization
leads to improved bounds for the case when best expert of the next
segment is always likely to be "close" to the previous expert. Some preliminary
lower bounds are given in Section 9. Simulation results on artificial data that exemplify
our methods are given in Section 10. Finally, in Section 11 we conclude
with a discussion of recent work. The casual reader who might not be interested in
the detailed proofs is recommended to read the sections containing the preliminaries
(Section 2), the algorithms (Section 3) and the simulations (Section 10).
2. Preliminaries
Let ℓ denote the number of trials and n denote the number of experts, labeled
E_1, ..., E_n. When convenient we simply refer to an expert by its index; thus
"expert i" refers to expert E i . The prediction of all n experts in trial t is referred
to by the prediction tuple x t , while the prediction of expert i on trial t is denoted
by x t;i : These experts may be viewed as oracles external to the algorithm, and thus
may represent the predictions of a neural net, a decision tree, a physical sensor or
perhaps even of a human expert. The outcome of a trial t is y t , while the prediction
of the algorithm in trial t is ŷ_t. The instance-outcome pair (x_t, y_t) is called the t-th
example. In this paper the outcomes, the expert predictions and the predictions of
the algorithm are all in [0; 1]. Throughout this paper S always denotes an arbitrary
sequence of examples, i.e. any sequence of elements from [0,1]^n × [0,1] of any length
ℓ. A loss function L(p, q) is a function from [0,1] × [0,1] to the non-negative reals. We consider
loss functions in this paper: the square, the relative entropy, the hellinger, and the
absolute loss:
L_sq(p, q) = (p − q)^2,
L_ent(p, q) = p ln(p/q) + (1 − p) ln((1 − p)/(1 − q)),
L_hel(p, q) = (1/2) [ (√p − √q)^2 + (√(1 − p) − √(1 − q))^2 ],
L_abs(p, q) = |p − q|.
On trial t the loss of the algorithm A is L(y_t, ŷ_t).
Similarly, the loss of expert i
on trial t is L(y t ; x t;i ). We call a subsequence of contiguous trials a segment. The
notation [t..t′], for non-negative integers t ≤ t′, denotes a segment starting on trial
number t and ending on the trial t 0 . Rounded parens are used if the ending trial
is not included in the segment. For the current sequence S we abbreviate the loss
of expert i on the segment [t..t′) by L([t..t′), i) = Σ_{q=t}^{t′−1} L(y_q, x_{q,i}). The loss of the
algorithm A over the whole trial sequence S is defined as L(S, A) = Σ_{t=1}^{ℓ} L(y_t, ŷ_t).
We are now ready to give the main definition of this paper that is used for
scenarios in which the best expert changes over time. Informally a k-partition
slices a sequence into k segments with an expert being associated with each
segment. Formally, a k-partition, denoted by P ';n;k;t;e (S), consists of three positive
integers '; n; k; and two tuples t and e of positive integers. The number ' is the
length of the trial sequence S, n is the size of the expert pool, and k is number of
target shifts (0 ≤ k < ℓ). The tuple t has k elements t_1 < t_2 < · · · < t_k, and each t_i
refers to one of the ℓ trials; by convention we use t_0 = 1 and t_{k+1} = ℓ + 1.
The tuple t divides the trial sequence S into k + 1 segments; the segment [t_i..t_{i+1}) is
Parameters: 0 < η, 0 < c, and 0 ≤ α ≤ 1.
Initialization: Initialize the weights to w^s_{1,i} = 1/n.
Prediction: Set v^s_{t,i} = w^s_{t,i} / Σ_{j=1}^n w^s_{t,j}. Predict with ŷ_t = pred(v^s_t, x_t).
Loss Update: After receiving the tth outcome y_t, set w^m_{t,i} = w^s_{t,i} e^{−ηL(y_t, x_{t,i})}.
Share Updates of all three algorithms:
Static-expert: w^s_{t+1,i} = w^m_{t,i} ("no Share Update").
Fixed-share (4): pool = Σ_{j=1}^n α w^m_{t,j}; w^s_{t+1,i} = (1 − α) w^m_{t,i} + (pool − α w^m_{t,i})/(n − 1).
Variable-share (5): pool = Σ_{j=1}^n (1 − (1 − α)^{L(y_t, x_{t,j})}) w^m_{t,j};
w^s_{t+1,i} = (1 − α)^{L(y_t, x_{t,i})} w^m_{t,i} + (pool − (1 − (1 − α)^{L(y_t, x_{t,i})}) w^m_{t,i})/(n − 1).
Figure
1. The Static-expert, Fixed-share, and Variable-share algorithms
called the ith segment. The 0th segment is also referred to as the initial segment.
The tuple e has k + 1 elements e_0, e_1, ..., e_k, with e_i ∈ {1, ..., n}. The element e_i denotes the
expert which is associated with the ith segment [t_i..t_{i+1}). The loss of a given
k-partition for loss function L and trial sequence S is
L(P_{ℓ,n,k,t,e}(S)) = Σ_{i=0}^{k} L([t_i..t_{i+1}), e_i).
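
A minimal sketch (our own helper names, not from the paper) of the four loss functions and of the partition loss just defined; expert indices are taken to be 0-based and the factor 1/2 in the hellinger loss follows the definition given above.

import math

def L_sq(p, q):  return (p - q) ** 2
def L_abs(p, q): return abs(p - q)
def L_ent(p, q):
    # relative entropy loss; assumes 0 < q < 1, with the convention 0 * ln 0 = 0
    def term(a, b): return 0.0 if a == 0 else a * math.log(a / b)
    return term(p, q) + term(1 - p, 1 - q)
def L_hel(p, q):
    return 0.5 * ((math.sqrt(p) - math.sqrt(q)) ** 2 +
                  (math.sqrt(1 - p) - math.sqrt(1 - q)) ** 2)

def partition_loss(loss, S, t, e):
    # S: list of (x_t, y_t) pairs; t: shift trials t_1 < ... < t_k (1-based);
    # e: experts e_0, ..., e_k, one per segment (0-based expert indices).
    bounds = [1] + list(t) + [len(S) + 1]
    total = 0.0
    for i, expert in enumerate(e):
        for trial in range(bounds[i], bounds[i + 1]):
            x, y = S[trial - 1]
            total += loss(y, x[expert])
    return total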
3. The Algorithms
There are four algorithms considered in this paper - Static-expert, Fixed-
share, Variable-share and Proximity-variable-share. The first three are
summarized in Figure 1. The Proximity-variable-share Algorithm is a generalization
of the Variable-share Algorithm; this algorithm is given in Figure 3.
The discussion of this generalization is deferred to Section 8. For all algorithms the
learning process proceeds in trials, where t - 1 denotes the trial number. The algo-
rithms maintain one positive weight per expert. The weight w s
t;i (or its normalized
version v s
should be thought of as a measurement of the algorithm's belief in the
quality of the ith expert's predictions at the start of trial t. The weight of each
expert is initialized to 1=n.
The algorithms have the following three parameters: η, c and α. The parameter
η is a learning rate quantifying how drastic the first update will be. The parameter
c will be set to 1/η for most loss functions. (The absolute loss is an exception
treated separately in Section 7.) The parameter ff quantifies the rate of shifting
that is expected to occur. The Fixed-share Algorithm is designed for potentially
unbounded loss functions, such as the relative entropy loss. The Variable-share
Algorithm assumes that the loss per trial lies in [0; 1]. For the Fixed-share Al-
gorithm, ff is the rate of shifting per trial. Thus, if five shifts are expected in a
1000-trial sequence, then α = 1/200. For the Variable-share Algorithm, α is
approximately the rate of shifting per unit of loss of the best partition. That is, if
five shifts are expected to occur in a partition with a total loss of 80, then α ≈ 1/16.
The tunings of the parameters j and c are considered in greater depth in Section 4,
and for ff in sections 5 and 6. Finally, the Static-expert Algorithm does not use
the parameter ff since it assumes that no shifting occurs.
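
The two worked parameter choices above, spelled out as arithmetic (our own sketch):

# Fixed-share: alpha is the expected shift rate per trial.
alpha_fixed = 5 / 1000          # five shifts expected in a 1000-trial sequence, about 1/200
# Variable-share: alpha is roughly the shift rate per unit of loss of the best partition.
alpha_var = 5 / 80              # five shifts, best-partition loss about 80, i.e. 1/16
print(alpha_fixed, alpha_var)   # 0.005  0.0625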
In each trial t the algorithm receives an instance summarizing the predictions of
the n experts x t . The algorithm then plugs the current instance x t and normalized
weights v t into the prediction function pred(v; x) in order to produce a prediction
y t . In the simplest case, the algorithm predicts with the weighted mean of the
experts' predictions, i.e., pred(v; more sophisticated prediction
function introduced by Vovk (Vovk, 1998) will be discussed in Section 4. After
predicting, the algorithm performs two update steps. The first update is the Loss
Update; the second is the Share Update.
In the Loss Update the weight of expert i is multiplied by e \GammajL i , where L i is the
loss of the i-th expert in the current trial. Thus, no update occurs when L
The learning rate j intensifies the effect of this update. We use w m
t;i to denote
the weights in the middle of the two updates. These weights will be referred to
as intermediate weights. The Share Update for the Static-expert Algorithm is
vacuous. However, for the other algorithms the Share Update is crucial. We briefly
argue for the necessity of the share updates in the non-stationary setting, and then
give an intuitive description of how they function.
When we move from predicting as well as the best expert to predicting as well as
a sequence of experts, the Loss Update is no longer appropriate as the sole update.
Assume we have two experts and two segments. In the first segment Expert 1 has
small loss and Expert 2 a large loss. The roles are reversed for the second segment.
By the end of the first segment, the Loss Update has caused the weight of Expert 2
to be almost zero. However, during the second segment the predictions of Expert 2
are important, and its weight needs to be recovered quickly. The share updates
make sure that this is possible. The simulation in Section 10 furthers the intuition
for why the share updates are needed. The two share updates are summarized
below. A straightforward implementation costs O(n) time per expert per trial:
Fixed-share: w^s_{t+1,i} = (1 − α) w^m_{t,i} + (α/(n − 1)) Σ_{j≠i} w^m_{t,j}, (6)
Variable-share: w^s_{t+1,i} = (1 − α)^{L(y_t, x_{t,i})} w^m_{t,i} + (1/(n − 1)) Σ_{j≠i} (1 − (1 − α)^{L(y_t, x_{t,j})}) w^m_{t,j}. (7)
In contrast, the implementations in Figure 1, that use the intermediate variable
"pool" cost O(1) time per expert per trial. After the Loss Update, every expert
"shares" a fraction of its weight equally with every other expert. The received
weight enables an expert to recover its weights quickly relative to the other experts.
In the Fixed-share Update (6) each expert shares a fraction of ff of its weight in
each trial. If one expert is perfect for a long segment, this type of sharing is not
optimal, since the perfect expert keeps on sharing weight with possible non-perfect
experts. The Variable-share Update (7) is more sophisticated: roughly, an expert
shares weight when its loss is large. A perfect expert doesn't share, and if all other
experts have high loss, it will eventually collect all the weight. However, when a
perfect expert starts to incur high loss, it will rapidly begin to share its weight with
the other experts, allowing a now good expert with previously small relative weight
to recover quickly. As discussed above the parameter ff is the shifting rate.
In the introduction we discussed an algorithm that uses exponentially many static
experts, one for each partition. Our goal was to achieve bounds close to those of
this inefficient algorithm by using only n weights. The bounds we obtain for our
share algorithms are only slightly weaker than the partition-expert algorithm and
gracefully degrade when neither the length of the sequence ' nor the number of
shifts k are known in advance.
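
The following is a minimal Python sketch of one trial of the three variants, following Figure 1 and the share updates (6) and (7) as reconstructed above. It uses the weighted-mean prediction; all function and variable names are ours, and the sketch is illustrative rather than a reference implementation.

import math

def predict_wmean(v, x):
    # weighted-mean prediction over normalized weights v
    return sum(vi * xi for vi, xi in zip(v, x))

def trial(w, x, y, loss, eta, alpha, variant="variable"):
    """One trial: predict, do the Loss Update, then the Share Update.
    w: current weights w^s_t; returns (prediction, new weights w^s_{t+1})."""
    n = len(w)
    W = sum(w)
    v = [wi / W for wi in w]                      # normalized weights v^s_t
    y_hat = predict_wmean(v, x)                   # prediction
    wm = [wi * math.exp(-eta * loss(y, xi))       # Loss Update
          for wi, xi in zip(w, x)]
    if variant == "static":                       # no Share Update
        return y_hat, wm
    if variant == "fixed":                        # share a fixed fraction alpha
        shared = [alpha * wi for wi in wm]
    else:                                         # "variable": share in proportion to the loss
        shared = [(1 - (1 - alpha) ** loss(y, xi)) * wi for wi, xi in zip(wm, x)]
    pool = sum(shared)
    ws = [wm_i - s_i + (pool - s_i) / (n - 1)     # keep the unshared part and receive an
          for wm_i, s_i in zip(wm, shared)]       # equal split of what the others shared
    return y_hat, ws

Note that the Share Update redistributes but never changes the total weight, which is the property used in the analysis below.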
4. Prediction Functions and Proof Techniques
We consider two choices of prediction functions. The simplest prediction is the
weighted mean (Warmuth, 1997): pred_wmean(v, x) = Σ_{i=1}^n v_i x_i.
A more sophisticated prediction function giving slightly better bounds was introduced
by Vovk (Vovk, 1998; Haussler et al., 1998). Define L_0(z) = L(0, z) and L_1(z) =
L(1, z). Both of these functions must be monotone. Let L_0^{-1}(z) and
L_1^{-1}(z) denote the inverses of L_0(z) and L_1(z). Vovk's prediction pred_Vovk(v, x) is then
defined in two steps from the weighted averages of the exponentiated losses at the two
extreme outcomes y = 0 and y = 1, using these inverses.
Loss L: c values (η = 1/c)
Functions: pred_wmean(v, x)   pred_Vovk(v, x)
L_sq(p, q):   2      1/2
L_ent(p, q):  1      1
L_hel(p, q):  1      1/√2
Figure
2. (c, 1/c)-realizability: c values for loss and prediction function pairings.
The following definition is a technical condition on the relation between the prediction
function pred(v; x), the loss function L, and the constants c and j.
Definition 1 (Haussler et al., 1998; Vovk, 1998) A loss function L and prediction
function pred are (c, η)-realizable for the constants c and η if
L(y, pred(v, x)) ≤ −c ln ( Σ_{i=1}^n v_i e^{−ηL(y, x_i)} )
for all y ∈ [0, 1], all x ∈ [0, 1]^n, and all weight vectors v ∈ [0, 1]^n of total weight 1.
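
A small numeric sanity check (our own sketch, purely illustrative) of Definition 1 for one pairing from Figure 2: the square loss with the weighted-mean prediction and (c, η) = (2, 1/2). The quantity returned should never be significantly positive.

import math, random

def check_realizable(c, eta, trials=20000):
    worst = -1.0
    for _ in range(trials):
        n = random.randint(2, 8)
        v = [random.random() for _ in range(n)]
        s = sum(v); v = [vi / s for vi in v]                  # weight vector of total weight 1
        x = [random.random() for _ in range(n)]               # expert predictions
        y = random.random()                                   # outcome
        y_hat = sum(vi * xi for vi, xi in zip(v, x))          # weighted-mean prediction
        lhs = (y - y_hat) ** 2                                # square loss of the prediction
        rhs = -c * math.log(sum(vi * math.exp(-eta * (y - xi) ** 2)
                                for vi, xi in zip(v, x)))
        worst = max(worst, lhs - rhs)
    return worst

print(check_realizable(c=2.0, eta=0.5))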
We consider four loss functions in this paper: the square, the relative entropy,
the hellinger, and the absolute loss (see Section 2). However, the algorithms are
not limited to these loss functions. The techniques in (Vovk, 1998; Haussler et
al., 1998; Warmuth, 1997) can determine the constants c and j for a wide class
of loss functions. The algorithm is also easy to adapt for classification by using
the majority vote (Littlestone & Warmuth, 1994) for the prediction function, and
counting mistakes for the loss. In a practical application, no worst-case loss bounds
may be provable for the given loss function. However, the share updates may still
be useful. For an interesting application to the prediction of disk idle time see the
work of Helmbold et al. (Helmbold, Long & Sherrod, 1996).
The square, relative entropy and hellinger losses are (c; j)-realizable for both
pred wmean and pred Vovk with (j = 1=c). The values of c (and hence of
the two prediction functions are summarized in Figure 2. Since the absolute loss
has more complex bounds, we treat it in a section of its own. A smaller value of
c leads to a smaller loss bound (see Lemma 1). The c values for pred Vovk (cf.
column two of Figure 2) are optimal for a large class of loss functions (Haussler et
al., 1998).
The proof of the loss bounds for each of the algorithms is based on the following
lemma. The lemma embodies a key feature of the algorithms: the prediction is
done such that the loss incurred by the algorithm is tempered by a corresponding
change in total weight. This lemma gives the same inequality as the lemmas used
in (Vovk, 1998; Haussler et al., 1998). The proof here is essentially the same, since
the share updates do not change the total weight W
t;i .
Lemma 1 (Vovk, 1998; Haussler et al., 1998) For any sequence of examples
S and for any expert i, the total loss of the master algorithms in Figure 1 may be
bounded by
L(S, A) ≤ −c ln w^s_{ℓ+1,i},
when the loss function L and prediction function pred are (c, η)-realizable (cf. Definition
1 and Figure 2).
Proof: Since L and pred are (c, η)-realizable, we have by Definition 1 that
L(y_t, ŷ_t) ≤ −c ln ( Σ_{i=1}^n v^s_{t,i} e^{−ηL(y_t, x_{t,i})} ) = −c ln ( Σ_{i=1}^n w^m_{t,i} / W^s_t ).
Since the share updates do not change the total weight, Σ_i w^m_{t,i} = Σ_i w^s_{t+1,i} = W^s_{t+1}.
This implies that
L(S, A) = Σ_{t=1}^ℓ L(y_t, ŷ_t) ≤ −c Σ_{t=1}^ℓ ln ( W^s_{t+1} / W^s_t ) = −c ln ( W^s_{ℓ+1} / W^s_1 ).
Hence, since W^s_1 = 1 and w^s_{ℓ+1,i} ≤ W^s_{ℓ+1}, the bound of the lemma follows.
So far we have used the same basic technique as in (Littlestone & Warmuth, 1994;
Vovk, 1995; Cesa-Bianchi et al., 1997; Haussler et al., 1998), i.e., c ln W t becomes
the potential function in an amortized analysis. In the static expert case (when
η = 1/c) the final weights have the form w^s_{ℓ+1,i} = e^{−ηL(S,E_i)}/n. Thus the above
lemma leads to the bound L(S, A) ≤ c η L(S, E_i) + c ln n = L(S, E_i) + c ln n,
relating the loss of the algorithm to the loss of any static expert.
The share updates make it much more difficult to lower bound the final weights.
Intuitively, there has to be sufficient sharing so that the weights can recover quickly.
However, there should not be too much sharing, so that the final weights are not
too low. In the following sections we bound final weights of individual experts
in terms of the loss of a partition. The loss of any partition (L(P ';n;k;t;e (S))) is
just the sum of the sequence of losses defined by the sequence of experts in the
partition. When an expert accumulates loss over a segment, we bound its weight
using Lemma 2 for the Fixed-share Algorithm and Lemma 7 for the Variable-
share Algorithm. Since a partition is composed of distinct segments, we must also
quantify how the weight is transferred from the expert associated with a segment
to the expert associated with the following segment; this is done with Lemma 3 for
the Fixed-share Algorithm and Lemma 8 for the Variable-share Algorithm.
The lower bounds on the weights are then combined with Lemma 1 to bound the
total loss of the Fixed-share Algorithm (Theorem 1) and the Variable-share
Algorithm (Theorem 2).
5. Fixed-share Analysis
This algorithm works for unbounded loss functions, but its total additional loss
grows with the length of the sequence.
Lemma 2 For any sequence of examples S the intermediate weight of expert i on
trial t 0 is at least e \GammajL([t::t 0 ];i) times the weight of expert i at the start
of trial t, where t - t 0 . Formally we have
Proof: The combined Loss and Fixed-share Update (Equation (6)) can be rewritten
as
Then if we drop the additive term produced by the Share Update, we have
We apply the above iteratively on the trials [t::t 0 ). Since we are bounding w m
weights in trial t 0 after the Loss Update), the weight on trial t 0 is only reduced by
a factor of e \GammajL(y t 0 ;x t 0 ;i ) . Therefore we have
Y
r=t
By simple algebra and the definition of L([a::b]; i) the bound of the lemma follows.
Lemma 3 For any sequence of examples S, the weight of an expert i at the start
of trial 1 is at least ff
times the intermediate weight of any other expert j on
trial t.
Proof: Expanding the Fixed-share Update (4) we have
Thus w s
ff
and we are done.
We can now bound the additional loss.
Theorem 1 Let S be any sequence of examples and let L and pred be (c, η)-
realizable. Then for any k-shift sequence partition P_{ℓ,n,k,t,e}(S) the total loss of the
Fixed-share Algorithm with parameter α satisfies
L(S, A) ≤ c [ η L(P_{ℓ,n,k,t,e}(S)) + ln n + k ln(n − 1) + k ln(1/α) + (ℓ − 1 − k) ln(1/(1 − α)) ]. (15)
Proof: Recall that e k is the expert of the last segment. By Lemma 1, with
we have
We bound w s
'+1;ek by noting that it "follows" the weight in an arbitrary partition.
This is expressed in the following telescoping product:
Y
Thus, applying lemmas 3 and 2, we have
Y
The final term w s
equals one, since we do not apply the Share Update on
the final trial; therefore by the definition of L(P ';n;k;t;e (S)), we have
e \GammajL(P ';n;k;t;e
We then substitute the above bound on w s
'+1;ek into (16) and simplify to obtain (15).
The bound of Theorem 1 holds for all k, and there is a tradeoff between the terms
ck ln n and cjL(P ';n;k;t;e (S)); i.e., when k is small the ck ln n term is small and the
cjL(P ';n;k;t;e (S)) term is large, and vice-versa. The optimal choice of ff (obtained
by differentiating the bound of Theorem 1) is α* = k/(ℓ − 1). The following corollary
rewrites the bound of Theorem 1 in terms of the optimal parameter choice ff . The
corollary gives an interpretation of the theorem's bound in terms of code length.
We introduce the following notation. Let H(p) = −p ln p − (1 − p) ln(1 − p) be the
binary entropy measured in nats, and D(p ∥ q) = p ln(p/q) + (1 − p) ln((1 − p)/(1 − q)) be the binary
relative entropy in nats. 2
Corollary 1 Let S be any sequence of examples and let L and pred be (c, η)-
realizable. Then for any k-shift sequence partition P_{ℓ,n,k,t,e}(S) the total loss of the
Fixed-share Algorithm with parameter α satisfies
L(S, A) ≤ c η L(P_{ℓ,n,k,t,e}(S)) + c [ ln n + k ln(n − 1) + (ℓ − 1)(H(α*) + D(α* ∥ α)) ],
where α* = k/(ℓ − 1). When α = α*, then this bound becomes
L(S, A) ≤ c η L(P_{ℓ,n,k,t,e}(S)) + c [ ln n + k ln(n − 1) + (ℓ − 1) H(α*) ].
For the interpretation of the bound we ignore the constants c, j and the difference
between nats and bits. The terms ln n and k ln(n \Gamma 1) account for encoding the
experts of the partition: log n bits for the initial expert and log(n \Gamma 1) bits
for each expert thereafter. Finally, we need to encode where the k shifts occur (the
inner boundaries of the partition). If ff is interpreted as the probability that a shift
occurs on any of the '\Gamma1 trials, then the term ('\Gamma1) [H(ff ) +D(ff kff)] corresponds
to the expected optimal code length (see Chapter 5 of (Cover & Thomas, 1991))
if we code the shifts with the estimate ff instead of the true probability ff . This
bound is thus an example of the close similarity between prediction and coding as
brought out by many papers (e.g. (Feder, Merhav & Gutman, 1992)).
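
Under the reconstruction of Corollary 1 given above, the additional-loss terms are easy to evaluate numerically; the following sketch (our helper names, example figures borrowed from the simulation section) computes them.

import math

def H(p):            # binary entropy in nats
    return -p * math.log(p) - (1 - p) * math.log(1 - p)

def D(p, q):         # binary relative entropy in nats
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def fixed_share_overhead(c, n, k, l, alpha):
    # additional-loss terms of Corollary 1: code length of the experts plus the shifts
    alpha_star = k / (l - 1)
    return c * (math.log(n) + k * math.log(n - 1)
                + (l - 1) * (H(alpha_star) + D(alpha_star, alpha)))

# e.g. 64 experts, 3 shifts, 800 trials, alpha tuned to k/(l-1):
print(fixed_share_overhead(c=0.5, n=64, k=3, l=800, alpha=3/799))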
Note that the ff that minimizes the bound of Theorem 1 depends on k and ' which
are unknown to the learner. In practice a good choice of ff may be determined
experimentally. However, if we have an upper bound on ' and a lower bound on k
we may tune ff in terms of these bounds.
Corollary 2 Let S be any sequence of examples and -
' and -
k be any positive
integers such that -
1. Then by setting
1), the loss of the Fixed-
share Algorithm can be bounded by
where P ';n;k;t;e (S) is any partition of S such that ' -
' and k - k.
Proof: Recall the loss bound given in Theorem 1. By setting
, we have
We now separate out the term
apply the inequality
The last inequality follows from the condition that ' -
' and -
We obtain the
bound of the corollary by replacing
in Equation (20) by
its upper bound - k. 3
6. Variable-share analysis
The Variable-share algorithm assumes that the loss of each expert per trial lies
in [0; 1]. Hence the Variable-share Algorithm works in combination with the
square, hellinger, or absolute loss functions but not with the relative entropy loss
function. The Variable-share Algorithm has an upper bound on the additional
loss of the algorithm which is independent of the length of the trial sequence. We
will abbreviate w s
t;i with w t;i , since in this section we will not need to refer to the
weight of an expert in the middle of a trial. We first give two technical lemmas
that follow from convexity in r of fi r .
c). Applying the first inequality of Lemma 4 to the
RHS we have c + db d - b 1\Gammac , and thus
Lemma 6 At the beginning of trial t +1, we may lower bound the weight of expert i
by either Expression (a) or Expression (b), where j is any expert different from i:
ae w t;i e \GammajL(y t ;x t;i
Proof: Expanding the Loss Update and the Variable-share Update for a trial (cf.
(7)) we have
Expression (a) is obtained by dropping the summation term. For Expression (b) we
drop all but one summand of the second term: w
.
We then apply Lemma 4 and obtain (b).
Lemma 7 The weight of expert i from the start of trial t to the start of trial t 0 ,
reduced by no more than a factor of [e \Gammaj
\Theta
Proof: From Lemma 6(a), we have that on trial t the weight of expert i is reduced
as follows: w t+1;i
we apply this iteratively on the
Y
r=t
\Theta
In Lemma 6(b) we lower bound the weight transferred from expert p to expert
q in a single trial. In the next lemma we show how weight is transferred over a
sequence of trials.
Lemma 8 For any distinct experts p and q, if L([t::t 0
2, then on trial t may lower bound the weight of expert q by
- \Theta e \Gammaj
Proof: As expert p accumulates loss in trials t::t 0 , it transfers part of its weight
to the other specifically to expert q, via the Variable-share Update.
Let a i , for t - i - t 0 , denote the weight transferred by expert p to expert q in trial
i=t a i denote the total weight transferred from expert p to expert q
in trials [t::t 0 ]: The transferred weight, however, is still reduced as a function of the
loss of expert q in successive trials. By Lemma 7, the weight a i added in trial i is
reduced by a factor of [e \Gammaj
during
a i
\Theta
We lower bound each factor [e \Gammaj
by [e \Gammaj
, and thus
\Theta e \Gammaj
To complete the proof of the lemma we still need to lower bound the total transferred
weight A by w t;p ff
l i be the loss of expert p on trial i, i.e.
From our assumption, we have 1 -
2.
By direct application of Lemma 6(b), the weight a t transferred by expert p to
expert q in the first trial t of the segment is at least w t;p ff
l t e \Gammajl t . Likewise, we
apply Lemma 7 over trials [t::i) to expert p, and then apply Lemma 6(b) on trial i.
This gives us a lower bound for the transferred weights a i and the total transferred
weight A:
a
ff
a
ff
l
We split the last sum into two terms:
ff
l
ff
We upper bound all exponents of (1 \Gamma ff) by one; we also replace the sum in the first
exponent by its upper bound,
. The substitutions
1, and then lead to an application of Lemma 5. Thus we rewrite the
above inequality as
ff
\Theta
ff
and then apply Lemma 5. This gives us
ff
The proof of the loss bound for the Variable-share Algorithm proceeds analogously
to the proof of the Fixed-share Algorithm's loss bound. In both cases
we "follow" the weight of a sequence of experts along the sequence of segments.
Within a segment we bound the weight reduction of an expert with Lemma 2 for
the Fixed-share analysis and Lemma 7 for Variable-share analysis.
When we pass from one segment to the next, we bound the weight of the expert
corresponding to the new segment by the weight of the expert in the former
segment with lemmas 3 and 8, respectively. The former lemma used for the Fixed-
share Algorithm is very simple, since in each trial each expert always shared a
fixed fraction of its weight. However, since the weight was shared on every trial,
this produced a bound dependent on sequence length. In the Variable-share
Algorithm we produce a bound independent of the length. This is accomplished
by each expert sharing weight in accordance to its loss. However, if an expert
does not accumulate significant loss, then we cannot use Lemma 8 to bound the
weight of the following expert in terms of the previous expert. Nevertheless, if
the former expert does not make significant loss in the current segment, this implies
that we may bound the current segment with the former expert by collapsing
the segments together. In other words, the collapsing of two consecutive segments
([t creates a single segment ([t which is associated with
the expert of the first segment of the original two consecutive segments. We can do
this for any segment; thus we determine our bound in terms of the related collapsed
partition whose loss is not much worse.
Lemma 9 For any partition P ';n;k;t;e (S) there exists a collapsed partition P ';n;k 0
such that for each segment (except the initial segment), the expert associated with
the prior segment incurs at least one unit of loss, and the loss on the whole sequence
of the collapsed partition exceeds the loss of the original partition by no more than
the following properties hold:
Proof: Recall that e i is the expert associated with the ith segment, which is
comprised of the trials [t i ::t i+1 If in any segment i, the loss of the expert e
associated with the prior segment (i \Gamma 1) is less than one, then we merge segment
segment i. This combined segment in the new partition is associated with
expert e i\Gamma1 . Formally in each iteration, we decrement k by one, and we delete e i and
t i from the tuples e and t. We continue until (24) holds. We bound the loss of the
collapsed partition P ';n;k 0 by noting that the loss of the new expert on the
subsumed segment is at most one. Thus per application of the transformation, the
loss increases by at most one. Thus since there are applications, we are done.
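
A minimal sketch (ours) of the collapsing transformation used in the proof: repeatedly merge a segment into its predecessor whenever the predecessor's expert has loss less than one on it.

def collapse_partition(segment_loss, t, e):
    """segment_loss(expert, a, b): loss of `expert` on trials [a, b), with b = None
    meaning "to the end of the sequence". t: shift trials t_1 < ... < t_k;
    e: experts e_0, ..., e_k. Returns (t', e') in which, for every non-initial
    segment, the expert of the prior segment has loss at least 1 on it."""
    t, e = list(t), list(e)
    i = 1
    while i < len(e):
        a = t[i - 1]
        b = t[i] if i < len(t) else None
        if segment_loss(e[i - 1], a, b) < 1:   # prior expert still good: merge segment i
            del e[i]
            del t[i - 1]
        else:
            i += 1
    return t, e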
Theorem 2 4 Let S be any sequence of examples, let L and pred be (c; j)-realizable,
and let L have a [0,1] range. Then for any partition P ';n;k;t;e (S) the total loss of
the Variable-share algorithm with parameter ff satisfies
Proof: By Lemma 1 with
Let P ';n;k;t;e (S) be an arbitrary partition. For this proof we need the property
that the loss in each segment (except the initial segment), with regard to the expert
associated with the prior segment, is at least one (cf (24)). If this property does not
hold, we use Lemma 9 to replace P ';n;k;t;e (S) by a collapsed partition P ';n;k 0
for which the property does hold. If the property holds already for P ';n;k;t;e (S), then
for notational convenience we will refer to P ';n;k;t;e (S) by P ';n;k 0 Recall that
the loss of exceeds the loss of P ';n;k;t;e (S) by no more than
Since (24) holds, there exists a trial q i in the ith segment (for 1
that L([t 0
1. We now express w '+1;e 0
as
the telescoping product
Y
Applying lemmas 7 and 8 we have
ii
ff
which simplifies to the following bound:
ff
The last inequality follows from (25). Thus if we substitute the above bound on
simplify, we obtain the bound of the theorem.
Again we cannot optimize the above upper bound as a function of ff, since k and
L(P ';n;k;t;e (S)) are not known to the learning algorithm. Below we tune ff based on
an upper bound of L(P ';n;k;t;e (S)). The same approach was used in Corollary 2. 5
Corollary 3 Let S be any sequence of examples and -
L and - k be any positive
reals. Then by setting
L , the loss of the Variable-share Algorithm can
be bounded as follows:
ck
where P ';n;k;t;e (S) is any partition such that L(P ';n;k;t;e (S)) -
L, and in addition
L.
For any partition P ';n;k;t;e (S) for which L(P ';n;k;t;e (S)) -
L, we
obtain the upper bound
ck
Proof: We proceed by upper bounding the three terms containing ff from the
bound of Theorem 2 (we use
We rewrite the above as:
We apply the identity ln(1 x and bound L(P ';n;k;t;e (S)) by -
L, giving the
following upper bound of the previous expression:
L, then -
L.
Therefore the above is upper bounded by
Using this expression to upper bound Equation (29), we obtain Equation (27).
When
L, we upper bound Equation (29) by
The first term is bounded by 1- k. The second term (2 - k+ -
is at most k ln 9
2 in the
region
thus the above is upper bounded by
We use the above expression to upper bound Equation (29). This gives us Equation
(28) and we are done.
7. Absolute Loss Analysis
The absolute loss function L_abs(p, q) = |p − q| is (c, η)-realizable with both the
prediction functions pred_Vovk and pred_wmean; however, cη > 1. Thus the tuning
is more complex, and for the sake of simplicity we use the weighted mean prediction
(Littlestone & Warmuth, 1994) in this section.
Theorem 3 (Littlestone & Warmuth, 1994) For 1), the absolute
loss function L abs (p;
j)-realizable for the prediction function
pred wmean (v; x).
To obtain a slightly tighter bound we could also have used the Vee Algorithm for
the absolute loss, which is ((2
j)-realizable (Haussler et al., 1998). This
algorithm takes O(n log n) time to produce its prediction. Both the weighted mean
and the Vee prediction allow the outcomes to lie in [0; 1]. For binary outcomes with
the absolute loss, O(n) time prediction functions exist with the same realizability
criterion as the Vee prediction (Vovk, 1998; Cesa-Bianchi et al., 1997).
Unlike the (c; 1=c)-realizable loss functions discussed earlier (cf. Figure 2), the
absolute value loss does not have constant parameters, and thus it must be tuned.
In practice, the tuning of j may be produced by numerical minimization of the
upper bounds. However, we use a tuning of j produced by Freund and Schapire
Theorem 4 (Lemma 4
P and
Q.
Q), where
We now use the above tuning in the bound for the Variable-share Algorithm
(Theorem 2).
Theorem 5 Let the loss function be the absolute loss. Let S be any sequence of
examples, and -
L and -
k be any positive reals such that k - k, L(P ';n;k;t;e (S)) -
L, and k -
L. Set the two parameters of the Variable-share algorithm ff
and j to - k
respectively, where -
k and -
k. Then the loss of the Algorithm with weighted
mean prediction can be bounded as follows:
Alternatively, let -
L and - k be any positive reals such that k - k, L(P ';n;k;t;e (S)) -
L, and k -
L. Set the two parameters of the Variable-share algorithm ff
and j to - k
respectively, where -
k and -
k. Then the loss of the Algorithm with weighted mean
prediction can be bounded as follows:
8. Proximity-variable-share Analysis
In this section we discuss the Proximity-variable-share Algorithm (see Figure
3). Recall that in the Variable-share Algorithm each expert shared a fraction
of weight dependent on its loss in each trial; that fraction is then shared
uniformly among the remaining experts. The Proximity-variable-share
Algorithm enables each expert to share non-uniformly to the other experts.
The Proximity-variable-share Update now costs O(n) per expert per trial instead
of O(1) (see Figure 3). This algorithm allows us to model situations where we have
some prior knowledge about likely pairs of consecutive experts.
Let us consider the parameters of the algorithm. The n-tuple -
contains the initial weights of the algorithm, i.e., w s
. The
Parameters:
Initialization: Initialize the weights to w s
n .
t;i . Predict with
Loss Update: After receiving the tth outcome y t ,
Proximity-variable-share Update
Figure
3. The Proximity-variable-share algorithm
second additional parameter besides j and c is a complete directed graph - of size n
without loops. The edge weight - j;k is the fraction of the weight shared by expert j
to expert k. Naturally, for any vertex, all outgoing edges must be nonnegative and
sum to one. The - 0 probability distribution is a prior for the initial expert and the
probability distribution is a prior for which expert will follow expert j. Below
is the upper bound for the Proximity-variable-share Algorithm. The Fixed-
share Algorithm could be generalized similarly to take proximity into account.
Theorem 6 Let S be any sequence of examples, let L and pred be (c; j)-realizable,
and let L have a [0,1] range. Then for any partition P ';n;k;t;e (S), the total loss of
the Proximity-variable-share Algorithm with parameter ff satisfies
Proof: We omit the proof of this bound since it is similar to the corresponding
proof of Theorem 2 for the Variable-share Algorithm: The only change is that
the 1
fractions are replaced by the corresponding - parameters.
Note that setting -
gives the previous bound for the
Variable-share Algorithm (Theorem 2). In that case the last sum is O(k ln n),
accounting for the code length of the names of the best experts (except the first
one). Using the Proximity-variable-share Algorithm we can get this last sum
to O(k) in some cases.
For a simple example, assume that the processors are on a circular list and that
for the two processors of distance d from processor i, - i;i+d mod
1=d 2 . Now if the next best expert is always at most a constant away from the
previous one, then the last sum becomes O(k). Of course, other notions of closeness
and choices of the - parameters might be suitable. Note that there is a price for
decreasing the last sum: the update time is now O(n 2 ) per trial. However, if for
each expert i all arrows that end at i are labeled with the same value, then the
Share Update of the Proximity-variable-share Algorithm is still O(n).
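
A sketch (ours) of the non-uniform Share Update with the circular-distance prior mentioned above, where the fraction routed from expert j to an expert at circular distance d is proportional to 1/d²; the normalization and names are our assumptions.

import math

def circular_prior(n):
    """beta[j][k]: fraction of expert j's shared weight routed to expert k,
    proportional to 1/d^2 for circular distance d (no self-loops)."""
    beta = [[0.0] * n for _ in range(n)]
    for j in range(n):
        weights = {}
        for k in range(n):
            if k == j:
                continue
            d = min((k - j) % n, (j - k) % n)
            weights[k] = 1.0 / d ** 2
        Z = sum(weights.values())
        for k, wgt in weights.items():
            beta[j][k] = wgt / Z
    return beta

def proximity_variable_share(wm, per_trial_loss, alpha, beta):
    # wm: weights after the Loss Update; per_trial_loss[i]: loss of expert i this trial
    n = len(wm)
    shared = [(1 - (1 - alpha) ** per_trial_loss[i]) * wm[i] for i in range(n)]
    return [wm[i] - shared[i] + sum(beta[j][i] * shared[j] for j in range(n) if j != i)
            for i in range(n)]

Since each row of beta sums to one, this update, like the uniform one, leaves the total weight unchanged.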
9. Lower Bounds
The upper bounds for the Fixed-share Algorithm grow with the length of the
sequence. The additional loss of the algorithm over the loss of the best k-partition
is approximately This holds for unbounded loss functions
such as the relative entropy loss. When restricting the loss to lie in [0; 1], the
Variable-share Algorithm gives an additional loss bound of approximately
is the loss of the best k-partition and k ! L. One natural
question is whether a similar reduction is possible for unbounded loss functions. In
other words, whether for an unbounded loss function a bound of the same form is
possible with ' replaced by minf'; Lg. We give evidence to the contrary. We give
an adversary argument that forces any algorithm to make loss over
the best one-partition (for which the adversary sets In this
section we limit ourselves to giving this construction. It can easily be extended to an
adversary that forces ln(n)+ln(' \Gamma log 2 n) additional loss over the best one-partition
with n experts. By iterating the adversary, we may force
additional loss over the best k-partition. (Here we assume that log 2 (n \Gamma 1) and '
are positive integers, and '
Theorem 7 For the relative entropy loss there exists an example sequence S of
length ℓ with two experts such that L(P_{ℓ,2,1,t,e}(S)) = 0, i.e. there is a partition with a
single shift of loss 0, and furthermore, for any algorithm A, L(S, A) ≥ ln(ℓ − 1).
Proof: The adversary's strategy is described in Figure 4. We use -
y t to denote
the prediction of an arbitrary learning algorithm, and L
the loss at trial t. For convenience we number the trials from
of
There are two experts; one always predicts 0 and the other always predicts 1.
The adversary returns a sequence of 0 outcomes followed by a sequence of 1 outcomes
such that neither sequence is empty. Thus, there is a single shift in the best
partition, and this partition has loss 0.
2. On trial
2 and to 1 otherwise.
(Assume without loss of generality that -
2 and thus y
3. New trial:
4. If -
'\Gammat then
5. else
Go to step 7.
then go to step 3.
7. Let y for the remaining trial(s) and exit.
Figure
4. Adversary's strategy
We now prove that L(S; thus proving the lemma. Clearly
loss of generality assume - y
. Note that the
threshold for -
y t is 1
'\Gammat . Furthermore, L ent (0; 1
and L ent (1; 1
t). Thus the conditions 4(i) and 5(i) follow. Condition 4(ii) holds by simple
induction. If a shift occurs, then Condition 5(ii) holds, since by Condition 4(ii) in
we have that
'\Gammat . Therefore, when we add L t , which
is at least ln(' \Gamma t) by Condition 5(i), we obtain Condition 5(ii) and we are done. If
Step 5 is never executed then the shift to y occurs in the last trial
Step 6 is skipped. Thus if Step 5 is never executed then
in trial (Condition 4(ii)), which is again the bound of the lemma.
We first reason that this lower bound is tight by showing that the upper bounds
of the algorithms discussed in this paper are close to the lower bound. The number
of partitions when 1). Thus we may expand the set of
experts into partition-experts as discussed in the introduction. Using
the Static-expert Algorithm with the weighted mean prediction gives an upper
bound of on the total loss of the algorithm when the loss of the
best partition is zero. This matches the above lower bound. Second, the bound of
the Fixed-share Algorithm (cf. Corollary 1) is larger than the lower bound by
'\Gamma2 ), and this additional term may be upper bounded by 1.
total
loss
of
the
algorithms
trials
Loss of Variable Share Algorithm
Loss of Static Algorithm (Vovk)
Loss of Fix Share Algorithm
Loss of typical expert
Loss of best partition (k=3)
Variable Share Loss Bound
Fix Share Loss Bound
Figure
5. Loss of the Variable-share Algorithm vs the Static-expert Algorithm
scaled
weights
Figure
6. Relative Weights of the Variable-share Algorithm
10. Simulation Results
In this section we discuss some simulations on artificial data. These simulations are
mainly meant to provide a visualization of how our algorithms track the predictions
of the best expert and should not be seen as empirical evidence of the practical
usefulness of the algorithms. We believe that the merits of our algorithms are
more clearly reflected in the strong upper bounds we prove in the theorems of
8000.10.30.50.70.9scaled
weights
trials
Vovk Relative Weights
Figure
7. Relative Weights of the Static-expert Algorithm
the earlier sections. Simulations only show the loss of an algorithm for a typical
sequence of examples. The bounds of this paper are worst-case bounds that hold
even for adversarially-generated sequences of examples. Surprisingly, the losses
of the algorithms in the simulations with random sequences are very close to the
corresponding worst-case bounds which we have proven in this paper. Thus our
simulations show that our loss bounds are tight for some sequences.
We compared the performance of the Static-expert Algorithm to the two Share
algorithms in the following setting. We chose to use the square loss as our loss
function, because of its widespread use and because the task of tuning the learning
rate for this loss function is simple. We used the Vovk prediction function (cf.
Equation 9), and we chose η = 2 and c = 1/2 in accordance with Figure 2. We
considered a sequence of 800 trials with four distinct segments, beginning at trials
1, 201, 401, and 601. On each trial the outcome (y t ) was 0. The prediction tuple
contained the predictions of 64 experts. When we generated the predictions of
the 64 experts, we chose a different expert as the best one for each segment. The
best experts always have an expected loss of 1=120 per trial. The other 63 experts
have an expected loss of 1=12 per trial. At the end of each segment a new "best
expert" was chosen. Since the outcome was always 0, we generated these expected
losses by sampling predictions from a uniform random distribution on (0; 1
for the "typical" and "best" experts, respectively. Thus the expected
loss for the best 6 partition, denoted by the segment boundaries above, is 800
· (1/120) ≈ 6.67, with a variance of σ² ≈ 0.044.
simulation used for the plots was 6:47. For the Fixed-share Algorithm we tuned
f based on the values of
using the ff f
tuning suggested in Corollary 1. For the Variable-share Algorithm we tuned ff v
based on the values of
using the ff v
tuning suggested in Corollary 3. Using theorems 1 and 2 we calculated a worst case
upper bound on the loss of the Fixed-share Algorithm and the Variable-share
Algorithm of 24:89 and 21:50, respectively (see "\Theta" and "+" marks in Figure 5).
The simulations on artificial data show that our worst-case bounds are rather tight
even on this very simple artificial data.
There are many heuristics for finding a suitable tuning. We used the tunings
prescribed by our theorem, but noticed that for these types of simulations the
results are relatively insensitive to the tuning of ff. For example, in calculating
ff v for the Variable-share Algorithm when -
was overestimated by 10 standard
deviations, the loss bound for our algorithm increased by only 0:02, while the actual
loss of the algorithm in the simulation increased by 0:17.
In
Figure
5, we have plotted the loss of the Static-expert Algorithm versus the
loss of the two Share algorithms. Examination of the figure shows that on the first
segment the Static-expert Algorithm performed comparably to the Share algo-
rithms. However, on the remaining three segments, the Static-expert Algorithm
performed poorly, in that its loss is essentially as bad as the loss of a "typical" expert
(the slope of the total loss of a typical expert and the Static-expert Algorithm
is essentially the same for the later segments). The Share algorithms performed
poorly at the beginning of a new segment; however, they quickly "learned" the new
"best" expert for the current segment. The Share algorithms' loss plateaued to
almost the same slope as the slope of the total loss of the best expert. The two
Share algorithms had the same qualitative behavior, even though the Fixed-share
Algorithm incurred approximately 10% additional loss over the Variable-share
Algorithm. In our simulations we tried learning rates eta slightly smaller than two,
and verified that even with other choices for the learning rates, the total loss of the
Static-expert algorithm does not improve significantly.
In Figures 6 and 7, we plotted the weights of the normalized weight vector w_t
that is maintained by the Variable-share Algorithm and the Static-expert
Algorithm over the trial sequence. In Figure 6, we see that the Variable-share
Algorithm shifts the relative weights rapidly. During the latter part of each segment,
the relative weight of the best expert is almost one (the corresponding plot of the
Fixed-share Algorithm is similar). On the other hand, we see in Figure 7 that the
Static-expert Algorithm also "learned" the best expert for segment 1. However,
the Static-expert Algorithm is unable to shift the relative weight sufficiently
quickly, i.e. it takes the length of the second segment to partially "unlearn" the best
expert of the first segment. The relative weights of the best experts for segments
one and two essentially perform a "random walk" during the third segment. In
the final segment, the relative weight of the best expert for segment three also
performs a "random walk." In summary, we see these simulations as evidence that
the Fixed-share and Variable-share Updates are necessary to track shifting experts.
11. Conclusion
In this paper, we essentially gave a reduction for any multiplicative update algorithm
that works well compared to the best expert for arbitrary segments of
examples, to an algorithm that works well compared to the best partition, i.e. a
concatenation of segments. Two types of share updates were analyzed. The Fixed-
share Algorithm works well when the loss function can be unbounded, and the
Variable-share Algorithm is suitable for the case when the range of the loss lies
in [0,1]. The first method is essentially the same as the one used in the Wml algorithm
of (Littlestone & Warmuth, 1994) and a recent alternate developed in (Auer &
Warmuth, 1998) for learning shifting disjunctions. When the loss is the discrete
loss (as in classification problems), then these methods are simple and effective if
the algorithm only updates after a mistake occurs (i.e., conservative updates). Our
second method, the Variable-share Update, is more sophisticated. In particular, if
one expert predicts perfectly for a while, then it can collect all the weight. However,
if this expert is starting to incur large loss, then it shares weight with the other
experts, helping the next best expert to recover its weight from zero.
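For concreteness, the two share updates can be sketched in a few lines of Python. This is a minimal rendering of the Fixed-share and Variable-share weight updates as they are usually stated; the exact normalization conventions and the (Vovk) prediction function are not taken from this text and should be treated as assumptions.

```python
import numpy as np

def loss_update(w, losses, eta):
    """Exponential weight update (the Static-expert step)."""
    w = w * np.exp(-eta * losses)
    return w / w.sum()

def fixed_share(w, alpha):
    """Each expert shares a fixed fraction alpha of its weight equally among the others."""
    n = len(w)
    pool = alpha * w.sum()
    return (1 - alpha) * w + (pool - alpha * w) / (n - 1)

def variable_share(w, losses, alpha):
    """An expert shares weight in proportion to the loss it just incurred (losses in [0, 1])."""
    n = len(w)
    keep = (1 - alpha) ** losses          # a perfect expert (loss 0) keeps all its weight
    shared = (1 - keep) * w
    return keep * w + (shared.sum() - shared) / (n - 1)
```

Both updates preserve the total weight, so they can be applied directly after the loss update of each trial.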
The methods presented here and in (Littlestone & Warmuth, 1994) have inspired a
number of recent papers. Auer and Warmuth (1998) adapted the Winnow algorithm
to learn shifting disjunctions. Comparing against the best shifting disjunction is
more complicated than comparing against the best expert. However, since this is a
classification problem a simple Sharing Update similar to the Fixed-share Update is
sufficient. Our focus in this paper was to track the prediction of the best expert for
the same class of loss functions for which the original Static-expert Algorithm
of Vovk was developed (Vovk, 1998; Haussler et al., 1998).
Our share updates have been applied experimentally for predicting disk idle times
(Helmbold et al., 1996) and for the on-line management of investment portfolios
(Singer, 1997). In addition, a reduction has been shown between expert and metrical
task systems algorithms (Blum & Burch, 1997). The Share Update has been
used successfully in the new domain of metrical task systems. A natural probabilistic
interpretation of the Share algorithms has recently been given in (Vovk,
1997).
In any particular application of the Share algorithms, it is necessary to consider
how to choose the parameter ff. Theoretical techniques exist for the Fixed-share
Algorithm for eliminating the need to choose the value of ff ahead of time. One
method for tuning parameters (among other things) is the "specialist" framework
of (Freund, Schapire, Singer & Warmuth, 1997), even though the bounds produced
this way are not always optimal. Another method incorporates a prior distribution
on all possible values of ff. For the sake of simplicity we have not discussed these
methods (Herbster, 1997; Vovk, 1997; Singer, 1997) in this paper.
Acknowledgments
We would like to thank Peter Auer, Phillip Long, Robert Schapire, and Volodya
Vovk for valuable discussions. We also thank the anonymous referees for their
helpful comments.
Notes
1. The discrete loss is defined to be 0 if the prediction equals the outcome and 1 otherwise.
2. Note that the entropic loss L_ent(p, q) can be expressed via the relative entropy D(p||q). We use the D(p||q) notation here as is customary in information
theory.
3. If we replace the assumption that k <= k-bar by 2 <= k <= l, we obtain a bound where the final term c k-bar
is replaced by 2c k-bar.
4. Vovk has recently proved a sharper bound for this algorithm (Vovk, 1997).
5. Unlike Corollary 2 we do not need a lower bound on k.
6. We call the partition described by the segment boundaries 1, 201, 401, and 601 the best
partition with respect to the tradeoff between k and the partition loss L(P(S)), as expressed implicitly
in Theorem 2.
--R
Tracking the best disjunction.
How to use expert advice.
Elements of Information Theory.
Universal prediction of individual sequences. IEEE Transactions on Information Theory.
A decision-theoretic generalization of on-line learning and an application to boosting
Using and combining predictors that specialize.
Sequential prediction of individual sequences under general loss functions.
A dynamic disk spin-down technique for mobile computing
Tracking the best expert II.
Additive versus exponentiated gradient updates for linear prediction.
Learning when irrelevant attributes abound: A new linear-threshold algorithm
Mistake Bounds and Logarithmic Linear-threshold Learning Algorithms. PhD thesis.
The weighted majority algorithm.
Towards realistic and competitive portfolio selection algorithms.
A game of prediction with expert advice.
Derandomizing stochastic prediction strategies.
Predicting with the dot-product in the experts framework
--TR
--CTR
Atsuyoshi Nakamura, Learning specialist decision lists, Proceedings of the twelfth annual conference on Computational learning theory, p.215-225, July 07-09, 1999, Santa Cruz, California, United States
Jeremy Z. Kolter , Marcus A. Maloof, Using additive expert ensembles to cope with concept drift, Proceedings of the 22nd international conference on Machine learning, p.449-456, August 07-11, 2005, Bonn, Germany
V. Vovk, Probability theory for the Brier game, Theoretical Computer Science, v.261 n.1, p.57-79, 06/17/2001
Peter Auer , Manfred K. Warmuth, Tracking the Best Disjunction, Machine Learning, v.32 n.2, p.127-150, Aug. 1998
Olivier Bousquet , Manfred K. Warmuth, Tracking a small set of experts by mixing past posteriors, The Journal of Machine Learning Research, 3, 3/1/2003
Avrim Blum , Carl Burch, On-line Learning and the Metrical Task System Problem, Machine Learning, v.39 n.1, p.35-58, April 2000
Chris Mesterharm, Tracking linear-threshold concepts with Winnow, The Journal of Machine Learning Research, 4, 12/1/2003
Peter Auer, Using confidence bounds for exploitation-exploration trade-offs, The Journal of Machine Learning Research, 3, 3/1/2003
Giovanni Cavallanti , Nicol Cesa-Bianchi , Claudio Gentile, Tracking the best hyperplane with a simple budget Perceptron, Machine Learning, v.69 n.2-3, p.143-167, December 2007
Marco Barreno , Blaine Nelson , Russell Sears , Anthony D. Joseph , J. D. Tygar, Can machine learning be secure?, Proceedings of the 2006 ACM Symposium on Information, computer and communications security, March 21-24, 2006, Taipei, Taiwan
Wei Yan , Christopher D. Clack, Diverse committees vote for dependable profits, Proceedings of the 9th annual conference on Genetic and evolutionary computation, July 07-11, 2007, London, England
Mark Herbster , Manfred K. Warmuth, Tracking the best regressor, Proceedings of the eleventh annual conference on Computational learning theory, p.24-31, July 24-26, 1998, Madison, Wisconsin, United States
Claudio Gentile, The Robustness of the p-Norm Algorithms, Machine Learning, v.53 n.3, p.265-299, December
Mark Herbster , Manfred K. Warmuth, Tracking the best linear predictor, The Journal of Machine Learning Research, 1, p.281-309, 9/1/2001
Amol Deshpande , Zachary Ives , Vijayshankar Raman, Adaptive query processing, Foundations and Trends in Databases, v.1 n.1, p.1-140, January 2007 | experts;multiplicative updates;shifting;amortized analysis;on-line learning |
296419 | On Asymptotics in Case of Linear Index-2 Differential-Algebraic Equations. | Asymptotic properties of solutions of general linear differential-algebraic equations (DAEs) and those of their numerical counterparts are discussed. New results on the asymptotic stability in the sense of Lyapunov as well as on contractive index-2 DAEs are given. The behavior of the backward differentiation formula (BDF), implicit Runge--Kutta (IRK), and projected implicit Runge--Kutta (PIRK) methods applied to such systems is investigated. In particular, we clarify the significance of certain subspaces closely related to the geometry of the DAE. Asymptotic properties like A-stability and L-stability are shown to be preserved if these subspaces are constant. Moreover, algebraically stable IRK(DAE) are B-stable under this condition. The general results are specialized to the case of index-2 Hessenberg systems. | Introduction
. The present paper is devoted to the study of asymptotic properties
of solutions of differential-algebraic equations (DAE's) on infinite intervals and
those of their numerical counterparts in integration methods. It is rather surprising
that, in spite of numerous papers on numerical integration, there are very few results
in this respect.
For index-1 DAE's, asymptotic properties on infinite intervals have been investigated
by Griepentrog and März [4]. Among other things, the notion of contractivity
and that of B-stability were generalized to the case of DAE's and criteria for total
stability were formulated. Algebraically stable IRK(DAE) were shown to be B-stable
for index-1 DAE's, too, provided that the nullspace N of the leading Jacobian was
constant. If this nullspace rotates, stability properties may change.
In this paper, we study general linear index-2 DAE's
exclusively, where the nullspace N := ker A(t) is assumed to be independent of t. A(t)
and B(t) are assumed to be continuous in t. Equation (1.1) is not assumed to be in
Hessenberg form and the coefficients A(t) and B(t) need not commute. Recall that
Hessenberg index-2 DAE's have the special form
This corresponds to the special coefficient matrices in (1.1)
Moreover, it corresponds to a trivially constant nullspace N , since A(t) itself does not
vary with t.
This paper is a heavily revised and enlarged version of an earlier manuscript with the same title
(Preprint 94-5).
† Humboldt-Universität zu Berlin, Institut für Mathematik, D-10099 Berlin, Germany
Presenting statements on the linear case we hope, as in the case of regular ordinary
differential equations (ODE's), that it will be possible to carry over some properties
to nonlinear DAE's via linearization.
As far as we know, in case of index-2 DAE's stability analyses of integration
methods on infinite intervals have only been presented for linear systems (see März and
Tischendorf [13], Wensch, Weiner, and Strehmel [16]). The latter paper is restricted
to special Hessenberg form systems and relies on the so-called essentially underlying
ODE introduced in Ascher and Petzold [2] for these special systems. We consider this
case in Section 3 and describe the close relation between the inherent regular ODE
which we will take up from [8] in Section 2, and the essentially underlying ODE in
detail.
Although the above mentioned paper [2] is not concerned with asymptotic stability
on infinite intervals, it contains an observation that is highly interesting for us:
Among other things, Ascher and Petzold point out that the backward Euler method
applied to (1.2) may yield rather an explicit Euler formula for the essentially underlying
ODE, and they discuss the influence of the blocks B on this phenomenon.
We will show that not the derivatives of B_12(t), B_21(t), but the derivatives of the
projector matrix H(t) := B_12(t)(B_21(t)B_12(t))^{-1}B_21(t) constitute the essential term,
i.e., the rotation velocity of the subspace described by H(t) is the decisive feature.
Our paper is aimed at explaining the importance of additional subspaces for
answering questions concerning the asymptotic behaviour of integration methods.
Hence, besides introducing the necessary fundamentals, Section 2 provides new results
on the asymptotic stability of DAE solutions in the sense of Lyapunov as well as on
contractive DAE's.
In Section 4, BDF, IRK, and PIRK are investigated in detail. Asymptotic properties
like A-stability and L-stability are shown to be preserved if a certain subspace
is constant, i.e., if it does not rotate. Moreover, we show that an algebraically
stable IRK(DAE) is B-stable under these conditions.
Section 5 illustrates our results by means of examples.
For convenience of the reader, the short appendix provides the basic linear algebra
facts once more.
2. Linear continuous coefficient index-2 equations. Consider the linear equation
A(t)x'(t) + B(t)x(t) = q(t), t in J, (2.1)
with continuous coefficients. Assume the nullspace of A(t) in L(R^m) to be independent of t and let
N := ker A(t).
Furthermore, set
S(t) := { z in R^m : B(t)z in im A(t) }.
Obviously, S(t) is a subspace of R^m which contains the solutions of the homogeneous
form of the DAE (2.1). Note that the condition
N ⊕ S(t) = R^m (2.2)
characterizes the class of index-1 DAE's (see Appendix for related facts from linear
algebra). Equation (2.2) implies that the matrix
G_1(t) := A(t) + B(t)Q
is nonsingular for all t in J, where Q denotes a constant projector onto N and P := I - Q. Let
N_1(t) := ker G_1(t).
Higher index DAE's are characterized by nontrivial intersections S(t) ∩ N ≠ {0} or,
equivalently, by singular matrices G_1(t).
Definition. The DAE (2.1) is said to be index-2 tractable if the following two conditions
dim N_1(t) = const > 0, (2.4)
N_1(t) ⊕ S_1(t) = R^m (2.5)
hold, where
S_1(t) := { z in R^m : B(t)Pz in im G_1(t) }.
In the following, let Q_1(t) denote the projector onto N_1(t) along S_1(t), and
P_1(t) := I - Q_1(t). Due to the decomposition (2.5), Q_1(t) is uniquely defined.
Remarks.
1. It holds that dim N_1(t) = dim (N ∩ S(t)).
2. Due to Lemma A.1, (2.4) and (2.5) imply that the matrix
G_2(t) := G_1(t) + B(t)PQ_1(t)
is nonsingular. But G_1(t) is singular, independently of how Q is chosen [8].
3. Applying Lemma A.1 once more we obtain further useful identities for the projectors involved.
4. Each DAE (2.1) having global index 2 is index-2 tractable with a continuously
differentiable Q_1; hence, assuming Q_1 to belong to the class C^1 in the sequel is
not restrictive.
The conditions (2.4), (2.5) imply the decomposition
R^m = im (PP_1)(t) ⊕ im (PQ_1)(t) ⊕ N,
which is relevant for the index-2 case. Taking this into account we decompose the
DAE solution x into
x = PP_1 x + PQ_1 x + Qx.
Multiplying (2.1) by PP_1 G_2^{-1}, QP_1 G_2^{-1} and Q_1 G_2^{-1}, respectively, and carrying out a
few technical computations, we decouple the index-2 DAE into the system (2.10)-(2.12)
for the components u := PP_1 x, v := PQ_1 x and w := Qx.
Equation (2.10) represents the inherent regular ODE of the DAE system.
On the other hand, if we consider (2.10) separately from its origin via the decomposition
(2.9), we know that im PP_1(t) is an invariant subspace of this explicit ODE
in u. To be more precise: if we have u(t_0) in im (PP_1)(t_0)
at some t_0 in J, then (2.10) implies u(t) in im (PP_1)(t) for all t. Furthermore, (2.12) and (2.11) provide
the components w and v, respectively. Thus, solving (2.10) - (2.13) and setting x = u + v + w,
we obtain the solutions of the DAE (2.1).
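To make the projector chain concrete, the following NumPy sketch computes Q, G_1, N_1, S_1, Q_1 and G_2 for a small synthetic constant-coefficient Hessenberg example; the example and all matrix values are assumptions and are not taken from this paper.

```python
import numpy as np

def null_basis(M, tol=1e-12):
    """Orthonormal basis of ker(M) via the SVD."""
    _, s, Vt = np.linalg.svd(M)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T

# Synthetic constant-coefficient index-2 Hessenberg example (assumed):
#   y' + y - B12 z = 0,   -B21 y = 0,   with y in R^2, z in R.
B12 = np.array([[1.0], [0.0]])
B21 = np.array([[1.0, 1.0]])
A = np.diag([1.0, 1.0, 0.0])
B = np.block([[np.eye(2), -B12], [-B21, np.zeros((1, 1))]])
m = 3

Q = np.diag([0.0, 0.0, 1.0])              # constant projector onto N = ker A
P = np.eye(m) - Q
G1 = A + B @ Q                            # singular exactly when the index exceeds 1
N1 = null_basis(G1)                       # basis of N1 = ker G1
W = null_basis(G1.T)                      # columns span (im G1)^perp
S1 = null_basis(W.T @ B @ P)              # S1 = { z : B P z in im G1 }

T = np.hstack([N1, S1])                   # (2.5): R^m = N1 (+) S1
Q1 = T @ np.diag([1.0] * N1.shape[1] + [0.0] * S1.shape[1]) @ np.linalg.inv(T)
P1 = np.eye(m) - Q1
G2 = G1 + B @ P @ Q1                      # nonsingular iff the DAE is index-2 tractable
print("dim N1 =", N1.shape[1], ", det G2 =", np.linalg.det(G2))
print("PP1 =\n", P @ P1)                  # im(PP1) carries the inherent regular ODE
```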
Inspired by the above decoupling procedure, we state initial conditions for (2.1) as
PP_1(t_0)(x(t_0) - x_0) = 0, x_0 in R^m given. (2.14)
This yields u(t_0) = PP_1(t_0)x_0,
but we do not expect x(t_0) = x_0.
Next, we shortly turn to the case of a homogeneous equation (2.1), i.e., the system with q ≡ 0.
The matrix Π(t) := (I - (QP_1 G_2^{-1} B)(t)) (PP_1)(t)
is also a projector, and is said to be the canonical projector for the index-2 case.
Now, the following assertion is easily proved by means of the decoupling explained
above.
Theorem 2.1. Let (2.1) be index-2 tractable with continuously differentiable Q 1 .
Then it holds:
(i) The initial value problems (2.1), (2.14) are uniquely solvable, provided that q is sufficiently smooth.
(ii) If x(.) solves the homogeneous equation, then it holds that x(t) in im Π(t) for all t in J.
(iii) Through each x_0 in M(t̄) there passes exactly one solution of the homogeneous
equation at time t̄ in J. The solution space M(t) is a proper subspace of S(t).
Remarks.
1. The inherent regular ODE (2.10) is determined by its complete coefficient,
not only by its first term PP_1 G_2^{-1} B. If PP_1(t) varies
rapidly with t, the second term (PP_1)' may be the dominant one. This should also be
taken into account when considering the asymptotic behaviour of solutions of (2.1).
2. In general, the linear DAE (2.1) appears to be much simpler if the relevant
subspaces N, N_1, S_1 and the two projectors Q, Q_1 are constant. In that case (2.10)
simplifies to an equation without the (PP_1)' term.
3. The value x_0 involved in the initial condition (2.14) is not expected
to be a consistent initial value. What we have is merely PP_1(t_0)(x(t_0) - x_0) = 0.
As shown above, a consistent initial value for the homogeneous
equation always belongs to M(t_0), which is precisely the set of consistent initial values
then.
4. If the product PP_1 is time invariant, we have (PP_1)' ≡ 0, hence the derivative term
in (2.10) drops out. Note that (QP_1 G_2^{-1} B)(t) is also a projector onto ker A(t). It should be mentioned
that the solution space M(t) remains time-invariant provided that both projectors
are constant.
Now we turn to the asymptotic behaviour of the solutions of the homogeneous
equation. Considering the decoupled system (2.10) - (2.12) once more, we see that the
component u = PP_1 x represents the dynamic one. Supposing the canonical projector
Π(t) remains bounded on the whole interval [t_0, ∞), the asymptotic behaviour of
the solution x(t) is completely determined by that of its component u(t). Clearly, if u solves a constant
coefficient regular ODE, we may characterize asymptotics by means of the corresponding
eigenvalues. This is what we try to realize for the DAE in the following
theorem.
Theorem 2.2. Let (2.1) be index-2 tractable, Q_1 be of class C^1, and let PP_1 and PP_1 G_2^{-1} B be constant.
(i) Then the pencil λA(t) + B(t) has the same finite eigenvalues, uniformly for all t in J.
(ii) If all these eigenvalues have negative real parts, each homogeneous equation solution
tends to zero as t → ∞, provided that the projector Π(t) remains uniformly bounded.
Proof. Due to our assumptions, the inherent regular ODE has the constant coefficient
-PP_1 G_2^{-1} B. On the other hand, the nontrivial eigenvalues of -PP_1 G_2^{-1} B (that
is, eigenvalues that do not correspond to ker PP_1) are exactly the pencil eigenvalues
of λA + B (cf. [10]).
Let U(.) denote the fundamental solution matrix of u' = -PP_1 G_2^{-1} B u.
Taking the solution representation x(t) = Π(t)u(t)
into account, the assertion follows right away.
Roughly speaking, the assumptions that PP_1 and PP_1 G_2^{-1} B have to be constant
mean that there is a constant coefficient inherent regular ODE and a possible time
dependence of the system may be caused by (time dependent) couplings only.
Next, what about contractivity in case of index-2 DAE's? In the regular ODE
theory, contractivity is well-known to permit very attractive asymptotic properties of
numerical integration methods. Corresponding results are obtained for index-1 DAE's
in [4] by means of an appropriate contractivity notion. In particular, this notion says
that a linear index-1 DAE (2.1) is contractive if there are a constant c > 0 and a
positive-definite matrix S such that the corresponding one-sided inequality
holds true for all z in S(t) and t in J.
Here, we have used the scalar product <z, v>_S := <Sz, v> and the norm |z|_S := <z, z>_S^{1/2}.
Clearly, this reminds us of the one-sided Lipschitz condition used for contractivity
in the regular ODE case (i.e., A ≡ I in (2.1)); in the latter case we recover the classical
one-sided Lipschitz inequality.
However, things are more difficult for index-2 DAE's. First, considering the decoupled
system again, we observe that each solution of the homogeneous
DAE (2.1) satisfies certain identities relating its components. Inspired by the notion of
contractivity given for the index-1 case in [4], we state the following definition.
Definition. The index-2 tractable DAE (2.1) is called contractive if the following
holds: there is a constant c > 0 and a symmetric positive-definite matrix S such that
(2.19) implies (2.20).
As usual, with this notion of contractivity, too, we aim at an inequality
showing the component u = PP_1 x of each solution of the homogeneous DAE to
decrease in that norm. The following theorem will show: if the canonical projector
Π(t) is uniformly bounded, then the complete solution x(t) decreases.
Theorem 2.3. Let (2.1) be index-2 tractable, Q 1 belong to C 1 , \Pi(t) be uniformly
bounded on J and (2.1) be contractive. Then, for each solution of the
homogeneous equation, the estimate (2.21) holds,
where γ is a bound of |Π(t)|_S.
Proof. We have
Not surprisingly, we obtain
Corollary 2.4. Let (2.1) be index-2 tractable with continuously differentiable Q_1 and
uniformly bounded Π(t). If the condition (2.22)
is satisfied for all u in R^m and t in J, then the estimate (2.21) is valid.
Proof. It may be checked immediately that (2.19) and (2.22) lead to (2.20), i.e.,
(2.22) implies contractivity.
Note that there is no need for assuming (2.22) for all u in R^m. For the assertion of
Corollary 2.4 to become true, it is sufficient that (2.22) holds for all u in im PP_1(t)
only.
Inequality (2.22) looks like the usual contractivity condition for the regular ODE
(2.10), i.e., the inherent regular ODE of (2.1). The only difference is that the values
are taken from the subspace im PP_1(t) instead of all of R^m. Roughly speaking, one
has: the DAE (2.1) is contractive if the inherent regular ODE (2.10) is contractive on
the subspace im PP_1(t).
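A numerical check of this subspace-restricted contractivity condition can be sketched as follows (with S = I; a general S would require working in the S-inner product). The names contractivity_margin, M and PP1 are ours, and M stands for the coefficient matrix of the inherent regular ODE (2.10), which has to be supplied by the user.

```python
import numpy as np

def contractivity_margin(M, PP1, tol=1e-12):
    """Largest eigenvalue of the symmetric part of M restricted to im(PP1);
    a negative value indicates a one-sided estimate <Mu, u> <= -c|u|^2 on that subspace."""
    U, s, _ = np.linalg.svd(PP1)
    V = U[:, s > tol]                 # orthonormal basis of im(PP1)
    Mr = V.T @ M @ V
    return np.max(np.linalg.eigvalsh(0.5 * (Mr + Mr.T)))
```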
As a direct consequence of other results on stability ([15], e.g.) one can deduce
counterparts for linear index-2 DAE's, e.g., the well-known Poincaré-Lyapunov Theorem.
3. Specification of the projector framework for index-2 Hessenberg-form DAE's.
Most authors restrict their interest to so-called Hessenberg-form equations, i.e., to systems
y'(t) + B_11(t)y(t) + B_12(t)z(t) = q_1(t), B_21(t)y(t) = q_2(t). (3.1)
In our context this corresponds to A = diag(I, 0) and a coefficient matrix B with blocks
B_11, B_12, B_21, 0, where x = (y, z).
Obviously, z enters only through B_12(t)z, and B_21(t)B_12(t) is assumed to be nonsingular,
which is the well-known Hessenberg-form index-2 condition. Under this condition
the block
H(t) := B_12(t)(B_21(t)B_12(t))^{-1}B_21(t)
is also a projector. It projects onto im B_12(t) along ker B_21(t).
It holds that PP_1(t) = diag(I - H(t), 0). Furthermore, one has
Q_1(t) = ( H(t) 0 ; -(B_21(t)B_12(t))^{-1}B_21(t) 0 ).
The canonical projector Π can be written down explicitly as well.
Recall that M(t) = im Π(t) is precisely the solution space of the homogeneous
form of (3.1). It is time dependent if the projector H(t) is. However, M(t) may also
rotate with t even if H(t) is independent of time. Note that PP_1 is easier to compute
than Π.
Furthermore, the nontrivial part (i.e., dropping the zero rows) of the inherent
regular ODE (2.10) reads now as an equation for u_1, the first (y-) component of u = PP_1 x,
whose coefficient is built from B_11(t) and the derivative H'(t). Let us emphasize once more that quickly varying subspaces
may cause the term H'u_1 to dominate within this regular ODE. Clearly, H'u_1 corresponds
to the term (PP_1)'u in (2.10).
Theorems 2.1 and 2.2 apply immediately. In particular, we obtain: suppose
H(t) and the coefficient of the inherent regular ODE are time-invariant. Then its eigenvalues
determine the asymptotic behaviour of the solution.
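The projector H(t) and its rotation speed are easy to monitor numerically. The sketch below uses a synthetic rotating pair B_12(t), B_21(t) (an assumption, not an example from the paper) and estimates H'(t) by central differences.

```python
import numpy as np

def H(t, eta=1.0):
    """H(t) = B12 (B21 B12)^{-1} B21 for a synthetic rotating Hessenberg pair."""
    B12 = np.array([[np.cos(eta * t)], [np.sin(eta * t)]])
    B21 = B12.T                                   # here B21 B12 = 1
    return B12 @ np.linalg.inv(B21 @ B12) @ B21

t, dt = 0.3, 1e-6
Hdot = (H(t + dt) - H(t - dt)) / (2 * dt)         # finite-difference estimate of H'(t)
print("||H'(t)|| ~", np.linalg.norm(Hdot))        # zero iff the subspace does not rotate
print("H is a projector:", np.allclose(H(t) @ H(t), H(t)))
```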
Let us turn to the discussion of aspects of contractivity. For index-2 Hessenberg-form
DAE's (3.1), relation (2.20) applies to the first components only (cf. (2.19)).
Moreover, (2.22) simplifies accordingly for all u_1 in im(I - H(t)).
Again we see that the constant-subspace case H'(t) ≡ 0 becomes much easier.
It should be stressed that the above decoupling as well as the inherent ODE are
stated in the original coordinates. In particular, the subspace M(t) ⊂ R^m is precisely
the one that contains the solutions of the original DAE. No coordinate transformation
is applied and only a decomposition into characteristic components is employed.
Ascher and Petzold [2] use a different approach to decouple characteristic parts of
linear index-2 Hessenberg systems: they use a coordinate change z = (z_1, z_2, z_3) built from
matrices R(t) and S(t) (cf. also [16]). In [2] the matrices R and S are constructed in the following way.
First, a matrix R(t) with linearly independent rows is chosen so that
R(t)B_12(t) = 0
is satisfied. As a consequence, the m_1 x m_1 block obtained by stacking R(t) and B_21(t)
is nonsingular. Choosing S(t) in such a way that
R(t)S(t) = I and B_21(t)S(t) = 0
hold true, we obtain a well-defined transformation; the relation z_1 = R(t)y is
the main idea of that transformation. R(t) and S(t) are assumed to be smooth.
Carrying out a few straightforward computations one obtains a regular ODE for the
component z_1, namely equation (3.4).
Equation (3.4) is said to be the essentially underlying ODE (EUODE) of the DAE
(3.1).
What does the EUODE have in common with the inherent regular ODE? What
is the difference?
Multiplying the EUODE (3.4) by S and taking into account that u_1 = Sz_1,
we obtain (3.3). On the other hand, scaling the inherent regular ODE (3.3)
by R leads back to the EUODE (3.4) because RS = I and R(I - H) = R.
Thus, the EUODE turns out to be nothing else but a scaled version of the inherent
regular ODE and vice versa. Due to z_1 = Ru_1, the dynamics is
traced back to R^{m_1 - m_2}. Thus, the EUODE has the advantage of being written in the
minimal coordinate space R^{m_1 - m_2}. Unfortunately, the matrices R and S are not
uniquely determined. Consequently, the EUODE is strongly affected by the choice of
R, S. Note that once an R is chosen, we may multiply by any regular K in L(R^{m_1 - m_2})
to obtain another one by R̃ := KR.
From this point of view, the inherent regular ODE (3.3) seems to be more natural,
since all its terms are uniquely determined by the original data. Moreover, u_1
is a direct component of the original variable x_1, but the ODE (3.3) lives in the
higher-dimensional space R^{m_1}, and im(I - H(t)) represents an invariant
subspace.
Ascher and Petzold [2] observed that the Euler backward method applied to the special DAE (3.1)
may behave like an explicit Euler method. Choose R and S as above,
which simplifies the EUODE to (3.5).
Via the transform z, the Euler backward formula applied to this special DAE yields (3.6).
If additionally B_12(t) and R'(t) do not vary with t, (3.6) simplifies to
the explicit Euler formula for (3.5). Clearly this phenomenon is closely related
to time-varying blocks R(t) and S(t) of the coordinate transformation. Let us mention
again that this behaviour depends on the choice of R and S.
In the following section we show that the behaviour of the characteristic subspace
im PP_1(t) in the general case is decisive for understanding what
really happens.
4. Asymptotic stability of integration methods. A number of widely used
notions for the characterization of asymptotic properties of integration methods for
explicit ODE's rely on the complex scalar test equation x' = λx, λ in C (4.1).
The asymptotic behaviour of a numerical method applied to (4.1) characterizes the
asymptotics in the case of linear constant coefficient systems x' + Bx = 0 (4.2).
Here, the role of λ is replaced by the eigenvalues of -B. The justification for restricting
the consideration to (4.2) is given by Lyapunov's theory: The linearization of a
nonlinear autonomous explicit system at a stationary point provides criteria for the
asymptotic behaviour of solutions. In essence, the same is true for index-1 and -2
DAE's [12]. Therefore, we are led to the constant coefficient DAE
with regular matrix pencil λA + B. This equation can be transformed into the Kronecker
canonical form
y' + Wy = 0, J_0 z' + z = 0, (4.4)
where J_0 is a nilpotent matrix (J_0^k = 0, k being the Kronecker index). Since discretization and transformation
to (4.4) commute for many methods, the numerical solution for z vanishes
identically, whereas y is discretized like an explicit system. Hence, numerical methods
applied to constant coefficient linear DAE's trivially preserve their asymptotic stability
properties that are based on the test equation (4.1) (e.g. A-, A(α)-, L-stability).
Thus, at first glance, one could expect the well-known concepts of asymptotics in
the numerical integration of explicit ODE's to be sufficient for DAE's, too. How-
ever, as described in Sections 2 and 3, DAE's have a more difficult structure than
explicit ODE's, even in view of numerical integration. Roughly speaking, we should
not expect the numerical methods to match the subspace structure exactly if those
subspaces rotate. The scalar test equation (4.1) turns out to be an inappropriate
model in case of DAE's.
Similar results about B-stability are more difficult to obtain. It is well-known
that so-called algebraically stable Runge-Kutta methods are B-stable [6, p. 193] for
explicit systems. In [4, p.129] a similar result is shown to be true for index-1 DAE's
provided that (i) the nullspace N (A(t)) of A(t) does not depend on t, and (ii) the
Runge-Kutta method is a so-called IRK(DAE) (a stiffly accurate method [6, p. 45]).
There are simple linear examples showing that the backward Euler method loses its
B-stability if (i) is not valid.
We recall the notion of B-stability for DAE's having a constant leading nullspace:
Definition [4]. The one-step method x_l = Φ(x_{l-1}; h) is called B-stable if
for each contractive DAE the inequalities
|P(x_l^(1) - x_l^(2))|_S <= |P(x_{l-1}^(1) - x_{l-1}^(2))|_S
and
|Q(x_l^(1) - x_l^(2))| <= K |P(x_{l-1}^(1) - x_{l-1}^(2))|
are satisfied. Here, K > 0 is a constant and x^(1), x^(2) are arbitrary consistent initial
values.
4.1. BDF applied to linear index-2 DAE's. The k-step BDF applied to (2.1) reads as
A(t_l) (1/h) Σ_{i=0}^{k} α_i x_{l-i} + B(t_l) x_l = q(t_l). (4.5)
At each step, equation (4.5) provides an approximation x_l of the exact solution value x(t_l).
Recall that the nullspace of A(t) is assumed to be constant.
Supposing (2.1) is index-2 tractable, we may decouple (4.5) and (2.1) simultaneously
(cf. Section 2), using the above decomposition again, i.e., x_l = PP_1 x_l + PQ_1 x_l + Qx_l.
In particular, if the inhomogeneity q vanishes identically, then the Q_1-components
are both zero, and one has a recursion approximating u(t_l) = (PP_1 x)(t_l)
and a recursion approximating (Qx)(t_l).
The following proposition is an immediate consequence.
Proposition 4.1. Let (2.1) be index-2 tractable with continuously differentiable
Q_1. Then the BDF method applied to (2.1) generates exactly the same BDF method
applied to the inherent regular ODE (4.11) if and only if the projector PP 1 (t) does not
vary with t. For a constant projector PP 1 , the BDF methods retain their asymptotic
stability properties for index-2 DAE's provided the canonical projector \Pi(t) remains
uniformly bounded.
On the other hand, varying subspaces may cause the term (PP_1)' to dominate the
inherent regular ODE itself. For instance, the backward Euler method then provides
a recursion for u_l in which u(t_l) → 0 may or may not happen. As it was mentioned
in Section 3, Ascher and Petzold [2] have observed this phenomenon in case of linear
index-2 Hessenberg systems (3.1) (cf. also Section 3). However, this is not surprising
since we cannot expect any discretization method to follow the subspaces precisely
without profound information on the inner structure of the DAE.
Naturally, similar arguments apply to Runge-Kutta methods, too.
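For reference, the backward Euler recursion (4.5) with k = 1 applied directly to A(t)x' + B(t)x = q(t) amounts to one linear solve per step. The sketch below is a minimal realization (function and argument names are ours); step-size control, Newton iterations and the decoupled recursions discussed above are not reproduced.

```python
import numpy as np

def backward_euler(A, B, q, x0, t0, h, nsteps):
    """BDF1 step  A(t_l) (x_l - x_{l-1})/h + B(t_l) x_l = q(t_l)  for callables A, B, q."""
    x, t = np.array(x0, dtype=float), t0
    out = [x.copy()]
    for _ in range(nsteps):
        t += h
        At, Bt = A(t), B(t)
        x = np.linalg.solve(At / h + Bt, At @ x / h + q(t))
        out.append(x.copy())
    return np.array(out)
```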
4.2. Implicit Runge-Kutta methods and their projected counterparts
applied to linear index-2 DAE's. According to the originally conceived method
for the numerical solution of ordinary differential equations, an implicit Runge-Kutta
(IRK) method can be realized for the DAE (2.1) in the following way [14]: Given
an approximation x '\Gamma1 of the solution of (2.1) at t '\Gamma1 , a new approximation x ' at
obtained via
s
'i is defined by
and the internal stages are given by
s
s:
The coefficients a ij , b i , c i determine the IRK method, and s represents the number
of stages. Assume the matrix A := (a ij
i;j=1 to be nonsingular and denote its inverse
s
s
Equations (4.12) - (4.14) are equivalent to
s
s
s
s:
Looking at (4.16) we observe that the internal stages do not depend on Qx '\Gamma1 .
The special class of IRK methods (IRK(DAE)) with coefficients
is shown to stand out from all IRK methods in view of their applicability to DAE's
in that case, the new value x belongs to the obvious
constraint manifold
Therefore we have
For Hessenberg equations (3.1), relation (4.18) simplifies to
In general, if (4.17) is not fulfilled, then we have % 6= 0, and (4.18) resp. (4.19) are
no longer true. Since this behaviour is a source of instability (for h ! 0), Ascher and
Petzold [1] propose another version for the application of IRK methods to index-2
Hessenberg DAE's (2.18), the so-called Projected IRK (PIRK). Actually, after realizing
the standard internal stage computation, the recursion (4.15) is now replaced
by
s
s
and - ' is determined by
If we multiply (4.20) by I \Gamma H(t ' ), - ' can be eliminated:
s
s
On the other hand, (4.21) is equivalent to
It should be mentioned that for IRK(DAE) the projected version is exactly the same
as the original one, since (4.17) implies -
Considering (4.22) - (4.23) in association with the projector formulae (3.2), an
immediate generalization of PIRK methods to fully implicit linear index-2 systems
(2.1) is suggested by
s
s
Since the internal stages -
X 'j do not depend on Q-x '\Gamma1 , there is no need to compute
Q-x ' at this stage.
Now return to the standard IRK (4.15) - (4.16) and decouple (4.16) in the same
way as (2.12). For that, we decompose
A straightforward computation yieldsh
s
s
\GammaP
s
s
(4.26)h
s
s
s
The recursion (4.15) can be decomposed simply by multiplying by the projections:
s
s
s
s
s
s
s
s
s
s
Now, consider the homogeneous case, that is we set If the inhomogeneity q
vanish identically, then v does so, too. Moreover, all values V 'i are equal to zero.
However, if % 6= 0, this is no longer true for This means that, in general,
the resulting x ' has a nontrivial component in contrast to the exact solution
that fulfills Q
In more detail, (4.26) reduces toh
s
s
\GammaP
s
which supposedly approximates
Moreover, (4.27) yields
s
\GammaQ
s
for approximating
In the consequence, the following result holds true for IRK methods analogously to
Proposition 4.1 for the case of BDF methods:
Proposition 4.2. Let (2.1) be index-2 tractable with continuously differentiable
Q_1. Then the IRK method applied to (2.1) generates exactly the same IRK method
applied to the inherent regular ODE (4.30) if and only if PP_1(t) does not vary with
t.
For constant PP_1, the components of the solution of the homogeneous equation
are approximated at t_l by the recursions (4.39)-(4.42). Starting with a consistent
initial value x_0, the components v_l vanish step by step, too.
For IRK(DAE), (4.42) provides a nullspace component consistent with the obvious constraint;
that is, in the case of constant PP_1, the approximation x_l belongs to the solution
manifold M(t_l) given in Theorem 2.1.
Let us briefly turn to PIRK methods (4.24), (4.25). For homogeneous equations,
the decoupled system parts (4.32), (4.34) remain valid also for the "-" values.
Proposition 4.3. Proposition 4.2 is true for PIRK methods, too.
It should be mentioned that, for constant PP_1, PIRK methods yield a simpler expression
instead of (4.41). Ascher and Petzold [1] have not considered a recursion for the
component Qx̄_l for Hessenberg systems (2.19). Nevertheless, if one is interested in
approximations Qx̄_l, a recursion like (4.42) will come up again. In that case, the only
difference between PIRK and IRK methods is the determination of the Q_1-components
(versus (4.41)). Note again that PIRK and IRK are identical for IRK(DAE).
Next, concerning B-stability, the following assertion shows the notion of contractivity
given in Section 2 to be useful.
Theorem 4.4. Let (2.1) be index-2 tractable with continuously differentiable Q_1 and
constant PP_1. Then each algebraically stable IRK(DAE) applied to (2.1) is B-stable.
Proof. Denote m_ij := b_i a_ij + b_j a_ji - b_i b_j. Due to the algebraical stability,
M = (m_ij) is a positively semi-definite matrix.
Since we deal with linear DAE's only, it remains to show the inequalities of the B-stability
definition for |Px_l|_S and |Qx_l| in the case of the homogeneous equation (2.1).
Therefore the internal stages inherit the corresponding subspace properties. Additionally,
with an IRK(DAE) the internal stages belong to the relevant subspaces,
since the corresponding relation holds true for all t̃. Hence, using the contractivity (cf.
Section 2) we obtain one-sided estimates for the internal stages, i = 1, ..., s.
Now, following the standard lines, we compute |Px_l|_S <= |Px_{l-1}|_S by means of the
algebraic stability matrix M. Finally, x_l satisfies the required estimate for the
nullspace component as well.
It should be noted that Theorem 4.4 does not apply to PIRK. While the first
part, i.e., the estimate for |Px_l|_S, holds true analogously, the necessary relation for the
nullspace component is not given at all for ϱ ≠ 0.
5. A numerical counterexample. In the previous sections we have seen that
BDF and Runge-Kutta methods preserve their stability behaviour if PP 1 is constant.
The following example shows that these properties get lost if PP 1 varies with time.
Consider the DAE (5.1), where λ, η, β in R are constant. Note that (5.1) is an index-2
Hessenberg system. One easily computes the projections Q, Q_1 and PP_1, and, taking
this into account in (5.1), the inherent regular ODE (2.10) reads as (5.2).
The solution subspace M(t) (cf. Theorem 2.2) is given explicitly. Equation (5.2),
subject to consistent initial values (2.13), may be reduced to the scalar ODE (5.3),
together with the corresponding algebraic relations.
Consequently, the asymptotic stability of (5.2) is governed by the sign of λ (independently
of η in R). The parameter η measures the change of N_1(t). β serves only for
mixing the P component with the nullspace component. Now the complete solution
of (5.1) can be easily computed using (2.10) - (2.12). If x_0 in R^3 is a consistent initial
value at t_0, the solution of (5.1) is known in closed form. Equation (5.1)
was solved using the 5-step BDF (Fig. 5.1) and an algebraically stable 2-stage
Runge-Kutta method introduced by Crouzeix (cf. [5, p. 207]) with ρ ≈ -0.73 (Fig.
5.2). The figures show the norm of the numerical solution at the end of the
integration interval for different values of η and λ. Note that, for η = 0, (5.1) is
a constant coefficient system. The results indicate that the asymptotic behaviour of
the numerical solution depends not only on the asymptotic stability of the differential
equation (5.3) (controlled by λ), but also on the geometry of the problem (controlled
by η).
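The experiment can be repeated qualitatively with a synthetic rotating-subspace Hessenberg system; the system used below is an assumption and is not the paper's (5.1), whose coefficient matrices are not reproduced here. The sweep over (λ, η) mirrors the setup of Figures 5.1 and 5.2, although the numerical behaviour of this particular synthetic system need not coincide with the figures.

```python
import numpy as np

# Synthetic index-2 Hessenberg DAE with rotating constraint:
#   y' = lam*y + b12(t)*z,  0 = b21(t)*y,  b12(t) = b21(t)^T = (cos(eta t), sin(eta t))^T.
def run(lam, eta, h=0.01, nsteps=2000):
    t = 0.0
    y = np.array([-np.sin(eta * t), np.cos(eta * t)])   # consistent: b21(t0) y0 = 0
    for _ in range(nsteps):
        t += h
        c, s = np.cos(eta * t), np.sin(eta * t)
        b12 = np.array([c, s]); b21 = np.array([c, s])
        # backward Euler step: solve simultaneously for (y_new, z_new)
        M = np.zeros((3, 3))
        M[:2, :2] = np.eye(2) / h - lam * np.eye(2)
        M[:2, 2] = -b12
        M[2, :2] = b21
        rhs = np.concatenate([y / h, [0.0]])
        sol = np.linalg.solve(M, rhs)
        y = sol[:2]
    return np.linalg.norm(y)

for lam in (-1.0, 0.5):
    for eta in (0.0, 5.0):
        print(f"lambda={lam:5.2f}  eta={eta:4.1f}  |y(T)| = {run(lam, eta):.3e}")
```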
Appendix
Basic linear algebra lemma. A basic connection between the
spaces appearing in the tractability index and the choice of the corresponding projectors
is given by the following lemma, which may be directly obtained from Theorem
A.13. and Lemma A.14. in [4].
Lemma A.1. Let Q̄ in L(R^m) be a projector onto ker(Ā). Denote Ḡ := Ā + B̄Q̄ and
S̄ := { z in R^m : B̄z in im(Ā) }. Then the following conditions are equivalent:
(i) The matrix Ḡ is nonsingular.
(ii) R^m = S̄ ⊕ ker(Ā).
If Ḡ is nonsingular, then the relation Q̄Ḡ^{-1}B̄ = Q̄_s
holds for the canonical projector Q̄_s onto ker(Ā) along S̄.
Proof. (i) ! (ii) The space R m can be described as -
A), because
holds for any z 2 R m . z 2 obviously lies in ker( -
Q is a projector onto
A). For z 1 we obtain
S.
It remains to show that -
f0g. To this end, let x 2 -
A).
Qx holds and there exists a z 2 R m such that -
Qx and
Qx. Consequently,
This holds trivially by definition.
chosen such that -
Ax and so
S. On the other hand, -
Qx lies in ker( -
A). Thus, x 2
holds due to the
assumption. That means, -
Q). Then has to be true, and
G is nonsingular.
Because of the uniqueness of the decomposition ( ), the latter assertion follows
immediately.
--R
Projected implicit Runge-Kutta methods for differential-algebraic equations
Stability of computational methods for constrained dynamic systems
Numerical Solution of Initial-Value Problems in Differential-Algebraic Equations
Solving Ordinary Differential Equations I
Solving Ordinary Differential Equations II
Approximation von Algebro-Differentialgleichungen mit bereich Mathematik
Order results for implicit Runge-Kutta methods applied to differential algebraic systems
Nonlinear Differential Equations and Dynamical Systems
Stability investigations for index-2-systems
--TR
--CTR
Roswitha Mrz , Antonio R. Rodrguez-Santiesteban, Analyzing the stability behaviour of solutions and their approximations in case of index-2 differential-algebraic systems, Mathematics of Computation, v.71 n.238, p.605-632, April 2002
Hong Liu , Yongzhong Song, Stability of numerical methods for solving linear index-3 DAEs, Applied Mathematics and Computation, v.134 n.1, p.35-50, 10 January
I. Higueras , R. Mrz , C. Tischendorf, Stability preserving integration of index-1 DAEs, Applied Numerical Mathematics, v.45 n.2-3, p.175-200, May
I. Higueras , R. Mrz , C. Tischendorf, Stability preserving integration of index-2 DAEs, Applied Numerical Mathematics, v.45 n.2-3, p.201-229, May
Roswitha Mrz, Differential algebraic systems anew, Applied Numerical Mathematics, v.42 n.1, p.315-335, August 2002 | asymptotic properties;backward differentiation formulas;differential-algebraic equation;runge-kutta method;stability |
296420 | On Improving the Convergence of Radau IIA Methods Applied to Index 2 DAEs. | This paper presents a simple new technique to improve the order behavior of Runge--Kutta methods when applied to index 2 differential-algebraic equations. It is then shown how this can be incorporated into a more efficient version of the code {\sc radau5} developed by E. Hairer and G. Wanner. | Introduction
In recent years, differential algebraic equations (DAEs) have been studied by various authors (see
[HW91], [HLR89], [BCP91]), and their importance acknowledged by the development of specific solvers
such as DASSL from [Pet86] or RADAU5 from [HW91]. An especially important class of DAEs arising
in practice are semi-explicit systems of the form
y' = f(y, z), 0 = g(y), (S)
where g_y f_z is assumed to be of bounded inverse in a neighbourhood of the solution of (S). Here, we
are interested in obtaining a numerical approximation to (S) accurate for both the differential and the
algebraic components. Although some of the ideas presented in this paper also apply to more general
Runge-Kutta methods, we will focus on Radau IIA methods, given that they were used to build the
code RADAU5. Their construction as well as some of their properties are briefly recalled in Section 1.1.
When applying a s-stage Radau IIA method to (S), the orders of convergence are respectively
for the y-component and s for the z-component (see [HLR89]). In some situations, where getting an
accurate value of z may be important (in mechanics for instance), one is led to use a different approach.
Generally speaking, the order reduction phenomenon may be overcome by the following techniques:
(i) a first possibility consists in applying the Radau IIA method to the index one formulation (S̄) of (S),
obtained by replacing the constraint 0 = g(y) with the hidden constraint 0 = g_y(y)f(y, z).
Since Radau IIA methods applied to index one DAEs exhibit full order of convergence for y and
z (see Theorem 3-1 [HLR89]), the order of convergence is now 2s - 1 also for z. However, solving
(S̄) can be considerably more costly: as a matter of fact, this requires to evaluate the Jacobian
of the function g_y(y)f(y, z) at each step (or whenever the convergence rate of
the Newton iteration gets too small), instead of that of g(y). Another
drawback is that the numerical solution is not forced any longer to lie on the constraint manifold g(y) = 0.
(ii) a second idea consists in computing the z-component by solving the additional equation g_y(y)f(y, z) = 0,
i.e. in projecting the numerical solution on the so-called "hidden constraint". The corresponding
numerical scheme, denoted (S_n), augments the standard Radau IIA step with this projection.
The order of convergence for the y-component is still 2s - 1 and can be shown to be now
2s - 1 also for the z-component by using the Implicit Function Theorem. However, this technique is
once again computationally more demanding than the original one: solving the new implicit
part of (S n ) requires an accurate evaluation of g y (y) at each step, and not only, as previously
mentioned, whenever the convergence rate of the Newton iteration becomes too small. It can be
nevertheless noted that in a parallel environment, this additional cost would be shadowed by the
use of a second processor.
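A minimal sketch of the projection in (ii) is a Newton iteration on the hidden constraint with y frozen; the callables f, g_y, f_z and the function name project_z are assumptions and are not part of RADAU5 or of the paper's code.

```python
import numpy as np

def project_z(y, z0, f, g_y, f_z, tol=1e-12, maxit=20):
    """Newton iteration for g_y(y) f(y, z) = 0 with y held fixed."""
    z = np.array(z0, dtype=float)
    for _ in range(maxit):
        r = g_y(y) @ f(y, z)                # residual of the hidden constraint
        if np.linalg.norm(r) < tol:
            break
        J = g_y(y) @ f_z(y, z)              # Jacobian of the residual w.r.t. z
        z -= np.linalg.solve(J, r)
    return z
```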
In this paper, we present a third approach which does not require an analytical form of g y and whose
computational cost is basically equal to what it is for the standard formulation. It is based on the
observation that the errors in the z-component are essentially of a local nature, at least up to the
order of convergence of the y-component. As a consequence, making z more accurate is a matter of
recovering the significant terms that appear in the so-called "B-series of the error". This is made
possible by considering the composition of the basic method with itself over several steps. It can be
noted that similar ideas were used by R.P.K. Chan to deal with the order reduction of Gauss methods
when applied to certain stiff problems (see [BC93]).
The new order conditions will be determined in Section 2. While they can be derived in a straightforward
manner from the work of E. Hairer, C. Lubich and M. Roche ([HLR89]), they are still relatively
unknown since they are not satisfied by most classical Runge-Kutta methods (see also [BC93] or
[CC95]). It will then be shown that some of those conditions are actually redundant and may be
omitted. This is a crucial aspect of the method, since it allows the construction of formulas with a
manageable level of complexity.
In Section 3, the implied modifications to the code RADAU5 are listed. Finally, numerical results are
presented that illustrate the advantages of this new technique.
1.1 Basic properties of Radau IIA methods
Radau IIA methods can be defined by the usual s internal stage equations together with the final update.
A particular method R will be characterized by the triple (A, b, c), where A = (a_ij)_{i,j=1,...,s} is an s x s matrix,
b = (b_1, ..., b_s)^T an s-dimensional vector and c = (c_1, ..., c_s)^T an s-dimensional vector. In the sequel,
we will furthermore use the notation e := (1, ..., 1)^T in R^s.
1.1.1 Construction of Radau IIA methods
Their coefficients are uniquely determined by the following conditions:
1. c_1 < ... < c_s are the ordered zeros of the Radau right polynomial
d^{s-1}/dx^{s-1} ( x^{s-1} (x - 1)^s ).
2. The weights satisfy B(s): sum_{j=1}^{s} b_j c_j^{k-1} = 1/k, k = 1, ..., s.
3. The coefficients a_ij of the matrix A satisfy C(s): sum_{j=1}^{s} a_ij c_j^{k-1} = c_i^k / k, k = 1, ..., s, i = 1, ..., s.
As the c_i are all distinct, A is non-singular.
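The three conditions above determine (A, b, c) completely and can be evaluated numerically as in the following sketch (our own code, not part of RADAU5).

```python
import numpy as np
from numpy.polynomial import polynomial as P

def radau_iia(s):
    """Coefficients (A, b, c) of the s-stage Radau IIA method from the conditions above."""
    # c: zeros of d^{s-1}/dx^{s-1} [ x^{s-1} (x - 1)^s ]
    poly = P.polymul(P.polypow([0.0, 1.0], s - 1), P.polypow([-1.0, 1.0], s))
    c = np.sort(np.real(P.polyroots(P.polyder(poly, s - 1))))
    k = np.arange(1, s + 1)
    V = np.vander(c, increasing=True).T              # V[k-1, j] = c_j^{k-1}
    b = np.linalg.solve(V, 1.0 / k)                  # B(s)
    A = np.linalg.solve(V, c[None, :] ** k[:, None] / k[:, None]).T   # C(s), row-wise
    return A, b, c

A, b, c = radau_iia(3)
print(c)            # approx [0.15505, 0.64495, 1.0]
print(A[-1] - b)    # last row of A equals b: stiff accuracy
```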
1.1.2 Some useful properties
We will refer here to the additional simplifying assumptions D(ξ) introduced, as were B(p) and C(η) of the
previous subsection, by J.C. Butcher (see [HW91], page 75).
1. Due to the conditions C(s), B(s) and c_s = 1, the methods are stiffly accurate (i.e.,
the vectors b and (a_{s1}, ..., a_{ss})^T are solutions of the same Cramer
system).
2. They are collocation methods (Theorem 7-8 [HNW91], page 212).
3. B(2s - 1) is satisfied.
4. D(s - 1) is satisfied (Lemma 5-4 [HW91], page 78).
Optimal convergence results have been obtained for those methods by J.C. Butcher on the one hand
(Theorem 5-3 of [HW91]) and E. Hairer, C. Lubich and M. Roche on the other hand (Theorems 3-1,
4-4 and 4-6 of [HLR89]). They are collected in Table 1.
Table 1: Optimal global error estimates for the s-stage Radau IIA method (order 2s - 1 for the y-component and order s for the z-component of the index-2 problem (S)).
1.2 Increasing the order of convergence of the z-component
When applying an s-stage Radau IIA method (A, b, c) to the system (S), we obtain internal stages and the update
z_{n+1} = (1 - b^T A^{-1} e) z_n + b^T A^{-1} Z_n, (4)
where Z_n collects the internal algebraic stages and the entries of A^{-1} are the coefficients ω_ij.
In (4), the z_n-term vanishes because b^T A^{-1} e = 1. Let us
now replace the vector b^T A^{-1} by an adjustable vector w in (4). By doing so,
we define a new method R_w. It is easily seen that the order of convergence of the y-component
remains unchanged. As for the z-component, the lack of accumulation makes the errors purely local.
The convergence behaviour of the z-component is thus entirely determined by the following order
conditions from Theorems 8-6 and 8-8 of [HW91].
Proposition 1 Let δ_y and δ_z be the local errors respectively for the y and z components of a Runge-Kutta
method. Then δ_y and δ_z admit expansions over the tree sets DAT2_y and DAT2_z,
where Φ is a vectorial function and γ, ρ scalar functions associated with
the trees¹.
Nevertheless, w has not enough components to allow for an order of convergence greater than s
(b^T A^{-1} is the optimal vector). Hence, to get sufficient freedom, we need to consider the
composition R^σ of R over σ steps. As variable steps h_i are considered, we also have to
introduce the step-size ratios r_i. The composite method R^σ is
characterized by the triple (A, B, C) where
• A is the block matrix (A_ij)_{i,j=1,...,σ} with blocks built from A, eb^T and the step-size ratios,
1 See [HW91] for a definition of these notions.
• B is the block vector (B_i)_{i=1,...,σ} with blocks built from b and the ratios,
• C is the block vector (C_i)_{i=1,...,σ} with blocks built from c and the vector e.
Replacing the analogous vector B^T A^{-1} by an adjustable vector w offers σ · s degrees of freedom, i.e., hopefully enough
for σ > 1 to increase the order of convergence of the z-component. It is our aim now to show how to
construct w and how to implement the new method R_{σ,w}.
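Schematically, the modified z-update then looks as follows; we assume here, as Section 3.1 suggests, that z_{n+1} is formed as a linear combination of the σ·s stored internal algebraic stages with the weight vector w (all names in the sketch are ours).

```python
import numpy as np

def improved_z(stage_z_history, w):
    """Recombine the sigma*s stored internal algebraic stages with the weight vector w.
    Schematic only: the actual computation of w (subroutine vectw2) depends on the
    step-size ratios and is redone at every accepted step."""
    Z = np.concatenate(stage_z_history, axis=0)   # shape (sigma*s, dim z), oldest step first
    return w @ Z

# usage sketch for sigma = 3 and the 3-stage method (9 stored stage values):
# z_new = improved_z([Z_nm2, Z_nm1, Z_n], w)      # w from the order conditions of Section 2
```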
2 Construction of the vector w
In this section, the effective construction of w is described. It should be emphasized that its components
depend on the ratios r_i, forcing one evaluation of w per step. However, these additional computations
become negligible as soon as the dimension of the system (S) is large enough.
2.1 Order conditions
The conditions for order k are enumerated below together with the associated trees of DAT2_z.
For order s + 1, the conditions associated with the corresponding trees are required. Let U_s be
AC^s - C^{s+1}/(s + 1). If condition (s) is satisfied, then (C_{s+1}) is equivalent to a linear condition on w involving U_s.
For order s + 2, further conditions are required.² Let U_{s+1} be
AC^{s+1} - C^{s+2}/(s + 2). If the preceding conditions are satisfied, then (C_{s+2}) is equivalent to a linear condition on w involving U_{s+1}.
2 The dot stands for the componentwise product.
For order s + 3, additional conditions associated with the corresponding trees are required. Let U_{s+2} be
AC^{s+2} - C^{s+3}/(s + 3). If condition (s + 7) is satisfied, then (C_{s+3}) is equivalent to a linear condition on w involving U_{s+2}.
These conditions are obtained by Proposition 1 and by using simplifying assumptions (it is important
to note that the composite method R oe satisfies B(2s \Gamma 1), C(s) and D(s \Gamma 1)).
2.2 Preliminary calculus
In order to later simplify the equations for w, we now state some basic results.
eb T , then F
Proof: By definition, we have eb T A \Gamma1 . As the method c) is stiffly accurate,
ve T
s where ve T
eb
ve T
ve T
ve T
ve T
be the s \Theta s blocks of the matrix A \Gamma1 , then
PI n-976
8 A. Aubry , P. Chartier
Proof: By definition, AA
Lemma 2 is then easily proved by induction on i. 2
oe;n
be the oe \Theta s dimensional vector AC
n+1 C n+1 and u n be the
s dimensional vector
is given by
k=s
r k+1
Proof: Let n be less than or equal to 2s \Gamma 2. For all
S i;n can be expanded as follows
r k+1
r k+1
e:
Owing to C(s), we have
k=s
r k+1
e:
Similarly, T i;n can be expanded as
l
r l
implies that
and as n - 2s \Gamma 2, we obtain
r l+1
e
e
e
On improving the convergence of Radau IIA methods applied to index 2 DAEs 9
Lemma 4 For all n less than
Proof: This follows at once from B(2s \Gamma 1). 2
Lemma 5 Let
oe;n
be the vector A \Gamma1 U n , then for all integer n -
by
k=s
Proof: By definition, X
A \Gamma1 U i;n . Using
Lemma 3, we obtain
k=s
r k+1
k=s
and we use Lemma 4 to complete the proof. 2
Lemma 6 For all n less than 2s
Proof: This follows straightforwardly from the order conditions for the trees
z -
z -
1. Let
oe;n
be the vector C:A
k=s
r k+1
2. Let
oe;n
be the vector A \Gamma1 (C:U n );
k=s
r k+1
Proof: By definition, Y and the first part of the result is obtained by applying
Lemma 5. Now, let
oe;n
denote the vector C:U n . From Lemma 3, we can write T i;n as
k=s
r k+2
We have furthermore Z
so that Lemma 2 leads to
k=s
s n\Gammak
s n+1\Gammak
k=s
r k+1
PI n-976
A. Aubry , P. Chartier
The result then becomes a consequence of lemmas 4 and 6. 2
Lemma 8 For all n less than 2s
Proof: This follows from the order conditions for the trees
z -
z -
z -
Proof: The results follow from the order conditions for the trees [[-; [
z -
z -
z -
z -
z -
2.3 Results for the 2-stage method
In order to get a third-order method, w has to satisfy the following linear system (S L;2
Taking leads to a system with 4 equations and 4 unknowns. For convenience, we recall below
the coefficients of the 2-stage Radau IIA method,
R =312
The matrix M corresponding to (S L;2 ) is then of the form
and we have
Hence, for all r 1 2 (0; 1), M is non-singular and (S L;2 ) has the following unique solution
Irisa
On improving the convergence of Radau IIA methods applied to index 2 DAEs 11
2.4 Results for the 3-stage method
In order to get a fifth-order method, w must satisfy the following linear system (S L;3
Taking now leads to a system with ten equations but only nine unknowns. However, we will show
in this section, that one of these equations is identically satisfied. Collecting all results of Section 2.2,
we get
r 4
r 4
r 4
3
A
r 4
r 4
r 4
r 4
r 4
r 4
r 4
r 4
r 4
It is found that
r 4
r 4
r 4
r 4
r 4
r 4
and the system (S L;3 ) is equivalent to
Theorem 1 The vectors V_1, V_2 and U_3 are linearly dependent.
Proof: It is enough to show that v_1, v_2 and u_3 are linearly dependent. Since (A, b, c) is of order
5 for the differential component, we have b^T u_3 = 0, and the result follows. 2
the vectors A \Gamma1 U 3 , U 3 and V 1 (for example) become linearly
dependent. To prove this, it is sufficient to show that A \Gamma1 u 3 , u 3 and v 1 are dependent. As b T A \Gamma1 u
PI n-976
A. Aubry , P. Chartier
0, we can conclude as in Theorem 1. In this case, w depends on one parameter
that can be chosen so as to minimize the quantity
where DAT2 z 5g. This seems a natural goal to achieve, since this is an
attempt to minimize the local error. For convenience, Table 2 collects the trees of DAT2 z (5) and the
values of the associated functions ff, fl and \Phi. To compute the ff's, we refer to [Hig93].
tree u ff(u) fl(u) \Phi(u)
Table
2: Trees of DAT2 z (5) and their associated functions
2.5 Sketch of the case
To achieve order 7 for the algebraic component, we have to solve the system (S L;4 ) composed of (C 4 ),
Section 2.1), that is to say 24 equations. Comparing the number of equations
and the number of unknown, we could consequently think of taking In fact, it can be shown
that is sufficient, since 5 equations are identically satisfied (see Section 6.1). However, it does
not seem reasonable any more to consider a practical implementation of the corresponding method,
owing to the complexity of the formulas for variable stepsize.
Remark 2 If the stepsize is constant (i.e. r equations are identically satisfied
(see Section 6.1), and oe can be chosen equal to 4.
3 Modifications to the code RADAU5
The 3-stage Radau IIA method has been implemented by E. Hairer and G. Wanner in order to solve
problems of the form MY' = F(X, Y); DAEs of index less than or equal to three can be solved
by this code, called RADAU5. A precise description is given in Section IV-8 of [HW91] and we will adopt
the notations used there. Implementing our method requires slight modifications to the subroutines
"radcor" and "solout", which are actually replaced respectively by "radcorz" and "soloutz".
3.1 Modifications to radcor
Only the computation of the algebraic component (z) is modified (if an index 2 DAE is solved). Once
the n th step has been accepted, two cases are considered:
1. Fewer than three steps have been computed. Then, we keep the internal stages z_{n,1}, z_{n,2}, z_{n,3} and
the step size h_n. The value of (y_{n+1}, z_{n+1}) is the one produced by the standard method.
2. Three or more steps have been computed. Then, we keep the internal stages z_{n,1}, z_{n,2}, z_{n,3} and
the step size h_n. The ratios r_1 and r_2 are computed from the last three step sizes,
and w by the subroutine "vectw2". For (y_{n+1}, z_{n+1}) we put the values given by the modified
formula, i.e., y_{n+1} as in the standard method and z_{n+1} built from the stored internal stages and w.
Remark 3 For continuous outputs, we need also to keep the internal stages over two steps for the
differential components (see section 3.2.2).
3.2 New subroutines
3.2.1 Vectw2
In order to compute the formal expression of w and to create the associated fortran subroutine, the
manipulation package Maple was used. A call to vectw2 uses the format vectw2(icas,vw,r1,r2),
where the inputs are one of the five cases described below (icas) and the parameters r 1 , r 2 (r1,r2).
The output is the vector w (vw). Five cases are considered:
1. 1. In this case, we have seen that w depends on one parameter. It is optimized
as explained in Remark 1.
2. r
3. r
4. r
5. r 1
This allows us to reduce the cost of computation and to eliminate computational problems: had we
used the general expression of w (case 5), divisions by zero would have occured in the cases 1 to 4.
3.2.2 Continuous outputs
In the code Radau5, the subroutine "solout" provides the user with approximations at equidistant
output-points. The corresponding interpolation formulas are implemented in the subroutine "contr5".
"contr5(I,x)" gives an approximation U I (X) to the I th component of the solution Y at the point x
should lie in the interval [x n ; x n+1 ]). U is the collocation polynomial: it is of degree 3 and defined by
3:
14 A. Aubry , P. Chartier
For index 2 DAEs, are polynomials of degree 3 which satisfy
3:
By Theorem 7-8 ([HW91]), we have
As our aim is to increase the order of convergence for the algebraic component, it seems natural to
search for an approximation P
Approximation of z(x)
Let x be of the form x n\Gamma2 We define q as follows
where the vector satisfies the linear system
Using the notations of section 2.4, we have
Proposition 2 For all ' 2 (0; 1], S z
L;3 (') possesses a solution and
Proof: From Theorem 8-5 and 8-6 in [HW91], we have
According to the analysis of section 2.1, this leads to the system S z
L;3 (') which, by Theorem 1, possesses
a solution. 2
The subroutine "vectwz" computes w('). As for the vector w, five cases are considered. When h
depends on one parameter (case 1) which is not adjusted as in Remark 1. In
this case, the value defined by continuity for w(') is choosen. A call to "vectwz" uses the format
vectwz(icas,vwz,r1,r2,t), where the inputs are one of the five cases described before (icas) and
the parameters r 1 , r 2 , ' (respectively r1,r2,t). The output is the vector w(') (vwz).
Approximation of y(x)
Let x be of the form x
where the vector satisfies the linear system
C) is the Runge-Kutta method R 2 obtained by the composition of the 3-stage Radau IIA
method c) over two steps,
re
and E is the vector (z -
Proposition 3
Proof: Let us introduce the vector and the following polynomial
u:
From Theorem 8-5 and 8-6 in [HW91], we have
Using the points of the internal stages, it follows
so that (P ) is equivalent to the system S y
The subroutine "vectwy" computes B(j). A call to "vectwy" uses the format vectwy(vwy,r,t),
where the inputs are the parameters r and j (respectively r,t) and the output the vector B(j) (vwy).
The subroutine "soloutz" provides the user with approximations (p(x i
out
out
out
out
at equidistant output-points
out ) i=1;\Delta\Delta\Delta;N . For the differential component, x i
out is of the form x
choosen so as to satisfy x
out - x n+1 and for the algebraic one, x i
out is
of the form x n\Gamma2 is choosen in satisfy x
out - x n . A call to
"soloutz" uses the format
where the inputs are the number of accepted steps (nr), x n (xold), x n+1 (x), (y n+1 ; z n+1 ) (y), the
system's dimension (neq), one of the five cases described before for the computation of w(') (icas),
the parameters r1, r2 (r1,r2), the stepsize h n\Gamma2
variable (last) to indicate if the last computational step has been reached.
Figure 1: Precision versus computing time - algebraic components - test problem
4 Numerical experiments
4.1 Test problem
We consider the index two problem:
where \Psi is the following infinitely smooth function:
with consistent initial values. The exact solution is
In both codes, we set
(work(4) is the parameter - in the stopping criterion for Newton's method. work(4), work(5) are
the parameters c 1 , c 2 in the stepsize control, see [HW91], page 130-134).
In figure 1, we plot the cpu time against the number of significant digits (-log_10(absolute error)) of
the algebraic components, for both codes. For this, we use continuous outputs: outputs are required
at equidistant points, where we compute the global error and then take the maximum over all values. In figure 2,
we plot the cpu time against the number of significant digits of the differential components, for both
codes.
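A minimal Python sketch of this accuracy measure is given below; it is for illustration only, the
arrays holding the computed and exact values at the output points are assumed names, and the actual
experiments are of course run with the Fortran codes themselves.

import numpy as np

def significant_digits(approx, exact):
    """Number of significant digits, defined as -log10 of the maximum
    absolute global error over all output points and components."""
    err = np.max(np.abs(np.asarray(approx) - np.asarray(exact)))
    return -np.log10(err)

# Hypothetical usage: z_radau5 and z_modified hold the algebraic components
# returned by the two codes at the equidistant output points, and z_exact the
# exact solution evaluated at the same points.
# digits_radau5   = significant_digits(z_radau5, z_exact)
# digits_modified = significant_digits(z_modified, z_exact)
# Each (cpu time, digits) pair then gives one point of the work-precision diagram.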
In the following problems, only the modified parameters will be shown.
Figure 2: Precision versus computing time - differential components - test problem
4.2 Pendulum
The simplest constrained mechanical system is the pendulum, whose equations of motion are described
in [HW91], pages 483-485. We have applied the code Radau5 and the modified code to the GGL (Gear,
Gupta and Leimkuhler) formulation
with consistent initial values. For
simplicity, we took ...
In figure 3 (respectively 4), we plot the cpu time against the number of significant digits of the
algebraic (resp. differential) components, for both codes.
4.3 Multibody mechanism
A seven body mechanism is described in [HW91], page 531-545. We have applied the code Radau5
and the modified code to the index 2 formulation
with consistent initial values.
In figure 5 (respectively 6), we plot the cpu time against the number of significant digits of the algebraic
(resp. differential) components, for both codes. Here, outputs are required at t
4.4 Discharge pressure control
This simplified model of a dynamic simulation problem in petrochemical engineering is described
in [HLR89], page 116-118. We have applied the code Radau5 and the modified code to the following
Figure 3: Precision versus computing time - algebraic components - pendulum
Figure 4: Precision versus computing time - differential components - pendulum
Figure 5: Precision versus computing time - algebraic components - seven body mechanism
Figure 6: Precision versus computing time - differential components - seven body mechanism
Figure 7: Precision versus computing time - algebraic components - discharge pressure control
Figure 8: Precision versus computing time - differential components - discharge pressure control
with consistent initial values. Here, the initial step size h is equal to 10^{-5}.
In figure 7 (respectively 8), we plot the cpu time against the number of significant digits of the
algebraic (resp. differential) components, for both codes. Here, outputs are required at t
5 Conclusion
A new simple technique to overcome the order reduction phenomenon, appearing for the algebraic
component when Radau IIA methods are applied to index two DAEs, is proposed.
Increasing the order of convergence of z is made possible by considering the composition of the basic
method with itself over oe steps. As z n+1 is defined in the basic method as a linear combination of the
internal stages, a good choice of oe should provide enough freedom for the order conditions associated
with the composite method to be satisfied. We have determined these order conditions which derive
straightforwardly from the work of E. Hairer, C. Lubich and M. Roche (section 2.1). Then we have
shown that some of those conditions are redundant and might be omitted for s-stage Radau IIA methods
with s - 4 (section 2.3 to 2.5).
It could be interesting to generalize these simplifications to other Radau IIA methods. A general question
would be to determine how many compositions of an s-stage Radau IIA method have to be considered to
obtain an order of convergence equal to 2s \Gamma 1 for the algebraic component. However, it does not seem
reasonable any more to consider a practical implementation of the corresponding method for s - 4,
owing to the complexity of the formulas for variable stepsize.
The formulas for the 3-stage method have been incorporated in the code Radau5 developed by E. Hairer and G.
Wanner. Slight modifications were required (section 3). Only the computation of the algebraic component
was modified, and a new procedure for continuous output was created, in which we
have used our technique for both components (algebraic and differential) to compute approximations
of order five at equidistant output-points.
According to our numerical experiments, results for the differential components are disappointing.
However, the use of our technique in the code Radau5 leads to an increase of the accuracy for the
algebraic components (when tolerances are sufficiently small).
6 Appendix
6.1 Construction of the vector w
In this section, we explain the linear-algebra computations used to show that composing the s-stage
Radau IIA method five times is sufficient in the case considered. We expand the following vectors
of (C^6) (we use Lemma ...):
A
A
A
and the following vectors of (C 7 ) (We use Lemma
A
r 6
r 6
A
A
A
A
r 6
r 6
r 6
r 6
Let us introduce the following vectors
(idem for the vectors t_i):
r 6
r 6
r 6
r 6
r 6
r 6
r 6
and the system (S_{L,4}) is equivalent to
Proposition 4
1. are linearly dependent.
2. are linearly dependent.
3. are linearly dependent.
4. are linearly dependent.
5. are linearly dependent.
Proof: Because of the expression of the vectors, it is sufficient to show that the vectors of part 1
are linearly dependent (idem for parts 2 to 5 of the proposition). The method
is of local order 8 for the differential component. Thus, the order conditions associated with
the trees of DAT2_y(7) are satisfied. In particular, ...
Finally, we obtain ...,
but b ≠ 0, hence the proposition is shown. □
Remark 4 If the step size is constant (i.e. r
1. are linearly dependent.
2. are linearly dependent.
3. are linearly dependent.
4. are linearly dependent.
Hence, nine equations are identically satisfied and p can be chosen equal to four.
--R
On smoothing and order reduction effects for implicit Runge-Kutta formulae
Numerical solution of initial value problems in differential-algebraic equations
A Composition Law for Runge-Kutta Methods Applied to Index-2 Differential-Algebraic Equations
Coefficients of the Taylor expansion for the solution of differential-algebraic systems
The Numerical Solution of Differential Algebraic Systems by Runge-Kutta Methods
Solving Ordinary Differential Equations (Vol
Stiff Problems and Differential Algebraic Problems (Vol.
A description of DASSL: A differential/Algebraic System Solver.
--TR
--CTR
Frank Cameron , Mikko Palmroth , Robert Pich, Quasi stage order conditions for SDIRK methods, Applied Numerical Mathematics, v.42 n.1, p.61-75, August 2002
Frank Cameron, A Matlab package for automatically generating Runge-Kutta trees, order conditions, and truncation error coefficients, ACM Transactions on Mathematical Software (TOMS), v.32 n.2, p.274-298, June 2006 | rooted trees;runge-kutta methods;composition;differential-algebraic systems of index 2;Radau IIA methods;simplifying assumptions |
296955 | Rapid Concept Learning for Mobile Robots. | Concept learning in robotics is an extremely challenging problem: sensory data is often high-dimensional, and noisy due to specularities and other irregularities. In this paper, we investigate two general strategies to speed up learning, based on spatial decomposition of the sensory representation, and simultaneous learning of multiple classes using a shared structure. We study two concept learning scenarios: a hallway navigation problem, where the robot has to induce features such as opening or wall. The second task is recycling, where the robot has to learn to recognize objects, such as a trash can. We use a common underlying function approximator in both studies in the form of a feedforward neural network, with several hundred input units and multiple output units. Despite the high degree of freedom afforded by such an approximator, we show the two strategies provide sufficient bias to achieve rapid learning. We provide detailed experimental studies on an actual mobile robot called PAVLOV to illustrate the effectiveness of this approach. | Introduction
Programming mobile robots to successfully operate in unstructured environments,
including offices and homes, is tedious and difficult. Easing this programming burden
seems necessary to realize many of the possible applications of mobile robot
technology [7]. One promising avenue towards smarter and easier-to-program robots
is to equip them with the ability to learn new concepts and behaviors. In partic-
ular, robots that have the capability of learning concepts could be programmed
or instructed more readily than their non-learning counterparts. For example, a
robot that could be trained to recognize landmarks, such as "doors" and "intersec-
tions", would enable a more flexible navigation system. Similarly, a recycling robot,
which could be trained to find objects such as "trash cans" or "soda cans", could
be adapted to new circumstances much more easily than non-learning robots (for
example, new objects or containers could be easily accommodated by additional
training).
Robot learning is currently an active area of research (e.g. see [5], [6], [9], [16]).
Many different approaches to this problem are being investigated, ranging from
supervised learning of concepts and behaviors, to learning behaviors from scalar
feedback. While a detailed comparison of the different approaches to robot learning
is beyond the scope of this paper (see [17]), it is arguable that in the short term,
robots are going to be dependent on human trainers for much of their learning.
Specifically, a pragmatic approach to robot learning is one where a human designer
provides the basic ingredients of the solution (e.g. the overall control architec-
ture), with the missing components being filled in by additional training. Also,
approaches involving considerable trial-and-error, such as reinforcement learning
[25], are difficult to use in many circumstances, because they require long training
times, or because they expose the robot to dangerous situations. For these reasons,
we adopt the framework of supervised learning, where a human trainer provides
the robot with labeled examples of the desired concept.
Supervised concept learning from labeled examples is probably the most well-studied
form of learning [20]. Among the most successful approaches are decision
trees [23] and neural networks [19]. Concept learning in robotics is an extremely
challenging problem, for several reasons. Sensory data is often very high-dimensional
(e.g. even a coarsely subsampled image can contain millions of pixels),
noisy due to specularities and other irregularities, and typically data collection
requires the robot to move to different parts of its environment. Under these con-
ditions, it seems clear that some form of a priori knowledge or bias is necessary for
robots to be able to successfully learn interesting concepts.
In this paper, we investigate two general approaches to bias sensory concept
learning for mobile robots. The first is based on spatial decomposition of the sensor
representation. The idea here is to partition a high-dimensional sensor represen-
tation, such as a local occupancy grid or a visual image, into multiple quadrants,
and learn independently from each quadrant. The second form of bias investigated
here is to learn multiple concepts using a shared representation. We investigate the
effectiveness of these two approaches on two realistic tasks, navigation and recy-
cling. Both these tasks are studied on a real robot called PAVLOV (see Figure 1).
In both problems, we use a standardized function approximator, in the form of a
feedforward neural net, to represent concepts, although we believe the bias strategies
studied here would be applicable to other approximators (e.g. decision trees
or instance-based methods).
In the navigation task, PAVLOV is required to traverse across an entire floor of
the engineering building (see Figure 10). The navigational system uses a hybrid
two-layered architecture, combining a probabilistic planning and execution layer
with a reactive behavior-based layer. The planning layer requires the robot to map
sensory values into high-level features, such as "doors" and "openings". These
observations are used in state estimation to localize the robot, and are critical to
successful navigation despite noisy sensing and actions. We study how PAVLOV
can be trained to recognize these features from local occupancy grid data. We also
show that spatial decomposition and multiple category learning provide a relatively
rapid training phase.
In the recycling task, PAVLOV is required to find items of trash (e.g. soda cans
and other litter) and deposit them in a specified trash receptacle. The trash receptacles
are color coded, to make recognition easier. Here, we study how PAVLOV
can be trained to recognize and find trash receptacles from color images. The data
is very high dimensional, but once again, spatial decomposition and multi-category
learning are able to sufficiently constrain the hypothesis space to yield fast learning.
The rest of the paper is organized as follows. We begin in Section 2 by describing
the two robotics tasks where we investigated sensory concept learning. Section 3
describes the two general forms of bias, decomposition and sharing, used to make
the concept learning problem tractable. Section 4 describes the experimental results
obtained on a real robot platform. Section 5 discusses the limitations of our
approach and proposes some directions for further work. Section 6 discusses some
related work. Finally, Section 7 summarizes the paper.
2. Two Example Tasks
We begin by describing the real robot testbed, followed by a discussion of two
tasks involving learning sensory concepts from high-dimensional sensor data. The
philosophy adopted in this work is that the human designer specifies most of the
control architecture for solving the task, and the purpose of sensory concept learning
is to fill in some details of the controller.
2.1. PAVLOV: A Real Robot
Figure 1 shows our robot PAVLOV 1 , a Nomad 200 mobile robot base, which was
used in the experiments described below. The sensors used on PAVLOV include
ultrasound sonar and infra-red (IR) sensors, arranged radially in a ring. Two sets
of bumper switches are also provided. In addition, PAVLOV has a color camera
and frame-grabber. Communication is provided using a wireless Ethernet system,
although most of the experiments reported in this paper were run onboard the
robot's Pentium processor.
Figure 1. The experiments were carried out on PAVLOV, a Nomad 200 platform.
2.2. Navigation
Robot navigation is a very well-studied topic [1]. However, it continues to be
an active topic for research since there is much room for improvement in current
systems. Navigation is challenging because it requires dealing with significant sensor
and actuator errors (e.g. sonar is prone to numerous specular errors, and odometry
is also unreliable due to wheel slippage, uneven floors, etc.).
We will be using a navigation system based on a probabilistic framework, formally
called partially-observable Markov decision processes (POMDP's) [4], [13], [21].
This framework uses an explicit probabilistic model of actuator and sensor uncer-
tainty, which allows a robot to maintain belief estimates of its location in its envi-
ronment. The POMDP approach uses a state estimation procedure that takes into
account both sensor and actuator uncertainty to determine the approximate location
of the robot. This state estimation procedure is more powerful than traditional
state estimators, such as Kalman filters [14], because it can represent discontinuous
distributions, such as when the robot believes it could be in either a north-south
corridor or an adjacent east-west corridor.
For state estimation using POMDP's, the robot must map the current sensor
values into a few high level observations. In particular, in our system, the robot
generates 4 observations (one for each direction). Each observation can be one
of four possibilities: door, wall, opening, or undefined. These observations are
generated from a local occupancy grid representation computed by integrating over
multiple sonar readings.
Figure 2 illustrates the navigation system onboard PAVLOV, which combines a
high level planner with a reactive layer. The route planner and execution system
used is novel in that it uses a discrete-event probabilistic model, unlike previous
approaches which use a discrete-time model. However, as the focus of this paper is
on learning the feature detectors, we restrict the presentation here to explaining the
use of feature detectors in state estimation, and refer the reader to other sources
for details of the navigation system [11], [18].
The robot maintains at every step a belief state, which is a discrete probability
distribution on the underlying state space (e.g. in our environment, the belief state
is a 1200-dimensional vector). If the current state distribution is α_prior, the state
distribution α_post, after the execution of an abstract action a, is given by 2

   α_post(s') = (1/scale) Σ_{s∈S} T(s'|s, a) α_prior(s),   ∀ s' ∈ S        (1)

This updated state distribution now serves as α_prior when the state distribution
is updated to α_post, after an abstract observation o:

   α_post(s) = (1/scale) O(o|s) α_prior(s),   ∀ s ∈ S        (2)
In both updates, scale is a normalization constant that ensures that
Σ_{s∈S} α_post(s) = 1.
This is necessary since not every action is defined in every state (for example, the
action go-forward is not defined in states where the robot is facing a wall).
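A minimal Python sketch of updates (1) and (2) is given below; the array encodings of the transition
model T and the observation likelihoods O, and the three-state example values, are illustrative
assumptions rather than the actual models used on PAVLOV.

import numpy as np

def belief_after_action(alpha_prior, T_a):
    """Equation (1): alpha_post(s') is proportional to sum_s T_a[s, s'] * alpha_prior[s].
    T_a[s, s'] is the probability of reaching s' when abstract action a is
    executed in state s (rows of T_a need not sum to 1, since not every
    action is defined in every state)."""
    alpha_post = alpha_prior @ T_a
    scale = alpha_post.sum()
    return alpha_post / scale

def belief_after_observation(alpha_prior, O_o):
    """Equation (2): alpha_post(s) is proportional to O_o[s] * alpha_prior[s],
    where O_o[s] is the probability of making abstract observation o in state s."""
    alpha_post = O_o * alpha_prior
    scale = alpha_post.sum()
    return alpha_post / scale

# Example with a tiny 3-state model (values are illustrative only).
alpha = np.array([0.6, 0.3, 0.1])
T_forward = np.array([[0.1, 0.8, 0.1],
                      [0.0, 0.2, 0.8],
                      [0.0, 0.0, 1.0]])
O_wall_ahead = np.array([0.05, 0.2, 0.9])
alpha = belief_after_action(alpha, T_forward)
alpha = belief_after_observation(alpha, O_wall_ahead)
print(alpha)   # updated 3-dimensional belief state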
2.3. Abstract Observations
In each state, the robot is able to make an abstract observation. This is facilitated
through the modeling of four virtual sensors that can perceive features in the nominal
directions front, left, back and right. Each sensor is capable of determining if
a percept is a wall, an opening, a door or if it is undefined. An abstract observation
is a combination of the percepts in each direction, and thus there are 256 possible
abstract observations. The observation model specifies, for each state and action,
the probability that a particular observation will be made.
Denote the set of virtual sensors by I and the set of features that sensor i ∈ I
can report on by Q(i). The sensor model is specified by the probabilities v_i(f|s)
for all i ∈ I, f ∈ Q(i), and s ∈ S, encoding the sensor uncertainty. v_i(f|s) is the
probability with which sensor i reports feature f in state s. An observation o is the
aggregate of the reports from each sensor. This is not explicitly represented. We
calculate only the observation probability. Thus, if sensor i reports feature f_i, then

   O(o|s) = Π_{i∈I} v_i(f_i|s).

Given the state, this assumes sensor reports from different sensors are independent.
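Assuming the probabilities v_i(f|s) are stored in a nested table indexed by sensor, state and feature
(a hypothetical layout, as are the feature names used in the comment), the factored observation
probability can be sketched as:

def observation_probability(v, observation, state):
    """O(o|s) = product over sensors i of v_i(f_i | s), assuming conditional
    independence of the sensor reports given the state.

    v[sensor][state][feature] is the probability that the virtual sensor
    reports 'feature' in 'state'; 'observation' maps each sensor to its
    reported feature, e.g. {'front': 'opening', 'left': 'wall',
                            'back': 'opening', 'right': 'door'}."""
    prob = 1.0
    for sensor, feature in observation.items():
        prob *= v[sensor][state].get(feature, 0.0)
    return prob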
Figure 2. A hybrid declarative-reactive architecture for robot navigation: a planning layer and a
behavior-based layer exchange action commands/reports, with neural net feature detectors converting
raw sensor values and local occupancy grids into sensor reports, and motor commands driving the base.
The neural net feature detectors (shaded box) are trained using spatial decomposition and multi-task
learning.
2.4. Recycling
The second task we study is one where the robot has to find and pick up litter
lying on the floor (e.g. soda cans and other junk) and deposit it in a colored trash
receptacle (see Figure 3). This task involves several component abilities, such as
locating and picking up the trash, and also subservient behaviors (such as avoiding
obstacles, etc.). However, for the purposes of this paper, we will mainly focus on
the task of detecting a trash can from the current camera image, and moving the
robot till it is located adjacent to the trash can.
Figure
3. Image of a trash can, which is color coded to facilitate recognition (this can is colored
yellow).
Figure 4. Behavior-based architecture for the recycling task (behaviors such as camera turn, bump,
and avoid map the sensors and the camera to the motors/turret). The focus of sensory concept learning
here is to improve the "camera turn" behavior by learning how to detect and move towards trash
cans.
The recycling task is accomplished using a behavior-based architecture [2], as
illustrated in Figure 4. Only one of the behaviors, "camera turn" is improved by
the sensory concept learning methods described here, in particular, by learning
how to detect and move towards the trash can. The other behaviors implement a
collection of obstacle avoidance algorithms, which are not learned.
3. Accelerating sensory concept learning
Learning sensory concepts is difficult because the data is often very high-dimensional
and noisy. The number of instances is often also limited, since data collection requires
running the robot around. Learning useful concepts under these
conditions requires some appropriate bias [20] to constrain the set of possible
hypotheses. 3 The study of bias is of paramount importance to machine learning,
and some researchers have attempted a taxonomy of different types of bias (e.g. see
[24]). Among the main categories of bias studied in machine learning are hypothesis
space bias (which rules out certain hypotheses), and preference bias, which ranks one
hypothesis over another (e.g. prefer shallower decision trees over deeper ones).
In the context of robotics, the ALVINN system [22] for autonomous driving is a
good example of the judicious use of hypothesis bias to speed convergence. Here, for
every human-provided example, a dozen or so synthetic examples are constructed
by scaling and rotating the image input to the net, for which the desired output
is computed using a known pursuit steering model. We present below two ways
of accelerating sensory concept learning, which can also be viewed as types of
hypothesis space bias.
3.1. Spatial Decomposition
The sensory state space in both tasks described above is huge (of the order of
several hundred real-valued inputs). The number of training examples available is
quite limited, e.g. on the order of a few hundred at most. How is it possible to
learn a complex function from such a large state space, with so little data? We use
two general approaches to decompose the overall function learning problem.
The first idea is simple: partition the state into several distinct regions, and learn
subfunctions for each region. The idea is illustrated in Figure 5. This idea is used
in the navigation domain to train four separate feature detectors, one each for the
front and back quadrants of the local occupancy grid, and one each for the left and
right quadrants. There are two advantages of such a decomposition: each image
generates four distinct training examples, and the input size is halved from the
original input (e.g. in the navigation domain, the number of inputs is 512 rather
than 1024).
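A sketch of this decomposition for a 32x32 local occupancy grid follows; the orientation convention
(which rows and columns correspond to front, back, left and right) is an assumption made only for
illustration.

import numpy as np

def grid_quadrants(grid):
    """Split a 32x32 local occupancy grid into the four overlapping
    front/back/left/right halves used as separate 512-dimensional inputs."""
    assert grid.shape == (32, 32)
    return {
        'front': grid[:16, :].ravel(),   # top half
        'back':  grid[16:, :].ravel(),   # bottom half
        'left':  grid[:, :16].ravel(),   # left half
        'right': grid[:, 16:].ravel(),   # right half
    }

# Each grid therefore yields four 512-dimensional training patterns,
# one per virtual sensor direction, each labeled independently.
grid = np.random.rand(32, 32)            # stand-in for a real occupancy grid
patterns = grid_quadrants(grid)
print({k: v.shape for k, v in patterns.items()})   # all (512,)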
3.2. Multi-class Learning
The second strategy used in our work to speed sensory concept learning is to learn
multiple categories using a shared structure. This idea is fairly well-known in neural
nets, where the tradeoff between using multiple single output neural nets vs. one
multi-output neural net has been well studied. Work by Caruana [3] shows that even
when the goal is to learn a single concept, it helps to use a multi-output net to learn
related concepts. Figure 6 illustrates the basic idea. In the recycling domain, for
example, the robot learns not just the concept of "trash can", but also whether the
object is "near" or "far", on the "left" or on the "right". Simultaneously learning
these related concepts results in better performance, as we will show below.
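The following small numpy sketch illustrates the shared-structure idea: one hidden layer feeds several
sigmoid outputs, so the error of every concept updates the shared weights. The layer sizes, learning
rate and target coding are illustrative choices, not the configuration used in the experiments
reported below.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Shared-hidden-layer network: 512 inputs, 20 hidden units (an arbitrary
# choice for illustration), 4 outputs -- one per related concept.
n_in, n_hidden, n_out = 512, 20, 4
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

def forward(x):
    h = np.tanh(x @ W1)          # hidden representation shared by all concepts
    return h, sigmoid(h @ W2)    # one sigmoid output per concept

def train_step(x, targets, lr=0.1):
    """One plain gradient-descent step on the summed squared error of all
    outputs; the error of every concept flows back into the shared weights W1."""
    global W1, W2
    h, y = forward(x)
    err = y - targets                                    # (n_out,)
    delta_out = err * y * (1.0 - y)                      # sigmoid derivative
    grad_W2 = np.outer(h, delta_out)
    delta_hidden = (delta_out @ W2.T) * (1.0 - h ** 2)   # tanh derivative
    grad_W1 = np.outer(x, delta_hidden)
    W1 -= lr * grad_W1
    W2 -= lr * grad_W2

x = rng.random(n_in)                  # stand-in for one quadrant of a grid
t = np.array([0.0, 0.0, 1.0, 0.0])    # e.g. "opening" in a 1-of-4 target coding
train_step(x, t)
print(np.argmax(forward(x)[1]))       # predicted feature = output with maximum value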
4. Experimental Results
The experiments described below were conducted over a period of several months
on our real robot PAVLOV, either inside the laboratory (for recycling) or in the
corridors (for navigation). We first present the results for the navigation task, and
subsequently describe the results for the recycling task.
4.1. Learning Feature Detectors for Navigation
Given that the walls in our environment were fairly smooth, we found that sonars
were prone to specular reflections in a majority of the environment. This made
it difficult to create hard-coded feature detectors for recognizing sonar signatures.
We show below that using an artificial neural network produced more accurate
and consistent results. Not only was it easy to implement and train, but it is also
possible to port it to other environments and add new features. Figure 7 shows
the neural net used in feature detection. The net was trained using the quickprop
method [8], an optimized variant of the backpropagation algorithm.
Local occupancy grids were collected by running the robot through the
hallways. Each local occupancy grid was then used to produce 4 training patterns.
The neural net was trained on 872 hand labeled examples. Since all sensors predict
the same set of features, it was only necessary to learn one set of weights. Figure 8
shows the learning curve for the neural net, using batch update. Starting off with
a set of random weights, the total error over all training examples converged to an
acceptable range (< 1) within a modest number of training epochs.
A separate set of data, with 380 labeled patterns, was used to test the net. This
would be approximately the number of examples encountered by the robot,
as it navigated the loop in the Electrical Engineering department (nodes 3-4-5-6
in
Figure
10). Feature prediction is accomplished by using the output with the
maximum value. Out of the 380 test examples, the neural net correctly predicts
features for 322, leading to an accuracy of 85%.
Figure 5. Spatial decomposition of the original sensory state helps speed learning sensory concepts.
Here, the original sensory space (sensor representation) is decomposed into a pair of two disjoint
quadrants (front/back and left/right).
Figure 6. Learning multiple concepts (Concept1-Concept4) simultaneously using a shared representation
can speed sensory learning.
Figure 7. A local occupancy grid map (32x32), which is decomposed into four overlapping quadrants
(left, right, top, bottom), each of which is input to a neural net feature detector (input layer,
hidden layer, output layer). The output of the net is a multi-class label estimating the likelihood
of each possible observation (door, opening, wall, or undefined). The net is trained on manually
labeled real data.
Figure 8. Learning curve (total error versus training epochs) for training the neural net to
recognize features. The net is trained on 872 hand labeled examples using quickprop.
Figure 9. Sample local occupancy grids generated over an actual run, with the observations (e.g.
back opening, right wall) output by the trained neural net for each virtual sensor. Despite
significant sensor noise, the net is able to produce fairly reliable observations.
Figure 9 illustrates the variation in observation data, generated during an actual
run. In these occupancy grids, free space is represented by white, while black
represents occupied space. Gray areas indicate that occupancy information is un-
known. The figures are labeled with the virtual sensors and corresponding features,
as predicted by the neural net.
Specular reflections occur when a sonar pulse hits a smooth, flat surface angled
obliquely to the transducer. The possibility exists that the sonar pulse will reflect
away from the sensor, and undergo multiple reflections before it is received by the
sensor. As a result, the sensor registers a range that is substantially larger than
the actual range. In the occupancy grids, this results in a physically occupied
region having a low occupancy probability. In Figure 9(a) where the specularities
are relatively insignificant, the neural net does an accurate job of predicting the
features. Effects of the specularities are noticeable in Figure 9(b) and Figure 9(c).
In Figure 9(b) the neural net is able to predict a wall on the left, although it
has been almost totally obscured by specular reflections. The occupancy grid in
Figure 9(c) shows some bleed-through of the sonars. In both examples, the neural
net correctly predicts the high level features. Figure 9(e) and Figure 9(f) are
examples of occupancy grids where the effects of the specularities become very
noticeable. In these examples specularities dominate, almost totally wiping out
any useful information, yet the neural net is still able to correctly predict features.
From the presented examples, it is apparent that the neural net can robustly
predict features in a highly specular environment. Testing the neural net on an
unseen set of labeled data reveals that it is able to correctly predict 85% of the
features. In addition, although examples have not been presented, the neural net
is able to accurately predict features even when the robot is not approximately
oriented along one of the allowed compass directions.
The navigation system was tested by running the robot over the entire floor of the
engineering building over a period of several months (see Figure 10). The figure also
shows an odometric trace of a particular navigation run, which demonstrates that
despite significant odometric and sensor errors, the robot is still able to complete
the task.
4.2. Learning to Find Trash Cans
We now present the experimental findings from the recycling task. In order to
implement a similar neural network approach, we first took various snapshots of
the trash can from different angles and distances using the on-board camera of
PAVLOV. The images (100x100 color images) were labeled as to the distance and
orientation of the trash can. Six boolean variables were used to label the images
(front, left, right, far, near, very-near).
The inputs to our neural network were pre-processed selected pixels from the
100X100 images, and the outputs were the six boolean variables. The RGB values
of the colored images were transformed into HSI values (Hue, Saturation, Intensity)
which are better representatives of true color value because they are more invariant
Figure 10. The 3rd floor of the engineering building was used to test the effectiveness of the
feature detectors for navigation (the map shows the Electrical Engineering and Computer Science
Departments, Engineering Computing, Main Office, Faculty Office, and the Robot Lab). The bottom
plot ("Odometric plot of three successive runs on PAVLOV in the EE department", axes in meters)
shows an odometric trace of a run on PAVLOV, showing the robot starting at node 1, doing the loop
(3-4-5-6), and returning to node 1. The robot repeated this task three times, and succeeded despite
significant odometric errors.
to light variations [10]. Using an image processing program we identified the HSI
values of the yellow color and based on those values we thresholded the images
into black and white. We then sub-sampled the images into 400 pixels so that we
could have a smaller network with far fewer inputs. The sub-sampling was done by
selecting one pixel in every five.
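A rough sketch of this preprocessing pipeline is given below; the RGB-to-HSI conversion formulas and
the threshold values for the yellow color are assumptions standing in for the values identified with
the image processing program.

import numpy as np

def rgb_to_hsi(img):
    """Convert an RGB image (floats in [0,1], shape HxWx3) to HSI using one
    common set of conversion formulas (the exact formulas used are not given)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    i = (r + g + b) / 3.0
    s = 1.0 - np.min(img, axis=-1) / np.maximum(i, 1e-9)
    num = 0.5 * ((r - g) + (r - b))
    den = np.sqrt((r - g) ** 2 + (r - b) * (g - b)) + 1e-9
    h = np.degrees(np.arccos(np.clip(num / den, -1.0, 1.0)))
    h = np.where(b > g, 360.0 - h, h)
    return h, s, i

def preprocess(img, h_range=(40.0, 75.0), s_min=0.3, i_min=0.2):
    """Threshold a (hypothetical) HSI range for the yellow trash can into a
    binary image, then keep one pixel in every five in each direction,
    turning a 100x100 image into a 400-element input vector."""
    h, s, i = rgb_to_hsi(img)
    binary = ((h >= h_range[0]) & (h <= h_range[1]) &
              (s >= s_min) & (i >= i_min)).astype(float)
    return binary[::5, ::5].ravel()      # 20 x 20 = 400 inputs

img = np.random.rand(100, 100, 3)        # stand-in for a camera frame
x = preprocess(img)
print(x.shape)                           # (400,)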
Figure 11 shows the neural net architecture chosen for the recycling task. Figure 12
shows some sample images, with the output generated by the trained neural
net. The neural net produces a six element vector as its output, with 3 bits indicating
the direction of the trash can (left, front, or right), and 3 bits indicating the
distance (far, near, very near). The figure shows only the output values that were
close to 1. Note that the net can generate a combination of two categories (e.g.
near and very-near), or even sometimes a contradictory labeling (e.g. far/near). In
such cases, the camera turn behavior simply chooses one of the labels, and proceeds
with capturing subsequent images, which will eventually resolve the situation (this
is shown in the experiments below). Figure 13 shows the learning curve for training
the trash can net.
Figure 14 shows the experimental setup used to test the effectiveness of the trash
can finder. A single yellow colored trash can was placed in the lab at four different
locations. In each case, the robot was started at the same location, and its route
measured until it stopped adjacent to the trash can (and announced that it had
found the trash can). 4 Figure 15 and Figure 16 show several sample trajectories of
the robot as it tried to find the trash can. In all cases, the robot eventually finds
the trash can, although it takes noticeably longer when the trash can is not directly
observable from the starting position.
5. Limitations of the Approach
The results presented above suggest that high-dimensional sensory concepts can
be learned from limited training examples, provided that a human designer carefully
structures the overall learning task. This approach clearly has some definite
strengths, as well as some key limitations.
• Need for a teacher: Supervised concept learning depends on a human teacher
for providing labeled examples of the desired target concept. Previous work on
systems such as ALVINN [22] has clearly demonstrated that there are interesting
tasks where examples can be easily collected. Similarly, for the navigation and
recycling task, we have found collecting and labeling examples to be a fairly
easy (although somewhat tedious) task. Nevertheless, this approach could not
be easily used in domains where it is difficult for a human teacher to find a
sufficiently diverse collection of examples.
• Filling in details of a pre-specified architecture: The approach taken in this
paper assumes that the designer has already pre-specified much of the overall
control structure for solving the problem. The purpose of learning is to complete
Figure 11. A neural net trained to detect trash cans (400 inputs, a layer of hidden units, and 6
outputs: left, right, front, far, near, very near).
Figure 12. Sample images with the output labels generated by the neural net. a: front, near. b:
front, far. c: left, front, far. d: right, far. e: left, near, very-near. f: front, very-near.
Figure 13. This graph (error versus training epochs, curves 'multi-output.err' and 'very-near.err')
compares the training time for a multi-output net vs. training a set of
single output nets. Although the multi-output net is slower to converge, it performed better on
the test data.
a few missing pieces of this solution. In the navigation task, for example,
the feature detectors are all that is learned, since the overall planner, reactive
behaviors, and state estimator are pre-programmed. Obviously, this places a
somewhat large burden on the human designer.
• Decomposable functions: The sensory concepts being learned in the two tasks
were decomposable in some interesting way (either the input or the output space
could be partitioned). We believe many interesting concepts that robots need
to learn have spatial regularity of some sort that can be exploited to facilitate
learning.
6. Related Work
This research builds on a distinguished history of prior work on concept learning
from examples, both in machine learning [20] and in robot learning [5], [9]. Here,
we focus primarily on the latter work, and contrast some recent neural-net based
approaches with decision-tree based studies.
ALVINN [22] uses a feedforward neural net to learn a steering behavior from
labeled training examples, collected from actual human drivers. As noted earlier,
ALVINN exploits a pursuit model of steering to synthesize new examples to speed
Figure 14. Environmental setup for finding trash cans.
learning. ALVINN differs from our work in that it directly learns a policy, whereas
in our case the robot learns only feature detectors and recognizers. We believe
that directly learning an entire policy is quite difficult, in general. In fact, in a
subsequent across-the-country experiment, the direct policy learning approach was
rejected in favor of a simpler feature-based approach similar to our work (except
the templates were 1-dimensional rather than 2-dimensional, as in our work).
Thrun and Mitchell [27] propose a lifelong learning approach, which extends the
supervised neural-net learning framework to handle transfer across related tasks.
Their approach is based on finding invariances across related functions. For ex-
ample, given the task of recognizing many objects using the same camera, invariances
based on scaling, rotation, and image intensity can be exploited to speed up
learning. Their work is complementary to ours, in that we are focusing on rapid
within-task learning, and the invariants approach could be easily combined with
the partioning and multi-class approach described here.
Such studies can be contrasted with those using decision trees. For example,
Tan [26] developed an ID3 decision-tree based algorithm for learning strategies for
picking up objects, based on perceived geometric attributes of the object, such as
its height and shape. Salganicoff et al. [15] extended the decision-tree approach
for learning grasping to an active learning context, where the robotic system could
itself acquire new examples through exploration.
In general, the decision tree approaches seem more applicable when the data is not
high-dimensional (in both the systems just cited, the number of input dimensions is
Figure 15. Three successful traces (starting at 0,0) of the robot navigating to the trash can, placed
in positions A and B (panels "Navigation to lab position A" and "Navigation to lab position B",
South/East axes, traces pose.a1-a3 and pose.b1-b3). In both positions A and B, the trash can was
directly observable from the robot starting position.
Figure 16. Results for learning with the trash can in positions C and D (panels "Navigation to lab
position C" and "Navigation to lab position D", South/East axes, traces pose.c1-c3 and pose.d1).
The top figure shows three successful traces (starting at 0,0) of the robot navigating to lab
position C. Note that in the pose.c2 trace the robot temporarily loses the trash can but eventually
gets back on track. The bottom figure shows a successful run when the trash can is in position D,
which is initially unobservable to the robot.
generally less than 10). By contrast, in our work as well as in the ALVINN system,
the input data has several hundred real-valued input variables, making it difficult
to employ a top-down decision-tree type approach. The advantage, however, of using
decision trees is that the learned knowledge can be easily converted into symbolic
rules, a process that is much more difficult to do in the case of a neural net.
Symbolic learning methods have also been investigated for sensory concept learn-
ing. Klingspor et al. [12] describe a relational learning algorithm called GRDT,
which infers a symbolic concept description (e.g. the concept thru door) by generalizing
user labeled training instances of a sequence of sensor values. A hypothesis
space bias is specified by the user in the form of a grammar, which restrict possible
generalizations. A strength of the GRDT algorithm is that it can learn hierarchical
concept descriptions. However, a weakness of this approach is that it relies on using
a logical description of the overall control strategy (as opposed to using a procedural
reactive/declarative structure). Logical representations incur a computational
cost in actual use, and their effectiveness in actual real-time robotics applications
has not been encouraging.
7. Summary
This paper investigates how mobile robots can acquire useful sensory concepts from
high-dimensional and noisy training data. The paper proposes two strategies for
speeding up learning, based on decomposing the sensory input space, and learning
multiple concepts simultaneously using a shared representation. The effectiveness
of these strategies was studied in two tasks: learning feature detectors for probabilistic
navigation and learning to recognize visual objects for recycling. A detailed
experimental study was carried out using a Nomad 200 real robot testbed called
PAVLOV. The results suggest that the strategies provide sufficient bias to make it
feasible to learn high-dimensional concepts from limited training data.
Acknowledgements
This research was supported in part by an NSF CAREER award grant No. IRI-
9501852. The authors wish to acknowledge the support of the University of South
Florida Computer Science and Engineering department, where much of this research
was conducted. We thank Lynn Ryan for her detailed comments on a draft of this
paper.
Notes
1. PAVLOV is an acronym for Programmable Autonomous Vehicle for Learning Optimal Values.
2. In actuality, the state estimation procedure is more complex since we use an event-based semi-Markov
model to represent temporally extended actions. However, for the purposes of this
paper, we are simplifying the presentation.
RAPID CONCEPT LEARNING FOR MOBILE ROBOTS 21
3. Bias is generally defined as any criterion for selecting one generalization over another, other
than strict consistency with the training set. It is easy to show that bias-free learning is
impossible, and would amount to rote learning.
4. Although we do not discuss the details here, the robot employs a further processing phase to
extract the rough geometrical alignment of the trash can opening in order to drop items inside
it.
--R
Navigating Mobile Robots.
A robust layered control system for a mobile robot.
Multitask learning: A knowledge-based source of inductive bias
Acting under uncertainty: Discrete bayesian models for mobile-robot navigation
Robot Learning.
Introduction to the special issue on learning autonomous robots.
Robotics in Service.
Robot Learning.
Fundamentals of Digital Image Processing.
A robust robot navigation architecture using partially observable semi-markov decision processes
Learning concepts from sensor data.
A robot navigation architecture based on partially observable markov decision process models.
Fast vision-guided mobile robot navigation using model-based reasoning and prediction of uncertainties
Machine learning for robots: A comparison of different paradigms.
Mobile robot navigation using discrete-event markov decision process models
Machine Learning.
An office-navigating robot
Neural network based autonomous navigation.
Induction of decision trees.
Inductive learning from preclassified training examples.
Reinforcement Learning: An Introduction.
Learning one more thing.
--TR
--CTR
B. L. Boada , D. Blanco , L. Moreno, Symbolic Place Recognition in Voronoi-Based Maps by Using Hidden Markov Models, Journal of Intelligent and Robotic Systems, v.39 n.2, p.173-197, February 2004 | robot learning;neural networks;concept learning |
297106 | Power balance and apportionment algorithms for the United States Congress. | We measure the performance, in the task of apportioning the Congress of the United States, of an algorithm combining a heuristic-driven (simulated annealing) search with an exact-computation dynamic programming evaluation of the apportionments visited in the search. We compare this with the actual algorithm currently used in the United States to apportion Congress, and with a number of other algorithms that have been proposed. We conclude that on every set of census data in this country's history, the heuristic-driven apportionment provably yields far fairer apportionments than those of any of the other algorithm considered, including the algorithm currently used by the United States for Congressional apportionment. | 1. MOTIVATION AND OVERVIEW
How should the seats in the House of Representatives of the United States be
allocated among the states? The Constitution stipulates only that "Representa-
tives shall be apportioned among the several states according to their respective
numbers, counting the whole numbers of persons in each ...." The obvious
implementation of this requirement would almost always yield fractional numbers
of seats. The issue of how to achieve fair integer seat allocations has been controversial
in this country virtually since its founding. In fact, many of the apportionment
algorithms we will discuss have been proposed and debated by famous historical
figures, including John Quincy Adams, Alexander Hamilton, Thomas Jefferson,
and Daniel Webster. The debate is far from over. In fact, the relative fairness of
two of the algorithms we will discuss in this paper was argued before the Supreme
Court in 1992 [Supreme 1992]. 1 We propose an apportionment method consisting
of a simulated-annealing search that is aimed at maximizing the fairness of the
resultant apportionments. Even though the complexity of the algorithm is high,
our implementation shows that the method is feasible for the cases of interest. In-
deed, we have been able to run it on the data collected during all the census years
in US history and the results are conclusive: In all cases our method was provably
superior with respect to widely agreed fairness criteria to the most prominent
apportionment algorithms that have been used or proposed earlier.
Balinski and Young [1982] (see also [Balinski and Young 1985]) performed a detailed
comparative study, for six historical algorithms of Congressional apportion-
ment, of the degree to which the algorithms' allocations matched states' "quotas,"
i.e., their portion of the population times the House size. Mann and Shapley [Mann
and Shapley 1960; Mann and Shapley 1962] and others studied, for the actual used-
in-Congress seat allocations, the power indices (in the Electoral College) of each
state. This paper attempts to combine the strengths of these two research lines.
In particular, we agree with Balinski and Young both that allocations should be
"fair," and that, in light of 200 years of debate (colorfully recounted and analyzed by
Balinski and Young [1982]), obtaining new insights into the merits and weaknesses
of the six historical algorithms should be a priority. On the other hand, many
feel that "fairness" should be defined by a tight match between power and quotas,
rather than between allocations and quotas. Our feeling is very much in harmony
with modern political science theory, where it is widely recognized that allocations
Briefly put, the Supreme Court ruled that Congress acted within its authority in choosing the
currently used algorithm (the "Huntington-Hill Method"). However, the Supreme Court's decision
left open the possibility that Congress would be acting equally within its authority if it chose to
adopt some other algorithm. The court ruled that "the constitutional framework ... delegate[s] to
Congress a measure of discretion broader than that accorded to the States [in terms of choosing
how to apportion], and Congress's apparently good-faith decision to adopt the [Huntington-]Hill
Method commands far more deference, particularly as it was made after decades of experience,
experimentation, and debate, was supported by independent scholars, and has been accepted
for half a century [Supreme 1992]." Regarding the decision, we mention only that the power
balancing issues discussed in this paper were not brought before the court, but that nonetheless
the experimental results of this paper suggest that among the two algorithms being discussed by
the court in the case, the Huntington-Hill algorithm in fact gives fairer apportionments in terms
of power balancing.
do not necessarily directly correspond to power, and that power indices (see the
detailed definitions later in this paper) provide a potentially more accurate gauge
of power (see, e.g., the discussions in [Shapley 1981; Riker and Ordeshook 1973]). 2
So, in this light, we repeat the comparative study of Balinski and Young, but we
replace allocation comparisons with power index comparisons.
Of course, the historical algorithms were designed (in part-the full story is more
complex and political, and indeed led to the first presidential veto (see [Balinski
and Young 1982])) to achieve some degree of harmony and fairness between allocations
and quotas. This is not surprising, given that power indices had not
yet been invented. However, given that in this study our comparisons are based on
power indices, it seems natural to add to the six historical algorithms 3 an algorithm
tailored to achieve harmony between power indices and quotas. We have used a
heuristic based on the simulated annealing paradigm (see [Aarts and Korst 1989;
Metropolis et al. 1953; Kirkpatrick et al. 1983]), which finds an apportionment by
seeking to achieve a local minimum of the distance between normalized power indices
and quotas (the attribute "local" is with respect to a natural neighborhood
relation between apportionments). This heuristic yields results that are fairer than
those obtained by all the historical algorithms. We report in Section 3 the results
for the last census, 1990, but we have obtained similar results for all the census
years, 1790, 1800, ..., 1980, and 1990.
The function class #P (first defined by Valiant [Valiant 1979a; Valiant 1979b])
is the counting version of NP. #P is the class of all functions f such that, for
some nondeterministic polynomial-time Turing machine N , for all inputs x it holds
that f(x) is the number of accepting computation paths of N(x). One problem in
studying power indices is that power indices are typically #P-complete [Prasad and
Kelly 1990; Garey and Johnson 1979] and, consequently, we perform a combinatorial
search that invokes at each iteration numerous #P-complete computations.
Fortunately, a dynamic programming approach, first proposed by Mann and Shapley
[1962], yields a pseudo-polynomial algorithm for computing the power indices,
i.e., an algorithm whose running time is polynomial in the size of the House and in
the number of states. Since these quantities have had reasonable values throughout
US history (the maximum values have been 435 and 50, which are also the current
values), we have been able to exactly compute the needed power indices.
2 For those unfamiliar with why allocations may not correspond to power, consider the following
typical motivating example. Suppose we have states A, B, and C with 6, 2, and 2 votes respectively.
Note that though between them states B and C have 40 per cent of the seat allocation, nonetheless
it is the case that in a majority-rule vote on some polarizing issue on which states have differing
interests and so their delegations vote as blocs, B and C have no power at all as A by itself is a
majority.
3 By the use of "the" in the context "the six historical algorithms" we do not mean to suggest that
no other algorithms have been proposed. Other algorithms have indeed been proposed (e.g., the
algorithm Condorcet suggested in 1793 ([Condorcet 1847], see [Balinski and Young 1982, p. 63])).
However, these six algorithms have been the key contenders in the apportionment discussion in
the United States.
2. DISCUSSION
In this section, we justify a number of decisions made in designing this study, and
we describe in more detail the background of the study.
2.1 Study Design
Our computer program takes as its input a list of states, ⟨S_0, ..., S_{n-1}⟩, together with their popula-
tions, ⟨p_0, ..., p_{n-1}⟩, the House size, h, and some other parameters regarding random
number generation and the implementation of the simulated annealing algorithm.
The program computes, for each state S_i, the appropriate quota q_i = h · p_i / Σ_{0≤j<n} p_j.
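For concreteness, a small sketch of the quota computation (with made-up populations) is:

def quotas(populations, house_size):
    """q_i = house_size * p_i / sum_j p_j: each state's exact, fractional
    share of the House."""
    total = sum(populations.values())
    return {s: house_size * p / total for s, p in populations.items()}

# Illustrative (made-up) populations:
print(quotas({'A': 6_000_000, 'B': 2_000_000, 'C': 2_000_000}, 10))
# {'A': 6.0, 'B': 2.0, 'C': 2.0}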
We have considered the Banzhaf power index and the Shapley-Shubik power index.
They represent the most widely used quantitative ways to measure the power of a
player (or a state, in our parlance) in a voting game. It is well-known that the two
indices can, at least on artificially constructed examples, differ sharply (see, e.g.,
[Straffin 1978] and the references therein, and [Shapley 1981; Straffin 1977]). We
do not in this paper seek to resolve the broader question of whether power indices
are "correct" or "appealing" measures of power, or whether they thus should be
used to shape voting/apportionment systems (those are more issues of political
science than of experimental algorithms). Though there is much discussion in the
literature on the naturalness and usefulness of power indices, our study simply seeks
to provide-to those who do place value in power indices-quantitative information
on the comparative power distributions for the historical apportionment algorithms
and for simulated annealing. We hope that this information will also be useful
in the political science debate on the more general issue of choosing the "right"
apportionment scheme, and in Section 4 we mention the directions this information
suggests.
The Banzhaf power index (see, e.g., the detailed discussion in [Dubey and Shapley
1979]) is defined in terms of winning coalitions. A winning coalition is a subset S
of states such that the sum of the votes of the states in S is larger than half of the
total number of votes (which, of course, is equal to the House size). A state i is
critical for a coalition S if (a) i 2 S, (b) S is a winning coalition, and (c) S \Gamma fig is
not a winning coalition. The Banzhaf power index of a state i is, by definition, the
probability that for a randomly chosen coalition S (that is, for a coalition chosen
randomly from the set of all possible coalitions, not just from the set of winning
coalitions), it holds that i is critical for S.
The Shapley-Shubik power index [Shapley and Shubik 1954] is defined in terms
of linear orderings of states, which, intuitively, represent the extent to which the
states are interested in passing some bill (the first state in the ordering is the
strongest supporter of the bill, while the last one is the strongest opponent). Let
π be such an ordering. A state i is a pivot for the ordering
π if it appears in some position k and i is critical for the coalition formed by the
first k states (in other words, the pivot of an ordering is the state whose joining
turns the developing coalition into a winning coalition). The Shapley-Shubik power
index of a state i is the probability that for a randomly chosen ordering π it holds
that i is the pivot of π.
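To make the two definitions concrete, the following small sketch (in Python; it is our illustration, not the program used in this study, and the function name is ours) computes both indices by brute-force enumeration. This is only feasible for a handful of states, which is exactly why the dynamic programming approach discussed later in this section is needed for the 50-state census data.

```python
from itertools import combinations, permutations
from math import factorial

def banzhaf_and_shapley_shubik(votes):
    """Brute-force power indices for a small weighted-majority game.

    votes[i] is the number of seats of state i; a coalition wins when its
    seats reach the threshold Q (more than half of the total).
    Returns two lists of raw (unnormalized) indices.
    """
    n = len(votes)
    total = sum(votes)
    Q = total // 2 + 1                       # minimum seats of a winning coalition

    def winning(weight):
        return weight >= Q

    # Banzhaf: fraction of all 2^n coalitions for which state i is critical.
    banzhaf = [0.0] * n
    for size in range(n + 1):
        for coalition in combinations(range(n), size):
            w = sum(votes[j] for j in coalition)
            if not winning(w):
                continue
            for i in coalition:
                if not winning(w - votes[i]):    # removing i breaks the win
                    banzhaf[i] += 1
    banzhaf = [b / 2 ** n for b in banzhaf]

    # Shapley-Shubik: fraction of the n! orderings in which state i is the pivot.
    shapley = [0.0] * n
    for order in permutations(range(n)):
        w = 0
        for i in order:
            w += votes[i]
            if winning(w):                       # i turns the coalition winning
                shapley[i] += 1
                break
    shapley = [s / factorial(n) for s in shapley]
    return banzhaf, shapley

# Footnote-2 example: A, B, C with 6, 2, 2 votes -- A holds all the power.
print(banzhaf_and_shapley_shubik([6, 2, 2]))
```

Running it on the three-state example of footnote 2 confirms that states B and C have zero power under both indices.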
In order to calculate the closeness of the power indices to the quota, we have to
normalize them. That is, if pow i is the power index (Banzhaf or Shapley-Shubik)
of state i, we define

    normpow_i = h · pow_i / (pow_0 + pow_1 + ... + pow_{n-1}),

where, recall, h is the House size. This is the normalization used by Mann and Shapley
[1960]. For the rest of this discussion, by power of a state we will mean its normalized
power index (Banzhaf or Shapley-Shubik). Note that normpow_0 + normpow_1 + ... + normpow_{n-1} =
h, and thus these normalized powers make it easier to compare power-to-quota
closeness. 4 The main metric that we have used to evaluate the power-to-quota
closeness is the L2 metric applied on proportions, as defined below.

    L2 on proportions:  sqrt( sum_{0 <= i < n} ((normpow_i - q_i) / q_i)^2 )
Although other measures can be used, we feel the L2 on proportions to be the
most relevant one. We have also performed experiments with respect to the L1
metric on differences and on proportions and the L2 metric on differences, and in all cases we
reached the same conclusions. For completeness, here are the definitions of these
three other metrics.

    L1 on differences:  sum_{0 <= i < n} |normpow_i - q_i|

    L2 on differences:  sqrt( sum_{0 <= i < n} (normpow_i - q_i)^2 )

    L1 on proportions:  sum_{0 <= i < n} |normpow_i - q_i| / q_i
The contrast between the "difference" and the "proportion" measures can be
dramatic. For example, suppose states A and B have quotas 1.5 and 40.5 but are
respectively given 2 and 41 votes. The "differences" measures treat the two states
similarly, but in the "proportions" measures, state A is viewed as being far more out
of balance than state B. However, we repeat, the conclusions of our investigation
remain the same, irrespective of the metric used.
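Assuming the metric definitions given above, the normalization step and the four distance measures amount to only a few lines of code. The following Python sketch is purely illustrative; the function names are ours.

```python
from math import sqrt

def normalize(pow_raw, h):
    """Scale raw power indices so that they sum to the House size h."""
    s = sum(pow_raw)
    return [h * p / s for p in pow_raw]

def l2_proportions(normpow, quota):
    return sqrt(sum(((np - q) / q) ** 2 for np, q in zip(normpow, quota)))

def l1_differences(normpow, quota):
    return sum(abs(np - q) for np, q in zip(normpow, quota))

def l2_differences(normpow, quota):
    return sqrt(sum((np - q) ** 2 for np, q in zip(normpow, quota)))

def l1_proportions(normpow, quota):
    return sum(abs(np - q) / q for np, q in zip(normpow, quota))
```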
The program apportions the votes via the six historical algorithms-Adams,
Dean, Huntington-Hill (we abbreviate this "H-Hill" in our tables; the method is also
sometimes referred to in the literature simply as the Hill Method), Webster (which
is sometimes also referred to in the literature as the Webster-Willcox Method), Jef-
ferson, and Hamilton-and simulated annealing. The algorithms work as follows.
Let us first discuss the algorithms of Adams, Dean, Huntington-Hill, Webster, and
Jefferson. Informally, they work as follows. One partitions the non-negative real
4 We note that many researchers prefer to use the Banzhaf index without normalization.
line into adjacent intervals, each corresponding to a number of votes. Then one
seeks an integer such that if one divides each state's population by that integer
and gives each state the number of votes corresponding to the interval in which that
value falls, the sum of votes handed out equals the desired house size.
More formally, let I = ⟨I_0, I_1, I_2, ...⟩ be a partition of the non-negative real line into adjacent intervals such that
I_k is the interval corresponding to k votes.
We say an apportionment ⟨v_0, v_1, ..., v_{n-1}⟩ is the 5 sliding divisor apportionment
with respect to I if there exists a real number d such that p_i / d ∈ I_{v_i} for every state S_i.
The five historical sliding divisor algorithms are defined as follows (see also [Balinski
and Young 1982]).
Adams:. I_0 = {0} and I_k = (k - 1, k] for k ≥ 1.
Dean:. I_0 = {0} and I_k = (2(k - 1)k/(2k - 1), 2k(k + 1)/(2k + 1)] for k ≥ 1.
Huntington-Hill:. I_0 = {0} and I_k = (sqrt((k - 1)k), sqrt(k(k + 1))] for k ≥ 1.
Webster:. I_0 = [0, 1/2) and I_k = [k - 1/2, k + 1/2) for k ≥ 1.
Jefferson:. I_k = [k, k + 1) for k ≥ 0.
Note, in particular, that the Dean, Huntington-Hill, and Webster algorithms describe
the intervals between, respectively, the harmonic, geometric, and arithmetic
means of successive integers. The algorithm currently in use in the United States
is the method of geometric means, i.e., the Huntington-Hill algorithm.
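A generic sliding divisor method is easy to implement once the interval boundaries (cutpoints) are fixed. The following Python sketch is our illustration, not the program used in the study: it finds a suitable divisor by bisection and ignores boundary ties, which (as noted in footnote 5) do not arise for real census populations.

```python
from math import sqrt

def divisor_apportionment(populations, h, cutpoint):
    """Generic sliding divisor method.

    cutpoint(k) is the boundary between receiving k and k+1 seats: a state
    whose population divided by the divisor d falls below cutpoint(k) gets
    k seats.  Boundary ties are ignored (they do not occur for real data).
    """
    def seats_for(d):
        seats = []
        for p in populations:
            x, k = p / d, 0
            while x >= cutpoint(k):      # slide up while x passes the boundary
                k += 1
            seats.append(k)
        return seats

    lo, hi = sum(populations) / (h + len(populations)), float(sum(populations))
    for _ in range(200):                 # bisection on the divisor d
        d = (lo + hi) / 2
        s = sum(seats_for(d))
        if s > h:
            lo = d                       # too many seats: increase the divisor
        elif s < h:
            hi = d                       # too few seats: decrease the divisor
        else:
            return seats_for(d)
    return seats_for(d)                  # last iterate if no exact divisor found

# Cutpoints for the five sliding divisor methods (k = 0, 1, 2, ...).
adams     = lambda k: k                              # round up
jefferson = lambda k: k + 1                          # round down
webster   = lambda k: k + 0.5                        # arithmetic mean
dean      = lambda k: 2 * k * (k + 1) / (2 * k + 1)  # harmonic mean
hill      = lambda k: sqrt(k * (k + 1))              # geometric mean
```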
An important caveat is that the Constitution requires that each state be given at
least one representative. However, this artificial requirement would taint the comparative
analysis of the algorithms. In this study, we do allow algorithms to assign
zero votes to a state. For example, for the 1990 census data the Jefferson algorithm
allocates 0 votes to Wyoming. To learn whether our ignoring the Constitution's
"one vote minimum" rule affected our results, we also ran our program with the
added proviso that the sliding divisor algorithms were not allowed to assign 0 votes
to any state (we achieved this by setting I_1 := I_0 ∪ I_1 and I_0 := ∅) and we obtained
the same result: The simulated annealing algorithm was still the best and
the Jefferson algorithm was still the worst.
5 It is not hard to see that if such an apportionment exists, it is unique. Such solutions do not always exist. For
example, no sliding divisor algorithm can apportion two equal states with any odd House size.
However, for the five historical sliding divisor algorithms and large populations that are random
and independent in their low-order bits (informally, actual populations), nonexistence of solutions
is unlikely to occur, and, for historical census data, has never occurred.
The Hamilton algorithm is quite different from the sliding divisor algorithms
described above. The Hamilton algorithm initially assigns to each state, S_i, exactly
⌊q_i⌋ votes. Then it assigns one extra vote to each of the h − (⌊q_0⌋ + ⌊q_1⌋ + ... + ⌊q_{n-1}⌋)
states
having the largest fractional parts q_i − ⌊q_i⌋. (Note that ties in the fractional part can cause
this algorithm to fail to be well-defined, but, again, in practice this is unlikely to
occur, and indeed never has under actual historical census data.)
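The Hamilton (largest-remainder) method is even simpler to code. The sketch below is again only an illustration, and it breaks fractional-part ties arbitrarily by the sort order.

```python
def hamilton_apportionment(populations, h):
    """Largest-remainder (Hamilton) method."""
    total = sum(populations)
    quotas = [h * p / total for p in populations]
    seats = [int(q) for q in quotas]            # floor of each quota
    remaining = h - sum(seats)
    # Give the leftover seats to the states with the largest fractional parts.
    order = sorted(range(len(populations)),
                   key=lambda i: quotas[i] - seats[i], reverse=True)
    for i in order[:remaining]:
        seats[i] += 1
    return seats
```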
Finally, the simulated annealing algorithm that we propose works as follows. Let
us say that two apportionments a = ⟨a_0, ..., a_{n-1}⟩ and b = ⟨b_0, ..., b_{n-1}⟩ are
neighbors of each other if b is obtained from a by shifting one vote from some
state i to some state j, i.e., there exist i and j, i ≠ j, such that b_i = a_i − 1, b_j = a_j + 1,
and b_k = a_k for every k ∉ {i, j}. The initial apportionment (summing to h) is made in a
relatively arbitrary way. (The algorithm used in the program intentionally makes
an initial apportionment that gives a terrible match between powers and quotas.
Thus, it is up to the rest of the simulated annealing algorithm to try to fix this poor
initial assignment.) Then, repeatedly, a neighbor b of the current apportionment
a is randomly generated. If b is at least as good as a (in the sense that b has a
power-to-quota distance no greater than that of a), then b becomes the current
apportionment. Even if the new apportionment b is worse, it still has a chance
to become the current apportionment. Namely, let \Delta be the difference between
the power-to-quota distance for b and the power-to-quota distance for a. Since
we are in the case in which b is worse than a, \Delta is positive. Throughout this
process, we consider a current temperature that is gradually decreasing (each time
a given number of iterations have occurred, the current temperature is decreased by
multiplying it by a certain constant that is less than one, which is called the cooling
factor). Let t be the value of the current temperature. We randomly generate a
real number r. If r < e^(−Δ/t), then b is taken as the current apportionment in place
of a. Observe that as the temperature t approaches the value zero, the probability
that r is less than e^(−Δ/t) gets smaller and smaller, and thus after an initial period in
which jumps between various descending hills leading to local optima are likely, the
process is stabilized ("it freezes," i.e., the probability of choosing an apportionment
that is worse than the current one becomes close to zero), with high probability
at some local optimum (and the hope of the simulated annealing approach is that
that local optimum is not too much worse in quality than the global optimum).
The exact stopping criterion is explained in Section 3.
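The following Python sketch summarizes the procedure just described (seat-shifting neighbor moves, the acceptance rule r < e^(−Δ/t), geometric cooling, and the stopping factor). It is a simplified illustration with our own parameter names, not the implementation used for the experiments; in particular, the guard against taking a seat from a state that has none is an assumption we add for safety.

```python
import math
import random

def anneal(initial, distance, temp=100.0, cooling=0.9,
           iters_per_temp=1000, stopping_factor=5):
    """Simulated annealing over apportionments (illustrative sketch).

    `initial` is an apportionment (a list of seat counts summing to h) and
    `distance(apportionment)` is the power-to-quota distance to minimize.
    """
    current = list(initial)
    cur_d = distance(current)
    since_last_move = 0
    while since_last_move < iters_per_temp * stopping_factor:
        for _ in range(iters_per_temp):
            i, j = random.sample(range(len(current)), 2)
            if current[i] == 0:          # our extra guard: no negative seat counts
                since_last_move += 1
                continue
            neighbor = list(current)
            neighbor[i] -= 1             # shift one seat from state i ...
            neighbor[j] += 1             # ... to state j
            delta = distance(neighbor) - cur_d
            if delta <= 0 or random.random() < math.exp(-delta / temp):
                current, cur_d = neighbor, cur_d + delta
                since_last_move = 0
            else:
                since_last_move += 1
        temp *= cooling                  # geometric cooling schedule
    return current
```

Note that this sketch only checks the stopping condition between temperature levels, whereas the description above allows it to be checked after every proposed swap.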
The results of the simulated annealing algorithm are, of course, sensitive to the
values of the initial temperature and of the cooling factor, as well as to the choice
of the initial apportionment. Our intention has been not to fine-tune these values
but rather to prove that the historical methods can be easily, soundly beaten by
a modern computer science-based heuristic approach, on inputs drawn from real
census data. As we want to emphasize the realization of this goal, we report in
Section 3 the results for only one (rather arbitrary) setting of these parameters.
We plan to pursue in the future a more thorough investigation of the impact of the
parameter settings for the apportionment problem.
We digress here to mention that this algorithm can potentially beat the greedy
algorithm, 6 in which at each step we choose a better neighbor of the current ap-
6 However, for the real census data that we have used, the greedy algorithm yielded results very
close to those given by the simulated annealing algorithm, in particular also beating the historical
algorithms. This is true even though our implementation of the greedy algorithm accepted the
first local improvement found rather than enumerating all neighbors and accepting one yielding a
steepest descent.
portionment until we are stuck at a local optimum. Indeed, we found a four-state
instance in which the greedy algorithm gets stuck at a local optimum that is not a
global optimum. In this instance, 100 representatives must be apportioned between
4 states having populations 823, 801, 105, 101. It can be seen that a = (50; 34; 14; 2)
are two local optima with a being better than b (we are referring
here to the Shapley-Shubik power index, which is defined below). However,
starting from b and going through at-least-as-good-as b neighbors there is no way
to reach a (i.e., (b; a) is not in the transitive closure of the binary relation "neighbor
at-least-as-good-as b") because at some moment we have to pass through an apportionment
of the form (44; ; ; ). But, it can be verified that all apportionments of
this form are worse than b and thus are not reachable from b with the greedy algo-
rithm. In a certain precise sense, this is the smallest such example. In particular,
it is simple to prove that there are no examples such as the one above with 2 or 3
states, that is, in all 2 or 3 state cases, all locally optimal apportionments are also
globally optimal.
Prasad and Kelly [1990] have shown that the Banzhaf power index is #P-complete
and Garey and Johnson [1979] have shown that the Shapley-Shubik power index
is #P-complete, where the complexity class #P as usual denotes the counting version
of NP [Valiant 1979a; Valiant 1979b]-#P is the class of all functions f such
that, for some nondeterministic polynomial-time Turing machine N , for each x it
holds that f(x) is the number of accepting computation paths N has on input x.
Much of the past 20 years of research in theoretical computer science has been
devoted to proving that NP-complete problems (and thus #P-complete functions)
cannot be feasibly computed unless a wide range of implausible consequences occur
(see the survey [Sipser 1992]). For example, Toda [1991] (see also [Beigel et al.
1991; Toda and Ogiwara 1992; Gupta 1995; Regan and Royer 1995]) has shown
that Turing access to #P subsumes the entire polynomial hierarchy. However, a
dynamic programming approach will allow us to perform an exact computation of
both the Banzhaf power index and the Shapley-Shubik power index even for the
relatively large inputs of the censuses involving 50 states. The dynamic programming
approach for this problem was proposed by Mann and Shapley [1962]. We
use the notation from the previous section. The algorithm involves, for each state
S_i, constructing a matrix ^iC of order (h + 1) × n, where h is the
House size, and n is the
number of states.
Also, let Q be the minimum number of votes needed for a winning coalition (i.e.,
Q = ⌊h/2⌋ + 1). The entries in this matrix have the following meaning: ^iC_{j,k}
represents the number of coalitions not containing state i and having k states whose
votes sum to j. Let ^iC^ℓ_{j,k} be the number of coalitions containing k states from
{S_0, S_1, ..., S_{ℓ-1}} − {S_i}
whose votes sum up to j. Observe that, for ℓ = 0, ^iC^0_{0,0} = 1 and ^iC^0_{j,k} = 0 otherwise.
Also, the following recurrence holds:

    ^iC^ℓ_{j,k} = ^iC^{ℓ-1}_{j,k} + ^iC^{ℓ-1}_{j - v_{ℓ-1}, k-1}

(note that in the above recurrence the
step corresponding to ℓ − 1 = i is skipped, so that state S_i itself is never inserted), and
^iC_{j,k} = ^iC^n_{j,k}. To find the Banzhaf power
index of state S_i, we have to compute the number of coalitions for which S_i is
critical (and then divide it by the total number of coalitions). This value is given
by

    (2.1)    sum_{k=0}^{n-1} sum_{j=Q-v_i}^{Q-1} ^iC_{j,k}.

These are all coalitions not containing S_i that are losing but that, with the addition of
S_i, become winning.
To find the Shapley-Shubik power index of state S_i, we have to compute the
number of linear orderings for which S_i is a pivot (and then divide it by the total
number of linear orderings). This value is given by

    (2.2)    sum_{k=0}^{n-1} k! (n - 1 - k)! sum_{j=Q-v_i}^{Q-1} ^iC_{j,k}.
Thus, the computation of both power indices reduces to the computation of the
entries of the matrix ^iC, which can be determined by calculating iteratively the
matrices ^iC^1, ^iC^2, ..., ^iC^n. Some shortcuts can be obtained using the following
observations. Note that in Equations (2.1) and (2.2), we need only the entries with
row index j ≤ Q − 1. By looking at complementary coalitions, one can see that
^iC_{j,k} = ^iC_{h - v_i - j, n - 1 - k}. Thus, the entries in the matrix with column index k > (n − 1)/2
can be computed, using the above formula, from the entries with k ≤ (n − 1)/2, and
when this is done the largest row index j that we need is still Q − 1. Consequently, we
have to compute only a quarter of the entries in matrix ^iC, namely those entries with
row index verifying j ≤ Q − 1 and column index verifying k ≤ (n − 1)/2.
After we have computed the matrix ^iC for state S_i, we don't have to start from
scratch when passing to state S_{i+1} to compute ^{i+1}C. Indeed, it is not difficult
to see that ^{i+1}C_{j,k} = ^iC_{j,k} + ^iC_{j - v_i, k-1} − ^{i+1}C_{j - v_{i+1}, k-1}, where by convention the
entries with negative row or column index are 0.
This algorithm is polynomial in n and Q (but, of course, it is not polynomial
in the length of a natural encoding of the input instance, because the reasonable
encodings of Q use only O(log Q)
bits). Nevertheless, since for all
the censuses so far h ≤ 435, we have been able to compute the exact values of the
Banzhaf and Shapley-Shubik power indices for the census years 1790, 1800, 1810, ..., 1980, and
1990.
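The dynamic programming just described can be condensed into a short routine. The sketch below (Python) is our illustration: it uses a single knapsack-style table per state instead of the ^iC^ℓ matrices, and it omits both the complementary-coalition shortcut and the incremental update from ^iC to ^{i+1}C, trading some of the savings described above for brevity.

```python
from math import factorial

def power_indices_dp(votes):
    """Mann-Shapley style dynamic programming for both power indices.

    C[j][k] counts the coalitions that do not contain the current state i,
    have exactly k members, and whose votes sum to j (only j < Q is needed).
    """
    n, total = len(votes), sum(votes)
    Q = total // 2 + 1
    banzhaf, shapley = [], []
    for i in range(n):
        C = [[0] * n for _ in range(Q)]
        C[0][0] = 1
        for l, v in enumerate(votes):
            if l == i:
                continue                         # never insert state i itself
            for j in range(Q - 1, v - 1, -1):    # downward: 0/1-knapsack update
                for k in range(n - 1, 0, -1):
                    C[j][k] += C[j - v][k - 1]
        # State i is critical exactly for the coalitions in rows Q - v_i .. Q - 1.
        crit_by_size = [sum(C[j][k] for j in range(max(Q - votes[i], 0), Q))
                        for k in range(n)]
        banzhaf.append(sum(crit_by_size) / 2 ** n)
        shapley.append(sum(factorial(k) * factorial(n - 1 - k) * crit_by_size[k]
                           for k in range(n)) / factorial(n))
    return banzhaf, shapley

# Agrees with the brute-force sketch on the footnote-2 example.
print(power_indices_dp([6, 2, 2]))
```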
State Pop. Quota SA-B SA-SS Adams Dean H-Hill Webst Jeffer Hamilt
OH 10847115 19.0206 19 19
MI 9295297 16.2995
IN
MO 5117073 8.9729 9 9 9 9 9 9 9 9
WI 4891769 8.5778 9 9 9 9 9 9 9 9
CO
IA
MS
Totals 248072974 435.0000 435 435 435 435 435 435 435 435
Table 1. States, Populations, Quotas, and Apportionments: L2-of-Proportions Evaluation
State Quota SA-B Adams Dean H-Hill Webst Jeffer Hamilt
IL 20.0437 19.8279 18.7446 19.7091 19.7097 19.7102 20.6716 19.7102
MI 16.2995 15.7744 15.7255 15.6914 15.6919 15.6924 16.6546 15.6924
GA 11.3597 11.7810 10.7599 10.7400 10.7404 10.7408 10.7208 10.7408
MA 10.5499 10.7900 9.7745 9.7569 10.7404 10.7408 10.7208 10.7408
IN 9.7218 9.8015 9.7745 9.7569 9.7573 9.7577 9.7399 9.7577
MO 8.9729 8.8152 8.7913 8.7758 8.7761 8.7765 8.7609 8.7765
WI 8.5778 8.8152 8.7913 8.7758 8.7761 8.7765 8.7609 8.7765
TN 8.5522 8.8152 8.7913 8.7758 8.7761 8.7765 7.7834 8.7765
MD 8.3844 8.8152 7.8098 7.7964 7.7967 7.7970 7.7834 7.7971
MN 7.6718 7.8308 7.8098 7.7964 7.7967 7.7970 7.7834 7.7971
LA 7.3998 7.8308 7.8098 6.8185 6.8188 6.8191 6.8074 6.8191
AL 7.0852 6.8482 6.8301 6.8185 6.8188 6.8191 6.8074 6.8191
KY 6.4622 6.8482 6.8301 5.8420 5.8422 5.8425 5.8326 5.8425
AZ 6.4270 6.8482 6.8301 5.8420 5.8422 5.8425 5.8326 5.8425
SC 6.1140 5.8671 5.8517 5.8420 5.8422 5.8425 5.8326 5.8425
CO 5.7768 5.8671 5.8517 5.8420 5.8422 5.8425 5.8326 5.8425
CT 5.7640 5.8671 5.8517 5.8420 5.8422 5.8425 5.8326 5.8425
IA 4.8691 4.8873 4.8746 4.8666 4.8668 4.8670 4.8589 4.8670
MS 4.5122 4.8873 4.8746 4.8666 4.8668 3.8925 3.8861 3.8925
KS 4.2919 3.9085 4.8746 3.8922 3.8923 3.8925 3.8861 3.8925
AR 4.1220 3.9085 3.8984 3.8922 3.8923 3.8925 3.8861 3.8925
WV 3.1449 2.9307 2.9231 2.9185 2.9186 2.9187 2.9139 2.9187
UT 3.0210 2.9307 2.9231 2.9185 2.9186 2.9187 2.9139 2.9187
NE 2.7677 2.9307 2.9231 2.9185 2.9186 2.9187 1.9423 2.9187
NM 2.6567 2.9307 2.9231 2.9185 2.9186 2.9187 1.9423 2.9187
HI 1.9433 1.9534 1.9484 1.9453 1.9454 1.9455 1.9423 1.9455
ID 1.7654 1.9534 1.9484 1.9453 1.9454 1.9455 0.9711 1.9455
RI 1.7596 1.9534 1.9484 1.9453 1.9454 1.9455 0.9711 1.9455
MT 1.4012 0.9766 1.9484 1.9453 0.9726 0.9727 0.9711 0.9727
ND 1.1201 0.9766 1.9484 0.9726 0.9726 0.9727 0.9711 0.9727
WY 0.7954 0.9766 0.9741 0.9726 0.9726 0.9727 0.0000 0.9727
Totals 435.0000 435.0000 435.0000 435.0000 435.0000 435.0000 435.0000 435.0000
Table 2. States, Quotas, and Normalized Powers: Banzhaf Power Index and L2-of-Proportions
State Quota SA-SS Adams Dean H-Hill Webst Jeffer Hamilt
IL 20.0437 20.0487 18.9942 19.9876 19.9858 19.9834 20.9756 19.9779
OH 19.0206 19.0005 17.9516
MI 16.2995 15.8872 15.8820 15.8399 15.8385 15.8366 16.8206 15.8324
GA 11.3597 11.8058 10.7941 10.7662 10.7653 10.7640 10.7362 10.7612
MA 10.5499 10.7974 9.7907 9.7656 10.7653 10.7640 10.7362 10.7612
IN 9.7218 9.7938 9.7907 9.7656 9.7647 9.7636 9.7384 9.7611
MO 8.9729 8.7947 8.7920 8.7695 8.7687 8.7677 8.7452 8.7654
WI 8.5778 8.7947 8.7920 8.7695 8.7687 8.7677 8.7452 8.7654
TN 8.5522 8.7947 8.7920 8.7695 8.7687 8.7677 7.7565 8.7654
MD 8.3844 8.7947 7.7977 7.7779 7.7772 7.7763 7.7565 7.7743
MN 7.6718 7.8001 7.7977 7.7779 7.7772 7.7763 7.7565 7.7743
LA 7.3998 7.8001 7.7977 6.7907 6.7902 6.7894 6.7721 6.7876
AL 7.0852 6.8100 6.8080 6.7907 6.7902 6.7894 6.7721 6.7876
KY 6.4622 6.8100 6.8080 5.8079 5.8075 5.8068 5.7921 5.8053
AZ 6.4270 6.8100 6.8080 5.8079 5.8075 5.8068 5.7921 5.8053
SC 6.1140 5.8244 5.8226 5.8079 5.8075 5.8068 5.7921 5.8053
CO 5.7768 5.8244 5.8226 5.8079 5.8075 5.8068 5.7921 5.8053
CT 5.7640 5.8244 5.8226 5.8079 5.8075 5.8068 5.7921 5.8053
OK 5.5158 5.8244 5.8226 5.8079 5.8075 4.8285 4.8164 4.8273
IA 4.8691 4.8430 4.8416 4.8295 4.8291 4.8285 4.8164 4.8273
MS 4.5122 4.8430 4.8416 4.8295 4.8291 3.8545 3.8448 3.8535
KS 4.2919 3.8660 4.8416 3.8553 3.8549 3.8545 3.8448 3.8535
AR 4.1220 3.8660 3.8649 3.8553 3.8549 3.8545 3.8448 3.8535
WV 3.1449 2.8933 2.8924 2.8853 2.8850 2.8847 2.8775 2.8840
UT 3.0210 2.8933 2.8924 2.8853 2.8850 2.8847 2.8775 2.8840
NE 2.7677 2.8933 2.8924 2.8853 2.8850 2.8847 1.9143 2.8840
NM 2.6567 2.8933 2.8924 2.8853 2.8850 2.8847 1.9143 2.8840
NV 2.1074 1.9247 1.9242 1.9194 1.9193 1.9190 1.9143 1.9186
HI 1.9433 1.9247 1.9242 1.9194 1.9193 1.9190 1.9143 1.9186
ID 1.7654 1.9247 1.9242 1.9194 1.9193 1.9190 0.9551 1.9186
RI 1.7596 1.9247 1.9242 1.9194 1.9193 1.9190 0.9551 1.9186
ND 1.1201 0.9603 1.9242 0.9577 0.9576 0.9575 0.9551 0.9573
WY 0.7954 0.9603 0.9600 0.9577 0.9576 0.9575 0.0000 0.9573
Totals 435.0000 435.0000 435.0000 435.0000 435.0000 435.0000 435.0000 435.0000
Table 3. States, Quotas, and Normalized Powers: Shapley-Shubik Power Index and L2-of-Proportions
Algorithm Quota to Rep-Normed Power Distance
under the L2-of-Proportions Metric
H-Hill 0.375372
Webst 0.387777
Hamilt 0.394220
Dean 0.438127
Adams 1.821126
Jeffer 1.906251
Table 4. Errors Between Quotas and Rep-Normed Power: Banzhaf Power Index and L2-of-Proportions Evaluation Metric.
Algorithm Quota to Rep-Normed Power Distance
under the L2-of-Proportions Metric
H-Hill 0.382663
Webst 0.401829
Hamilt 0.411409
Dean 0.424247
Adams 1.687721
Jeffer 1.974655
Table 5. Errors Between Quotas and Rep-Normed Power: Shapley-Shubik Power Index and L2-of-Proportions Evaluation Metric.
3. RESULTS
We present here the results of our program for the 1990 census. Table 1 shows
the populations and quotas for the 1990 census, as well as the apportionments
that result from the six historical algorithms and the simulated annealing algo-
rithm, under both the Banzhaf (with column label SA-B) and the Shapley-Shubik
(with column label SA-SS) power indices. Of course, the columns not reporting
on the results of the simulated annealing algorithm do not need to be labeled with
Banzhaf or Shapley-Shubik, as only in the simulated annealing columns is the apportionment
dependent on the power index being used. Tables 2 and 3 show the
normalized power indices. Tables 4 and 5 show the distance between powers and
quotas according to the L2-on-proportions metric. The tables represent runs with
the following parameters used in the simulated annealing algorithm: the cooling
factor was 0.9, 7 the initial temperature was 100, the number of iterations per-
7 Ideally, simulated annealing should be done with a cooling factor as close to 1 as possible (as,
conceptually, gentle cooling is more likely to achieve a good result). Due to the limited computing
resources we had available-SUN SPARCstation 10's serving as shared cycle servers-we had to
use a cooling factor that is more severe than we would have ideally chosen. Note that this computational
limitation if anything degrades the quality of our simulated annealing apportionment.
Yet, even with this handicap, our simulated annealing algorithm outperformed all the historical
algorithms. Also, we note that if the experimental algorithms paradigm is to be successful, it is
important that it be useful not just to the few people with access to supercomputers, but also to
people using more modest computing facilities. We hope that this study, in which a modest workstation
performed the computations showing that simulated annealing outperforms the currently
used algorithm, is an example of this.
formed at each temperature was 1000, and the stopping factor was 5. The stopping
factor is used to control termination as follows. Recall that at each temperature
the algorithm generates and considers CoolingIter potential swaps, and then it decreases
the current temperature via Temperature := Temperature * CoolingFactor.
However, the algorithm also keeps track of how many potential swaps have been
considered since the last swap that was performed (recall that a swap is performed
either because it improves the quality of the current state, or because it degrades
the quality of the current state but was randomly chosen via the temperature-
based probabilistic condition discussed in Section 2.1). If this number has reached
CoolingIter * StoppingFactor (informally, at about the last StoppingFactor temperatures
no move has been made-except this analysis can wrap around the border
between temperatures), then the algorithm stops and the current seat allocation is
its output.
4. CONCLUSIONS
For all census years and for both the Banzhaf and Shapley-Shubik power index, the
simulated annealing algorithm provides a power balance more in harmony with quotas
than do any of the historical algorithms. Of course, this is not overly shocking,
as those algorithms were tailored to achieve a certain harmony between quotas and
votes apportioned, rather than between quotas and the normalized power vectors
induced by the votes apportioned. Generally, the simulated annealing algorithm
achieves its strong performance by shifting votes away from the larger state(s),
as even the large-state-hostile Adams algorithm is not quite so large-state-hostile
as the simulated annealing algorithm. In some sense, the fact (which has been
noted elsewhere and which is very apparent in the "State, Quotas, and Normalized
Powers" tables of Section 3) that power indices often disproportionally skew
power towards very large states-the so-called "big state bias"-is something the
simulated annealing algorithm is able to attempt to directly remedy. Thus, if one's
notion of fairness is to have a close match between the normalized Banzhaf or
Shapley-Shubik power indices of the states and the vector of quotas, the simulated
annealing algorithm is a strong contender, and seems to provide a better match
than any of the historical algorithms.
Our results also suggest that, if one limits one's universe to the six historical
algorithms, the United States has chosen the correct one, at least in terms of performance
with respect to historical census data: The Huntington-Hill algorithm
seems the most power-fair of the historical algorithms, albeit far less fair than simulated
annealing. Indeed, in the 1990 tables one can see that simulated annealing
and the Huntington-Hill algorithm treated small states (states receiving no more
than five seats) identically. The only difference in their apportionments is that,
relative to the Huntington-Hill algorithm, simulated annealing shifts voting weight
away from large states and towards middle-sized states.
The political implications of our results are somewhat surprising. The fact that
Adams's algorithm gives large states (e.g., California) far fewer seats than their
quotas would seem to entitle them to might suggest that large states are cheated
under Adams's algorithm, and indeed both some scholarly analyses [Balinski and
Young 1982] and the Supreme Court decision mentioned earlier [Supreme 1992]
have been primarily concerned with the correspondence between seats and quotas.
However, our study shows clearly that the bias power indices show towards large
states is so pronounced that it overwhelms the "vote skewing towards small states"
of the Adams algorithm. Of course, as the tables show, a large-state-skewed algorithm
such as Jefferson's even more strikingly grants disproportionately much
power to large states. Overall, in terms of fairness, our results suggest that the
current apportionment algorithm used in the United States (the Huntington-Hill
algorithm) gives disproportionately much power to large states at the cost of giving
disproportionately little power to some middle-sized states, while treating small
states fairly.
As a final note, nothing above is meant to suggest that the simulated annealing
algorithm might be politically feasible. It is quite possible that, in a probabilistic
implementation, the vote allocation chosen might depend on the seed given to the
random number generator. Worse still it is possible, at least in artificial exam-
ples, that different vote allocations would achieve the same degree of quota/power
harmony. Thus, the algorithm itself would suggest no preference between the allo-
cations, but legislators might well have strong (and differing) preferences.
Acknowledgments
: The first author was a Bridging Fellow at the University
of Rochester's Department of Political Science when this work was started. He
thanks that department's faculty for encouraging his interest in political science
and voting systems, and for helpful conversations, in particular including suggesting
the discussion of the final paragraph of the conclusions section. He also thanks the
University of Rochester for the fellowship, and gratefully acknowledges a lifelong
debt to Michel Balinski for, many years ago, introducing him both to the study
of apportionment and to research. The authors are grateful to William Lucas of
the Claremont Graduate School and Peter van Emde Boas of the University of
Amsterdam for proofreading an earlier version and for helpful comments. We also
thank Joe Malkevitch for helpful comments. We are grateful to ACM Journal on
Experimental Algorithmics editor Bernard Moret and two anonymous referees for
valuable comments and suggestions. The authors alone, of course, are responsible
for any errors.
--R
Simulated annealing and Boltzmann machines: A stochastic approach to combinatorial optimization and neural computing.
Fair Representation: Meeting the Ideal of One Man
Fair representation: Meeting the ideal of one man
Probabilistic polynomial time is closed under parity reductions.
Plan de constitution
Mathematical properties of the Banzhaf power index.
Mathematics of Operations Research 4
Computers and Intractability: A Guide to the Theory of NP-Completeness
Closure properties and witness reduction.
Values of large games
Values of large games
Equations of state calculations by fast computing machines.
On closure properties of bounded two-sided error complexity classes
An Introduction to Positive Political Theory.
Measurement of power in political systems.
A method of evaluating the distribution of power in a committee system.
The history and status of the P versus NP question.
Probability models for power indices.
PP is as hard as the polynomial-time hierarchy
Counting classes are at least as hard as the polynomial-time hierarchy
The complexity of computing the permanent.
The complexity of enumeration and reliability problems.
--TR | simulated annealing;power indices;apportionment algorithms |
297115 | Coordination of heterogeneous distributed cooperative constraint solving. | In this paper we argue for an alternative way of designing cooperative constraint solver systems using a control-oriented coordination language. The idea is to take advantage of the coordination features of MANIFOLD for improving the constraint solver collaboration language of BALI. We demonstrate the validity of our ideas by presenting the advantages of such a realization and its (practical as well as conceptual) improvements of constraint solving. We are convinced that cooperative constraint solving is intrinsically linked to coordination, and that coordination languages, and MANIFOLD in particular, open new horizons for systems like BALI. | INTRODUCTION
The need for constraint solver collaboration is widely rec-
ognized. The general approach consists of making several
solvers cooperate in order to process constraints that could
not be solved (at least not efficiently) by a single solver.
BALI [21, 23, 22] is a realization of such a system, in terms of
a language for constraint solver collaboration and a language
for constraint programming. Solver collaboration is a glass-box
mechanism which enables one to link black-box tools,
i.e., the solvers. BALI allows one to build solver collaborations
(solver cooperation [25] and solver combination [17])
by composing component solvers using collaboration primitives
(implementing, e.g., sequential, concurrent, and parallel
collaboration schemes) and control primitives (such as
iterators, fixed-points, and conditionals).
On the other hand, the concept of coordinating a number
of activities, such that they can run concurrently in a
parallel and distributed fashion, has recently received wide
attention [4, 5]. The IWIM model [1, 2] (Ideal Worker Ideal
Manager) is based on a complete symmetry between and
decoupling of producers and consumers, as well as a clear
distinction between the computational and the coordina-
tion/communication work performed by each process. A
direct realization of IWIM in terms of a concrete coordination
language, namely MANIFOLD [3], already exists.
Due to lack of explicit coordination concepts and con-
structs, the implementation of BALI does not fully realize its
formal model: the treatment of disjunctions and the search
are jeopardized and this is not completely satisfactory from
a constraint solving point of view. This is mainly due to two
causes: (1) the dynamic aspect of the formal model of BALI,
and (2) the use of heterogeneous solvers, i.e. , solvers written
in different programming languages, with different data rep-
resentations. Only a coordination language able to deal with
dynamic processes and channels (creation, duplication, dis-
/re-/connection), and able to handle external heterogeneous
solvers (routines for automatic data conversions) can fulfil
the requirements of the formal model of BALI and overcome
the problem of its current implementation. This guided us
through the different coordination models and led us to the
IWIM model, and the MANIFOLD language.
Coordination and cooperative constraint solving are intrinsically
linked. This motivated our investigation of a new
organizational model for BALI based on MANIFOLD. The
results show a wider-than-expected range of implications.
Not only can the system be improved in terms of robust-
ness, stability, and required resources, but the constraint
solving activity itself is also improved through the resulting
clarity of search, efficient handling of the disjunctions, and
modularity. The system can be implemented closer to its
formal model and can be split up into three parts: (1) a constraint
programming activity, (2) a solver collaboration lan-
guage, and (3) a coordination/communication component.
We qualified (and roughly quantified) the improvements co-ordination
languages, and more specifically MANIFOLD,
can bring to cooperative constraint solving. The conclusions
are promising and we feel confident to undertake a
future implementation of BALI using MANIFOLD.
The rest of this paper is organized as follows. The next
section is a brief overview of BALI, its organizational model,
and the weaknesses of its implementation. In Section 3,
after an overview of MANIFOLD, we describe the coor-
dination/communication of BALI using the features of the
MANIFOLD system. We then highlight the improvements
that we feel are most significant for constraint solving (Sec-
tion 4). Finally, we conclude in Section 5 and discuss some
future work.
2 BALI
BALI [21] is an environment for solver collaboration (i.e.,
solver cooperation [25, 14] and solver combination [26, 29])
that separates constraint programming (the host language)
from constraint solving (the solver collaboration language).
The host language is a constraint programming language [34]
or possibly a constraint logic programming language [16, 11]
which, when necessary, expresses the required solver collaboration
through the solver collaboration language. The solver
collaboration language supports three strategies called solving
strategies. The first strategy consists of determining the
satisfiability of the constraint store each time a new constraint
occurs ("incremental use of a solver"). The second
strategy is an alternative to this method that solves the constraint
store when a final state is reached (e.g., the end of
resolution for logic programming). The last strategy allows
the user to trigger the solvers on demand, for example, to
test the satisfiability of the store after several constraints
have been settled. Furthermore, BALI allows several solver
collaborations, in conjunction with different solving strate-
gies, to coexist in a single system. For example, solver S1
can be used incrementally while S2 only executes at the end,
and S3 and S4 are always triggered by the user.
Since the constraint programming part of BALI is less
interesting from the point of view of coordination 1 , this paper
focuses on its constraint solving techniques, i.e., the
constraint solver collaboration language of BALI. This domain
independent language has been designed for realizing
a solving mechanism in terms of solver collaborations following
certain solving strategies. The basic objects handled
by the language are heterogeneous solvers. They are
used inside collaboration primitives that integrate several
paradigms (such as sequentiality, parallelism, and concur-
rency) commonly used in solver combination or cooperation.
In order to write finer strategies, we have also introduced
some control primitives (such as iterator, fixed-point, and
conditional) in the collaboration language.
At the implementation level, BALI is a distributed co-operative
constraint programming system, composed of a
language for solver collaboration (whose implementation allows
one to realize servers to which potential clients can
connect) plus a host language (whose implementation is a
special client of the server). Solver collaboration is a glass-box
mechanism which enables one to link black-box tools,
i.e., the solvers.
Some applications have already used BALI [23]. For exam-
ple, a simulation of CoSAc [25] has been realized, and some
other solver collaborations have been designed for non-linear
constraints.
2.1 The Constraint Solver Collaboration Language Of BALI
A detailed description of the solver collaboration language of
BALI can be found in [23, 21]. In this section, we give a brief
overview of some of the collaboration primitives of BALI. The
complete syntax of the solver collaboration language of BALI
is given in Figure 1.
Sequentiality (denoted by seq) means that the solver E2
will execute on the constraint store C′, which is the result
1 The constraint programming part of BALI is described in [21]
and [22].
of the application of the solver E1 on the constraint store
C.
When several solvers are working in parallel (denoted by
split), the constraint store C is sent to each and every
one of them. Then, the results of all solvers are gathered
together in order to constitute a new constraint store analogous
to C.
Concurrency (denoted by dc) is interesting when several
solvers based on different methods can be applied to non-disjoint
parts of the constraint store. The result of such a
collaboration is the result of a single solver S composed with
the constraints that S did not manipulate. The result of S
must also satisfy a given property ψ which is a concurrency
function (the set \Psi in Figure 1). For example, basic is a standard
function in \Psi that returns the result of the first solver
that finishes executing. Some more complex ψ functions can
be considered, such as solved form which selects the result
of the first solver whose solution is in a specific solved form
on the computation domain. The results of the other solvers
(which may even be stopped as soon as S is chosen) are not
taken into account. The concurrency primitive is similar
to a "don't care'' commitment but also provides control for
choosing the new store (using ψ functions).
Figure 1: Syntax of the solver collaboration language of BALI
These primitives (which comprise the computation part
of the collaboration language) can be connected with combinators
(which compose the control part, using primitives
such as iterators, conditionals, and fixed-points) in order to
design more complex solver collaborations.
The fixed-point combinator (denoted by f p) repeatedly
applies a solver collaboration until no more information can
be extracted from the constraint store. This combinator
allows one to create an idempotent solver/collaboration from
a non-idempotent solver/collaboration.
The above primitives and combinators are completely
statically defined. We now introduce observation functions
of the constraint store which allow one to get more dynamic
primitives. These functions are evaluated at run-time
(when entering a primitive) using the current constraint
store. These functions may be either arithmetic (the set OA
in
Figure
or Boolean (the set OB in Figure 1). Arithmetic
observation functions have the profile: Stores ! N .
Three such functions are: (1) card var computes the number
of distinct variables in the constraint store. This is interesting
for solvers that are sensitive to the number of vari-
ables. (2) card c returns the number of atomic constraints
that comprise the store. This is important for solvers whose
complexity is a function of the number of constraints (such
as solvers based on propagation). (3) card uni var returns
the number of univariate atomic constraints. This is essential
for solvers whose efficiency is improved with univariate
constraints (such as interval propagation solvers).
Boolean observation functions have the profile: Stores !
Boolean. Three such functions are: (1) linear tests whether
there exists any variable that occurs more than once in an
atomic constraint. This is of interest in deciding the applicability
of a linear solver. (2) uni var tests whether there is at
least one univariate equality in the store. This information is
important since, for example, univariate constraints are generally
the starting point of interval propagation. (3) tri tests
whether the store is in triangular form (i.e., there are some
equality constraints over a variable X, some over variables
X and Y , some over X; Y and Z, . This is interesting
for eliminating variables, or determining an ordering for the
Grobner bases computation.
The repeat combinator (denoted by rep) is similar to the
fixed-point combinator, but allows applying a solver n times:
n is the result of the application of an observation function
(or a composition of observation functions) to the constraint
store. Since this primitive takes into account the constraint
and its form at run-time, it improves the dynamic aspect of
the collaboration language.
Finally, the conditional combinator (denoted by if) applies
one solver/collaboration or another, depending on the
evaluation of a condition (which can also depend on observation
functions of the constraint store).
The following example illustrates the solver collaboration
language:
seq(A,dc(basic,B,C,D),split(E,F),f p(G))
Consider applying this collaboration scheme to the constraint
store c 2 . First A is applied to c and returns c1 . Then, B, C,
and D are applied to c1 . The first one that finishes gives the
new constraint store c2 . Then E, and F execute on c2 . The
solution c3 is a composition of c′3 (the solution of E) and
c″3 (the solution of F). Finally, G is repeatedly applied to c3
until a fix-point, c4 , is reached, which is the final solution of
the collaboration.
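The denotational reading of these primitives can be mimicked with a few higher-order functions. In the following Python sketch (ours, and deliberately simplified), a constraint store is a frozenset of atomic constraints and a solver is a function from a store to a list of stores (one per disjunct); the dc sketch only simulates concurrency by trying the solvers in order, and the recomposition with non-admissible constraints is omitted.

```python
from itertools import product

def seq(*solvers):
    """Apply the solvers one after the other, threading every disjunct."""
    def run(store):
        stores = [store]
        for s in solvers:
            stores = [out for st in stores for out in s(st)]
        return stores
    return run

def split(*solvers):
    """Run all solvers on the same store and recombine their disjuncts."""
    def run(store):
        results = [s(store) for s in solvers]
        return [frozenset().union(*combo) for combo in product(*results)]
    return run

def dc(psi, *solvers):
    """Don't-care: keep the first result satisfying psi (concurrency is only
    simulated here by trying the solvers in order)."""
    def run(store):
        for s in solvers:
            out = s(store)
            if psi(out):
                return out
        return [store]
    return run

def fp(solver):
    """Fixed point: apply the solver to every disjunct until nothing changes.
    (No cycle detection: the solver is assumed to make progress.)"""
    def run(store):
        done, todo = [], [store]
        while todo:
            st = todo.pop()
            out = solver(st)
            if out == [st]:
                done.append(st)
            else:
                todo.extend(out)
        return done
    return run

# Tiny demo: a toy solver that rewrites "x=2+2" into "x=4".
simplify = lambda st: [frozenset("x=4" if c == "x=2+2" else c for c in st)]
print(seq(fp(simplify))(frozenset({"x=2+2", "y>0"})))
```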
2.2 Organizational Model And Implementation
The role of the organizational model we have implemented
is: 1) to create a distributed environment for integrating heterogeneous
solvers 3 , 2) to establish communication between
solvers in spite of their differences, 3) to coordinate their ex-
ecutions. Such an organizational model turns solver collaborations
into servers to which clients (such as the implementation
of the host language or all kinds of processes requiring
a solver) can connect. This model enabled us to implement
BALI and create/execute solver collaborations [21].
2 In order to simplify the explanation, we consider here solvers that
return only one solution (one disjunct). We detail the treatment of
disjunctions in the next sections.
3 Each solver (software, library of tools, client/server architecture)
has its own data representation, is written in a different programming
language, and executes on a different architecture and operating
system.
2.2.1 Agent
The realizations of solvers and solver collaborations are het-
erogeneous. However, by an encapsulation mechanism we
homogenize the system, and obtain what we call agents.
Each agent is autonomous and is created, works, and terminates
independently from the others. Hence, agents can
execute in parallel or concurrently in a distributed architecture
Solvers are encapsulated to create simple agents. As
shown in [21], a solver collaboration is a solver. Applying
this concept to the architecture, encapsulation becomes a
hierarchical operation. Hence, several simple agents can be
encapsulated in order to build a complex agent. However,
viewed from the outside of a capsule, simple and complex
agents are identical.
Figure 2: Simple agent
In the current implementation of BALI, solvers are encapsulated
into ECLiPSe 4 processes (see Figure 2). Hence,
ECLiPSe launches the solvers and re-connects their input
and output through pipes. The data structure converters
are written in Prolog and the data exchanges between capsules
and solvers are performed via strings. The encapsulation
also provides a constraint store for the solver it represents
(a local database for storing the information), an
admissibility function (which is able to recognize which constraints
of the store can be handled by the solver), and a recomposition
function (which recreates an equivalent store
using the constraints treated by the solvers, and the constraints
not admissible by the solver). The interface of an
agent is an ECLiPSe process. Moreover, Prolog terms can
be transmitted between two ECLiPSe processes. Inter-agent
communication is thus realized with high level terms, and
not strings or bits. Furthermore, there is no need for syntactic
analyzers between pairs of agents.
A complex agent (encapsulation of a solver collabora-
behaves like a simple agent, though its internal environment
is a bit different (see Figure 3). It has a constraint
store for keeping the information it receives: this is
its knowledge base. For managing this base, it has a recomposition
function which re-builds the constraint store
when some agents send some of their solutions. The major
work of a complex agent is the coordination (as determined
by the collaboration primitive it represents) of the agents it
encapsulates.
4 ECLiPSe [20] is the "Common Logic Programming System" developed
at ECRC.
Figure 3: Complex agent
2.2.2 Coordination
We now describe the coordination of the implementation of
BALI (see [21] for more details), but not the coordination of
its formal model. An agent can be in one of three different
states: running (R), sleeping (S), or waiting (W). When an
agent receive a constraint c, it becomes running to solve c.
An agent is in the W state when it is waiting for the answer
from one or more agents. An agent is in the S state when it
is neither running nor waiting. These states together with
the communication among agents, enable us to describe the
coordination of the constraint solvers.
Sequential primitive: seq(S1 ,S2 ,. ,Sn) tries to solve a
constraint by sequentially applying several solvers. It first
sends a constraint to S1 and waits for a solution c1 for it.
When it receives a solution from S1 , it sends it to S2 , waits
for a solution c2 , sends it to S3 , and so on, until it reaches Sn .
Finally, the solution cn from Sn is forwarded to the superior
agent as one of the solutions of the sequential primitive.
Since we consider solvers that enumerate their solutions (i.e.,
each solution represents a disjunct of the complete solution),
the sequential agent must wait for the other disjuncts of Sn
which will be treated the same way as cn . Backtracking is
then performed on Sn\Gamma1 , Sn\Gamma2 and back to S1 . In a sequential
collaboration, several agents are "pipelined" and work in
"parallel", but the solutions are passed "sequentially" from
one agent to the next.
primitive: split(S1 ,S2 ,. ,Sn ) applies several solvers
in parallel on the same constraints. The solution of split is
a Cartesian-product-like re-composition of all the solutions
of S1 ,S2 ,. ,Sn . When a split agent receives a solve request
from its superior, it forwards it to all its S i 's. Then, it waits
and stores all the solutions of each S i . Finally, the split
agent creates all the elements of the Cartesian-product of
the solutions, and sends them one by one to its superior
agent.
don't care primitive: dc(ψ1, S1, S2, ..., Sn) introduces
concurrency among solvers. Upon receiving a constraint c
from its superior, the don't care agent forwards c to all its
sub-agents, S i 's. Then it waits for a solution c 0 from any of
its sub-agents. If c 0 does not satisfy /1 5 then c 0 is forgotten
and the don't care agent waits for a solution from another
sub-agent (other than the one that produced c 0 ). As soon
as the don't care agent receives a solution c 0 from some S i
5 ψ1 is an element of the set Ψ of boolean functions. They test
whether or not a constraint satisfies some properties.
that satisfies /1 , all other sub-agents are stopped and c 0 , as
well as all other solutions produced by S i , are forwarded to
the superior agent.
fix-point primitive: f p(S) repeatedly applies S on a con-
straint, until no more information can be extracted from the
constraint. The solving process starts when the fix-point
agent receives a constraint c from its superior. It is an iterative
process and in each iteration k, we consider a set Ck of
disjuncts to be treated by S (e.g., in iteration 1, C1 consists
of a single element, c). In iteration k, the mk disjuncts of
Ck must be treated by S: the fix-point agent chooses one
element of Ck , ck;i , removes it from Ck , sends it to S and
collects all the solutions from S. If the 6 solution from S
is equal to ck;i (a fix-point has been reached for this dis-
junct), the fix-point agent forwards it to its superior agent.
Otherwise, the solutions produced by S are added to Ck+1 .
The same treatment is applied to all the elements of Ck to
complete the set Ck+1 and the solving process enters iteration
k + 1. The process terminates when at the end of
iteration k, the set Ck+1 is empty.
repeat primitive: The coordination for the repeat primitive
is identical to the fix-point collaboration,
except that it stops after a given number n of iterations.
The number n is computed at run-time: it is the result of
the application of the arithmetic function ffi to the current
constraint store. The arithmetic function ffi is composed of
observation functions of the constraint store (ele-
ments in OA, see Figure 1). The solving process starts when
the repeat agent receives a constraint c from its superior.
First, n is computed: the coordination is
analogous to the one of the fix-point primitive. The process
terminates at the end of iteration n, when every solution returned
by S for every disjunct in Cn is sent to the superior
agent.
conditional primitive: if(fl,S1,S2) is reather simple.
When it receives a constraint c from its superior agent, this
primitive applies the function fl to c. The Boolean function
is composed of both arithmetic and Boolean observation
functions of the constraint store. If fl(c) is true, then c is
forwarded to the sub-collaboration S1, otherwise to the sub-
collaboration S2. Then, this primtive becomes an intermediary
between one of the sub-agents and its superior agent,
i.e., as soon as the selected sub-agent sends a solution, it is
forwarded immediately to the superior agent. In fact, after
evaluation of fl(c) the conditional primitive acts similarly to
a sequential primitive having a single sub-agent.
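To illustrate what a genuinely concurrent don't care coordinator looks like outside of ECLiPSe or MANIFOLD, the following Python sketch runs the sub-solvers in a thread pool and commits to the first acceptable answer. It is only an analogy for the coordination pattern described above (names and structure are ours); note that already-running solvers cannot be forcibly stopped here, whereas the BALI model allows them to be killed.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def dc_concurrent(psi, solvers, store, timeout=None):
    """Don't-care coordination: run every solver on the same store and commit
    to the first result that satisfies psi; remaining futures are cancelled
    (best effort; threads that already started keep running until the pool
    shuts down)."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = {pool.submit(s, store): s for s in solvers}
        for fut in as_completed(futures, timeout=timeout):
            result = fut.result()
            if psi(result):
                for other in futures:
                    other.cancel()
                return result
    return [store]        # no solver produced an acceptable result
```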
2.3 Weaknesses Of The Implementation
Although ECLiPSe provides some functionality for managing
processes and communication, it is not a coordination
language. Thus, our implementation does not exactly realize
the formal model of BALI: some features are jeopardized,
or even missing, as described below.
Disjunctions of constraints The disjunctions of constraints
returned by a solver are treated one after the other,
and for some primitives, they are even stored and their treatment
is delayed. For the sequential primitive, this does not
drastically jeopardize the solving process. But for the fixpoint
primitive, this really endangers the resolution. We
must wait for all the disjuncts of a given iteration before
entering the next one. A solution would be to duplicate the
but due to the encapsulation mechanism, this is not
6 When reaching a fix-point, a solver can return only one solution.
reasonable. This treatment of disjunction leads to a loss of
efficiency, and to a mixed search 7 during solving (which is
not completely convenient from the constraint programming
point of view).
Static architecture Another limitation of BALI is due to
the fact that architectures representing collaborations are
fixed. Due to some implementation constraints and the limitations
of coordination features of ECLiPSe, the collaborations
are first completely launched before being used to
solve constraints. Thus, we have a loss of dynamics: 1)
parts of the architecture are created even when they are
not required, 2) agents cannot be duplicated (although this
would be interesting for some primitives such as fix-point),
and stated before, the disjunctions are not always handled
efficiently.
Other compromised features Although the formal model
of BALI allows the use of "light" solvers, the implementation
is not well suited to support such agents: their coarse grain
encapsulation uses more memory and CPU than the solver.
Thus, mixing heavy solvers (such as GB [10], Maple [12])
and light solvers (such as rewrite rules or transformation
rules) is not recommended.
No checks are made to ensure that an architecture and its
communication channels have been created properly. Management
of resources and load balancing are static: before
launching a collaboration, the user must decide on which
machine the solver will run.
3 MANIFOLD: A NEW COORDINATION FOR BALI
We now explain how we can use the coordination language
MANIFOLD [3] to significantly improve the implementation
of BALI, and remain closer to its formal model.
3.1 The Coordination Language MANIFOLD
MANIFOLD is a language for managing complex, dynamically
changing interconnections among sets of independent,
concurrent, cooperative processes [1]. MANIFOLD is based
on the IWIM model of communication [2]. The basic concepts
in the IWIM model (thus also in MANIFOLD) are
processes, events, ports, and channels. Its advantages over
the Targeted-Send/Receive model (on which object-oriented
programming models and tools such as PVM [13], PAR-
MACS [15], and MPI [7] are based) are discussed in [1, 27].
A MANIFOLD application consists of a (potentially very
large) number of processes running on a network of heterogeneous
hosts, some of which may be parallel systems. Processes
in the same application may be written in different
programming languages.
The MANIFOLD system consists of a compiler, a run-time
system library, a number of utility programs, libraries
of built-in and pre-defined processes, a link file generator
called MLINK and a run-time configurator called CONFIG.
The system has been ported to several different platforms
(e.g., SGI Irix 6.3, SUN 4, Solaris 5.2, IBM SP/1, SP/2,
and Linux). MLINK uses the object files produced by the
(MANIFOLD and other language) compilers to produce link
files and the makefiles needed to compose the executables
files for each required platform. At the run time of an ap-
plication, CONFIG determines the actual host(s), where the
processes (created in the MANIFOLD application) will run.
7 The search strategy is breadth-first for the fix-point and repeat
primitives, but depth-first for the sequential and don't care primitives.
The library routines that comprise the interface between
MANIFOLD and processes written in the other languages
(e.g., C), automatically perform the necessary data format
conversions when data are routed between various different
machines.
MANIFOLD has been successfully used in a number of
applications, including in parallelization of a real-life, heavy
duty Computational Fluid Dynamics algorithm originally
written in Fortran77 [8, 9, 18], and implementation of Loosely-Coupled
Genetic Algorithms on parallel and distributed platforms
[31, 33, 32].
3.2 BALI In MANIFOLD
Although BALI solvers are black-boxes and are heterogeneous,
this does not cause any problems for MANIFOLD, because
it integrates the solvers as external workers. Thus, communication
and coordination can be defined among them in the
same way as with normal MANIFOLDagents. MANIFOLD
can bring many improvements to BALI such as:
• robustness: managing the faults in the system is not
an easy task with ECLiPSe.
• portability: MANIFOLD runs on several architectures,
and requires only a thread facility and a subset of
PVM [13].
• modularity: in the current implementation, constraint
solving is separated from constraint programming. Using
MANIFOLD, we can also split up the coordination
part from the solving part.
• extension of the collaboration language: each primitive
will be an independent coordinator. Thus, adding a
new primitive will be simplified.
• additional new features: MANIFOLD provides tools
to implement certain functionalities that are not available
in the current version of BALI (e.g., choice of the
machines, light weight processes, architectures, load balancing,
etc.).
In the following, we elaborate only on the most significant
of the above points, i.e., the ones that make an intensive
use of the MANIFOLD features or the ones that are the most
significant for constraint solving.
3.2.1 Lighter Agents
Figure 4: Lighter simple agent
The current encapsulation (one ECLiPSe process for each
solver/collaboration) is really heavy. MANIFOLD can produce
lighter capsules using threads to realize filters and
workers. They will replace the computation modules of
ECLiPSe. Thus, a simple agent (see Figure 4) can consist
of:
• a coordinator for managing the messages and agents
inside the encapsulation. This coordinator is also the
in/out gate of the capsule (when communicating with
superior agents);
• a solver, which is the same as in the previous implementation;
• four filters (MANIFOLD workers): the first filters the
constraints the solver can handle, the second converts
the data into the syntax of the solver, the third converts
the solutions of the solver into the global syntax 8 ,
and the last re-composes equivalent solutions based on
the solutions of the solver and the constraints it cannot
handle (a sketch of this data flow follows).
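To make this data flow concrete, the following C sketch chains the four filters around a solver call; all types and function names are illustrative placeholders for the workers listed above, not MANIFOLD or BALI API.

/* Opaque data types; concrete definitions would live in the filters. */
typedef struct Constraints  Constraints;   /* constraints in the global syntax   */
typedef struct SolverInput  SolverInput;   /* constraints in the solver's syntax */
typedef struct SolverOutput SolverOutput;  /* raw solutions from the solver      */

/* Placeholder workers -- the names are illustrative only. */
Constraints  *filter_constraints(Constraints *all, Constraints **rest); /* filter 1 */
SolverInput  *to_solver_syntax(Constraints *handled);                   /* filter 2 */
SolverOutput *run_solver(SolverInput *in);                              /* the solver */
Constraints  *to_global_syntax(SolverOutput *out);                      /* filter 3 */
Constraints  *recompose(Constraints *solved, Constraints *rest);        /* filter 4 */

/* Data flow through a simple agent: the coordinator passes a constraint in,
   the four filters surround the solver, an equivalent solution flows out. */
Constraints *simple_agent(Constraints *incoming)
{
    Constraints  *rest    = 0;                               /* part the solver cannot handle */
    Constraints  *handled = filter_constraints(incoming, &rest);
    SolverOutput *raw     = run_solver(to_solver_syntax(handled));
    return recompose(to_global_syntax(raw), rest);
}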
A complex agent (see Figure 5) is now the encapsulation
of several simple/complex agents together with some filters.
The filters and the coordinator (coordinators are described
in Section 3.2.2) are specialized for the collaboration primitive
the agent represents. For a split collaboration, only
one filter is required: a store manager which collects all the
solutions from the sub-agent and incrementally builds the
elements of their Cartesian-product (as soon as one element
is completed, it is sent to the coordinator). In a / don't care
primitive, one filter is required for applying the / function
to the constraints. For the sequential primitive, as well as
the fix-point, no filters are required.
Figure 5: Lighter complex agent
This new kind of encapsulation has several advantages.
The global architecture representing a solver collaboration
will require fewer processes than before, and also less memory.
This is due to several facts: the use of threads instead
of heavy processes, the notion of filters, and the sharing of
workers, filters, and solvers between several agents (see Figure
6). The creation of another instance of a solver will
depend on the activity of the already running instances.
Agents are not black-boxes anymore: they become glass-
boxes sharing solvers and filters with other agents. But the
main advantage is certainly the following: the coordination
is now separated from the filters, encapsulated into individual
modules, each of which depends on the specific type of
collaboration it implements, and can use all the features of
MANIFOLD. Thus, it is possible to arrive at a coordination
scheme that respects the formal model of BALI.
8 Global syntax is the syntax used in the filters and between agents.
Figure 6: Shared solvers and filters
3.2.2 Coordinators
Using MANIFOLD and the new encapsulation process, it
is now possible to overcome the problems inherent in the
previous implementation of BALI.
Dynamic handling of the solvers Since the coordination
features are now separated from the filters and workers, the
set up of the distributed architecture and its use are no
longer disjoint phases. This means that when a solving request
is sent, the collaboration will be built incrementally
(agent after agent) and only the necessary components will
be created. For example, in a conditional or guarded collab-
oration, only the "then" or the "else" sub-collaboration will
be launched. If another request is sent to the same collab-
oration, the launched components will be re-used, possibly
augmented by some newly created components.
When a solver/collaboration is requested to solve a con-
straint, several cases can arise. If the solver/collaboration S
has not already been launched, then an instance of S will be
created. If it is already launched but all of its instances are
busy (i.e., all instances of S are currently working on con-
straints) another instance will be created. Otherwise, one of
the instances will be re-used for the new computation. The
function find instance manages this functionality (see Appendix
A.1).
Dynamic handling of the disjunctions Contrary to the
current implementation 9 , disjunctions are treated dynamically.
We demonstrate this for the sequential collaboration
seq(S1, S2, ..., Sn). All the disjuncts produced by S1 must be
sent to S2. With ECLiPSe, a disjunct c1 of S1 is completely
solved by S2, ..., Sn (meaning all possible disjuncts created
by S2, ..., Sn are produced), before treating the next
disjunct c2 of S1 . MANIFOLD allows us to use pipelines to
solve c2 as soon as it is produced by S1 . If S2 is still working
on c1 , and all the other instances of S2 are busy, then a new
instance of S2 is created for solving c2 . The treatment of
c2 is no longer postponed. This mechanism applies to all
sub-agents of the sequential agent.
This introduces a new problem: there may be a combinatorial
explosion of the number of instances of S2, ..., Sn.
However, this can rarely happen: while an agent Si is
producing solutions, the agent Si+1 is already solving (and
has already solved) some of the previous constraints. Thus,
some instances have already returned to a sleeping state and
can be re-used. Nevertheless, the following case may arise.
9 Currently, the fix-point coordinator waits for all the solutions of
the sub-collaboration before entering the next iteration.
Suppose the solvers Si are arranged such that as the index
grows, the designated solvers Si become slower, and suppose
every Si creates disjuncts. The number of instances will
become exponential in this case, and the system will therefore
run out of resources. In order to overcome this problem,
the number of instances can be limited (see Appendix A.1).
Thus, when a solving request is to be sent to the agent S,
and the maximal number of (its) instances is reached, and
all its instances are busy, the superior agent will wait for the
first instance to return to the sleeping state. This mechanism
does not imply a completely dynamic treatment of the
disjunctions. However, it gives a good compromise between
the delay for solving a disjunct and the physical limitation
of the resources.
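The following C sketch gives one possible shape of this instance-management policy; the Agent and Instance types and the helper functions are hypothetical, and only the policy itself (reuse a sleeping instance, create up to the maximal number, otherwise report that none is currently available) is taken from the description above and Appendix A.1.

/* Hypothetical sketch of the instance-management policy described above. */
typedef struct Instance Instance;

typedef struct {
    const char *name;       /* solver or collaboration S        */
    Instance  **instances;  /* already launched instances of S  */
    int         count;      /* how many are currently launched  */
    int         max;        /* maximal number of instances of S */
} Agent;

int       is_sleeping(Instance *i);    /* instance not working on a constraint */
Instance *launch_instance(Agent *s);   /* create and register a new instance   */

Instance *find_instance(Agent *s)
{
    /* 1. reuse a sleeping instance if there is one */
    for (int i = 0; i < s->count; i++)
        if (is_sleeping(s->instances[i]))
            return s->instances[i];

    /* 2. all instances are busy: create a new one if the limit allows it */
    if (s->count < s->max)
        return launch_instance(s);

    /* 3. limit reached and everything busy: no instance available now;
          the caller either waits or re-submits the request later         */
    return 0;
}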
Coordinators for the primitives We now describe the
coordinators for the sequential primitive. Some other primitives
are detailed in Appendix A. The algorithms are presented
here in a Pascal-like language extended with an event
functionality. We consider a queue of messages m from p
meaning that the message m was received on the port p. task
m from p alg means that we remove the message m from the
port p and execute the algorithm alg (the message m from
p is the condition for executing the task alg). The latter
cannot be interrupted. end is a message that is sent by an
agent when it has enumerated all its disjuncts. The agents
have a number of flags representing the states described in
Section 2.2.2.
Figure 7: Duplication of a sequential primitive
coordinator for seq(S1,...,Sn)
S1...Sn: sub-agents; S0: sup-agent
ports: p.0.in ... p.n.in
% for 0<=i<n+1, p.i.in is linked to p.Si.out of Si
p.0.1.out ... p.0.n+1.out
% for 0<i<n+1, p.0.i.out is linked to p.Si.in of Si
% p.0.n+1.out is linked to p.S0.in of S0
task c from p.i.in:
   % j is the instance of Si+1 returned by find instance (Appendix A.1)
   % Mi counts the solutions received from Si
   if j <> NULL
   then send c to p.j.i+1.out; Mi = Mi+1
   else send c to p.i.in
   % c is sent again and again to p.i.in
   % till an instance of Si+1 becomes free
task end from p.i.in:
   % Ei counts the end messages received from Si
   Ei = Ei+1
   if Ei = Mi-1 and Si-1 is Sleeping
   then Si is Sleeping
task end from p.n.in:
   En = En+1
   if En = Mn-1 and Sn-1 is Sleeping
   then Sn is Sleeping; send end to p.0.n+1.out
task c from p.i.in is used to forward a solution from
Si to Si+1. If no instance of Si+1 is free, and it is not
possible to create a new instance, then the same message is
sent again, and will be treated later.
To detect the end of a sequential primitive, we count the
solutions and end messages of each of the sub-agents. An
agent Si becomes sleeping when Si-1 is already sleeping,
and when Si produces as many end messages as the number
of solutions of Si-1.
The superior agent S0 is never duplicated inside a collaboration,
since the coordinator can create only a sub-architecture;
the collaboration does not duplicate itself. That is the
job of the superior agent: it either finds a free instance of
the collaboration, or creates one if the maximal number of
instances is not yet reached (see Figures 7 and 8 for an example
of duplication).
Figure 8: Duplication of a /-don't care primitive
We have seen that coordination languages, and MANIFOLD
in particular, are helpful for implementing cooperative constraint
solving. However, the advantages are not only at
the implementation level. MANIFOLD allows an implementation
closer to the formal model of BALI, and this implies
some significant benefits for constraint solving: faster execution
time, better debugging, and clarity of the search [28]
during constraint solving (see Table 9). The architecture
also gains through some improvements: robustness, reliability,
quality, and a better management of the resources (see
Table 10). This last point also has consequences for the end
user: as the architectures representing a solver collaboration
become lighter, the end user can build more and more complex
collaborations, and thus, solve problems that could not
be tackled before.
Constraint solving Treatment of disjunctions is a key
point in constraint solving. The most commonly required
search is depth-first: each time several candidates appear,
take one, and continue with it until reaching a solution, then
backtrack to try the other candidates. One of the reasons
for this choice is that, generally, only one solution is re-
quired. Contrary to the first implementation of BALI, the
coordination we described with MANIFOLD leads to what
we call a "parallel depth-first and quick-first" search. The
parallel depth-first search is obvious. The quick-first search
arises from the fact that each constraint flows through the
agents independently from the others. Hence (ignoring the
boundary condition of reaching the instance limits of solvers,
mentioned above), it is never delayed by another constraint,
nor does it stop at the input of a solver or in a queue. The result
is that the solution which is the fastest to compute (even if
it is not originated from the first disjunct of a solver) has
a better chance to become the first solution given by the
solver collaboration 10 .
Debugging, collaboration improvement, and graphical presentation
of output will be eased. The coordinators can
duplicate the messages and send them to a special worker.
The latter can then be linked to a display window (text or
graphic) or a profiler. It will enable users to observe the flow
of data in a collaboration. Thus, users can extract statistics
on the utilization of the solvers and draw conclusions
on the efficiency of a newly designed collaboration. All this
can lead to a methodology for designing solver collaborations.
Due to its encapsulation techniques, the current implementation
jeopardizes the use of "fine grain" solvers (solvers
that require little memory and CPU). Although we can envisage
encapsulating a single function with an ECLiPSe process,
this is not reasonable. Though not really designed for
fine-grain agents, MANIFOLD still gives more freedom to use
single functions (such as rewrite rules or constraint transformations)
as solvers. With MANIFOLD, single functions
for simplifying the constraints can easily be inserted in a collaboration
as threads without compromising the efficiency of
the whole architecture; this significantly enlarges the set of
solvers that can be integrated in BALI.
BALI in:                          ECLiPSe   MANIFOLD
search during solving             mixed     depth-first
first solution                    -         ++++
execution time
treatment of the disjunctions     -         +++
use of "fine grain" solvers       -         ++
add of solvers (encapsulation)
extension of the collaboration
  language                        -         +++
"debugging tools"                 -         +++
improvement of solver collab.     -         +++
input graphical interface         -         ++
output graphical interface
Figure 9: Improvements for constraint solving
Coordination is now separated from collaboration: a collaboration
primitive implies a coordinator separated from
the converters, recomposition functions, and admissibility
functions. Thus, with the same filters we can easily implement
new primitives: only the coordinator has to be modified,
and in some cases a filter must be added.
10 When a branch leads faster to a solution, we find it quickly, because
we do not have to explore all branches before this one.
Architecture The major limitation of BALI is the large
amount of resources it requires. Of course, this is an intrinsic
problem with cooperative solvers: they are generally
costly in memory, CPU, etc. But another limiting factor is
the overhead of the current encapsulation mechanism. With
the new encapsulation technique, MANIFOLD will decrease
the required resources. Furthermore, with dynamic handling
of disjunctions, we expect the new architecture to be
less voracious.
BALI in:                          ECLiPSe   MANIFOLD
construction of the architecture  static    dynamic
robustness                        -         +++
extension of BALI
stability                         -         ++
graphical interface (in/output)   -         ++
quality of communication          -         ++++
coordination functionalities
number of processes (but solvers) -
number of communications          ++        -
Figure 10: Improvements for architectures
The system will gain in robustness, since currently no
failure detection of the architecture is possible. The collaborations
will be more stable and less susceptible to broken
communication and memory allocation problems. The dynamic
building of the architecture will decrease the number
of unnecessary processes: only the agents required in a computation
are launched.
The only negative point is the increased communication.
With the current implementation, the encapsulation is
composed of two communicating processes: ECLiPSe and a
solver. All the filters are modules of the ECLiPSe process.
With MANIFOLD, the filters are independent agents that
also exchange information. However, this should not create
a bottleneck, since messages are generally short, communicating
agents are usually threads in the same process that
use shared memory to communicate, and no single
agent conducts or monitors all communication.
5 CONCLUSION
We have introduced an alternative approach for designing
cooperative constraint solving systems. Coordination lan-
guages, and MANIFOLD in particular, exhibit properties
that are appropriate for implementing BALI. However, implementation
improvement is not the only advantage. Using
MANIFOLD we can produce a system closer to the formal
model of BALI, and some significant benefits are also obvious
for constraint solving. The major improvements are the
treatment of the disjunctions, the homogenization of search,
and the reduction of required resources. A fair management
of the disjuncts returned by a solver often leads to quicker
solutions. Moreover, due to replication, the complete set
of solutions is always computed more efficiently. Although
the mixed search used in the current implementation of BALI
does not really influence resolution when looking for all the
solutions of a problem, it becomes a real nuisance when looking
for only one solution. Furthermore, observing the resolution
and following the flow of constraints is not conceivable.
MANIFOLD overcomes this problem by providing a
"parallel depth-first and quick-first" search: each disjunct is
handled independently, and thus no constraint resolution is
delayed or queued.
Comparing BALI to other systems (such as cc [30] and
Oz [19]) is not easy since they do not have the same objectives
[21]. cc is a formal framework for concurrent constraint
programming, and Oz is a concurrent constraint programming
system. However, one of the major distinctions is that
BALI, contrary to Oz and cc, enables the collaboration of
heterogeneous solvers. Another essential difference concerns
the separation of tasks. With Oz and cc, constraint pro-
gramming, constraint solving, and coordination of agents
are mixed. With BALI, constraint programming is separated
from cooperative constraint solving, and using MANIFOLD,
cooperative constraint solving is split up into coordination
of agents and constraint solving: each aspect of cooperative
constraint programming is an independent task.
Since the implementation model of BALI with MANIFOLD
is clearly defined, we can now start with the implementation
phase. Moreover, we know the task is feasible,
and have already qualified (as well as roughly quantified)
the improvements. Hence, we know that it is a worthwhile
effort.
In the future, we plan to integrate a visual interface to
assist programmers in writing more complex solver collab-
orations. This can be achieved using Visifold [6] and some
predefined "graphical" coordinators.
In order to perform optimization, we are thinking of
adding another search technique to BALI: a best solution
search (branch and bound). This kind of search is generally
managed by the constraint language. However, MANIFOLD
coordinators that represent collaboration primitives can perform
the following tasks: they can eliminate the disjuncts
that are above the current "best" solution, and also manage
the updating of the current best solution. Branching can,
thus, be improved and performed sooner.
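A minimal C sketch of the envisaged pruning behaviour is given below; the Disjunct type, the cost and forwarding helpers, and the shared best value are assumptions used only for illustration, not part of BALI or MANIFOLD.

/* Envisaged branch-and-bound behaviour of a coordinator: disjuncts that
   cannot improve on the current best solution are dropped, and the best
   value is updated when a solution completes. In a real coordinator the
   shared best value would need to be protected against concurrent updates. */
#include <float.h>

typedef struct Disjunct Disjunct;

double cost(const Disjunct *d);        /* lower bound of the disjunct (minimization) */
int    is_solution(const Disjunct *d); /* completely solved constraint?              */
void   forward(Disjunct *d);           /* send the disjunct to the next sub-agent    */

static double best = DBL_MAX;          /* current best objective value */

void bb_coordinator(Disjunct *d)
{
    if (cost(d) >= best)
        return;                        /* prune: cannot beat the current best */
    if (is_solution(d)) {
        best = cost(d);                /* update the current best value */
        return;
    }
    forward(d);                        /* keep branching on this disjunct */
}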
The constraint solver extension mechanism of SoleX [24]
consists of rule-based transformations seen as elementary
solvers. Until now, the implementation of SoleX with BALI
was not really conceivable: rule-based transformations are
solvers of too fine a grain to be encapsulated. With the new
model, the implementation of SoleX becomes reasonable.
Finally, we are convinced that cooperative constraint
solving is intrinsically linked to coordination, and that coordination
languages open new horizons for systems like BALI.
--R
Coordination of massively concurrent activities.
The IWIM model for coordination of concurrent activities.
Manifold20 reference manual.
What do you mean
Coordination languages for parallel programming.
Visifold: A visual environment for a coordination language.
An introduction to the MPI standard.
Restructuring sequential Fortran code into a parallel/distributed application
Using coordination to parallelize sparse-grid methods for 3D CFD problems
Maple V
user's guide and reference manual.
A symbolic-numerical branch and prune algorithm for solving non-linear polynomial systems
PARMACS v6.
Constraint Logic Programming: a Survey
Combining symbolic constraint solvers on algebraic domains.
Multiple semi-coarsened multigrid for 3D CFD
ECLiPSe User Manual.
Collaboration de solveurs pour la programmation logique à contraintes. PhD Thesis, Université Henri Poincaré-Nancy I
An environment for designing/executing constraint solver collaborations.
The Constraint Solver Collaboration Language of BALI.
SOLEX: a Domain-Independent Scheme for Constraint Solver Extension
Implementing Non-Linear Constraints with Cooperative Solvers
Simplifications by cooperating decision procedures.
search strategies for computer problem solving
Cooperation of decision procedures for the satisfiability problem.
Concurrent Constraint Programming.
Distributed evolutionary optimization in Manifold: the Rosenbrock's function case study.
Parallel and distributed evolutionary computation with Manifold.
Parallel evolutionary computation: Multi agents genetic algorithm.
Strategic Directions in Constraint Programming.
--TR
--CTR
Monfroy , Carlos Castro, Basic components for constraint solver cooperations, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida | constraint solver cooperation;dynamic coordination;solver collaboration language;coordination model |
297118 | Coordinating autonomous entities with STL. | This paper describes ECM, a new coordination model and STL its corresponding language. STL's power and expressiveness are shown through a distributed implementation of a generic autonomy-based multi-agent system, which is applied to a collective robotics simulation, thus demonstrating the appropriateness of STL for developing a generic coordination platform for autonomous agents. | Introduction
Coordination constitutes a major scientific domain
of Computer Science. Work within Coordination
encompasses conceptual and methodological issues
as well as implementations that efficiently
help in expressing and implementing distributed applications.
Autonomous Agents, a discipline of Artificial
Intelligence which has enjoyed a boom over the last couple
of years, embodies inherently distributed applications.
Work within Autonomous Agents is intended
to capitalize on the co-existence of distributed
entities, and autonomy-based Multi-Agent Systems
(MAS) are oriented towards interactions, collaborative
phenomena and autonomy.
Today's state-of-the-art parallel programming models,
such as (distributed) shared memory models,
data and task parallelism, and parallel object-oriented
models (for an overview see [30]), are used
for implementing general purpose distributed applications.
However, they suffer from limitations concerning
a clear separation of the computational part
of a parallel application and the "glue" that coordinates
the overall distributed program. Especially,
these limitations make distributed implementations
burdensome.
* Part of this work is financially supported by the Swiss National
Foundation for Scientific Research, grants 21-47262.96
and 20-05026.97.
To study problems related to coordi-
nation, Malone and Crowston [25] introduced a new
theory called Coordination Theory aimed at defining
such a "glue". The research in this area has focused
on the definition of several coordination models and
corresponding coordination languages, in order to facilitate
the management of distributed applications.
Coordination is likely to play a central role in
MAS, because such systems are inherently dis-
tributed. The importance of coordination can be illustrated
through two perspectives. On the one hand,
a MAS is built by objective dependencies which refers
to the configuration of the system and which should
be appropriately described in an implementation. On
the other hand, agents have subjective dependencies
between them which requires adapted means to program
them, often involving high-level notions such as
beliefs, goals or plans.
This paper presents STL, a new coordination language
based on the coordination model ECM, which
is a model for multi-grain distributed applications.
STL is used to provide a coordination framework
for distributed MAS made up of autonomous
agents. It enables the description of the organizational
structure or architecture of a MAS. It is conceived as
a basis for the generic multi-agent platform CODA 1 .
2 Coordination Theory, Models and Languages
Coordination can be defined as the process of managing
dependencies between activities [25], or, in the
field of Programming Languages, as the process of
building programs by gluing together active pieces [10].
To formalize and better describe these interdependencies
it is necessary to separate the two essential
parts of a parallel application namely, computation
and coordination. This sharp distinction is also the
1 Coordination for Distributed Autonomous Agents.
key idea of the famous paper of Gelernter and Carriero
[10] where the authors propose a strict separation
of these two concepts. The main idea is to
identify the computation and coordination parts of
a distributed application. Because these two parts
usually interfere with each other, the semantics of
distributed applications is difficult to understand.
To fulfill typical coordination tasks a general coordination
model in computer science has to be composed
of four components (see also [20]):
1. Coordination entities as the processes or agents
running in parallel which are subject of coordination
2. A coordination medium: the actual space where
coordination takes place;
3. Coordination laws to specify interdependencies
between the active entities; and
4. A set of coordination tools.
In [10] the authors state that a coordination language
is orthogonal to a computation language and
forms the linguistic embodiment of a coordination
model. Linguistic embodiment means that the language
must provide language constructs either in
form of library calls or in form of language extensions
as a means to materialize the coordination model.
Orthogonal to a computation language means that a
coordination language extends a given computation
language with additional functionalities which facilitate
the implementation of distributed applications.
The most prominent representative of this class
of new languages is Linda [9] which is based on
a tuple space abstraction as the underlying coordination
model. An application of this model has
been realized in Piranha [8] (to mention one of
the various applications based on Linda's coordination
model) where Linda's tuple space is used
for networked based load balancing functionality.
The PageSpace [14] effort extends Linda's tuple
space onto the World-Wide-Web and Bonita [28] addresses
performance issues for the implementation of
Linda's in and out primitives. Other models and
languages are based on control-oriented approaches
(IWIM/Manifold [2] [3], ConCoord [18], Darwin
[24], ToolBus [5]), message passing paradigms
(CoLa [17], Actors [1]), object-oriented techniques
(Objective Linda [19], JavaSpace [31]), multi-set
rewriting schemes (Bauhaus Linda [11], Gamma
[4]) or Linear Logic (Linear Objects [6]). A good
overview on coordination issues, models and languages
can be found in [27].
Our work takes inspiration from control-oriented
models and tuple-based abstractions, and focuses on
coordination for purpose of MAS distributed implementations
3 Coordination using Encapsulation: ECM
ECM 2 is a model for coordination of multi-grain distributed
applications. It uses an encapsulation mechanism
as its primary abstraction (blops), offering
structured separate name spaces which can be hierarchically
organized. Within these blops active entities
communicate anonymously through connections,
established by the matching of the entities' communication
interfaces.
ECM consists of five building blocks:
1. Processes, as a representation of active entities;
2. Blops, as an abstraction and modularization
mechanism for group of processes and ports;
3. Ports, as the interface of processes/blops to the
external world;
4. Events, a mechanism to react to dynamic state
changes inside a blop;
5. Connections, as a representation of connected
ports.
Figure 1 gives a first overview of the programming
metaphor used in ECM.
According to the general characteristics of what
makes up a coordination model and corresponding
coordination language, these elements are classified
in the following way:
1. The Coordination Entities of ECM are the processes
of the distributed application;
2. There are two types of Coordination Media in
ECM: events, ports, and connections which enable
coordination, and blops, the repository in
which coordination takes place;
3. The Coordination Laws are defined through the
semantics of the Coordination Tools (the operations
defined in the computation language which
work on the port abstraction) and the semantics
of the interactions with the coordination media
by means of events.
An application written using the ECM methodology
consists of a hierarchy of blops in which several
processes run. Processes communicate and coordinate
themselves via events and connections. Ports
serve as the communication endpoints for connections
which result in pairs of matched ports.
2 Encapsulation Coordination Model.
Figure 1: The Coordination Model of ECM.
3.1 Blops
A blop is a mechanism to encapsulate a set of objects.
Objects residing in a blop are by default only
visible within their "home" blop. Blops are an abstraction
for an agglomeration of objects to be coordinated
and serve as a separate name space for port
objects, processes, and subordinated blops as well as
an encapsulation mechanism for events. In Figure 1,
two blops are shown. Blops have the same interface
as processes, i.e. a name and a possibly empty set of
ports, and can be hierarchically structured.
3.2 Processes
A process in ECM is a typed object; it has a name
and a possibly empty set of ports. Processes in the
ECM model do not know any kind of process identification;
instead, a black-box process model is used.
A process does not have to care about which process
information will be transmitted to or received from.
Process creation and termination is not part of the
ECM model and is to be specified in the instance of
the model.
3.3 Ports
Ports are the interface of processes and blops to
establish connections to other processes/blops, i.e.
communication in ECM is handled via a connection
and therefore over ports. Ports have names and a set
of well-defined features describing the port's characteristics.
Names and features of a port are referred
to as the port's signature. The combination of port
features results in a port type.
Features. Ports are characterized through a
set of features, of which the communication feature
is mandatory and must be supported by all ECM realizations.
The communication feature materializes
the communication paradigm: it includes point-to-point
stream communication (with classical message-passing
semantics), closed group (with broadcast semantics)
and blackboard communication. Additional
port features specify e.g. the amount of other ports
a port may connect to, see STL as an example.
Matching. The matching of ports is defined
as a relation between port signatures. Four
general conditions must be fulfilled for two ports to
get matched: (1) both have compatible values of features;
(2) both have the same name; (3) both belong
to the same level of abstraction, i.e., are visible within
the same hierarchy of blops; and (4) both belong to
different objects (process or blop).
Conceptually the matching of process ports can be
described as follows. When a process is created in a
blop, it creates with its port signature a "potential"
in the blop where it is currently embedded. If conditions
are fulfilled for two potentials in a blop,
the connection between their corresponding ports is
established and the potentials disappear.
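The four matching conditions can be summarized as a predicate over two port signatures; the C sketch below is illustrative only, with the Signature fields and the helper function chosen for the example rather than taken from an ECM implementation.

/* Illustrative predicate over two port signatures; field and helper
   names are chosen for the example, not taken from an ECM API.     */
#include <string.h>

typedef struct {
    const char *name;      /* port name                                   */
    int         features;  /* encoded feature values (communication, ...) */
    int         blop;      /* blop (name space) the port is visible in    */
    int         owner;     /* id of the owning process or blop            */
} Signature;

int features_compatible(int a, int b);     /* condition (1), feature-specific */

int ports_match(const Signature *p, const Signature *q)
{
    return features_compatible(p->features, q->features)  /* (1) compatible features */
        && strcmp(p->name, q->name) == 0                   /* (2) same name           */
        && p->blop  == q->blop                             /* (3) same level/blop     */
        && p->owner != q->owner;                           /* (4) different objects   */
}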
3.4 Connections
The matching of ports results in the following connections:
• Point-to-point Stream. 1:1, 1:n, n:1 and n:m
communication patterns are possible;
• Group. Messages are broadcast to all members
of the group. A closed group semantics is used,
i.e. processes must be members of the group in
order to distribute information in it;
• Blackboard. Messages are placed on a blackboard
used by several processes; they are persistent and
can be retrieved more than once, in a sequence
defined by the processes.
3.5 Events
Events can be attached to conditions on ports of blops
or processes. These conditions will determine when
the event will be triggered in the blop. Condition
checking is implementation dependent (see STL's
event definition as an example of how to define event
semantics on ports).

blop world {
  // Process definition
  process p1 {
    P2Pin port1 <"INPUT">;
    BB port2 <"BB">;       // with its ports
  }
  ...                      // More processes
  ...                      // A new blop
  ...                      // More blops
  create process p1;       // Create processes
  create blop b1();        // Create blops
}                          // End of blop world
Figure 2: Layout of a typical program written in STL.
4 The Coordination Language STL
We designed and implemented a first language binding
of the ECM model, called STL 3 . STL is a realization
of the ECM model applied to multi-threaded
applications on a LAN of UNIX workstations. STL
materializes the separation of concern as it uses a separate
language exclusively reserved for coordination
purposes and provides primitives which are used in
a computation language to interact with the entities.
The implementation of STL [21] is based on Pt-pvm
[22], a library providing message passing and process
management facilities at thread and process level for
a cluster of workstations. In particular, blops are represented
by heavy-weight UNIX processes, and ECM
processes are implemented as light-weight processes
(threads).
The ECM model is realized in an STL program
whose general structure is outlined in Figure 2. Starting
from the default blop world, a hierarchy of processes
and blops can be defined, showing the hierarchical
structure of the language at definition level.
4.1 STL's Specialities
In this Section we look at particularities for the instantiation
of the ECM model in STL.
3 Simple Thread Language.
Blops
The name of a blop is used to create instances of a
blop object. Blop objects can be placed onto a specific
physical machine, or can be distributed onto a
cluster of workstations. The creation of a blop is a
complex recursive procedure: it includes the initialization
of all static processes and ports defined for
this blop and subordinated blops.
Figure 3 shows the definition of two blops (called
world and sieve) in STL syntax; the line 'create
blop sieve s()' initializes the blop somewhere on
the parallel machine. The statement could be annotated
with a machine name to specify the actual
workstation on which the blop should be initialized.
The port definitions will be explained later.
blop world {
  blop sieve {               // Declaration blop sieve
    // Two ports: types and names
    Group a <"CONNECTOR">;
    ...
  }
  create blop sieve s();     // Create blop
}
Figure 3: Blop declaration and invocation in STL.
ECM processes in STL can be activated from within
the coordination language and in the computation
language. In the coordination language this is done
through the instantiation of a process object inside
a blop. To dynamically create new processes the
process object instantiation can be done in the body
of an event or in the computation language directly.
To some extent this is a trade-off regarding our goal
to totally separate coordination and computation at
code level. However, in order to preserve a high level
of flexibility at application level, we allow these two
possibilities.
Process termination is implicit: once the function
which implements the process inside the computation
language has terminated, the process disappears from
the blop.
Figure 4 shows an example of an STL process type
worker with two static ports in and res and a thread
entry point worker; the syntax and semantics of the
port definitions will be explained in the next section.
Ports and Connections
STL knows static ports as an interface of a process
or blop defined in the coordination language, and dynamic
ports which are created at runtime in the computation
language.
Attribute        P2P     BB          Group   MyPort   Explanation
Communication    stream  blackboard  group   stream   Communication structure
Saturation       1       *                   5        Amount of ports that may connect
Capacity         *                           12       Capacity of the port (in data items)
Synchronization  async   async       async   async    Semantics of the message passing model
Orientation      inout   inout       inout   in       Direction of data flow
Table 1: STL's built-in ports and a user-defined port with corresponding port attribute values.
// Process type worker
process worker {
  P2Pin in <"WORK">;       // input port
  P2Pout res <"RESULT">;   // output port
}
create process worker w;   // An instance
Figure 4: Process declaration and invocation in STL.
However, the type of a dynamic port, i.e., its features,
must be determined in the coordination language.
STL ports use a set of attributes as an implementation
for ECM's port features. These attributes must
be compatible in order to establish a connection between
two ports. Table 1 gives an overview of the
attributes of a port; combinations of attributes lead
to port types.
STL provides the following built-in port types:
point-to-point output ports, (P2Pout), point-to-point
input ports (P2Pin), point-to-point bi-directional
ports (P2P), groups (Group) and blackboards (BB).
Variants of these types are possible and can be defined
by the user.
P2P:
The classical stream port. Two matched ports
of this type result in a stream connection with
the following semantics: Every send operation
on such a port is non blocking, the port has
an infinite storage capacity (in STL, infinity is
symbolized by *), and matches to exactly one
other port. The orientation attribute defines
whether the port is an output port (P2Pout),
an input port (P2Pin), or both (P2P).
Group:
A set of Group ports form the group mechanism
of STL. Ports of this type are gathered
in a group and all message send operations are
based on broadcast, that is, the message items
will always be transferred to all members of the
group. A closed group semantics is used.
BB:
The BB stands for Blackboard and the resulting
connection has a blackboard semantics as
defined for ECM. In contrast to the previous
port types, messages on the blackboard are now
persistent objects and processes retrieve messages
using a symbolic name and a tag.
Combinations of these basic port types are possible:
for example, to define a (1:n) point-to-point connection,
the saturation attribute of a P2P port can be
augmented to n (see Table 1, port MyPort).
Synchronous communication can be achieved by
changing the type of message synchronization to synchronous,
thus yielding point-to-point synchronous
communication. For 1:n this means that the data
producing process blocks until all the n processes have
connected to the port, and every send operation returns
only after all n processes have received the data
item.
In STL the synchronization attribute overrides
the capacity attribute, because synchronous communication
implies a capacity of zero. However, asynchronous
communication can be made a little bit less
asynchronous by setting the capacity attribute to a
value n to make sure that a process blocks after having
sent n messages. Note that the capacity attribute
is a local relation between the process and its port:
for asynchronous communication with a certain port
capacity, it is only guaranteed that the message has
been placed into the connection, which does not necessarily
mean that another process connected via a
port to this connection has actually received (or will
receive) the message.
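The local meaning of the capacity attribute can be pictured as a counting scheme on the sending side; the following sketch (POSIX threads, with made-up names such as port_send) only illustrates the blocking rule and is not the actual STL primitive set.

/* Illustration of the local capacity rule on the sending side.
   A negative capacity models '*' (unbounded); a finite n blocks the
   sender after n pending messages. Names and types are made up.    */
#include <pthread.h>

typedef struct {
    int capacity;          /* n: block after n pending messages; -1 models '*' */
    int pending;           /* messages placed into the connection, not yet consumed */
    pthread_mutex_t lock;
    pthread_cond_t  drained;
} Port;

void port_send(Port *p, const void *msg)
{
    pthread_mutex_lock(&p->lock);
    while (p->capacity >= 0 && p->pending >= p->capacity)
        pthread_cond_wait(&p->drained, &p->lock);   /* block: local capacity reached */
    p->pending++;   /* the message is in the connection; nothing is guaranteed
                       about a receiver having taken it (local relation only)  */
    pthread_mutex_unlock(&p->lock);
    (void)msg;      /* actual transfer into the connection omitted */
}

void port_drained(Port *p)      /* called when the connection consumes a message */
{
    pthread_mutex_lock(&p->lock);
    p->pending--;
    pthread_cond_signal(&p->drained);
    pthread_mutex_unlock(&p->lock);
}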
Connections result in matched ports and are defined
in accordance to the ECM model.
STL's Events
Events are triggered using a condition operation on a
port. The event is handled by an event handler inside
the blop.
Conditions related to ports of processes or blops
determine when the event will be executed in the blop
(for an overview on port conditions, see Table 2).
Whether an event must be triggered or not will be
checked by the system every time data flows through
it or a process accesses it. Otherwise a condition
like isempty would uninterruptedly trigger events for
ports of processes, because at start-up of the process
ports are empty.
Condition                         Description
unbound(port p)                   For saturation ≠ *, the predicate unbound() returns true if the port has
                                  not yet matched to all its potential communication partners. For ports
                                  with saturation=*, the unbound predicate always returns true.
accessed(port p)                  Equals true whenever the port has been accessed in general.
isempty(port p)                   Checks whether the port has messages stored or not.
isfull(port p)                    Returns true if the port's capacity has been reached.
msg handled(port p, int n),
less msg handled(port p, int n)   Equals true if n messages, or less than n messages, have been handled,
                                  respectively.
Table 2: Conditions on ports.
event new-worker() {
  create process worker new;
  when unbound(new.out) then new-worker();
}

// Process type worker
process worker {
  P2Pin in <"WORK">;       // in port
  P2Pout out <"WORK">;     // out port
}
create process worker w;
// Attach event to port
when unbound(w.out) then new-worker();
Figure 5: An example of event handling in STL.
After an event has been triggered, a blop is not
tuned anymore to handle subsequent events of the
same type. In order to handle these events again,
the event handling routine must be re-installed which
is usually done in the event handling routine of the
event currently processed.
The unbound condition on ports is very useful because
it enables the elegant construction of parallel software
pipelines. If we reconsider Figure 4
and extend it to Figure 5, we see the interaction of
event conditions and ports in STL. First an event
new worker is declared. The event is attached to
an unbound condition on the out port of the initial
process w, denoted w.out. If process w either
reads or writes data from/to its port out, the event
new worker is triggered because at that time there
are no other ports to which w.out is currently bound,
so unbound(w.out) returns TRUE. The event body
of the event declaration of new worker creates a new
process of type worker, resulting in a new port signature
or "potential" in the blop. The blop matches
now the two ports (w.out and new.in) and the information
can be transferred from w to new. The
same mechanism recursively works for the out port
of the created process new, because the same condition
(unbound) is attached to its port new.out.
This example illustrates the necessity of explicitly
re-installing event handlers so as to ensure a
coordinated execution of the event handler body; in
this case, a process creation.
Primitives
STL is a separate language used in addition to a
given computation language (in this case C), however
the coordination entities must be accessed from
within the computation language. Therefore, we implemented
a C library to interact with the coordination
facilities of STL. The set of primitives includes
operations for creating dynamic ports, methods to
transfer data from and to ports, and operations for
process management; see [21] for a detailed specification
4.2 STL Compiler and Runtime Syste
Figure
6 shows the basic building blocks of the STL
programming environment in context with Pt-pvm.
A distributed application consists of two files: the co-ordination
part (app.stl) and the computation part
(app-func.c). The STL part will be parsed by the
compiler to produce pure Pt-pvm code. The
final program will then be linked with the runtime libraries
of STL and Pt-pvm, the user supplied code,
and the generated code of the STL compiler.
Figure 6: Programming environment of STL.
5 Coordination of Autonomous Agents in STL
One of our targets with STL is the distributed implementation
of a class of multi-agent systems.
The multi-agent methodology is a recent area of
distributed artificial intelligence. A MAS is an organization
given by a set of artificial entities acting
in an environment. Focusing on collective behaviors,
this methodology is aimed at studying and taking
advantage of various forms of agent influences and
interactions. It is widely used either in pure simulation
of interacting entities (for instance in artificial
life [23]) or in problem solving [26].
Definitions of MASs are general enough to address
multiple domains which can be specified by
the nature of the agents and the environment. We
will proceed by composing a typical software MAS,
starting from problems that logically call for a "distributed"
approach and that could capitalize on
self-organizing collective phenomena. One key concept
in such approaches is emergence, that is, the
appearance of functional features at the level of the
system as a whole. Our work is aimed at leading to
robust solutions for applications in the frameworks of
robotics and parallelism. For the design of our systems,
we follow the "new AI" trend [7], [13]: our
agents are embodied and situated in an environment.
The primeval feature we attempt to embody
into them is autonomy: the latter is believed to be a
necessary condition for flexibility, scalability, adaptability
and emergence.
In what follows, we will formally describe the class
of MASs we attempt to implement on distributed architectures.
Then we will motivate our project and
discuss the difficulties encountered when distributing
such MASs. We will eventually present the implementation
with STL of a particular application belonging
to our class of MASs.
5.1 A Generic Model for an Autonomous
Agents' System
Our generic model is composed of an Environment
and a set of Agents. The Environment encompasses
a list of Cells, each one encapsulating a list of on-cell
available Objects at a given time (objects to be manipulated
by agents) and a list of connections with
other cells, namely a Neighborhood which implicitly
sets the topology. This way of encoding the environment
allows the user to cope with any type of topol-
ogy, be it regular or not, since the set of neighbors
can be specified separately for every cell.
The general architecture of an agent is displayed on
Figure
7. An agent possesses some sensors to perceive
the world within which it moves, and some effectors
to act in this world (embodiment). The implementation
of the different modules presented on Figure 7,
namely Perception, State, Actions and Control algorithm
depends on the application and is under the
user's responsibility. In order to reflect embodiment
and situatedness, perception must be local: the agent
perceives only the features of one cell, or a small sub-set
of cells, at a given time. The control algorithm
module is particularly important because it defines
the type of autonomy of the agent: it is precisely inside
this module that the designer decides whether
to implement an operational or a behavioral autonomy
[33]. Operational autonomy is defined as the capacity
to operate without human intervention, without
being remotely controlled. Behavioral autonomy
supposes that the basis of self-steering originates in
the agent's own capacity to form and adapt its principles
of behavior: an agent, to be behaviorally au-
tonomous, needs the freedom to have formed (learned
or decided) its principles of behavior on its own (from
its experience), at least in part. For instance, a very
basic autonomy would consist of randomly choosing
the type of action to take, while a more sophisticated one
would consist of implementing some learning capabilities,
e.g. by using an adaptive neural network.
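A possible C rendering of this agent architecture is sketched below; the types and function pointers are placeholders that merely mirror the Perception, State, Actions and Control algorithm modules of Figure 7.

/* Placeholder rendering of the agent architecture of Figure 7.
   All type and field names are illustrative.                   */
typedef struct Cell  Cell;       /* content and connections of the occupied cell */
typedef struct State State;      /* internal state of the agent                  */

typedef struct Agent Agent;
struct Agent {
    /* Perception: local view, restricted to the occupied cell (or a small subset) */
    Cell *(*perceive)(Agent *self);
    /* Actions: effectors acting on the perceived cell */
    void  (*manipulate_objects)(Agent *self, Cell *here);
    void  (*move)(Agent *self, Cell *here);
    /* Control algorithm: decides, from perception and state, which action to take;
       this is where operational or behavioral autonomy is implemented             */
    void  (*control)(Agent *self);
    State *state;
};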
5.2 A Typical Application: Gathering
Agents
Our class of MAS can support numerous applications.
We illustrate this with a simulation in the framework
of mobile collective robotics. Agents (an agent
simulates the behavior of a real robot) seek objects
distributed in their environment, and we would
like them to stack all objects, as displayed in Figure
8.
Our approach rests on a system integrating operationally
autonomous agents, that is, each agent in the
system acts freely on a cell (the agent decides which
action to take according to its own control algorithm
and local perception). Therefore, there is no master
responsible for supervising the agents in the system,
thus allowing it to be more flexible and fault tolerant.
Figure 7: Architecture of an agent.
Figure 8: Collective robotics application: stacking objects.
Agents have neither explicit coordination features for
detecting and managing antagonism situations (conflicts
in their respective goals) nor communication
tools for negotiation. In fact, they communicate in
an indirect way, that is, via their influences on the
environment.
In our simulation, the environment is composed of
a discrete two dimensional L cell-sided grid, a set
of N objects and n mobile agents. An object can
be located on a cell or carried by an agent. Under
the given constraints, we implemented several variants
for agent modules (details can be found in [12]
and [16]). A simple control algorithm that can be
used is as follows. Agents move randomly from a cell
to a connected one. If an agent that does not carry
an object comes to a cell containing N_O objects, it will
pick up one object with a probability given by 1/N_O^α,
where α ≥ 0 is a constant; if an agent that carries an
object comes to a cell containing some objects, it will
systematically drop its object. If the cell is empty,
nothing happens.
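This control rule can be transcribed directly; the C sketch below assumes hypothetical helpers (random_neighbor, objects_on, pick_object, drop_object) and a uniform random number generator, and encodes only the behaviour just described.

/* Direct transcription of the simple control algorithm described above.
   Helper functions and the Agent/Cell types are hypothetical.           */
#include <math.h>
#include <stdlib.h>

typedef struct Cell Cell;
typedef struct { int carrying; Cell *cell; } Agent;

Cell *random_neighbor(Cell *c);    /* one of the connected cells           */
int   objects_on(Cell *c);         /* number of objects N_O on the cell    */
void  pick_object(Agent *a);       /* take one object from the cell        */
void  drop_object(Agent *a);       /* put the carried object on the cell   */

void control_step(Agent *a, double alpha)   /* alpha >= 0 */
{
    a->cell = random_neighbor(a->cell);     /* random move to a connected cell */
    int n_o = objects_on(a->cell);

    if (!a->carrying && n_o > 0) {
        /* pick up one object with probability 1 / N_O^alpha */
        double u = (double)rand() / ((double)RAND_MAX + 1.0);
        if (u < 1.0 / pow((double)n_o, alpha)) {
            pick_object(a);
            a->carrying = 1;
        }
    } else if (a->carrying && n_o > 0) {
        drop_object(a);              /* systematically drop on a non-empty cell */
        a->carrying = 0;
    }
    /* empty cell: nothing happens */
}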
This simulation has already been implemented serially,
exhibiting the emergence of properties in
the system, such as cooperation yielded by the recurrent
interactions of the agents; agents cooperate
to achieve a task without being aware of it. Further
details about this simulation and its outcomes can
be found in [16].
5.3 Coexistence and Distribution of
Autonomous Agents
We would like to stress the problem of implementing
a MAS, such as the one described above, on a
distributed architecture.
The instantiation of a MAS is most of the time serial.
However that may be, because one draws inspiration
from groups of robots or living entities, it seems
clear that the agents may run in parallel. At the level
of abstraction of the MAS specification, the term
parallelism simply conveys the notion of co-existence of
the autonomous agents. Hence, parallelism ideally
underlies every conception of a MAS and is thought
to be implicitly taken into account in a serial implementation.
Paradoxically, the projection of a MAS
onto a distributed architecture turns out to be far
from obvious. We will not discuss here what
the fundamental origins of this difficulty are; more
details may be found in [15] (it concerns the different
concepts of time in MAS and in distributed computing,
respectively). We will illustrate the difference
between the concepts of coordination in distributed
computing and in MAS.
The problem crystallizes around the different levels
of what we understand when speaking about coordination.
In a conception stage (i.e. before any implemen-
tation), the notion of "coordination of agents" refers
to a level of organization quite different from the one
at programming level. We studied our MAS application
(the gathering agents) in order to investigate
some methods of cooperation between agents, namely
emergent or self-organized cooperation, a sub-domain
of coordination. In this stage, "cooperation between
agents" deals with dependencies between agents as
autonomous entities. The challenge is to find an appropriate
trade-off between cooperation, the necessary
fruitful coordination of inter-dependent entities,
and their relative autonomy. At this stage, the envisaged
cooperation methods must be described, in
terms of the agents' architecture, their perception,
their actual sensors and effectors. Thus, these methods
have to be completely undertaken by their internal
control algorithm: this is a result of the embodiment
and situatedness prescriptions.
In an implementation stage, the notion of "coordi-
nation of agents" deals with the organization of the
actual processes, or "pieces of software" (structures,
objects, ...) which represent the agents at machine
level. In a serial implementation, agents work in a
round-robin fashion in such a way that data consistency
is preserved. Therefore, no further coordination
problems occur. Thus, serial implementations
do not perturb the conceptual definition of coordination
of the agents.
But in a distributed implementation, new problems
arise, due to shared resources and data, synchronization
and consistency concerns. Coordination models,
in distributed computing, are aimed at providing solutions
for these problems: they describe coordination
media and tools, external to the agents, so as to
deal with the consequences of the spatial distribution
of the supporting processes. For example, the coordination
medium, such as ports and connections in
ECM, is the substratum in which coordinated entities
are embedded for what concerns their "coordi-
nation dimension". This substratum should not be
confused with the agents' environment described at
MAS level. It reflects the distributed supporting ar-
chitecture. We find here again the notion of orthogo-
nality: the coordination medium is orthogonal to the
model of the agents and their environment. If we take
the separation of concern into account, this means
that the coordination introduced at the conceptual
level, which has to be implemented in the control
algorithm module of the agents, belongs to the computational
part, and has to be implemented in the
computational language, whereas the coordination of
the supporting processes has to be implemented in
the coordination language.
Nevertheless, the question is now to determine to
what extent this separation between computation
(including the agents' conceptual coordination) and coordination
(of distributed processes) is possible. In
other words, to what extent coordination methods for
distributed processes do not interfere with coordination
methods of agents as specified at MAS level; and
what kind of coordination media and tools may provide
coordination means compatible with the agent
architecture, their autonomy and locality prescriptions.
Usual platforms that enable distributed computing
do not belong to MAS domain. Distributed implementations
of a MAS through existing languages
would give rise, if no precautions are taken, to a hybrid
system which realizes an improper junction between
the two levels of definition of coordination. In
each case, the agent processes are coordinated or synchronized
at a rate and by means out of the conceptual
definition of the MAS, but that the chosen language
provides and compels to use in order to manage
its processes. Because the processes are not designed
with the aim of representing autonomous agents, the
resulting system may exhibit characteristics which
are not the image of a property of the MAS itself.
One of our goals is to understand and prescribe what
precautions are to be taken, and to develop a platform
that makes distributed implementations of our
MAS class an easier task.
For this, we start with an implementation of our
MAS class with STL.
5.4 Constraints for a Distributed Im-
plementation
Our very aim is to be able to express our autonomy-based
multi-agent model on a distributed architecture
in the most natural way, one which preserves the autonomy
and identity of the agents. We attempt to use
STL in order to distribute our system of gathering
agents. The problematics sketched above
is well reflected when we try to distribute the environment
itself on several processes (machines). The
only purpose of this division of the environment (for
instance, 4 blocks of (L/2)^2 cells each) is to take advantage
of a given distributed architecture. But it
clearly necessitates means in addition to coordination
mechanisms described at MAS level: a mechanism is
needed to cope with agents crossing borders between
sub-environments (of course this should be achieved
transparently to the user, it should be part of the
software platform). Moreover, we will need another
type of mechanism in order to cope with data consistency
(e.g. updating the number of objects on a
cell). These mechanisms should not alter every agen-
t's autonomy and behavior: we will have to dismiss
any unnecessary dependency.
5.5 Implementation in STL
The Environment is a torus grid, in which every cell
has four neighbors (four connectivity). Note that using
a four connectivity (as opposed to an eight connectivity)
basically does not change anything except that it
slightly alleviates the implementation. Agents comply
rigorously with the model previously introduced
in
Figure
7. They sense the environment through
their sensors and act upon their perception at once.
To take advantage of distributed systems, the Environment
is split into sub-environments, each of which
is encapsulated in a blop, as depicted in Figure 9,
thus providing an independent functioning between
sub-environments (and hence between agents roaming
in different sub-environments). Note that blops
have to be arranged so as to preserve the topology of
the sub-environments they implement.
Figure 9: Environment split up among 4 blops.
For our implementation new port types have been
introduced, namely P2P Nin and P2P Nbi, which are
respectively variants of P2Pin and P2P, for which
the saturation attribute is set to infinity (see Figure
10).
port P2P-Nin {
  ...   // as P2Pin, but with saturation set to *
}
port P2P-Nbi {
  ...   // as P2P, but with saturation set to *
}
Figure 10: User-defined port types.
Global Structure
The meta-blop world is composed of an init process,
in charge of the global initialization of the system,
and a set of N pre-defined blops (called bx with x
ranging from 1 to N), each of which encapsulates
and handles a sub-environment. Figure 11 gives a
graphical overview of the organization within a blop
bx in the meta-blop world. Note that on this figure,
only one blop bx is represented, so as to avoid overloading
the picture. In the case of an application with
multiple blops bx, there should be some connections
between the init process and all the blops, as well as
some connections between north ports of top blops
and south ports of bottom blops and between east ports of
east blops and west ports of west blops; these have been
intentionally dropped.
The init process has four static ports for every blop to be initialized: three of type P2Pout (init_NbAgts, cre_Agts and cre_SubEnv) and one of type P2P (eot). The role of the init process is threefold: first, to create through its init_NbAgts and cre_Agts ports the initial agents within every blop; second, to set up through its cre_SubEnv port the sub-environment (size, number of objects and position of the objects on the cells) of every blop; third, to collect the result of an experiment, to signal the end of an experiment, and to properly shut down the system through the eot port.
Blops bx: Figure 13 (see Appendix) shows what the implementation of the application depicted in Figure 9 looks like in the STL coordination language, in the case of four blops bx, namely b1, b2, b3 and b4. Figure 12 (see Appendix) presents the
declaration and instantiation of all the processes belonging
to a blop bx. Two types of processes may be
distinguished: processes that are purpose-built for a
distributed implementation of the multi-agent application
(they enable a distributed implementation),
namely initAgent and taxi, and processes that are
actually peculiar to the multi-agent application, viz.
subEnv and agent processes.
Ports of a Blop bx: Each blop has twelve static ports: four P2Pout outflowing direction ports (north_o, south_o, west_o, east_o) and four P2Pin inflowing direction ports (north_i, south_i, west_i, east_i), which are gateway ports enabling agent migration across blops; three P2Pin ports, namely b_NbAgts, b_Agents and b_SubEnv, used for the creation of the initial agents (actually realized in the initAgent process) and for an appropriate setup of the sub-environment (achieved in the subEnv process); and a P2P port (b_eot) used to forward the result of an experiment to the init process and to indicate the end of an experiment.
For the time being, the topology between blops is
set in a static manner, by creating the ports with
appropriate names (see Figure 13 of the Appendix).
The four inflowing direction ports of a blop match
with a port of its inner process initAgent. The four
outflowing direction ports of a blop match with ports
of its inner process taxi.
initAgent Process, new_agt Event: The initAgent process is responsible for the creation of the agents. It has four static ports: nb_Agts of type P2Pin, newArrival of type P2P_Nin, location of type P2P and init of type P2Pout. At the outset of the experiment, the initAgent process will be informed by the init process, through its nb_Agts port, of the number of agents to be created in the present blop. The initAgent process will then loop on its newArrival port so as to receive the identifiers of the agents to be created. As soon as a value arrives at this port, the new_agt event (see Figure 12 of the Appendix) is triggered and creates a new agent process. In the meantime, the initAgent process will randomly draw an agent position for every agent identifier. The location port enables the initAgent process to communicate with the subEnv process, so as to have better control over the position of an agent with regard to other agents' and objects' positions, e.g. to ensure that at the outset no more than one agent can reside on an empty cell. The initAgent process will then write on its init port some values for the agent just created. The latter, through its creation port, will read the information that was previously written on the init port of the initAgent process. The values transmitted include, for instance, the position of the agent and its state.
Note that the newArrival port is connected to all inflowing direction ports of the blop within which it resides, thus making it possible to deal with agents migrating across blops in the course of an experiment, using the same event mechanism as described above.
Figure 11: init process and a single blop bx; solid and dotted lines are introduced just for the purpose of visualization.
The location port is very useful at this point, because the initAgent process can already check with the subEnv process whether the position the agent intends to move to is permitted or not. In case it is not, the initAgent process will have to randomly draw a new position in the neighborhood of the position the agent intended to go to.
agent Process: This process has two static ports (req_ans of type BB and creation of type P2Pin) plus a dynamic P2Pout port, to_taxi. As already stated, this process reads on its creation port some values (its position and its state). All req_ans ports of the agents are connected to a Blackboard, through which agents sense their environment (perception) and act (action) upon it, by performing Linda-like in/out operations with appropriate messages. The type of action an agent can take depends on the type of control algorithm implemented within the agent (see the architecture of an agent in Figure 7). The to_taxi port is used to communicate dynamically with the taxi process in case of migration: the position and state of the agent are indeed copied to the taxi process. The decision to migrate is always taken by the subEnv process.
subEnv Process: The subEnv process handles access to the sub-environment and is in charge of keeping data consistent. It is also responsible for migrating agents which cross the border of a sub-environment. It has a static in_out port (of type BB) connected to the Blackboard and a static P2Pout port to_taxi connected to the taxi process. Once initialized through its init P2Pin port, the subEnv process builds the sub-environment. By performing in/out operations with appropriate tuples, the subEnv process will process the requests of the agents (e.g. number of objects on a given cell, move to next cell) and reply to their requests (e.g. x objects on a given cell, move allowed and registered). When the move of an agent would lead it to cross the border (cell located in another blop), the subEnv process will first inform the agent that it has to migrate and then inform the taxi process that an agent has to be migrated (the direction the agent has to take is transmitted, so that the taxi process knows which port to write to). The location P2P port is used to reply to the initAgent process following its request about the position of an agent with regard to other agents' and objects' positions.
The taxi Process: The taxi process is responsible for migrating agents across blops. It has four static direction ports (of type P2Pout), which are connected to the four outflowing direction ports of the blop within which it stands. When this process receives on its static P2Pin port requ the direction towards which an agent has to migrate, it creates a dynamic P2Pin port con_Agt in order to establish a communication with the appropriate agent process, by means of which it collects all the useful information about the agent (intended position plus state). These values are then written on the port corresponding to the direction to take and are transferred to the newArrival port of the initAgent process of the concerned blop, inducing the dynamic creation of a new agent process in that blop and thus materializing the migration.
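The handoff just described (the subEnv process decides that a move crosses a border, the taxi process forwards the agent's data, and the neighboring blop's initAgent process re-creates the agent) can be summarized by the following Python sketch. It only illustrates the control flow; it is not STL or Pt-PVM code, and all class and method names are our own:

# Illustrative sketch of the agent-migration handoff between blops.
from dataclasses import dataclass

@dataclass
class AgentData:
    ident: int
    position: tuple   # intended position in the destination sub-environment
    state: dict       # whatever internal state the agent carries

class Taxi:
    def __init__(self, direction_ports):
        # direction_ports maps 'north'/'south'/'east'/'west' to the
        # newArrival handler of the neighboring blop's initAgent process.
        self.direction_ports = direction_ports

    def migrate(self, agent_data, direction):
        # Write the collected agent data on the outflowing direction port;
        # on the other side this triggers the new_agt event.
        self.direction_ports[direction](agent_data)

class InitAgent:
    def __init__(self, blop_name):
        self.blop_name = blop_name
        self.agents = {}

    def new_arrival(self, agent_data):
        # new_agt event: dynamically create the agent process in this blop.
        self.agents[agent_data.ident] = agent_data
        print(f"{self.blop_name}: re-created agent {agent_data.ident} at {agent_data.position}")

if __name__ == "__main__":
    b2_init = InitAgent("b2")
    taxi_b1 = Taxi({"east": b2_init.new_arrival})
    # subEnv of b1 has decided that agent 7 crosses the eastern border:
    taxi_b1.migrate(AgentData(7, (0, 3), {"carrying": 0}), "east")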
6 Discussion
6.1 STL a coordination language
As a coordination language for distributed program-
ming, ECM along with STL present some similarities
with several coordination languages, and particularly
with the IWIM model [2] and its instantiation Manifold
[3]. However they differ in several important
points:
- One might be inclined to identify blops with
IWIM managers (manifolds). This is not the
case, because blops are not coordinators that
create explicitly interconnections between ports.
The establishment of connections is implicit, resulting
from a matching mechanism, depending
on the types and the states of the ports. This is
definitely a different point of view in which communication
patterns are not imposed. Further-
more, the main characteristic of blops is to encapsulate
objects, thus forming a separate name-space
for enclosed entities and an encapsulation
mechanism for events. Nested blops are a powerful
mechanism to structure private name-spaces,
offering an explicit hierarchical model.
- ECM generalizes connection types: either
stream, blackboard or group. This adds powerful
means to express coordination with tuple-space
models and does not restrict to channels. Refined
semantics can be defined in virtue of port
characteristics (features).
- In ECM, events are not signals broadcast in the
environment, but routines belonging to blops.
They are attached to ports with conditions
on their state that determine when events are
launched. Events can create new blops and pro-
cesses, and attach events to ports. Their action
area is limited to a blop.
- Interconnections evolve through configuration
changes of the set of ports within a blop, induced
by events and also by the processes them-
selves. In fact, the latter can create new processes
and new ports, thus yielding to communication
topology changes.
ECM and STL present similarities with several
other coordination models and languages like
Linda [9], Darwin [24] or ConCoord [18]. We
mention a few other specific characteristics of our work.
Like several further developments of the Linda model
(for instance Objective Linda [19]), ECM uses a hierarchical
multiple coordination space model, in contrast
to the single flat tuple space of the original
Linda. Processes get started through an event in
a blop, or automatically upon initialization of a blop,
or through a creation operation by another process;
Linda uses one single mechanism: eval(). Processes
do not execute in a medium which is used to
transfer data. In order to communicate, they do not
have references to other processes or to ports belonging
to other entities; they communicate anonymously
through their ports.
6.2 The agent language STL++, a new instantiation of the ECM model
Besides the advantage of a better overview of coordination duties, it turned out that the separation of code (as in STL) cannot always be maintained. Although the black box process model of
ECM is a good attempt to separate coordination and
computation code, dynamic properties proved to be
difficult to express in a separate language. This is
for example reflected in STL by the primitives which
must be used in the computation language in order
to use dynamic coordination facilities of STL. Dynamic
properties cannot be totally separated from the actual program code. Furthermore, duplication of code for processes may make it difficult to manage the code of a distributed application.
These observations lead us to the development of
another instantiation of ECM, namely the coordination
language STL++. This new language binding
implements ECM by enriching a given object oriented
language (C++ in our case) with coordination
primitives, offering highly dynamic properties. An
STL++ application is then a set of classes inheriting
from the basic classes of the STL++ library.
STL++ aims at giving basic constructs for the implementation
of generic multi-agent platforms, thus
being an agent language [32]. A thorough description
of STL++ can be found in [29].
7 Conclusion
In this paper, we presented the ECM coordination
model and STL, its language binding. We built a
first STL-based prototype on top of the existing Pt-
pvm platform [22]. An implementation of a classical collective robotics simulation illustrated the power of STL and demonstrated its appropriateness for coordinating a class of autonomous agents, whose most critical constraint is the preservation of autonomy by dismissing coordination mechanisms embedded exclusively for purposes of implementation (unnecessary dependencies).
As far as the development of a platform for multi-agent
programming is concerned, STL can be seen
as a first starting point. STL already includes mechanisms
which are appropriate for multi-agent pro-
gramming, among which are: (1) the absence of a
central coordinator process, which does not relate to
any type of entity in the multi-agent system; (2) the
notion of ports avoiding any additional coordinator
process; and (3) in despite of (2) the notion of blop
hierarchy which in our case allows us to represent the
encapsulation of the environment and the agents.
The STL coordination model is still to be extended
in order to encompass as many generic coordination
patterns as possible, yielding STL skeletons at the developer's disposal for general-purpose implementations. Future work will consist of: (1) improving the model,
such as introducing new user-defined attributes for
ports, dynamic ports for blops, data typing for port
types, refining sub-typing of ports, supporting multiple
names for ports, and (2) developing a graphical
user interface to facilitate the specification of the co-ordination
part of a distributed application.
There are two major outcomes to this work. First,
as autonomous agents' systems are aimed at addressing
problems which are naturally distributed,
our coordination platform provides a user the possibility
to have an actual distributed implementation
and therefore to benefit from the numerous advantages
of distributed systems, so that this work is
a step forward in the Autonomous Agents commu-
nity. Secondly, as the generic patterns of coordination
for autonomy-based multi-agent implementations
are embedded within the platform, a user can
quite easily develop new applications (e.g. by changing the type of autonomy of the agents), insofar as they comply with the generic model.
Acknowledgements
We are grateful to André Horstmann and Christian Wettstein for their valuable work, which consisted of realizing parts of the STL platform.
--R
ACTORS: A Model of Concurrent Computation in Distributed Systems.
The IWIM Model for Coordination of Concurrent Activities.
An Overview of Manifold and its Implementation.
Programming by Multiset Transformation.
The ToolBus Coordination Architecture.
Extending Objects with Rules
Intelligence without Reason.
Adaptive Parallelism and Piranha.
Linda in Context.
Coordination Languages and Their Significance.
Bauhaus Linda.
Coopération implicite et
Autonomous Agents: from Concepts to Implementation.
An Architecture to Co-ordinate Distributed Applications on the Web
Performance of Autonomy-based Systems: Tuning Emergent Cooperation
CoLa: A Coordination Language for Massive Parallelism.
A Software Environment for Concurrent Coordinated Programming.
Objective Linda: A Coordination Model for Object-Oriented Parallel Programming.
Coordination Requirements for Open Distributed Systems.
and Pt-PVM: Concepts and Tools for Coordination of Multi-threaded Applications.
Artificial Life.
Structuring parallel and distributed programs.
The Interdisciplinary Study of Coordination.
The Design of Intelligent Agents: A Layered Approach.
Coordination Models and Languages.
A Set of Tuple Space Primitives for Distributed Coordination.
Programming Languages for Parallel Processing.
Agent Theories
Adaptive Behavior in autonomous agents.
--TR | autonomous agents;distributed systems;coordination;collective robotics |
297577 | Supporting Scenario-Based Requirements Engineering. | AbstractScenarios have been advocated as a means of improving requirements engineering yet few methods or tools exist to support scenario-based RE. The paper reports a method and software assistant tool for scenario-based RE that integrates with use case approaches to object-oriented development. The method and operation of the tool are illustrated with a financial system case study. Scenarios are used to represent paths of possible behavior through a use case, and these are investigated to elaborate requirements. The method commences by acquisition and modeling of a use case. The use case is then compared with a library of abstract models that represent different application classes. Each model is associated with a set of generic requirements for its class, hence, by identifying the class(es) to which the use case belongs, generic requirements can be reused. Scenario paths are automatically generated from use cases, then exception types are applied to normal event sequences to suggest possible abnormal events resulting from human error. Generic requirements are also attached to exceptions to suggest possible ways of dealing with human error and other types of system failure. Scenarios are validated by rule-based frames which detect problematic event patterns. The tool suggests appropriate generic requirements to deal with the problems encountered. The paper concludes with a review of related work and a discussion of the prospects for scenario-based RE methods and tools. | Introduction
Several interpretations of scenarios have been proposed ranging from examples of
behaviour drawn from use cases [29], descriptions of system usage to help understand
socio-technical systems [30], and experience based narratives for requirements elicitation
and validation [39], [50]. Scenarios have been advocated as an effective means of communicating between users and stakeholders and anchoring requirements analysis in real world experience [15]. (This research has been funded by the European Commission ESPRIT 21903 'CREWS' (Co-operative Requirements Engineering With Scenarios) long-term research project.) Unfortunately scenarios are extremely labour-intensive to
capture and document [20], [44]; furthermore, few concrete recommendations exist about
how scenario-based requirements engineering (RE) should be practised, and even less tool
support is available.
Scenarios often describe information at the instance or example level. This raises the
question of how instance level information can be generalised into the models and
specifications that are used in software engineering. Scenarios may be used to validate
requirements, as 'test data' collected from the observable practice, against which the
operation of a new system can be checked [40]. Alternatively, scenarios may be seen as
pathways through a specification of system usage, and represented as animations and
simulations of the new system [14]. This enables validation by inspection of the behaviour
of the future system. This paper describes a method and tool support for scenario based
requirements engineering that uses scenarios in the latter sense.
In industrial practice scenarios have been used as generic situations that can prompt reuse
of design patterns [8], [48]. Reuse of knowledge during requirements engineering could
potentially bring considerable benefits to developer productivity. Requirements reuse has
been demonstrated in a domain specific context [31]; however, we wish to extend this
across domains following our earlier work on analogies between software engineering
problems [51].
The paper is organised in four sections. First, previous research is reviewed, then in
section 3 a method and software assistant tool for scenario based RE, CREWS-SAVRE
(Scenarios for Acquisition and Validation of Requirements) is described and illustrated
with a financial dealing system case study. Finally, we discuss related work and future
prospects for scenario based RE.
2. Previous Work
Few methods advise on how to use scenarios in the process of requirements analysis and
validation. One of the exceptions is the Inquiry Cycle of Potts [39] which uses scenario
scripts to identify obstacles or problems in a goal-oriented requirements analysis.
Unfortunately, the Inquiry Cycle does not give detailed advice about how problems may be
discovered in scenarios; furthermore, it leaves open to human judgement how system
requirements are determined. This paper builds on the concepts of the Inquiry Cycle with
the aim of providing more detailed advice about how scenarios can be used in
requirements validation. In our previous work we proposed a scenario based requirements
analysis method (SCRAM) that recommended a combination of concept demonstrators,
scenarios and design rationale [50]. SCRAM employed scenario scripts in a walkthrough
method that validated design options for 'key points' in the script. Alternative designs were
documented in design rationale and explained to users by demonstration of early
prototypes. SCRAM proved useful for facilitating requirements elaboration once an early
prototype was in place [48]; however, it gave only outline guidance for a scenario-based
analysis.
Scenarios can be created as projections of future system usage, thereby helping to identify
requirements; but this raises the question of how many scenarios are necessary to ensure
sufficient requirements capture. In safety critical systems accurate foresight is a pressing
problem, so taxonomies of events [24] and theories of human error [37], [42], have been
used to investigate scenarios of future system use. In our previous research [49], we have
extended the taxonomic approach to RE for safety critical systems but this has not been
generalised to other applications.
Scenarios have been adopted in object oriented methods [29], [21], [10] as projected
visions of interaction with a designed system. In this context, scenarios are defined as
paths of interaction which may occur in a use case; however, object oriented methods do
not make explicit reference to requirements per se. Instead, requirements are implicit
within models such as use cases and class hierarchies. Failure to make requirements
explicit can lead to disputes and errors in validation [17], [38]. Several requirements
management tools have evolved to address the need for requirements tracability (e.g.
Doors, Requisite Pro). Even though such tools could be integrated with systems that
support object oriented methods (e.g. Rational Rose) at the syntactic level, this would be
inadequate because the semantic relationships between requirements and object oriented
models needs to be established. One motivation for our work is to establish such a bridge
and develop a sound means of integrating RE and OO methods and tools.
Discovering requirements as dependencies between the system and its environment has
been researched by Jackson [26] who pointed out that domains impose obligations on a
required system. Jackson proposed generic models, called problem frames, of system-environment
dependencies but events arising from human error and obligations of systems
to users were not explicitly analysed. Modelling relationships and dependencies between
people and systems has been investigated by Yu and Mylopoulos [54] and Chung [7].
Their i* framework of enterprise models facilitates investigation of relationships between
requirements goals, agents and tasks; however, scenarios were not used explicitly in this
approach. Methods are required to unlock the potential of scenario based RE; furthermore,
the relationship between investigation based on examples and models on one hand and the
systems and requirements imposed by the environment on the other, needs to be
understood more clearly.
One problem with scenarios is that they are instances, i.e. specific examples of behaviour
which means that reusing scenario based knowledge is difficult. A link to reusable designs
might be provided if scenarios could be generalised and then linked to object-oriented
analysis and design patterns [18]. Although authors of pattern libraries do describe
contexts of use for their patterns [8], [19], they do not provide guidance about the extent of
such contexts. Requirements reuse has been demonstrated by Lam et al. [30], although the
scope of reuse was limited to one application domain of jet engine control systems. Many
problems share a common abstraction [51], and this raises the possibility that if common
abstractions in a new application domain could be discovered early in the RE process,
then it may be possible to reuse generic requirements and link them to reusable designs.
This could provide a conduit for reusing the wealth of software engineering knowledge
that resides in reusable component libraries, as well as linking requirements to object-oriented
solution patterns (e.g. [8], [19]). Building a bridge from requirements engineering
to reusable designs is another motivation for the research reported in this paper.
Next we turn to a description of the method and tool support.
3. Method and Tool support for Scenario-based RE
The method of scenario-based RE is intended to be integrated with object oriented
development (e.g. OOSE - [29]), hence use cases are employed to model the system
functionality and behaviour. A separate requirements specification document is maintained
to make requirements explicit and to capture the diversity of different types of
requirements, many of which can not be located in use cases. Use cases and the
requirements specification are developed iteratively as analysis progresses.
We define each scenario as "one sequence of events that is one possible pathway through a
use case". Hence many scenarios may be specified for one use case and each scenario
represents an instance or example of events that could happen. Each scenario may describe
both normal and abnormal behaviour. The method illustrated in Figure 1 uses scenarios to
refine and validate system requirements. The stages of the method are as follows:
1. Elicit and document use case
In this stage use cases are elicited directly from users as histories of real world system
usage or are created as visions of future system usage. The use case model is validated for
correctness with respect to its syntax and semantics. The paper does not report this stage in detail.
2. Analyse generic problems and requirements
A library of reusable, generic requirements attached to models of application classes is
provided. A browsing tool matches the use case and input facts acquired from the designer
to the appropriate generic application classes and then suggests high level generic
requirements attached to the classes as design rationale 'trade-offs'. Generic requirements
are proposed at two levels: first, general routines for handling different event patterns, and
secondly, requirements associated with application classes held in the repository. The
former provide requirements that develop filters and validation processes as specified in
event based software engineering methods (e.g. Jackson System Development [25]); while
the latter provide more targeted design advice; for instance, transaction handling
requirements for loans/hiring applications.
3. Generate scenarios
This step generates scenarios by walking through each possible event sequence in the use
case, applying heuristics which suggest possible exceptions and errors that may occur at
each step. This analysis helps the analyst elaborate pathways through the use case in two
passes; first for normal behaviour and secondly for abnormal behaviour. Each pathway
becomes a scenario. Scenario generation is supported by a tool which automatically
identifies all possible pathways through the use case and requests the user to select the
more probable error pathways.
4. Validate system requirements using scenarios
Generation is followed by tool-assisted validation which detects event patterns in scenarios
and presents checklists of generic requirements that are appropriate for particular normal
and abnormal event patterns. In this manner requirements are refined by an interactive
dialogue between the software engineer and the tool. The outcome is a set of formatted use cases, scenarios and requirements specifications which have been elaborated with reusable
requirements.
Although the method appears to follow a linear sequence, in practice the stages are
interleaved and iterative.
Figure 1 Method Stages for Scenario-based Requirements Engineering.
3.1. Schema for scenario based requirements modelling
In this section we describe the schema of scenario based knowledge shown in Figure 2. A
use case is a collection of actions with rules that govern how actions are linked together,
drawn from Allen's temporal semantics [2] and the calculus of the ALBERT II
specification language [14]. An action is the central concept in both scenarios and use
cases. The use case specifies a network of actions linked to the attainment of a goal which
describes the purpose of the use case. Use cases are refined into lower levels of detail, so a
complete analysis produces a layered collection of use cases. Use cases have user-defined
properties that indicate the type of activity carried out, e.g. decision, transaction, control,
etc. Each action can be subtyped as either cognitive (e.g. the buyer agrees the deal price on
offer), physical (e.g. the dealer returns the telephone receiver), system-driven (e.g. the
dealing systems stores information about the current deal) or communicative (e.g. the
buyer requests a quote from the dealer). Each action has one start event and one end event.
Actions are linked through 8 types of action-link rules which are described later in more
detail.
Figure 2 Meta-schema for modelling use cases and scenario based knowledge.
Each action involves one or more agents. Each agent can be either human (e.g. a dealer),
machine (e.g. a dealing-system), composite (e.g. a dealing-room) or an unspecified type.
Agents have user-defined properties that describe their knowledge and competencies.
Action - agent links are specified by an involvement relation that can be subtyped as
{performs, starts, ends, controls, responsible}. Each action uses nil, one or many object
instances denoted by a use relation, subtyped as {accesses, reads, operates}. Each action
can also result in state transitions which change the state of objects. States are aggregations
of attribute values which characterise an object at a given time in both quantitative and
qualitative terms. Structure objects are persistent real world objects that have spatial
properties and model physical and logical units of organisation. Structure objects are
important components in reusable generic domain models (called object system models
and described in section 3.5), but also allow use case models to be extended to describe the
location of objects, agents and their activity.
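As a rough illustration of how this meta-schema could be represented inside a tool, the following Python sketch defines minimal data structures for agents, actions, link rules and use cases; the class and field names are our own simplification and do not reproduce the CREWS-SAVRE implementation:

# Minimal illustrative data structures for the use case / scenario meta-schema.
from dataclasses import dataclass, field
from typing import List, Dict

@dataclass
class Agent:
    name: str
    kind: str            # 'human', 'machine', 'composite' or 'unspecified'
    properties: Dict[str, str] = field(default_factory=dict)

@dataclass
class Action:
    ident: int
    description: str
    kind: str            # 'cognitive', 'physical', 'system' or 'communicative'
    agents: List[Agent] = field(default_factory=list)   # involvement links
    objects: List[str] = field(default_factory=list)    # names of objects used

@dataclass
class LinkRule:
    a: int               # identifier of action A
    b: int               # identifier of action B
    kind: str            # 'then', 'meanwhile', 'includes', 'or', ...

@dataclass
class UseCase:
    name: str
    goal: str
    actions: Dict[int, Action] = field(default_factory=dict)
    links: List[LinkRule] = field(default_factory=list)

# Example: a fragment of the 'prepare quote' use case.
dealer = Agent("dealer", "human")
buyer = Agent("counterparty", "human")
system = Agent("dealing-system", "machine")
uc = UseCase("prepare quote", "agree a quotation for a deal")
uc.actions[20] = Action(20, "buyer requests deal from the dealer", "communicative", [buyer, dealer])
uc.actions[40] = Action(40, "dealer retrieves price information", "system", [dealer, system])
uc.links.append(LinkRule(20, 40, "meanwhile"))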
Introduction to the case study
The case study is based on a security dealing system at a major bank in London. Securities
dealing systems buy and sell bonds and gilt-edged stock for clients of the bank. The bank's
dealers also buy and sell securities on their own account to make a profit. Deals may be
initiated either by a client requesting transactions, or by a counterparty (another bank or
stockbroker who acts as a buyer) requesting a quotation, or by the dealer offering to buy or
sell securities. Quotes are requested over the telephone, while offers are posted over
electronic wire services to which other dealers and stockbrokers subscribe. In this case
study we focus on deals initiated by a counterparty from the viewpoint of the dealer. The
main activities in this use case include agreement between the dealer and buyer on the deal
price and number of stock, entering and recording deal information in the computer
system, and checking deal information entered by the dealer to validate the deal.
The case study contains several use cases; however, as space precludes a complete
description, we will take one use case, 'prepare-quote', as an example, as illustrated in
figure 3. The sequence is initiated by a request event from the counterparty agent. The
dealer responds by providing a quotation which the counterparty assesses. If it is suitable
the counterparty agrees to the quotation and the deal is completed. Inbound events to the
system are the deal which has to be recorded and updated, while outbound events are
displays of market information and the recorded deal. System actions are added to model
the first vision of how the dealing support system will work. The dealer agent is linked to
the dealing room structure object that describes his location of work. The dealer carries out
the "prepare quote" use case which is composed of several actions and involves the "trade"
object.
Figure 3 Upper level use case illustrated as an agent-interaction diagram showing tasks, agents and actions (agents: Counter-Party, Dealing System, Dealer, Head Dealer).
3.2. Tool support
The method is supported by Version 2.1 of the CREWS-SAVRE tool that has been
developed on a Windows-NT platform using Microsoft Visual C++ and Access, thus
making it compatible for loose integration with leading commercial requirements
management and computer-aided software engineering software tools. It supports 6 main
functions which correspond to the architecture components shown in Figure 4.
1: incremental specification of use cases and high-level system requirements (the domain/use case modeller supports method stage 1);
2: automatic generation of scenarios from a use case (the scenario generator supports stage 3);
3: manual description of use cases and scenarios from historical data of previous system use, as an alternative to tool-based automatic scenario generation (the use case/scenario authoring component supports stage 1);
4: presentation of scenarios, supporting user-led walkthrough and validation of system
requirements (scenario presenter supports stage 4);
5: semi-automatic validation of incomplete and incorrect system requirements using
commonly occurring scenario event patterns (requirements validator supports stage 4).
CREWS-SAVRE is loosely coupled with RequisitePro's requirements database so it
can make inferences about the content, type and structure of the requirements.
Another component, which guides natural language authoring of use case specifications, is
currently under development. The component uses libraries of sentence case patterns (e.g.
[14]) to parse natural language input into semantic networks prior to transforming the
action description into a CREWS-SAVRE use case specification. Space precludes further
description of the component in this paper.
The CREWS-SAVRE tool permits the user to develop use cases which describe projected
or historical system usage. It then uses an algorithm to generate a set of scenarios from a
use case. Each scenario describes a sequence of normal or abnormal events specified in the
original use case. The tool uses a set of validation frames to detect event patterns in
scenarios, thereby providing semi-automatic critiquing with suggestions for requirements
implied by the scenarios.
Figure 4 Overview of the CREWS-SAVRE tool architecture. The components shown are the domain/use case modeller, scenario generator, scenario presenter, requirements validator, use case author tool, scenario author tool and a use case/environment modeller and validator, together with the coupling to the RequisitePro requirements management tool.
3.3 Use Case Specification
The user specifies a use case using CREWS-SAVRE's domain and use case modeller
components. First, for a domain, the software engineer specifies all actions in the domain,
defines agents and objects linked to these actions, assigns types {e.g. communicative,
physical, etc.} to each action, and specifies permissible action sequences using action-link
rules. From this initial domain model, the user can then choose the subset of domain
actions which form the normal course of a use case. This enables a user to specify more
than one use case for a domain, and hence generate more scenarios for that domain. As a
result, a use case acts as a container of domain information relevant to the current scenario
analysis. As scenarios are used to validate different parts of the domain's future system,
different use cases containing different action descriptions are specified. Consequently, the
domain model enables simple reuse of action descriptions across use cases, because two or
more use cases can include the same action.
Eight types of action-link rule are available in CREWS-SAVRE:
strict sequence (A then B): ev(endA)<ev(startB);
part sequence (A meanwhile B): ev(startA)<ev(startB);
inclusion (A includes B): (ev(startA)<ev(startB)) AND (ev(endA)>ev(endB));
concurrent (A concurrent B): no rules about the ordering of events;
alternative (A or B): either the events of A or the events of B occur, but not both;
parallel and equal (A equals B): (ev(startA)=ev(startB)) AND (ev(endA)=ev(endB));
equal-start (A starts-with B): (ev(startA)=ev(startB)) AND (ev(endA)not=ev(endB));
equal-end (A ends-with B): (ev(startA)not=ev(startB)) AND (ev(endA)=ev(endB));
where event ev(X) represents the point in time at which the event occurs. Actions are
defined by ev(startA)<ev(endA) and ev(startB)<ev(endB), that is all actions have a
duration and the start event of an action must occur before the end event for the same
action. These link rule types build on basic research in temporal semantics [2] and the
formal temporal semantics and calculus from a real-time temporal logic called
CORE which underpins the ALBERT II specification language [23]. The current version
of CREWS-SAVRE does not provide all of Allen's [2] 13 action-link types. Rather, it
provides a set of useful and usable semantics based on practical reports of use case
analysis (e.g. [1], [29]).
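To make the temporal semantics of the link rules concrete, the following Python sketch (our own illustration, not tool code) checks whether the timed start and end events of two actions satisfy a named rule; the alternative rule is omitted because it constrains which actions occur rather than their timing:

# Illustrative check of action-link rules over timed start/end events.
def satisfies(rule, start_a, end_a, start_b, end_b):
    """Return True if the two actions' event times satisfy the named link rule."""
    assert start_a < end_a and start_b < end_b, "every action has a duration"
    if rule == "then":                 # strict sequence: A ends before B starts
        return end_a < start_b
    if rule == "meanwhile":            # part sequence: A starts before B starts
        return start_a < start_b
    if rule == "includes":             # B happens entirely within A
        return start_a < start_b and end_a > end_b
    if rule == "equals":               # parallel and equal
        return start_a == start_b and end_a == end_b
    if rule == "starts-with":          # equal start, different end
        return start_a == start_b and end_a != end_b
    if rule == "ends-with":            # equal end, different start
        return start_a != start_b and end_a == end_b
    if rule == "concurrent":           # no ordering constraint
        return True
    raise ValueError(f"unknown rule: {rule}")

# Buyer requests deal (action 20) meanwhile dealer retrieves prices (action 40):
print(satisfies("meanwhile", 0, 5, 2, 7))   # True
print(satisfies("then", 0, 5, 2, 7))        # False: action 20 has not ended yet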
For the dealing system domain, two actions are Buyer requests deal from the dealer and
Dealer retrieves price information from the dealer-system. Selection of the action-link rule
MEANWHILE indicates that, in general, the dealer begins to retrieve price information
only after the buyer begins to request the deal. A third action, this time describing system
behaviour (Dealer-system displays the price information to the dealer), can be linked
through the THEN rule to specify a strict sequence between the two actions, in that the
first action must end before the second action starts. Part of the dealing system domain
model is shown in Figure 5. It shows a subset of the current domain actions (shown as (A) in Figure 5), the specification of attributes of a new action (B), current action-link rules (C)
and agent types (D). Other parts of the domain modeller are outside of the scope of this
description.
One or more use cases are specified for a domain such as the dealing system. Each use
case is linked to one high-level requirement statement, and each system action to one or
more system requirements. Each use case specification is in 4 parts. The first part specifies
the identifiers of all actions in the use case. The second part specifies action-link rules
linking these actions. The third part contains the object-mapping rules needed to handle
use of synonyms in action descriptions. The fourth part specifies exception-types linked to
the use case, as described in section 3.6.
Figure 5 Domain Modeller screen dump.
3.4 Checking the use case specification
Validation rules check the integrity of the relationships specified in section 3.1,
represented in tuple format < model component1, relationship-type, model component2>.
The tool checks the integrity of use case models to ensure they conform to the schema, but
in addition, the tool is parameterised so it can check use cases in a variety of ways. This
enables user defined validation of properties not defined in the schema. Validation checks
are carried out by clusters of rules, called validation-frames, which are composed of two parts. The first part is a situation that detects a structural pattern (i.e. a combination of components connected by a relationship) in the use case. The second part contains requirements that should be present in the part of the model detected by the situation. Frames are used for validating the consistency of use case models against the schema, and for detecting potential problems in use cases as detailed in section 3.5. In the former case the frames detect inconsistencies in a use case; in the latter case frames detect event patterns in scenarios and suggest appropriate generic requirements. Two examples of consistency checking
frames are as follows:
(i) Checks for agents connected by a specific relationship type.
validation-frame {detects dyadic component relationships}
situation:
model(component(x), relationship (y), component (z));
schema requirements:
(component type (i) component (x), mandatory);
(component type (j) component (z), mandatory);
end-validation-frame
Example: The user decides to check that all actions are controlled by at least one agent.
First the tool finds all nodes with a component type = agent, then it finds all the nodes
connected to that agent which are actions, and finally checks whether the involvement relationship that connects the two nodes is of the correct type (in this example, controls). As the tool is configurable, the input parameters (i) and (j) can be any schema primitives, and the relationship (y) is
defined in the schema. The tool detects untyped or incorrect relationships using this
'structure matching' algorithm. In the 'prepare quotes' use case (see figure 3), this
validation check should detect that the dealer controls the give-quote action.
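The structure-matching idea behind this validation frame can be illustrated with a small Python sketch; the tuple-based model and the frame logic below are our own simplification, not the tool's rule engine:

# Illustrative sketch of a structure-matching validation frame: check that
# every action is controlled by at least one agent.
model = [
    # (component1, its type, relationship, component2, its type)
    ("dealer", "agent", "controls", "give-quote", "action"),
    ("counterparty", "agent", "performs", "request-quote", "action"),
    ("evaluate-deal", "action", "uses", "trade", "object"),
]

def check_actions_controlled(model):
    """Report every action that is not the target of a 'controls' relationship."""
    actions = {c1 for (c1, t1, _, c2, t2) in model if t1 == "action"} | \
              {c2 for (c1, t1, _, c2, t2) in model if t2 == "action"}
    controlled = {c2 for (c1, t1, rel, c2, t2) in model
                  if t1 == "agent" and rel == "controls" and t2 == "action"}
    return sorted(actions - controlled)

print("actions with no controlling agent:", check_actions_controlled(model))
# -> ['evaluate-deal', 'request-quote'] in this toy model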
(ii) Validates whether two components participating in a relationship have specific
properties.
validation-frame {detects components in a relationship have the correct properties}
situation:
model(component(x), property(w), relationship (z), component (y),
property (v));
schema requirements:
(component type (i) component (x), mandatory);
(component type (j) component (y), mandatory);
(component (x) property (w), mandatory);
(component (y) property (v), mandatory);
end-validation-frame
Example: The user wishes to validate that all agents that are linked to a use case with a 'decision' property have an 'authority' property. The properties (v,w) to be tested are
entered in a dialogue, so any combination may be tested. In this case the tool searches for
nodes with a type = agent, and then tests all the relationships connected to the agent node.
If any of these relationships is connected to a node type = use case, then the tool reads the
property list of the use case and the agent. If the agent does not have an 'authority' property
and the use case has a 'decision' property then a warning is issued to the user. As properties
are not sub types, this is a lexical search of the property list. This frame will test that the
dealer has authority for the use case 'evaluate choice' - which has a decision property.
The system is configurable so any combinations of type can be checked for any set of
nodes and arcs which conform with the schema described in figure 2. Our motivation is to
create requirements advice that evolves with increasing knowledge of the domain, so the
user can impose constraints beyond those specified in the schema, and use the tool to
validate that the use case conforms to those constraints.
3.5 Using Application classes to identify Generic requirements
This stage takes the first cut use case(s) and maps them to their corresponding abstract
application classes. This essentially associates use cases describing a new system with
related systems that share the same abstraction. A library of application classes, termed
Object system models (OSMs) has been developed and validated in our previous work on
computational models of analogical structure matching [34].
Object system models (OSMs) are organised in 11 families and describe a wide variety of
applications classes. The families that map to the case study application are object supply
(inventory control), accounting object transfer (financial systems), object logistics
(messaging), object sensing (for monitoring applications), and object-agent control
(command and control systems). Each OSM is composed of a set of co-operating objects,
agents and actions that achieve a goal, so they facilitate reuse at the system level rather
than as isolated generic classes. Taking object supply (see Appendix A) as an example,
this OSM models the general problem of handling the transaction between buyers and
suppliers. In this case study, this matches to the purchase of securities which are supplied
by the bank to the counter party who acts as the buyer. The supplier giving a price maps to
the dealer preparing a quotation. Essentially OSMs are patterns for requirements
specification rather than design solutions as proposed by [19]. Each OSM represents a
transaction or a co-operating set of objects and is modelled as a class diagram, while the
agent's behaviour is represented in a use case annotated with high level generic
requirements, expressed in design rationale diagrams.
A separate set of generic use case models are provided for the functional or task-related
aspects of applications. Generic use cases cause state changes to objects or agents; for
instance, a diagnostic use case changes the state of a malfunctioning agent (e.g. human
patient) from having an unknown cause to being understood (symptom diagnosed).
Generic use cases are also organised in class hierarchies and are specialised into specific
use cases that are associated with applications. Currently seven families of generic use
cases have been described: diagnosis, information searching, reservation, matching-
allocation, scheduling, planning, analysis and modelling. An example of a generic use case
is information searching which is composed of subgoals for: articulating a need,
formulating queries, evaluating results, revising queries as necessary. In the dealing
domain this maps to searching for background information on companies in various
databases, evaluating company profit and loss figures and press releases, then refining
queries to narrow the search to specific companies of interest. For a longer description of
the generic use cases and their associated design rationale see [53].
Specific use cases in a new application may be associated with object system models
either by browsing the OSM/use case hierarchy and selecting appropriate models, or by
applying the identification heuristics - see Appendix A, or by using a semi-automated
matching tool that retrieves appropriate models given a small set of facts describing the
new application [51]. These heuristics point towards OSM models associated with the
application; however, identification of appropriate abstractions is complex and a complete
description is beyond the scope of this paper. In this stage, mapping between use case
components and their corresponding abstractions in the OSMs are identified so that
generic requirements, attached to the OSMs, can be applied to the new application.
Unfortunately, the mapping of problems to solutions is rarely one to one, so trade-offs have to be considered to evaluate the merits of different solutions. Design rationale [11] provides a representation for considering alternative designs that may be applied to the requirements problems raised by each OSM. Non-functional requirements are presented as criteria by which trade-offs may be judged. The software engineer judges which generic requirements should be recruited to the requirements specification and may add further actions to the use case, thereby elaborating the specification.
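The retrieval of generic requirements from matched application classes can be pictured with the following Python sketch; the OSM names are taken from the paper, but the identification facts, the overlap threshold and the requirement texts are invented for illustration and do not reproduce the actual OSM library:

# Illustrative sketch of matching a use case to object system models (OSMs)
# and retrieving the generic requirements attached to them.
OSM_LIBRARY = {
    "object supply": {
        "facts": {"buyer", "supplier", "price", "goods"},
        "generic_requirements": ["calculate prices", "check customer preferences"],
    },
    "object sensing": {
        "facts": {"signal", "monitor", "change", "threshold"},
        "generic_requirements": ["sample changes to object properties"],
    },
    "object messaging": {
        "facts": {"sender", "receiver", "message"},
        "generic_requirements": ["message transmission protocols", "message encryption"],
    },
}

def match_osms(use_case_facts, min_overlap=2):
    """Return OSMs whose characteristic facts overlap the use case description."""
    hits = []
    for name, osm in OSM_LIBRARY.items():
        overlap = len(osm["facts"] & use_case_facts)
        if overlap >= min_overlap:
            hits.append((overlap, name, osm["generic_requirements"]))
    return sorted(hits, reverse=True)

prepare_quote_facts = {"buyer", "supplier", "price", "quote", "deal"}
for overlap, name, reqs in match_osms(prepare_quote_facts):
    print(f"{name} (overlap {overlap}): suggest {reqs}")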
Case study
The security trading system involves five OSMs; Object Supply that models securities
trading; Account Object Transfer which models the settlement part of the system (payment
for securities which have been purchased) and Object messaging to describe the
communication between the dealer and counterparties. Other OSMs model sub systems
that support trading, such as Object Sensing that detects changes in security prices,
markets and the dealer's position, and Agent Control which describes the relationship
between the head dealer and the other dealers. In addition to the OSMs, the dealing system
contains generic use cases (evaluate purchase and plan strategy) that describe the dealer's
decision making and reasoning. These map the domain specific use cases of evaluating a
deal that has been proposed and for planning a trading strategy. A further specific use case
'prepare quote' is mapped to the generic use case (price item) associated with the Object
Supply OSM. A generic model of the security trading system, expressed as an aggregation
of OSMs, is given in figure 6.
The settlement part of the system (Accounting object transfer OSM) has been omitted. The
OSM objects have been instantiated as dealing system components. Clusters of generic
requirements, represented as design rationale are associated with appropriate OSM
components.
Generic requirements clusters (GRs in design rationales) referenced in Figure 6:
1. Checking customer preferences (limits on dealing)
2. Calculating prices (prepare quotations)
3. Evaluating choices (deals)
4. Sampling changes to object properties (stock prices)
5. Message transmission protocols (deal notification)
6. Message encryption (deal security)
7. Communicating commands (head dealer strategy)
8. Reporting compliance (strategy obeyed/followed)
9. Calculating replenishment
Figure 6 Aggregation of OSMs that match the security dealing system, represented in object oriented analysis notation [9]. The OSMs shown include the Object Messaging OSM (a subclass of Object Logistics), the Object Sensing OSM (a subclass of Object Properties), and the Object Supply and Agent Control OSMs.
Figure 7 Design rationale for high level generic requirements from clusters 2 and 3 in Figure 6, attached to the 'evaluate choice' and 'prepare quote' use cases. The instantiated requirement derived from the generic version is given in brackets.
Figure 7 shows the functional requirements that could be applied to support the two use cases 'evaluate deal' and 'prepare quote'. For evaluate deal the rationale is taken from the generic use case
'evaluate choice', which is a sub-class in the matching/allocation family. This proposes
three options: to assess the purchase against a set of reference levels, to prioritise several
purchase options by a simple House of Quality style matrix [22], and finally to use a
sophisticated multi-criteria decision making algorithm. Hypertext links from the rationale
point to reusable algorithms for each purpose. The options for 'prepare quotes' are to
automate quotation with simple calculations based on the dealer's position, the desirability
of the stock and the market movement; or to choose a weighted matrix technique for
quoting according to the volume requested and the dealer's position, or to leave quotation
as a manual action with a simple display of the bank's baseline quotations. Since the first
two options may be too time consuming for the dealers, the third was chosen. For evaluate
deal, the simple calculation is taken as an optional facility, leaving the dealer in control. Two high level generic requirements are added to the requirements specification, and corresponding actions are added to the use case, which elaborates the system functionality.
3.6 Scenario Generation
This stage generates one or more scenarios from a use case specification. Each scenario is
treated as one specific ordering of events. The ordering is dependent on the timings of
start- and end- events for each action and the link rules specified in the originating use
case. Entering timings is optional, so in the absence of timed events, the algorithm uses the
ordering inherent in the link rules. More than one scenario can be generated from a single
use case if the action-link rules are not all a strict sequence (i.e. A then B).
The space of possible event sequences is large, even within a relatively simple use case.
The scenario generation algorithm reduces this space by first determining the legal
sequences of actions. The space of permissible event sequences is reduced through
application of action-link and user-constraints. The user can enter constraints that specify
which agents and actions should be considered in the scenario generation process, thus
restricting the permissible event sequences to those sequences (es) that include an event (ev) which starts an action (A) that involves a predefined agent (ag) and that has at least a given probability of occurrence:
UC: (ev(startA) in es) AND (A involves ag) AND (probability(A) >= p).
For example, each generated scenario must include the event that starts action 20, it must
involve the agent "dealer" and action 20 must have at least a 10% likelihood of occurrence
according to probabilities calculated from information in the use case specification.
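One simple way to picture this pruning of the space of event sequences is the following Python sketch, which enumerates candidate orderings of actions, applies strict-sequence and alternative link constraints, and then filters the resulting paths with user constraints on agents and probabilities. The action set, link rules and probabilities are invented for illustration, and the algorithm is deliberately naive rather than the CREWS-SAVRE generation algorithm:

# Illustrative sketch of enumerating permissible scenario paths and pruning
# them with user constraints.
from itertools import permutations

actions = {
    20: {"desc": "buyer requests quote", "agents": {"counterparty", "dealer"}, "prob": 1.0},
    40: {"desc": "dealer retrieves price information", "agents": {"dealer", "dealing-system"}, "prob": 0.8},
    45: {"desc": "dealer refuses deal", "agents": {"dealer"}, "prob": 0.2},
    50: {"desc": "dealing-system shows price", "agents": {"dealing-system", "dealer"}, "prob": 0.8},
}
then_links = [(20, 40), (20, 45), (40, 50)]   # strict sequences: first precedes second
alternatives = [(40, 45)]                      # exactly one of each pair occurs

def candidate_paths():
    # Enumerate orderings of the actions that respect the link rules.
    for a, b in alternatives:
        for excluded in (a, b):
            ids = [i for i in actions if i != excluded]
            for order in permutations(ids):
                ok = all(order.index(x) < order.index(y)
                         for x, y in then_links if x in order and y in order)
                if ok:
                    yield order

def allowed(path, agent="dealer", min_prob=0.1):
    # User constraints: the path must involve the given agent and contain
    # only sufficiently probable actions.
    return (any(agent in actions[i]["agents"] for i in path) and
            all(actions[i]["prob"] >= min_prob for i in path))

for path in candidate_paths():
    if allowed(path):
        print(" -> ".join(actions[i]["desc"] for i in path))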
The 'prepare quote' use case definition leads to the 3 possible scenarios shown in figure 8.
The difference between each is the timing of event E40 which ends action 40, and whether
action 40 or action 45 occurs. This depiction ignores the application of the constraint on
the likelihood of an event sequence occurring. Scenario-1 and -2 differ in the timing of
event E40 (end of request for price information from the dealer-system) while scenario-3
describes a different event sequence when the dealer is unable to offer a quote for the deal.
Figure 8 Diagram illustrating 3 normal scenario paths generated from a use case fragment.
The generation mechanism is in two stages. First it generates each permissible normal
course scenario from actions and link rules in the use case, then it identifies alternative
paths for each normal sequence from exception types, as summarised in Table I. They are
divided into two groups: first, abnormal events drawn from Hollnagel's [24] event 'phenotypes' classification; and secondly, information abnormalities that refer to the message contents (e.g. the information is incorrect, out of date, etc.) and follow validation concepts proposed by Jackson [27]. Each exception is associated with one or more generic
requirements that propose high level solutions to the problem. The exception types are
presented as "what-if" questions so the software engineer can choose the more probable
and appropriate alternative path at each action step in the scenario.
Table I Summary of exception types for events originating in the system and generic requirements to deal with abnormal patterns (exception type: generic requirements).
event does not happen (omitted): time-out, request resend, set default
event happens twice (not iteration): discard extra event, diagnose duplicate
event happens in wrong order: buffer and process; if too early, halt and wait; if too late, send reminder, check task
event not expected: validate against event set, discard invalid event
information of incorrect type: request resend, prompt for correct type
incorrect information values: check against type, request resend, prompt with diagnosis
information too late (out of date): check data integrity, date/time check, use default
information too detailed: apply filters, post-process to sort/group
information too general: request detail, add detail from alternative source
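The way exception types are turned into 'what-if' questions with attached generic requirements can be sketched as follows in Python; the table below is a trimmed, hand-written subset of Table I and the event names are illustrative only:

# Illustrative sketch: applying exception types to a normal event sequence to
# propose "what-if" alternative courses and candidate generic requirements.
GENERIC_REQUIREMENTS = {
    "event does not happen (omitted)": ["time-out", "request resend", "set default"],
    "event happens twice": ["discard extra event", "diagnose duplicate"],
    "event happens in wrong order": ["buffer and process", "send reminder"],
    "incorrect information values": ["check against type", "request resend"],
}

normal_course = ["buyer requests quote", "dealer retrieves prices", "dealer gives quote"]

def what_if_questions(events, table=GENERIC_REQUIREMENTS):
    # For each event of the normal course, propose one alternative course per
    # exception type, together with the generic requirements that handle it.
    for event in events:
        for exception, requirements in table.items():
            yield f"What if '{event}': {exception}?", requirements

for question, requirements in what_if_questions(normal_course):
    print(question, "->", ", ".join(requirements))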
A set of rules constrain the generation of alternative courses in a scenario using action and
agent types. These rules are part of an extensible set which can be augmented using the
agent types and influencing factors described below. Two example rules are:
IF ((ev1 starts ac1) OR (ev1 ends ac1)) AND ac1(type=cognitive)
THEN (ex(type=human) applies-to ev1).
which ensures that only human exception types and influencing factors (see next section)
are applied to a cognitive action for event (ev1), action (ac1) and exception type (ex), and
IF ((ev1 starts ac1) OR (ev1 ends ac1)) AND ac1(type=communicative)
AND (ac1 involves ag1) AND ag1(type=machine)
AND (ac1 involves ag2) AND ag2(type=machine)
THEN (ex(type=machine-machine-communication) applies-to ev1).
which ensures that only machine-machine communication failures are applied to
communication actions where both agents are of type 'machine', for event (ev1), action
(ac1), agents (ag1 and ag2) and exception type (ex). The rules identify particular types of
failure that may occur in different agent-type combinations so that generic requirements
can be proposed to remedy such problems. For instance, in the first rule, human cognitive
errors that apply to the action can be counteracted by improved training or aide-memoire
facilities (e.g. checklists, help) in the system. In the second rule, which detects network
communication errors, generic requirements are suggested for fault-tolerant designs and
back-up communications.
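To make the rule mechanism concrete, the following Python sketch shows one possible encoding of the two example rules; the Agent/Action/Event data model, the function name applicable_exception_types and the agent names in the usage lines are illustrative assumptions of ours, not part of the CREWS-SAVRE implementation.

from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    kind: str            # "human" or "machine"

@dataclass
class Action:
    ident: str
    kind: str            # e.g. "cognitive" or "communicative"
    agents: list = field(default_factory=list)

@dataclass
class Event:
    ident: str
    action: Action
    role: str            # whether the event "starts" or "ends" the action

def applicable_exception_types(event):
    """Return the exception classes whose alternative paths may be generated for this event."""
    kinds = set()
    action = event.action
    # Rule 1: cognitive actions only receive human exception types.
    if action.kind == "cognitive":
        kinds.add("human")
    # Rule 2: communicative actions where all involved agents are machines
    # only receive machine-machine communication failures.
    if action.kind == "communicative" and action.agents and all(a.kind == "machine" for a in action.agents):
        kinds.add("machine-machine-communication")
    return kinds

# Usage with hypothetical agents: a communicative action between two machine agents.
link = Action("ac1", "communicative", [Agent("dealer-system", "machine"), Agent("price-feed", "machine")])
print(applicable_exception_types(Event("ev1", link, "starts")))   # {'machine-machine-communication'}

In the tool itself the rule set is extensible, so such a function would plausibly dispatch over a table of rules rather than hard-coding them.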
The algorithm generates a set of possible alternative paths according to the agent-action
combination. The use case modeller allows the user to select the more probable abnormal
pathways according to their knowledge of the domain. To help the software engineer
anticipate when exceptions may occur and assign probabilities to abnormal events, a set of
influencing factors is proposed. These describe the necessary preconditions for an event
exception to happen and are subdivided into 5 groups according to the agents involved:
human agents: Influencing factors that give rise to user errors and exceptions are
derived from cognitive science research on human error [42], Norman's model of slips
[37] and Rasmussen's three levels of human-task mismatches [41]. However, as human
error cannot be adequately described by cognitive factors alone, we have included other
performance-affecting properties such as motivation, sickness, fatigue, and age, based
on our previous research on safety-critical systems [49].
machine agents: failures caused by hardware and software, e.g. power supply
problems, software crashes, etc.
human-machine interaction: poor user interface design can lead to exceptions in
input/output operations. This group draws on taxonomies of interaction failures from
human-computer interaction [46] and on the consequences of poor user interface design.
human-human communication: scenarios often involve more than one human agent.
Communication breakdowns between people have important consequences.
Exceptions have been derived from theories from computer-supported collaborative
work [46]. Examples include communication breakdowns and misunderstandings;
machine-machine communication: scenarios often involve machine agents, and
exceptions specific to their communication can also give rise to alternative paths.
The interaction between influencing factors that give rise to human error is described in
figure 9. Four outer groups of factors (working conditions, management, task/domain and
personnel qualities) affect four inner factors (fatigue, stress, workload and motivations).
These in turn affect the probability of human error, which is manifest as an event exception
of type <human-machine action or human action>. Human error can be caused by
environment factors and qualities of the design, so two further outer groups are added.
Personnel/user qualities are causal influences on human operational error, whereas the
system properties can either be treated as causal explanations for errors or viewed as
generic user interface requirements to prevent such errors. Requirements to deal with
problems posed by influencing factors are derived from several sources in the literature,
e.g. for task design and training [3], workplace ergonomics [45], Human Computer
Interface design [46], [47], and standards (e.g. ISO 9241 [25]).
Ultimately, modelling event causality is complex, and the effort may not be warranted for
non-safety-critical systems, so three approaches are offered. The first is to use the
influencing factors as a paper-based 'tool for thought'. Second, the factors are implemented
as a hypertext that can be traversed to explore contextual issues that may lead to errors and
hence to generic requirements to deal with such problems. However, many of these
variables interact, e.g. high stress increases fatigue. Finally, as many combinations of
influencing factors are possible and each domain requires a particular model, we
provide a general modelling tool that can be instantiated with domain-specific information.
The tool allows influencing factors to be entered as ratings on a five-point scale (e.g. from
high to low) and then calculates the event probability from the ratings. The combination of
factors and ratings is user-controlled. The factors described in figure 9 may be entered into
the tool with simple weightings to perform sensitivity analyses. A set of default formulae
for inter-factor weights is provided, but the choice depends on the user's knowledge of the
domain. The tool can indicate that errors are more probable given a user-defined subset of
influencing factors, but the type of exception is difficult to predict, i.e. a mistake may be
more likely but whether this is manifest as an event being omitted or in the wrong order is
unpredictable. Where more reliable predictions can be made, new 'alternative path' rules
(see above) are added to the system. The tool is configurable so that more validation rules
can be added and the system can evolve with increasing knowledge of the domain. The
current rules provide a baseline set that recommends generic requirements for certain types
of agent, e.g. untrained novices need context-sensitive help and undo facilities, whereas
experts require short cuts and the ability to build macros. The influencing factors may be
used as agent and use case properties and validated using the frames described in section 3.4.
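As an illustration of the rating-based calculation, the sketch below assumes a simple weighted-sum model normalised to [0, 1]; the factor names, the default equal weights and the five-point labels are our own assumptions and not the default formulae shipped with the tool.

RATING = {"very low": 1, "low": 2, "medium": 3, "high": 4, "very high": 5}

def error_probability(ratings, weights=None):
    """ratings maps a factor name to a five-point rating of how strongly that
    factor is currently judged to increase error likelihood; the result is a
    normalised value in [0, 1]."""
    if weights is None:
        weights = {factor: 1.0 for factor in ratings}   # default: equal weights
    score = sum(weights[f] * RATING[r] for f, r in ratings.items())
    return score / sum(weights[f] * 5 for f in ratings)

# Usage with hypothetical ratings for inner factors from figure 9:
print(error_probability({"fatigue": "high", "stress": "medium", "workload": "very high"}))

The weights here stand in for the user-editable inter-factor formulae mentioned above; varying them is what supports the sensitivity analyses.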
3.7 Scenario Validation
CREWS-SAVRE is loosely-coupled with Rational's RequisitePro requirements
management tool to enable scenario-based validation of requirements stored in
RequisitePro's data base. CREWS-SAVRE either presents each scenario to the user
alongside the requirements document, to enable user-led walkthrough and validation of
system requirements, or it enables semi-automatic validation of requirements through the
application of pattern matching algorithms to each scenario. Each approach is examined in
turn.
Figure 10 shows a user-led walkthrough of part of one scenario for the dealing system use
case, and the RequisitePro requirements document being validated. The left-hand side of
the screen shows a normal course scenario as a sequence of events. On the right-hand side
are alternative courses and generic exceptions generated automatically from the
requirements engineer's earlier selection of exception types. For each selected event the
tool advises the requirements engineer to decide whether each alternative course is (a)
relevant, and (b) handled in the requirements specification. If the user decides that the
alternative course is relevant but not handled in the requirements specification, s/he can
retrieve from CREWS-SAVRE one or more candidate generic requirements to instantiate
and add to the requirements document. Each exception type in CREWS-SAVRE's data
base is linked to one or more generic requirements which describe solutions to mitigate or
avoid the exception. Thus, CREWS-SAVRE provides specific advice during user-led
scenario walkthroughs.
Figure 9 Influencing Factors for Exceptions and their interrelationships.
Consider the example shown in Figure 10. The user is exploring normal course event 90,
the start of the communication action 'the dealer enters deal information into the dealer-system'
(shown at (A) in Figure 10), and the alternative course GAc6, the information is
suspect (B). The user, having browsed candidate generic requirements (C), copies and
pastes the requirement 'the system shall cross-reference the information with other
information sources to ensure its integrity' into RequisitePro's requirements document (D).
This figure also shows the current hierarchical structure of the requirements held in
RequisitePro's data base (E).
Figure 10 Validation Frames for Adding Generic Requirements
The second approach automatically cross-checks a requirements document and a scenario
using a collection of patterns which encapsulate 'good' socio-technical system design and
requirements specification. To operationalise this, the CREWS-SAVRE tool applies one or
more validation frames to each event or event pattern in a user-selected scenario to
determine missing or incorrect system requirements. Each validation frame specifies a
pattern of actions, events and system requirements that extends KAOS goal patterns [12]
describing the impact of different goal types on the set of possible system behaviours. A
validation frame has two parts. The first defines the situation, that is, the pattern of events
and actions expressed in the form <identifier, action-type, agent-types> where agent-types
are involved in the action. Each event is expressed as <identifier, event-type, action-
identifier> where event type defines whether the event starts or ends the action. The
second part of the frame defines generic requirements needed to handle the event/action
pattern. The frames start from the PS055 standard [35] and type each requirement as a
performance, usability, interface, operational, timing, resource, verification,
acceptance testing, documentation, security, portability, quality, reliability, maintainability
or safety requirement. Hence, automatic requirements-scenario cross-checking is possible
using patterns of event, agent and action types in the scenario and requirement types in the
requirements document.
An example validation frame is:
validation-frame {detect periods of system inactivity}
situation:
agent(agC,machine) AND
not consecutive(agC,evA,evB);
requirements:
requirement(performance, optional, link);
requirement(function <time-out/re-send>, optional, link);
end-validation-frame
This frame detects the absence of a reply event after a set time period from a human agent
(implicitly) and signals the requirement to ask for resend or set a time out. An instantiation
of this is the requirement to request a price to be entered by the dealer within seconds of
accessing the 'prepare quote' option. This exception deals with inbound event delays, so if
the time is longer than a preset limit (i.e. the system is not used for that period), the tool
recommends reusable generic requirements, for example to warn the user and log-out the
user after a certain period of time.
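The following sketch illustrates one possible reading of how such a frame could be cross-checked against a scenario; the event/agent dictionaries, the interpretation of 'not consecutive' and the requirement type strings are our assumptions, not the CREWS-SAVRE matching algorithm.

def frame_applies(scenario, agents):
    """'Detect periods of system inactivity': true if a machine agent has two
    events in the scenario that are not consecutive (another agent's event
    intervenes), i.e. a reply may be missing or delayed in between."""
    machine = {a["name"] for a in agents if a["type"] == "machine"}
    for agent in machine:
        own = [i for i, ev in enumerate(scenario) if ev["agent"] == agent]
        if any(b - a > 1 for a, b in zip(own, own[1:])):
            return True
    return False

def check(scenario, agents, requirements):
    """Return the frame's generic requirement types missing from the
    requirements document (here modelled as a set of type strings)."""
    if not frame_applies(scenario, agents):
        return []
    needed = {"performance", "time-out/re-send"}
    return sorted(needed - requirements)

# Usage with hypothetical events around the 'prepare quote' action:
scenario = [{"id": "S40", "agent": "dealer-system"},
            {"id": "S50", "agent": "dealer"},
            {"id": "E40", "agent": "dealer-system"}]
agents = [{"name": "dealer-system", "type": "machine"},
          {"name": "dealer", "type": "human"}]
print(check(scenario, agents, {"performance"}))   # -> ['time-out/re-send']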
Validation frames for alternative course events provide generic requirements to handle an
event exception (ev) linked to an event 'ev', as follows:
validation-frame
situation:
agent(agA,machine) AND ...;
requirements:
requirement(functional, mandatory, link),
generic-requirement(GR1, system shall check for data entry mistakes),
generic-requirement(..., system shall restrict possible data entry);
end-validation-frame
Such attention failures are possible when entering dealer information. The tool proposes
generic requirements, and the requirements engineer chooses requirement GR1 "the system
shall check for data entry mistakes". In Figure 11, the requirements engineer is again
examining event S90 [the dealer enters the deal information into the dealer-system] (A)
and the causal influencing factor [agent pays poor attention to detail] (B). Again the above
validation frame detects the need for functional requirements to handle the alternatives
linked to event S90. As a result, the user is able to add new requirements to the
requirements document to handle this alternative course.
Figure 11 Validation Frames for Exceptions
Finally the tool uses validation frames for applying OSM-specific generic requirements in
a similar manner. In section 3.4, high-level generic requirements were recruited as design
rationale trade-offs, whereas when frames are used to detect OSM-specific patterns in
scenarios, more detailed requirements can be indicated. The computerised dealing system
is an instantiation of the object supplying, object messaging and agent-object control
system models. Consider a validation frame linked to the 'send' communication action in
the object messaging system model. The situation (scenario) specifies an event which
starts a communication action that involves a machine agent and which matches one of the
actions in the object messaging OSM (i.e. sending or returning messages). For this
situation, the validation frame identifies at least 5 generic system requirements:
validation-frame
comment: generic requirements for send action, object messaging OSM
situation:
[agent(agA,machine)
requirements:
requirement(functional, mandatory, link), generic-requirement(GOM1, system shall
support identification/retrieval of receiver agent's address);
system shall enable a user to enter the receiver agent's
system shall enable a user to enter the content of a message);
system shall enable a user to send a composed message to the
receiver agent);
system shall maintain a sender-agent log of all messages
which are sent from and received by the sender-agent);
end-validation-frame
For example, the validation frame is applicable to the event which starts the action 'buyer
requests quote from dealer'. Each of the generic requirements is applicable to computerised
support for this action, whether or not the action is undertaken by telephone or e-mail, for
example retrieving the receiver agent's telephone number or e-mail address, and
maintaining a sender log of all telephone calls or e-mail messages. Such generic
requirements can be instantiated and added to the requirement specification.
4. Discussion
The contributions to RE we have reported in this paper are threefold. First, extensive reuse
of requirements knowledge is empowered via decision models and generic requirements.
Secondly a means of semi-automatic requirements validation is provided via frames.
Frames extend type checking by recognising patterns of agents' behaviour to which
appropriate validation questions, and possible design solutions, may be applied. Third, we
have described the use of scenarios as test pathway simulations, with novel tool support
for semi-automatic scenario generation. The current status of development is that the
scenario generator-validator tool has been implemented and industrial trials are
commencing. Clearly, coverage in terms of the number of validation frames and generic
requirements contained in the tool database is a key issue affecting the utility of our
approach. Our approach is eclectic and depends on knowledge from the literature of fields
such as ergonomics, human resource management and user interface design. The contribution
we have made is to implement and integrate this knowledge in an extensible architecture. The
advice currently retrieved by the validation frames provides requirements knowledge at a
summary level. Although this may be criticised as lacking sufficient detail, initial industrial
reaction to the tool has been encouraging, in particular the value of raising design issues
which are not addressed by current methods, e.g. DSDM [15]. So far we have demonstrated
proof of concept in terms of operation. This will be followed by further testing of utility
within a small-scale, but industrially realistic, application.
In many senses the strength of the method we have proposed lies in integration of previous
ideas. We have brought concepts from safety critical system assessment [24], [42] to bear
on requirements analysis, and integrated these with scenario based approaches to RE. We
acknowledge the heritage of the Inquiry Cycle [39]; however, our research has contributed
an advanced method and support tool that give more comprehensive guidance for solving
problems. Specification of requirements to deal explicitly with the implications of
human error is a novel contribution where we have broken ground beyond the previous
approaches [30], [16]. Furthermore, the influencing factors that bear on causes for
potential error are useful heuristics to stimulate debate about many higher level
requirements issues, such as task and workplace design. However, we acknowledge it is
difficult to provide prescriptive guidance from such heuristics. While some may contend
that formalising analytic heuristics can not capture the wealth of possible causes of error in
different domains, we answer that some heuristics are better than none and point out that
the method is incremental and grows by experience. Failure to formalise knowledge can
only hinder RE.
Parts of the scenario based method reported in this paper are related to the enterprise
modelling approach of Yu and Mylopoulos [54] and Chung [7]. They create models of the
system and its immediate environment using similar semantics for tracing of dependencies
between agents, the goals and tasks with limited reasoning support for trade-offs between
functional requirements and non-functional requirements (referred to as soft goals).
However the i* method does not contain detailed event dependency analysis such as we
have reported. Scenarios have been used for assessing the impact of technical systems by
several authors [6], [30], [16]. However, these reports give little prescriptive guidance for
analysis, so the practitioner is left with examples and case studies from which general
lessons have to be extracted. For instance, the ORDIT method [16] gives limited heuristics
that advise checking agent role allocations, but these fall far short of the comprehensive
guidance we have proposed.
Dependencies between systems and their environment have been analysed in detail by
Jackson and Zave [28] who point out that input events impose obligations on a required
system. They propose a formalism for modelling such dependencies. Formal modelling is
applicable to the class of systems they implicitly analyse, e.g. real time and safety critical
applications, but it is less clear how such models can deal with the uncertainties of human
behaviour. To deal with uncertainty in human computer interaction, we believe our
scenario based approach is more appropriate as it focuses on eliciting requirements to
repair problems caused by unreliable human behaviour. Another approach is the KAOS
specification language and its associated GRAIL tool [32], [33], a formal modelling approach
that refines goal-oriented requirements into constraint-based specifications. Van Lamsweerde
et al [33] have also adopted problems and obstacles from the Inquiry cycle [39];
furthermore, they have also employed failure concepts from the safety critical literature in
a similar manner to CREWS-SAVRE. Their approach is anchored in goal-led
requirements refinement and does not use scenarios explicitly. In contrast, CREWS-SAVRE
covers a wider range of issues in RE than KAOS but with less formal rigour,
representing a trade-off in RE between modelling effort, coverage and formal reasoning.
So far the method has only partially dealt with non functional requirements. Scenarios
could be expressed in more quantifiable terms, for instance by the Goal-Question-Metric
approach of Basili et al. [4], or by Boehm's [5] quality-property model. Scenarios in this
sense will contain more contextual information to represent rich pictures of the system and
its environment [29]. The description could be structured to include information related to
the NF goal being investigated and metrics for benchmark testing achievement of the goal.
Validation frames may be extended for assessing such rich picture scenarios for non
functional and functional requirements. For instance, each inbound/ outbound event that
involves a human agent will be mediated by a user interface. Usability criteria could be
attached to this event pattern with design guidelines e.g. ISO 9241 [25]. Performance
requirements could be assessed by checking the volume and temporal distribution of
events against system requirements. Elaborating the scenario based approach to cover non
functional requirements is part of our ongoing research [52].
In spite of the advances that scenario based RE may offer, we have still to demonstrate its
effectiveness in practice. There is evidence that the approach is effective in empirical
studies of earlier versions of the method which did use scenarios but without the support
tool [50]. Further validation with industrial case studies is in progress.
Acknowledgements
This research has been funded by the European Commission ESPRIT 21903 long term
research project 'CREWS' - Co-operative Requirements Engineering With Scenarios. The
project partners include RWTH-Aachen (project co-ordinator), City University, London,
University of Paris I, France, FUNDP, University of Namur, Belgium.
References
Object Oriented Analysis
Dynamic Systems Development Method (DSDM) Version 3.0
Analysis Patterns: Reusable Object Models
Design Patterns: Elements of reusable object-oriented software
ISO 9241
Human Factors Engineering and Design
Human Computer Interface Design
"Scenario-based Analysis of Non-Functional Requirements"
Keywords: scenarios; use cases; reuse; requirements engineering; scenario generation; exception types; patterns
The Use of Cooperation Scenarios in the Design and Evaluation of a CSCW System

Abstract. Design and evaluation of groupware systems raise questions which do not have to be addressed in the context of single user systems. The designer has to take into account not only the interaction of a single user with the computer, but also the computer-supported interaction of several users with each other. In this article we describe the use of cooperation scenarios in the design and evaluation of an innovative access control system for a concrete groupware application developed in the POLITeam project. We have used informal textual scenarios to capture a rich description of the particularities of access to cooperatively used documents in three different organizations. Based on these scenarios, we have developed an access control system which not only allows specification of access rights in advance but also allows involvement of third persons at the actual time of access, using negotiation and notification mechanisms. We describe our evaluation strategy, which again employs the cooperation scenarios developed in the empirical phase. After relating our approach to other work, we summarize and discuss our experiences and the advantages (and disadvantages) of using scenarios for the design and evaluation of Computer Supported Cooperative Work (CSCW) systems. Finally, we give a brief outlook on future work.

Introduction
COMPUTER SUPPORTED COOPERATIVE WORK (CSCW) is an interdisciplinary field of research, dealing with
cooperative work supported by computer systems (for an overview see e.g. [6]). Computer science naturally is a
major contributor. However, psychology, sociology, and other disciplines are involved, as well. The developers
of CSCW systems (groupware) not only have to design for single users interacting with the system but also for
groups interacting via the system. This collaborative perspective raises or amplifies, for instance, issues like
privacy and access control ([9, 23]), conflicts ([27]), awareness ([4, 7]), and tailorability ([21, 22]), which we
discuss briefly in the following in order to give the reader an overview of some of the challenges in groupware
design.
As early as 1986, in the context of the first CSCW conference, Greif and Sarin ([9]) stated that the access control
mechanisms and concepts used in operating systems of that time were not flexible enough to express access
policies for group interaction. They suggested the development of more sophisticated controls taking into
account additional factors like user roles, access rights concerning abstract operations other than read and write
(e.g. sharing operations), and specific object-user-relationships (e.g. the current or past users of an object). Shen
and Dewan ([23]) discuss the problem of access rights for collaborative work in the context of the multi-user
editor SUITE. They stress the dynamic nature of collaborative work and thus state the support of multiple,
dynamic user roles and the need for easy specification of access rights as important requirements for access
control in groupware. Furthermore, they introduce specific collaboration rights for operations whose results can
affect other users.
The fact that groupware functionality can affect multiple users gives rise to a strong potential for conflict
concerning the configuration and use of such systems ([27]). Groupware systems can distinctively change the
division of labor in an organization, obliterate jobs, and open new opportunities for communication between
employees (a development which might not be in the interest of management). Furthermore, supporting
communication and coordination with computers opens new ways of controlling and monitoring work. Often
small things like the publicly visible creation date of a document are cause for vehement negative reactions of
groupware users.
While groupware systems allow for more control over work processes on one hand, on the other hand important
context information can be obliterated by cooperating e.g. via a shared document work space. Even small clues
like seeing the overflowing (physical) inbox of a colleague or hearing the voice of somebody in the hallway
might have formerly been helpful in deciding questions like whom to send which document or when to ask for
the completion of some piece of work. When supporting cooperative work with computers, this raises the issue
of awareness (see e.g. [4]). Groupware users sometimes need to be aware of what is going on in the systems and
what other users are currently doing or have done in the past. Designing mechanisms to provide awareness in
groupware is nontrivial because one has to walk a tightrope concerning the conflicting goals of privacy versus
the need for awareness and information overflow versus lack of awareness. Fuchs et al. ([7]) present as one
solution to this problem a model for an awareness mechanism which is characterized by a high degree of
tailorability, i.e. it can be adapted to different and dynamically changing needs and preferences of individuals,
groups, and organizations.
While tailorability (or adaptability) is also an issue outside CSCW, the complexity, dynamics and diversity of
cooperative work increase its importance in this field beyond the configuration of awareness mechanisms (see
e.g. [21, 22]). Furthermore, the fact that groupware functionality has the potential to affect multiple users raises
the question of who is allowed to tailor this functionality and how one can explore and try out tailored
functionality without disturbing other users.
Access control, potential for conflicts, awareness, and tailorability are some examples (and by no means all) of
the issues which arise in the design of CSCW functionality. However, when extending the use of computers from
the support of single users to the support of groups, not only the functionality but also the development processes
have to be rethought. [10] gives an extensive overview of the novel challenges groupware designers have to face.
Among others, he identifies the need for almost unanimous acceptance of groupware in order to achieve a
critical mass of users and a well balanced benefit profile (e.g. subordinates gain as much from the system as
managers) as key factors to the success of groupware. Thus, groupware developers have to know much more
about the context of use of the prospective application than developers of single user applications, who do not
have to pay as much attention to group-related aspects like, for instance, trust, awareness of each other,
negotiations, social dynamics, power structures, and work processes. Trust, awareness, and negotiation are
aspects which proved highly relevant in the design case described later on.
[10] also points out the difficulties in evaluating groupware. The interaction of a single user with a system is
much easier to evaluate in a laboratory setting than group processes. Apart from the sheer logistical problems of
getting even a small-sized group in the same lab at the same time or the nightmare of installing a prototypical
application in an organization, the necessary duration of the evaluation is a major problem, because "group
interactions unfold over days or weeks" ([10], p. 101). In the following we want to concentrate on the two design
process related problems identified above: capturing of rich contextual requirements and evaluation support.
In the POLITeam project (see [17]), we employ textual scenarios drawn from field studies, interviews, and
workshops to inform the design processes and support the evaluation prior to field tests and large scale
workshops. We call our scenarios cooperation scenarios, as they not only capture the work and interaction of
single users with the system but also the group and organizational context and its work practices.
Within the POLITeam project a groupware application for a German federal ministry and selected ministries of a
state government and the concurrent engineering division of a car producer is developed in an evolutionary and
participative way. The first system version was generated by configuring the commercial product LINKWORKS
by Digital. Based on the experiences gained by introducing the first system version in three different fields of
application, we develop advanced versions of the system. The functionality mainly consists of an electronic
circulation folder, shared workspaces, and an event notification service.
In this article we describe the use of cooperation scenarios in the design of the access control system in
POLITeam. Specifically, we want to point out the value of cooperation scenarios for the creation of novel
CSCW-functionality, in this case the integration of a traditional anticipative access control system with
negotiation (computer mediated decision making, see also [27]) and notification services. In section two we
describe the concept of cooperation scenarios in more detail, comparing it with other types of scenarios in the
related literature. Section three introduces the access control design problem and presents three cooperation
scenarios which have notably guided and motivated the new design. Section four contains the analysis of the
scenarios and briefly outlines the resulting implementation. Section five describes the use of scenarios in three
consecutive evaluation steps. Section six relates our approach to other work. Section seven summarizes and
discusses the value of cooperation scenarios. Finally, section eight suggests future research efforts.
Cooperation Scenarios for the design and evaluation of CSCW systems
[3] identifies several different roles which scenarios can play in the development process of software. We
employ scenarios in the CSCW design process in three roles: as a tool for the first (informal) requirements
analysis, for communication support (user-designer and designer-designer) during validation, and for evaluation
support. Figure 1 shows how the scenario supported steps are positioned in the overall design process. Note that
the process is cyclical and that we believe the value of cooperation scenarios lies foremost in the first cycles
when attempting to identify or create innovative functionality which supports cooperative work. Later cycles rely
on more formal methods and models.
The form of cooperation scenarios
Scenarios can take many different forms. [18] describes two extreme positions concerning the scope of
scenarios. The first one "sees a scenario as an external description of what a system does" (p. 21), while the
second one looks "at the use process as situated in a larger context" (same page). According to these different
positions, scenarios can take the form of exact protocols or formal sequence descriptions on one hand, and rather
broad, mostly textual descriptions also covering contextual aspects which are only loosely related (however,
relevant) to system design on the other hand (also see [3]).
For our purposes we need a form of scenario, which allows designers to capture a broad range of (eventually)
unanticipated, contextual information. Thus, our cooperation scenarios are based on informal textual descriptions
of work practices, including the motivation and goals behind cooperation. The textual form also facilitates the
discussion between users and designers (role of communication support) and among designers. This first
representation can later be augmented with alternatives, especially visual techniques.
Figure 1: Scenario support in the POLITeam design process (cyclical design in the POLITeam project: workshops and work place visits lead to requirements in the form of informal textual scenarios; design of functionality; evaluation without users, e.g. role-based analysis of scenarios yielding a benefit profile; evaluation with users through presentation and discussion of new functionality based on scenarios; validation of scenarios; re-design; use. Scenario-based process steps are marked in the figure.)
Building cooperation scenarios
The information captured in cooperation scenarios is gathered in the POLITeam project through extensive field
studies, involving semi-structured interviews and work place visits. While these techniques are usually single
user oriented, we attempt to identify the role a person plays in the organizations and the different cooperative
tasks. The question of individual motivations and goals is especially important in the context of groupware
development, as the product has to be acceptable to users in radically different roles with respect to the
cooperative activity which is being supported by the system (e.g. managers and subordinates, see [10]). For this
purpose, we use what we call continuously refined heuristic user selection schemes (see [25]). Having identified
a cooperative activity, we attempt to identify the users involved in this activity. For the interviews and work
place visits, we select users who play different roles in the cooperation. However, our initial selection of users is
often not exhaustive and as we learn more about the processes in our target organizations, we identify additional
users, who might play a different role in a more subtle, however relevant, way. Thus, we continuously refine the
user selection.
A methodologically important aspect of cooperation scenarios is the degree to which the cooperation in the
respective field of application is already supported by computer systems. The three scenarios described in the
next section range from full computer support to completely non-IT based activities which might be subject to
future groupware support. The existing work practices are usually heavily influenced by the features (and
problems, see scenario one) of the computer system already in place. It is essential to abstract from existing
technology in order to understand the nature and especially the motivation and goals behind the cooperative
activity. The main point of this article is to make a case for the value of scenarios as support for developing this
understanding. Additionally, we will discuss the value of scenarios in the evaluation phase.
Three example scenarios concerning access control in the context of collaborative activities
When the first POLITeam version was introduced, it quickly became obvious that the users had severe problems
with the traditional (matrix based) access control system (see e.g. [20]). They just did not use it. In order to
understand the nature of these problems, we began to investigate the work practices revolving around access to
collaboratively used documents.
The three scenarios presented in this section were distilled from a series of interviews in two fields of application
of the POLITeam project, an additional interview session in a newsletter's editorial office, and a workshop with
users from the federal ministry involved in the POLITeam project. The workshop was also concerned with other
questions (which are not relevant here) and is not to be confused with the feedback workshop described later on
in this paper. The scenarios and their analysis are taken from [26], where they are used to motivate the novel
aspects of the access control system. (The main points of [26] are technical aspects of the implementation of this
system, and thus it employs a more formal method than scenarios (Petri nets) in order to describe and discuss
implementation details and alternatives at the heart of the novel functionality; [26] does not discuss the design
process as such.)
First scenario: a state representative body - keeping user passwords in
sealed envelopes in a strongbox
This scenario is based in the representative body of a North German state at the federal
capital in Bonn. As participant in the POLITeam project, the body has been equipped with
a groupware system, supporting internal cooperation as well as the cooperation with the
state government in the state capital. The installation consists of 28 workstations. An
important cooperative task being supported is the preparation of documents for the
state's vote in the Bundesrat (the German assembly of state's representatives). The
cooperative activity is rather time constrained, as decisions concerning the state's vote
have to be coordinated, distributed, and validated within a very short time frame (about
one week).
In order to deal with unexpected absences due to illness or travel, the office has
implemented a seemingly complex, but rather effective non-IT-based work practice
around the POLITeam groupware system. Every user has to write his password on a
piece of paper, which is placed in a sealed envelope. This envelope is locked away in the
office strongbox. The keys to the strongbox are kept by two trusted persons: the system
administrator and the department head (in the following referred to as key-holders).
If a person urgently needs to access a document on the "virtual desk" of an absentee, the
person has to ask one of the key-holders to open the strongbox and release the envelope
with the password. The envelope is opened and the "virtual desk" can then be accessed
with the appropriate password.
On one hand, this work practice is effective in the sense, that misuse or illegitimate
access is very difficult because of the necessity to negotiate with the key-holder to
release the envelope and because the broken seal of the envelope is an indicator that the
virtual desktop was accessed.
On the other hand, the granularity of the access granted by the system is rather coarse,
since once the password is released, every possible action can be taken in the name of
the absent person. Other documents than the one needed can be accessed or deleted;
mails can be sent and received. Additionally, the eventual change of a user password is
causing some organizational overhead, because the password not only has to be
changed in the system, but also a new envelope has to be placed in the strongbox.
Theoretically, a person could use several virtual desktops with different passwords to
achieve a finer granularity. This solution, however, would severely decrease the
effectiveness of the regular use of the system, since the users would have to log in and
out of the system if they change from one task to another.
Second scenario: a federal ministry - searching your colleagues' desktops
under the watchful eyes of a trustworthy third person
A different scenario was discovered in a department of a German federal ministry. The
department also participates in POLITeam. The groupware application is used by 12
employees mainly in an operative section and the central typing office. The application
supports the cooperative generation of documents, typically involving the head of the
section, a member of the section and several typists. In the course of their work, the
members of the section occasionally have to search for documents on other colleagues'
virtual desktops, for instance, if they need a specific document, but do not know who is
working on it right now.
For this task the POLITeam base system provides a search tool, which allows users to
search for and access all documents, which are not explicitly declared private. This tool,
however, was severely constrained in the installed version of the groupware system,
because in the initial requirement analysis, the users objected to the tool due to privacy
considerations. The current version of the tool can only search the folder-hierarchy on
one's own virtual desktop.
Conventions have been established, which deal with the problem of searching for work
outside one's own virtual desktop. Users are expected to keep jointly created documents
in shared workspaces. Each member of the section has a workspace whose
access he shares with the head of the section and the typists. Thus, the two typists
working for the section have a link to all shared workspaces on their desktop. This means
that they can conduct a department-wide search for documents. If one of the workers in
the section looks for a specific document outside his own desktop, he asks them to
search for him and provide him with a link (a POLITeam concept for shared access to
documents) to the found document.
The advantage of this work practice is that the two typists are aware of every department-wide
search and - as they are considered extremely trustworthy - nobody feels that his
privacy is being invaded. The disadvantage is that access to documents still has to be
anticipated, in order for these documents to be kept in shared workspaces. Furthermore,
because the typists actually have to search for and copy or link the respective
documents, this practice results in additional work for them, even though its sole point is
that they are aware of the search and can intervene, if they doubt its appropriateness.
Third scenario: newsletter editorial offices - smoothing cooperation with
limited trespassing in private domains
Another example for a successful work-practice from the physical world, which is very
difficult to support in classical access control systems, was encountered in the editorial
offices of a small newsletter, specialized on providing in-depth information about EU
agricultural matters to interested parties. The seven editors are responsible for up to three
EU countries each.
Thus, in theory the work of the editors is non-collaborative. However, in practice there is
lot of information which concerns transnational aspects, e.g. studies comparing
agricultural performance of several EU countries or simply a newspaper article from one
country which concerns another country. As a consequence, the editors heavily rely on
being kept up to date by their colleagues.
The supporting mechanism for this collaboration is a (physical) circulation system, which
is based on a number of open post-boxes in the main hall of the suite of offices. The post-
6boxes are also used for distributing the normal mail to the editors. If an editor wants to
share a document with his colleagues, he simply writes the initials of the respective
persons on the document on drops it in one of the post-boxes.
Now, sometimes the editor sharing the information decides afterwards that he needs the
document himself again. Or he might tell one of his colleagues about the document over
lunch, who might discover that this is exactly the information he has been looking for for
weeks and immediately needs to see the document. The document in question, however,
still resides in one of the post-boxes in the main hall. Thus, in order to speed up the
process, the editor searches the post-boxes of his colleagues. However, he would not
dare to remove any other documents or look too closely at private mail or faxes, because
the post-boxes are located in the main hall and it is very likely that a colleague might pass
by. Again, similar to the last scenario, the editor is respecting a social protocol. According
to the interviews this custom is honored even by the last person leaving the office at
night. When trying to support this kind of work practice in a computer system, one
quickly reaches the limits of classical access control systems.
Analyzing and Using the Scenarios to Guide the Design Process
We have used the three scenarios described above as the basis for the development of a new access control
model. We found cooperation scenarios very useful at this point, because they revealed to us some problems
with traditional access control systems in collaborative work practice which we had not seen before.
In the following, we briefly describe three observations (taken from [26]) which indicate why traditional access
control mechanisms do not support current collaborative work practice very well. These observations are based
on the information captured in the three scenarios and guided the subsequent design process, in particular the
integration of negotiation and notification mechanisms into the access control system.
Observation 1: Trusted third persons play an important role in the three
scenarios
The role of a trusted third person is a central feature in the first two scenarios. In the state
representative office, the employees put the envelopes with their passwords in the
strongbox, well knowing that there are others who have the keys. The key-holders are
trusted not to abuse their powers. Similar in the federal ministry the typists are considered
trustworthy by their colleagues. Thus, they are allowed to search for documents in
restricted spaces. Trusted third persons obviously play an important role when dealing
with unexpected absences. This is exactly the point where classical access control
systems fail. These systems only deal with two roles, the one who specifies the access
rules and the one who tries to access a document. Access policies, e.g. in form of access
lists or matrices, have to be specified in advance, precisely denying or permitting access
to an object for certain persons or groups. The problem in the two scenarios is, that the
users would like unexpected accesses not to be decided upon in advance, but by a
trusted third person in the context of the actual access situation, e.g. by the typists or the key-holders.
In theory these scenarios could be supported by a traditional access control system by
just granting full access to the trusted person in advance. This solution, however, has the
consequence that the trusted person would not only be burdened with the decision
concerning access but also has to do the actual work, i.e. accessing and copying the
document for the person requesting the access as in the scenario from the federal
ministry. If unanticipated accesses are not exceptional, this solution is not acceptable for
the trusted persons. Thus, it makes sense to look for a more efficient implementation of
these requirements within the access control system.
Observation 2: Awareness can be used to control access
Awareness plays an important role in all three scenarios. In the state representative
office, the passwords are sealed in envelopes. A broken seal indicates that the password
was used. Thus, there is a degree of awareness about accesses to one's desktop. In the
federal ministry, access to restricted spaces has to be made aware to the typists. This
awareness ensures that every access is carefully considered because one can be held
accountable for every action. The situation in the editorial offices is rather similar. The
position of the post-boxes in the main hall provides a degree of awareness, which
ensures that other editors' boxes are only accessed, if there is a justifiable reason.
Awareness plays an important role by supporting and enforcing social protocols or
conventions. Again this is the point where classical access control systems fail. They only
allow for the options "yes" and "no", not "yes, but I want to know about it" or "yes, but I want the
typists to know about it". The POLITeam base system includes functionality which allows
users to register interest in changes to specific documents. This functionality, however, is
much too coarse in that it does not differentiate between different kinds of accesses.
Additionally, it can only be specified for one document at a time, which induces a lot of
overhead when trying to use the functionality in real work practice.
Observation 3: Access can be subject to negotiation at the time of access
Looking at the work practices described in the scenarios, it is obvious that there is a great
need for negotiation. If it is, for any reason, not possible to precisely specify access rights
in advance, the anticipated specification is replaced by negotiation during the actual
situation of use, either with a trusted third person (see first subsection) or the owner of the
document himself. In our scenarios, the necessity for negotiation is a direct consequence
of the application of trust in the respective organization.
In the scenario taken from the state representative body, one has to negotiate with one of
the key-holders about the release of the password. The key-holder has the opportunity to
grant or deny access according to his understanding and motives in the current context.
In the federal ministry one has to negotiate with one of the typists in order to search for
certain documents.
Implementation
Based on this analysis, we have developed a new access control model, which integrates awareness and
negotiation services with a traditional, anticipative access control system. When specifying access rights for a
document or a shared folder, the users not only have the opportunity to allow or deny access in advance. They
have two additional options: notification and negotiation. The former allows access, but notifies a single user or
a group about the access. The latter only allows access, if a specified single user or group agrees. For a detailed
discussion of the implementation see [26].
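As a rough illustration of the extended model (not the actual POLITeam/LinkWorks implementation), the following Python sketch shows how an access request could be decided at the time of access, with the two additional policy options delegating to notification and negotiation services; all data structures, function names and the example configuration are hypothetical.

ALLOW, DENY, NOTIFY, NEGOTIATE = "allow", "deny", "notify", "negotiate"

def request_access(document, user, operation, notify_fn, ask_fn):
    """Decide an access request at the time of access.
    notify_fn(recipients, msg) sends an awareness notification;
    ask_fn(recipients, msg) asks a trusted person or group and returns True
    if they grant the access."""
    policy, recipients = document["acl"].get((user, operation), (DENY, []))
    if policy == ALLOW:
        return True
    if policy == DENY:
        return False
    msg = f"{user} requests '{operation}' on '{document['name']}'"
    if policy == NOTIFY:
        notify_fn(recipients, msg)        # access granted, but made visible
        return True
    if policy == NEGOTIATE:
        return ask_fn(recipients, msg)    # decision delegated to a third party
    return False

# Usage: the data owner configured 'read' for user "meier" as a negotiation
# with two trusted persons (hypothetical names).
doc = {"name": "vote draft", "acl": {("meier", "read"): (NEGOTIATE, ["admin", "dept-head"])}}
print(request_access(doc, "meier", "read",
                     notify_fn=lambda r, m: print("notify", r, m),
                     ask_fn=lambda r, m: True))    # stand-in for the real negotiation dialogue

The key design point is that, for notify and negotiate entries, the decision is deferred to the moment of access and involves third persons, instead of being fixed entirely in advance.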
Scenario supported evaluation
So far, we have described the use of cooperation scenarios for the purpose of capturing requirements which stem
from cooperative work practice and guide the design of new CSCW functionality. However, we also found
scenarios very useful during the evaluation phase. As mentioned in the introduction, evaluating groupware is
rather difficult, due to the many factors involved in success or failure of a product. A full-fledged test with
groups of users - preferably at their work place - is impractical before the product has reached a certain maturity
(especially stability). If groups of users are invited to the lab, they leave a lot of important context factors behind
and are subjected to several new ones. Our strategy is to capture as much of the relevant context as possible in
cooperation scenarios, especially the motivation, the goals, and also the workload of the different participating
roles. We try to evaluate our designs against this information as early as possible in the process. Specifically, we
use scenarios for evaluation in three stages of the design process (see figure 1): evaluation of scenario validity,
theoretical evaluation of system design, and the practical evaluation in user workshops.
Evaluation of scenario validity
When we transcribed the scenarios presented in [26], we found that there were some differences in the
interpretation of relevant details. This was despite the fact that both authors had attended the (first) workshop
(for which a written protocol had been produced) and both authors conducted the relevant interviews together (which had been
audio-taped).
A good example for a significant misinterpretation was discovered when writing up and discussing scenario two.
During the workshop, one of the attendees from the federal ministry was talking about searching other people's
desktops and the importance of other, trusted persons knowing about this search. She was referring to the
"virtual" desktops of her colleagues, which was clear to everybody knowing that she had her office three stories
above the colleagues she was talking about. However, the author writing up the workshop protocol did not know
that and assumed she was talking about "physical" desktops. The mistake was finally discovered, when the other
author reviewed the scenario.
We believe that misinterpretations like this are a major problem in design processes with user participation.
Common misconceptions can be cemented into what we call "project folklore", i.e. things that all designers
believe in and which even are passed on to new project members as "facts". Our cooperation scenarios rely on
anecdotal evidence which can be severely distorted depending on who writes it up and what prior knowledge this
person has about the field of application in question. Thus, we found it very helpful to exchange cooperation
scenarios among designers after write-up and to critically compare and discuss them, specifically asking questions
like "I did not get this detail. Who said that, and are you sure you've interpreted it correctly?" As a last measure,
we sometimes even asked the interviewees later on whether this or that detail matched reality. For this purpose a
feedback workshop with all end users involved is quite useful.
Theoretical evaluation of system design
As mentioned before, we tried to capture not only the cooperative activity but also the motivation and goals
behind individual contributions to the activity. Another important aspect is the workload of a specific role in the
scenario.
Having produced an early stage of the new design, we can already employ scenarios for a role-oriented benefit
evaluation. We "insert" the new design, i.e. the new CSCW functionality, in the scenario and analyze for each
role in the cooperative activity how individual parts of the task change. Take, for instance, the first scenario. In
the old scenario, the system administrator or department head had to be asked to open the strongbox and extract
the password from the envelope. In the new scenario, the documents are protected employing the negotiation
service. How does the new scenario impact upon the individual parts of the tasks? Table 1 shows a role-oriented
analysis of the old and new scenario one:
Role: User requesting access
  Old scenario:
  - Has to contact system administrator.
  - Has to wait for system administrator's decision.
  - Has to log in under data owner's account.
  - Has to copy the relevant documents to his own account.
  - Can be held responsible for any additional changes to data owner's account.
  New scenario with negotiation service support:
  - Has to wait for system administrator's decision (system administrator or any other trusted person is contacted automatically).

Role: System administrator (department head is omitted here for simplicity)
  Old scenario:
  - Has to make decision.
  - Has to open safe.
  - Has to extract password from envelope.
  - Has to communicate password to user.
  - Has to change password later on.
  - Has to create new envelope with new password.
  New scenario with negotiation service support:
  - Has to answer the automatically generated request.

Role: Data owner
  Old scenario:
  - Does not know what other data has been compromised.
  - Has to submit his password to the system administrator in advance.
  New scenario with negotiation service support:
  - Has to specify access rights and negotiation service in advance (can be a lot of work in dynamic work environments).

Table 1: Role-oriented analysis of the new, technology-enhanced scenario one
Role-oriented analysis of technology-enhanced scenarios can show, who benefits from the introduction of new
technology and who has to carry additional burden (compare [10]). The analysis shown in table 1 indicates that
the user requesting access and the system administrator benefit from the new functionality, while the data owner
has additional work in specifying the configuration of access rights and negotiation services.
While role-oriented analysis gives some hints concerning the possible acceptance of a system, there are still a lot
of factors which might be relevant, but do not surface in the analysis. For example, it might be a huge benefit for
the data owner, for personal reasons, if he can specify a person other than the system administrator as decision
maker. Or there might be certain documents on a person's desktop which have to be excluded by law from the
scheme (e.g. medical documents), which is possible with the fine-grained configuration of the negotiation
service. These examples show the importance of capturing context in cooperation scenarios for theoretical
evaluation.
Practical evaluation of system design in workshops
While theoretical analysis might yield some useful hints concerning possible problems with the groupware
system, the almost canonical unreliability of any requirements analysis process indicates the need for user
feedback as early as possible in the design process.
For this purpose we conducted a feedback workshop, during which we presented an early (rather unstable)
version of our prototype to end users (the feedback workshop is not to be confused with the earlier one). 11 end
users from one of our fields of application (the federal ministry) participated in the workshop, among them the
head of department. Seven designers were present: three from the University of Bonn and four from GMD (St.
Augustin, Germany) where the workshop took place. The project members from GMD were not directly
involved in the design of the access control system presented here, but contributed to the discussion during the
workshop. The main goal of the presentation of the prototype was to help the users in understanding the new
functionality achieved by our integration of traditional "yes or no" access control with negotiation and
notification services. Based on this understanding we wanted to know from the users how they believed the
system could be used to support (or change) their work practices. Implementation details (e.g. the user interface)
of the prototype were also discussed, but are not within the scope of this paper; neither are the usability tests we ran (not
during the workshop) to evaluate the interfaces for configuring the negotiation and notification services.
The presentation during the workshop was based on a scenario drawn from the federal ministry concerning the
deletion of an address list in a common workspace (not one of the scenarios described before). Together with
student volunteers we "played" the technology-enhanced scenario from a script, which we had ensured did not
cause the prototype any trouble. The screens of two workstations were fed into projectors, so all participants could
see the details. The workshop, especially the discussion and the contributions of the end users, was documented in written minutes.
The real world cooperation scenario captured the imagination of the workshop attendees and helped us, first to
discover some flaws in the base scenario and secondly to discuss the design of the new functionality (the users
wanted more high-level and powerful configuration mechanisms for the negotiation service and the basic access
rights, which supports the result from the theoretical analysis that the additional workload for the data owner
might be a problem). Concerning the central question of how the new functionality might support their work
practices, the users voiced rather diverse opinions. Interestingly, the head of the department said that if all
responsibilities (e.g. for deleting address lists) were assigned "correctly", negotiation during the time of access
would not be necessary. Furthermore, a high level of individualized (as opposed to standardized) procedures
would actually hinder cooperation, because single contributions would not fit together. Both points were rather
vehemently opposed by his subordinates, who stated that there was not one "correct" way to assign the
responsibilities, but that in their everyday work practice many things were in flux and could not be cast into
fixed and standardized structures. From each point of view, the statements are justified. The head of department
wants to have a clear (ideally static) picture of what is going on in his department, while the subordinates have to
actually deal with the surprises and non-standard situations in everyday work. This exchange of views
exemplifies the group-related aspects which were mentioned in the introduction and which CSCW design has to
take into account. The cooperation scenario we employed as basis for the presentation, helped in supporting the
understanding of the end users and the subsequent discussion.
Related work
The approach presented here mainly builds on work which is concerned with putting the end user and his or her
work in the focus of attention during software development. This perspective is due to the nature and the current
state of CSCW. The discussion in this field is still in an exploratory phase. It is concerned with the question of
what functionality groupware systems should provide (e.g. notification mechanisms) and what the benefits of
these systems for individual users, groups of users and organizations really are. This reflects on the design
methodologies which are used to support groupware design processes. While more traditional software design
(see e.g. [24]) proceeds rather quickly from the initial requirements analysis and specification phase to formally
verifiable application models and perhaps executable specifications, the designer of a groupware system has to
spend much more time in the early phases of development, envisioning how the new functionality might change
current work practice in an organization. Similarly, Hammer and Champy ([11]) postulate the "enabling role of
information technology" (p. 83). Organizational and technical changes are intertwined and cannot be viewed or
implemented separately. The introduction of a groupware system should not cement existing work practice in
program code, but allow for new forms of collaboration which perhaps better support the organizational goals.
Putting the end user and his or her work in the focus of attention is not a new concern in system development. As
computers enter more domains of private and working life, novel uses of information technology are pioneered.
This development necessitates design methodologies which put the use and the user of the system in the center
of the development efforts. This does not mean that traditional, more formal techniques are obsolete. To the
contrary, as systems become more complex, formal methods are indispensable. However, they have to be
complemented with a sound understanding of the current work practice and the possible future use of the system
from the end users' perspective.
The use-case methodology by Jacobson et al. ([15]) is an example of the incorporation of the end user's
perspective into system design. Use cases are descriptions of the interaction of an actor external to the system
(usually a human user) and the system itself. A use case thus encapsulates "one specific way of using the system
by using some part of the functionality" (p. 154). A set of use cases and the set of related actors (or user roles)
constitute the use case model which is part of the whole requirements model (together with a problem domain
object model and user interface descriptions). Use cases thus permit the explicit representation of the intended
use of the system and can serve as a basis for discussion with the end users. In this respect they are similar to our
technology-enhanced scenarios. However, as use cases represent interactions of single users with the system, the
cooperative aspects of using the system are not adequately captured. While it is possible to decompose the
cooperative use of a groupware system into several use cases related to different actors, this decomposition omits
- especially in the case of synchronous groupware - essential design information like dependencies between
different use cases which actually concern the same cooperative system usage. In our experience,
groupware designers are prone to neglect the cooperative aspects of system use when specifying
requirements as use cases.
In order to capture the complex dependencies and requirements arising from cooperative work practices, CSCW
often draws on methods from the field of Participatory Design (PD). These methods are based on the actual end
users taking part in the design process of the system. The first PD projects in Scandinavia were trade union-
oriented and focused on giving workers the right to influence their own working conditions, as well as on
improving the design of computer systems (For an overview of the history of PD see e.g. the introduction to [8]).
However, the obvious advantages of letting those people who know most about the work participate in the
design of the systems supporting it, give the methods developed in the context of PD a pragmatic significance
beyond their initial political coloring ([16]). PD approaches usually focus on the first part of development
processes and spend a lot of time building an understanding of the work context and current work practices of
the end users. Sometimes they initially employ ethnographical techniques as a way of gathering socially oriented
context information (see e.g. [2]). These techniques rely on project members working very closely with the users
in their "natural" surroundings. Not unlike the observation of a tribe's social structures and rules in the Amazon,
project members study fields of application like air traffic control centers ([2]) or a control room for the London
Underground ([12]). However, ethnographical studies in the proper sense take a lot of time (several months,
perhaps years). Thus, their usefulness for commercial (or even short term research) projects has been doubted.
[14] discusses different roles which ethnography can play realistically in a design process, ranging from short
term evaluation of prior, more long term studies to "quick and dirty" ethnography with the goal of gaining as
much insight as possible in a short time. In Holtzblatt and Beyer's Contextual Design methodology [13], the
"contextual inquiry" phase ([13], p. 93-94) is an example for the pragmatic use of ethnography in design
projects. They suggest that project members - ideally the actual designers - spend time (2-3 hours per session)
with the users in their usual environment while they are working. The project members can interrupt the work at
any time and ask about the goals and motivations behind user actions. In order to provide an efficient view
across the whole organization, several project members interview several users in parallel. The interview results
are later shared within the project team during a process of structuring what is known about the design problem.
Additionally the method employs a variety of work models to capture e.g. the single tasks, steps, or the strategy
of the work to be supported. Relevant for CSCW, for instance, is the flow model (p. 97) which depicts the
communication between people in the organization. However, the method does not impose a single modeling
language on the designer, but proposes to introduce new languages if needed, since there is no one language
which can capture all relevant aspects: "Let modeling languages help you. When you must, invent new ones to
say exactly what you have to say" (p. 96). Contextual design also involves iteratively going back to the users
with clarification questions and design suggestions, e.g. in form of rough paper prototypes (see e.g. [5]) of user
interface designs. Even though contextual design does not rely on scenarios in the way the approach presented in
this paper does, it is based on a similar understanding of design of interactive systems as a creative and user-centered
process.
Another approach, which is very much in the tradition of PD, is the MUST method by Kensing et al. ([16]).
MUST is a Danish acronym for theories of and methods for initial analysis and design activities. MUST
specifically supports design in an organizational context and thus explicitly includes steps like strategic analysis
which integrates the development efforts into the overall business strategy of the organization. The approach
suggests a variety of techniques for developing an understanding of current work practices, including interviews,
observations, workshops, document analysis of documents used in the work practice etc. The method also places
a strong emphasis on the cooperative development of visions of the future system by both users and developers.
Within this process, the use of "scenarios describing envisioned future work practice supported by the proposed
design" (p. 137) is suggested. Concerning the use of formal methods, the authors state: ". formalism play a
minor role in the MUST method. Instead we suggest plain text, freehand drawings, and sketches for the
production and presentation of the relation between proposed IT systems and users' current and future work
practice, postponing an extended use of formalism to later on in the development process" (p. 138). Because of
its strong organizational focus and its reliance on the PD tradition, the MUST method attempts to explicitly
accommodate conflicts of interest between management and workers e.g. in rationalization and downsizing
processes. The authors suggest achieving a consensus concerning the objectives of the system design
beforehand. While these issues are out of the scope of this paper, it is nevertheless interesting to note how far the
consequences of system design can reach and what responsibilities result for the designers. Scenarios can serve
as a tool to make these consequences explicit for everybody involved in the process.
A concrete application of scenarios in a CSCW project is discussed by Kyng ([19]), who describes the use of a
variety of scenario types in the context of the EUROCOOP project. This project was concerned with designing
computer support for cooperation in the Great Belt Link ltd. Company, a state-owned company responsible for
the building of a bridge/tunnel between Zealand and Funen in Denmark. It involved the design of four generic,
interrelated CSCW applications. Scenarios were used in four different roles: work situation descriptions were
supposed to capture relevant, existing situations which the users find to be important parts of their work,
bottlenecks, or generally insufficiently supported. These freeform textual descriptions (which Kyng does not yet
call scenarios, in contrast to the terminology of this paper) first served as basis for the discussion of current work
practices between users and designers. Secondly, they were used in the process of developing mock-up
prototypes and accompanying use scenarios which textually describe the intended future use of the envisioned
system. In this sense, use scenarios are similar to our technology-enhanced scenarios. In the EUROCOOP project,
however, they were mainly employed in setting the stage for user workshops and not for theoretical evaluation.
Kyng also describes the project-internal use of more detailed, technically oriented exploration/requirement
scenarios. They complemented use scenarios by giving details which are relevant for evaluating technical details
(e.g. locking mechanisms) of the proposed design. Finally, explanation scenarios gave a description of the new
possibilities offered by the proposed design and the explanation of the rationale behind the design in terms of the
working situation. These scenarios were more detailed than use scenarios.
Building on the work presented by Kyng, Bardram ([1]) describes the use of scenarios in the SAIK project which
was concerned with the design of a Hospital Information System in Denmark. Similar to our work, scenarios
were used to describe the current work practices and to envision future, computer supported work activities.
Bardram's approach is characterized by the fact that scenarios were used on two different levels of detail. All
relevant work practices were textually described as work activity scenarios, while work activities central to
system design were documented in greater detail with the help of more structured, tabular analytical scenarios.
Furthermore, Bardram points out that current and future scenarios were "alive during the whole system
development process" (p. 59) and were constantly updated as the designer's understanding of current work
practices and the design itself evolved. In contrast to the role of the use scenarios described by Kyng, analytical
scenarios were also employed in a systematic comparison (logical confrontation in Bardram's terminology) of
the current work practice and the proposed design. This comparison resulted, for instance, in the discovery of
problems concerning the integration of the new system with existing systems for the exchange of EDIFACT
messages with external agencies. Additionally, Bardram advocates the use of scenarios in workshops together
with the future users and management representatives (compare [16]).
This overview of related work shows the span of different roles scenarios can play in the design of systems
supporting cooperative work. Scenarios are used to describe current and future, technology supported work
practices. They are used for internal, technically-oriented discussions among designers and as a basis for the
presentation of prototypes of future systems to end users. They can have different levels of detail and different
forms (e.g. tabular or textual). Furthermore, scenarios can be used to support design processes which
accommodate organizational and business strategy issues as well as processes which focus on the differences the
system makes for individual users (compare our role-based analysis). We conclude that scenarios are an
extremely flexible tool for system design and that there is not one single best form and use of scenarios.
Depending on the circumstances and constraints of the project, the preferences and experiences of the designers,
and last but not least the end users, scenarios can be employed in a number of variations.
Summary and Discussion
We have presented our use of cooperation scenarios in the design and evaluation of a novel access control
system for groupware. Cooperation scenarios are context-rich, informal, textual descriptions of cooperative
activities which are gathered and refined through interviews and workshops with user participation. They not
only contain a step-by-step description of events but also the goals and subjective opinions (e.g. trust) of persons
and other possibly relevant contextual elements. During evaluation we have used scenarios for requirements
validation (through communication among designers), role-oriented analysis, and as a basis for realistic
workshop presentations.
The main advantage of cooperation scenarios lies in the fact that at least some contextual factors can be taken
into account early on in the design process. Other, more restrictive forms of scenarios or formal models might
omit this qualitative and sometimes rather subjective information. We have shown how cooperation scenarios
can be used to identify issues which are important for cooperative activity (trust, awareness, and negotiation).
The scenarios allow the designers to think in terms of actual use situations instead of abstract technical criteria.
Especially the role-oriented analysis allows a multi-perspective view on a possible design, perhaps helping to
identify problems early on (e.g. a benefit and workload disparity). Furthermore, we found that realistic workshop
presentations make it possible to establish close contact with the users and get them involved in the evaluation
and generation of new ideas in the early stages of the project.
On the negative side, we were surprised how easily anecdotal evidence becomes severely distorted due to
misunderstandings and misinterpretations of issues in the field of application (e.g. the "virtual" / "physical"
desktop problem) and how far apart the different designers' ideas about the users and the field of application can
be. We have suggested to address this problem by critically reviewing scenarios among designers and validating
them in feedback workshops.
Furthermore, we point out again that we see the role of cooperation scenarios early on in the design process
when creating innovative functionality and envisioning its effects on cooperative work. Our review of the current
literature shows that this reflects the experience gained with the use of scenarios in other (CSCW) projects. Turning a
good system design into a high-quality implementation does, of course, subsequently require more formal
methods.
Future research
We still have a lot to learn about how to design CSCW functionality and how to introduce groupware systems
into organizations. Experiences from our projects and reports in the current literature (see e.g. [21]) show that
tailorability (or adaptability) is an important success factor for such systems because of the diversified and
dynamic nature of requirements of cooperative work. Most current design methodologies still aim at producing a
"one-size-fits-all" system design. Part of our current work is concerned with the development of methodologies
which not only explicitly specify diversity and dynamics but also help us in deriving the necessary degree of
flexibility the final system has to exhibit.
Acknowledgements
We thank the other past and present members of the POLITeam project at the University of Bonn: Andreas
Pfeifer, Helge Kahler, Volkmar Pipek, Markus Won, and Volker Wulf. Furthermore, we are grateful to our
POLITeam project partners at GMD in St. Augustin, Germany. Finally, the comments of the anonymous
reviewers were extremely helpful in improving the final version of this article. The POLITeam project is funded
by the BMBF (German Ministry of Research and Education) in the context of the PoliKom research program
under grant 01 QA 405 / 0.
--R
"Scenario-Based Design of Cooperative Systems"
"Ethnographically-Informed Systems Design for Air Traffic Control"
"Awareness and Coordination in Shared Workspaces"
"Cardboard Computers: Mocking-it-up or Hands-on the Future,"
"Groupware - some Issues and experiences,"
"Supporting Cooperative Awareness with Local Event Mechanisms: The GoupDesk System"
Design at Work.
"Data Sharing in Group Work"
"Groupware and social dynamics: eight challenges for developers,"
Reengineering the Corporation - A Manifesto for Business Revolution
"Collaboration and Control: Crisis management and multimedia technology in London Underground Line Control Rooms,"
"Making Customer-Centered Design Work for Teams,"
"Moving Out from the Control Room: Ethnography in System Design"
"MUST - a Method for Participatory Design"
"POLITeam - Bridging the Gap between Bonn and Berlin for and with the Users"
"Work Processes: Scenarios as a Prelimiary Vocabulary,"
"Creating Contexts for Design,"
"Protection,"
"Experiments with Oval: A Radically Tailorable Tool for Cooperative Work,"
"Situationsbedingte und benutzerorientierte Anpabarkeit von Groupware,"
"Access Control for Collaborative Environments"
"How to Make Software Softer - Designing Tailorable Applications"
""
"On Conflicts and Negotiation in Multiuser Application,"
--TR
--CTR
Volker Wulf , Helge Kahler , Volkmar Pipek , Stefan Andiel , Torsten Engelskirchen , Matthias Krings , Birgit Lemken , Meik Poschen , Tim Reichling , Jens Rinne , Markus Rittenbruch , Oliver Stiemerling , Bettina Trpel , Markus Won, ProSEC: research group on HCI and CSCW, ACM SIGGROUP Bulletin, v.21 n.2, p.10-12, August 2000
Steven R. Haynes , Sandeep Purao , Amie L. Skattebo, Situating evaluation in scenarios of use, Proceedings of the 2004 ACM conference on Computer supported cooperative work, November 06-10, 2004, Chicago, Illinois, USA
Elizabeth S. Guy, "...real, concrete facts about what works...": integrating evaluation and design through patterns, Proceedings of the 2005 international ACM SIGGROUP conference on Supporting group work, November 06-09, 2005, Sanibel Island, Florida, USA | access control;evaluation;design methodology;groupware;CSCW;cooperation scenarios;scenario-based design |
297636 | Convergence to Second Order Stationary Points in Inequality Constrained Optimization. | We propose a new algorithm for the nonlinear inequality constrained minimization problem, and prove that it generates a sequence converging to points satisfying the KKT second order necessary conditions for optimality. The algorithm is a line search algorithm using directions of negative curvature and it can be viewed as a nontrivial extension of corresponding known techniques from unconstrained to constrained problems. The main tools employed in the definition and in the analysis of the algorithm are a differentiable exact penalty function and results from the theory of LC1 functions. | Introduction
We are concerned with the inequality constrained minimization problem (P)
\min f(x) \quad \mbox{s.t.} \quad g(x) \le 0,
where f : IR^n \to IR and g : IR^n \to IR^m are three times continuously differentiable.
Our aim is to develop an algorithm that generates sequences converging to points
\bar{x} satisfying, together with a suitable multiplier \bar{\lambda} \in IR^m, both the KKT first order
necessary optimality conditions
\nabla_x L(\bar{x}, \bar{\lambda}) = 0, \quad \bar{\lambda} \ge 0, \quad g(\bar{x}) \le 0, \quad \bar{\lambda}^T g(\bar{x}) = 0,   (1)
and the KKT second order necessary optimality conditions
z^T \nabla^2_{xx} L(\bar{x}, \bar{\lambda}) z \ge 0 \quad \mbox{for every } z \mbox{ such that } \nabla g_i(\bar{x})^T z = 0, \ i \in I_0(\bar{x}).   (2)
In the sequel we will call a point x satisfying (1) a first order stationary point (or just
stationary point), while a point satisfying both (1) and (2) will be termed second order
stationary point.
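To make the two definitions concrete, the following sketch checks (1) and (2) numerically at a candidate pair; it is only an illustration of the definitions, with our own (assumed) tolerance handling and an SVD-based null-space construction, and plays no role in the algorithm developed below.

```python
import numpy as np

def is_second_order_stationary(x, lam, f_grad, g_val, g_jac, lag_hess, tol=1e-8):
    """Check the KKT conditions (1) and the second order conditions (2) at (x, lam).

    f_grad(x): gradient of f;  g_val(x), g_jac(x): constraints and their Jacobian;
    lag_hess(x, lam): Hessian (w.r.t. x) of the Lagrangian f(x) + lam^T g(x).
    """
    g = g_val(x)
    J = g_jac(x)                                   # m x n Jacobian of g
    grad_L = f_grad(x) + J.T @ lam
    first_order = (np.linalg.norm(grad_L) <= tol and np.all(lam >= -tol)
                   and np.all(g <= tol) and abs(lam @ g) <= tol)
    if not first_order:
        return False
    # Second order: the Hessian of the Lagrangian must be positive semidefinite
    # on the null space of the gradients of the active constraints.
    active = np.abs(g) <= tol
    J_act = J[active]
    if J_act.size:
        _, s, Vt = np.linalg.svd(J_act)
        rank = int(np.sum(s > tol))
        Z = Vt[rank:].T                            # basis of the null space
    else:
        Z = np.eye(x.size)
    if Z.size == 0:
        return True                                # the null space is trivial
    reduced_H = Z.T @ lag_hess(x, lam) @ Z
    return np.linalg.eigvalsh(reduced_H).min() >= -tol
```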
In the unconstrained case the conditions (1) and (2) boil down to \nabla f(\bar{x}) = 0 and \nabla^2 f(\bar{x}) \succeq 0,
respectively. Standard algorithms for unconstrained minimization usually
generate sequences converging to first order stationary points. In a landmark paper
[21] (see also [23, 22, 16, 20] and references therein for subsequent developments), McCormick
showed that, by using directions of negative curvature in an Armijo-type line
search procedure, it is possible to guarantee convergence to second order stationary
points. From a theoretical point of view, this is a very strong result, since it makes much
more likely that the limit points of the sequence generated by the algorithm are local
minimizers and not just saddle points. Furthermore, from a practical point of view, the
use of negative curvature directions turns out to be very helpful in the minimization of
problems with large non-convex regions [16, 20]. Convergence to second order stationary
points was later established also for trust-region algorithms [25, 24], and this constitutes
one of the main reasons for the popularity of this class of methods. Trust-region algorithms
have been extended to equality constrained and box constrained problems so as
to maintain convergence to second order stationary points [2, 8, 13, 5, 25, 24, 26], while
negative curvature line search algorithms have been proposed for the case of linear inequality
constraints [22, 17]. However, as far as we are aware, no algorithm for the
solution of the more complex nonlinear inequality constrained minimization Problem
(P) exists which generates sequences converging to second order stationary points. The
main purpose of this paper is to fill this gap by presenting a negative curvature line
search algorithm which enjoys this property.
The basic idea behind our approach can be easily explained as follows.
(a) Reduce the constrained minimization Problem (P) to an equivalent unconstrained
minimization problem by using a differentiable exact penalty function.
(b) Apply a negative curvature line search algorithm to the minimization of the
penalty function.
Although appealingly simple, this approach requires us to tackle some difficulties to make it
viable. First of all we have to establish a connection between the unconstrained
stationary points of the penalty function provided by the unconstrained minimization
algorithm and the constrained second order stationary points of Problem (P). Secondly,
we must cope with the fact that differentiable exact penalty functions, although once
continuously differentiable, are never twice continuously differentiable everywhere, so
that we cannot use an off-the-shelf negative curvature algorithm for its minimization.
Furthermore, even in points where the second order derivatives exist, their explicit evaluation
would require the use of the third order derivatives of the functions f and g,
which we are not willing to calculate.
To overcome these difficulties we develop a negative curvature algorithm for the
unconstrained minimization of the penalty function which is based on the theory of LC 1
functions and on generalized Hessians. We show that by using a suitable approximation
to elements of the generalized Hessian of the penalty function we can guarantee that the
unconstrained minimization of the penalty function yields an unconstrained stationary
point where a matrix which approximates an element of the generalized Hessian is
positive semidefinite. This suffices to ensure that the point so found is also a second
order stationary point of Problem (P).
We believe that the algorithm proposed in this paper is of interest because, for
the first time, we are able to prove convergence to second order stationary points for
general inequality constrained problems. We do so by a fairly natural extension of
negative curvature algorithms from unconstrained to constrained problems; we note
that the computation of the negative curvature direction can be performed in a manner
analogous to and at the same cost as in the unconstrained case. We also remark that
we never require the complementarity slackness assumption to establish our results.
Finally, we think that the use of some nontrivial nonsmooth analysis results to analyze
the behavior of smooth algorithms is a novel feature that could be fruitfully applied
also in other cases.
The paper is organized as follows. In the next section we recall a few known
facts about LC 1 functions and generalized Hessians and on the asymptotic identification
of active constraints. Furthermore, we also introduce the penalty function along with
some of its relevant properties. In Section 3 we introduce and analyze the algorithm.
In the fourth section we give some hints on the practical realization of the algorithm.
Finally, in the last section we outline possible improvements and make some remarks.
We finally review the notation used in this paper. The gradient of a function h :
IR^n \to IR is indicated by \nabla h, while its Hessian matrix is denoted by \nabla^2 h. If H : IR^n \to IR^m,
then \nabla H(x) denotes the n \times m matrix whose i-th column is \nabla H_i(x). If I is an index set, with
I \subseteq \{1, \dots, m\}, H_I is the vector obtained by considering the components of H in I. We
indicate by \|\cdot\| the Euclidean norm and the corresponding matrix norm. If S is a subset
of IR^n, co S denotes its convex hull. If A is a square matrix, \lambda_{\min}(A) denotes its smallest
eigenvalue. The Lagrangian of Problem (P) is L(x, \lambda) := f(x) + \lambda^T g(x); we denote
by L(x, \lambda(x)) the Lagrangian of Problem (P) evaluated in \lambda(x). Analogously, we
indicate by \nabla_x L(x, \lambda(x)) (\nabla^2_{xx} L(x, \lambda(x))) the gradient (Hessian) of the Lagrangian with
respect to x evaluated in \lambda(x).
Background material
In this section we review some results on differentiability of functions and on the identification
of active constraints. We also recall the definition and some basic facts about
a differentiable exact penalty function for Problem (P) and we establish some related
new results.
2.1 LC^1 functions
A function h : IR^n \to IR is said to be an LC^1 function on an open set
O if
- h is continuously differentiable on O,
- \nabla h is locally Lipschitz on O.
LC^1 functions were first systematically studied in [18], where the definition of the generalized
Hessian and the theorem reported below were also given.
The gradient of h is locally Lipschitz on O; \nabla h is therefore differentiable almost
everywhere in O, so that its generalized Jacobian in Clarke's sense [3] can be defined.
This is precisely the generalized Hessian of h, whose definition is as follows.
Definition 2.1 Let h : IR^n \to IR be an LC^1 function on the open set O and let x belong
to O. We define the generalized Hessian of h at x to be the set \partial^2 h(x) of matrices defined
as
\partial^2 h(x) := co \{ H : H = \lim_{k\to\infty} \nabla^2 h(x_k), \ x_k \to x, \ h \mbox{ twice differentiable at } x_k \}.
Note that \partial^2 h(x) is a nonempty, convex, compact set of symmetric matrices. Further-
more, the point-to-set map x \mapsto \partial^2 h(x) is bounded on bounded sets [18].
For LC^1 functions a second-order Taylor-like expansion is possible. This is the main
result on LC^1 functions we shall need in this paper.
Theorem 2.1 Let h : IR^n \to IR be an LC^1 function on the open set O and let x and y
be two points in O such that [x, y] is contained in O. Then there exist a point u \in (x, y) and a matrix Q \in \partial^2 h(u) such that
h(y) = h(x) + \nabla h(x)^T (y - x) + \frac{1}{2} (y - x)^T Q (y - x).
2.2 Identification of active constraints
In this section we recall some results on the identification of active constraints at a
stationary point -
x of the nonlinear program (P).
We refer the interested reader to [14] and to references therein for a detailed discussion
of this issue. Here we recall only some results in order to stress the fact that, in a
neighborhood of a first order stationary point, it is possible, under mild assumptions,
to correctly identify those constraints that are active at the solution.
First of all we need some terminology. Given a stationary point \bar{x} with a corresponding
multiplier \bar{\lambda}, which we suppose to be unique, we denote by I_0(\bar{x}) the set of
active constraints
I_0(\bar{x}) := \{ i : g_i(\bar{x}) = 0 \},
while I_+(\bar{x}) denotes the index set of strongly active constraints
I_+(\bar{x}) := \{ i : \bar{\lambda}_i > 0 \}.
Our aim is to construct a rule which is able to assign to every point x an estimate
A(x) of the active set, so that A(x) = I_0(\bar{x}) whenever x lies in a suitably small neighborhood of the
stationary point \bar{x}.
Usually estimates of this kind are obtained by comparing the values of g_i(x) with
the value of an estimate of the multiplier \bar{\lambda}. For example, it can be easily shown that
the set
I^\Phi(x) := \{ i : g_i(x) \ge -c \lambda_i(x) \},
where c is a positive constant and \lambda : IR^n \to IR^m is a multiplier function (i.e. a continuous
function such that \lambda(\bar{x}) = \bar{\lambda}), coincides with the set I_0(\bar{x}) for all x in a sufficiently
small neighborhood of a stationary point \bar{x} which satisfies the strict complementarity
condition (see the next section for an example of a multiplier function). If this condition
is violated, then only the inclusions
I_+(\bar{x}) \subseteq I^\Phi(x) \subseteq I_0(\bar{x})   (3)
hold [14]. If the stationary point \bar{x} does not satisfy strict complementarity, the situation
is therefore more complex, and only recently has it been shown that it is nevertheless
possible to correctly estimate the set I_0(\bar{x}) [14]. We will not go into details here; we only
point out that the identification of the active constraints when strict complementarity
does not hold is possible, under very mild assumptions, in a simple way. The identification
rule takes the following form:
A(x) := \{ i : g_i(x) \ge -\rho(x) \},   (4)
where \rho(x) is a function that can take different forms according to the assumptions
made on the stationary point \bar{x}. For example, if in a neighborhood of \bar{x} both f and g
are analytic (an assumption which is met in most practical cases), \rho(x) can be defined
in terms of log(r(x)) for a suitable residual function r(x); we refer to [14] for the precise definitions.
With this choice the set A(x) defined by (4) coincides with I_0(\bar{x}) in a suitable
neighborhood of \bar{x}.
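By way of illustration, an estimate of the type (4) can be implemented in a few lines. In the sketch below the function rho is a simple square root of a KKT residual; this is an assumed stand-in for the identification functions discussed above and in [14], not the specific choice made there.

```python
import numpy as np

def estimate_active_set(x, lam, g_val, f_grad, g_jac, rho=None):
    """Estimate the active set at x in the spirit of rule (4):
    a constraint is declared active when g_i(x) >= -rho(x),
    where rho(x) -> 0 as x approaches a stationary point.

    The residual-based rho used here is a simple illustrative choice.
    """
    g = g_val(x)
    if rho is None:
        # KKT residual: stationarity plus complementarity/feasibility violation
        r = np.linalg.norm(np.concatenate((f_grad(x) + g_jac(x).T @ lam,
                                           np.minimum(-g, lam))))
        rho = np.sqrt(r)                      # any function vanishing with r would do here
    return {i for i, gi in enumerate(g) if gi >= -rho}
```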
2.3 Penalty function
In this section we consider a differentiable penalty function for Problem (P), we recall
some relevant known facts and prove some new results which are related to the
differentiability issues dealt with in Section 2.1.
In order to define the differentiable penalty function and to guarantee some of the
properties that will be needed we make the following two assumptions.
Assumption A. For any x \in IR^n, the gradients \nabla g_i(x), i \in I_0(x), are linearly independent.
Assumption B. For any x \in IR^n, the following implication holds.
Assumptions A and B, together with Assumption C, which will be stated in Section 3,
are the only assumptions used to establish the results of this paper. These assumptions,
or assumptions similar to them, are frequently encountered in the analysis of constrained
minimization algorithms. However, we point out that they can be considerably relaxed;
this will be discussed in Section 5. We chose to use this particular set of assumptions
in order to simplify the analysis and to concentrate on the issues related to the main
topic of the paper, i.e. convergence to second order stationary points.
We start by defining a multiplier function \lambda : IR^n \to IR^m through an expression involving the matrix M(x),
where M(x) is an m \times m matrix built from \nabla g(x)
and G(x) := diag(g_i(x)). The main property of this function is that it is continuously
differentiable (see below) and, if \bar{x} is a first order stationary point, then \lambda(\bar{x}) is the
corresponding multiplier (which, by Assumption A, is unique). Using this function we
can define the penalty function Z(x; \varepsilon), where \varepsilon > 0
is the so-called penalty parameter.
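For concreteness, the sketch below computes a multiplier function of the classical Glad-Polak type; the matrix M(x) = \nabla g(x)^T \nabla g(x) + \gamma G(x)^2 used there is only one possible (assumed) instance with the stated properties and need not coincide with the matrix M(x) adopted in this paper.

```python
import numpy as np

def multiplier_function(x, f_grad, g_val, g_jac, gamma=1.0):
    """A multiplier function of the classical Glad-Polak type:
    lambda(x) solves  (J J^T + gamma * diag(g(x))^2) lam = -J grad f(x),
    where J is the Jacobian of g, so that at a stationary point of (P)
    it returns the associated KKT multiplier.
    """
    J = g_jac(x)                               # m x n Jacobian of g
    G = np.diag(g_val(x))
    M = J @ J.T + gamma * G @ G                # assumed instance of M(x)
    return np.linalg.solve(M, -J @ f_grad(x))
```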
Theorem 2.2 The following properties hold:
(a) For every \varepsilon > 0, the penalty function Z(\cdot; \varepsilon) is continuously differentiable on IR^n, and its
gradient \nabla Z(x; \varepsilon) admits an explicit expression in which
e_i denotes the i-th column of the m \times m identity matrix and \Lambda(x) := diag(\lambda_i(x)).
(b) For every ffl, the function Z is an LC 1 function on IR n .
(c) Let -
x be a first order stationary point for Problem (P) and let ffl be given. Then there
exists a
neighborhood\Omega of -
x such that, for every x
in\Omega , the following overestimate
of the generalized Hessian of Z evaluated at x holds
where A ranges over a suitable family \mathcal{A} of index sets and K_A(x)
is a matrix for which we can write \|K_A(x)\| \le \rho(x) for a nonnegative continuous
function \rho such that \rho(\bar{x}) = 0.
In particular, the following overestimate holds at \bar{x}:
where
xx L(-x; -x)) +r-A (-x)rg A (-x) T
Proof. Point (a) is proved in [11].
Point (b) follows from the expression of the gradient given in (a) taking into account
the differentiability assumptions
The proof of point (c) can be derived from the very definition of generalized Hessian
in the following way. Let -
x be a stationary point of Problem (P). Consider a point
x in a
neighborhood\Omega of -
x and sequences of points fx k g converging to x with the
gradient of Z existing in x k . This will happen either if (a) for no i
or if (b) for all i for which g i
I \Phi (x)g. By recalling the expression of the gradient given
previously we can write
'-
'-
where I \Psi It is now easy to see that, both in case (a) and
(b), the Hessian of Z(x; ffl) in x k can be obtained by differentiating this expression and
this gives
rg I \Phi
where K I \Phi rapresents the sum of terms always containing as a factor either
I \Phi Taking into account the definition of @ 2 Z(x; ffl) and that, as
discussed in the previous section,
if\Omega is suitably small
I
we have that both g I \Phi
x. The assertion of point
(c) now follows from these facts and the definition of A.
The following theorem gives a sufficient condition, in terms of matrices in the overestimate
\tilde{\partial}^2 Z(\bar{x}; \varepsilon), for a first order
stationary point of Problem (P) to be a second order stationary
point.
Theorem 2.3 Let \bar{x} be a first order stationary point of Problem (P) and let \varepsilon > 0 be given.
Then, if a matrix H exists in \tilde{\partial}^2 Z(\bar{x}; \varepsilon)
which is positive semidefinite, \bar{x} is a second
order stationary point of Problem (P).
Proof. Let H in ~
positive semidefinite and suppose by contradiction that
x does not satisfy the KKT second order necessary conditions (2). Then a vector z exists
such that
rg I 0
(recall that -x) equals the multiplier associated with -
x). On the other hand, by
Theorem 2.2 (c) and by Caratheodory theorem, we also have that, for some integer
where, for each i, fi i - 0,
A. Since, for each i, A i 2 A, we can
write, taking into account the definition of H(-x; ffl; A i ) and (9),
z T H(-x; ffl; A i
from which
z
immediately follows. But this contradicts the assumption that H is positive semidefinite
and the proof is complete.
This result will turn out to be fundamental to our approach, since our algorithm will converge
to first order stationary points where at least one element in \tilde{\partial}^2 Z(\bar{x}; \varepsilon)
is positive semidefinite.
In the remaining part of this section we consider some technical results about penalty
functions that will be used later on and that help illustrate the relation between the
function Z and Problem (P).
Proposition 2.4 (a) Let \varepsilon > 0 and x \in IR^n be given. If x is an unconstrained stationary
point of Z and g(x) \le 0, then x is a first order stationary point of
Problem (P).
(b) Conversely, if x is a first order stationary point of Problem (P), then, for every
positive \varepsilon, \nabla Z(x; \varepsilon) = 0.
Proof. See [11].
Proposition 2.5 Let D \subset IR^n be a compact set. Then, there exists an \bar{\varepsilon} > 0 such that,
for every x \in D and for every \varepsilon \in (0, \bar{\varepsilon}], we have
Proof. Let D 1 ae IR n be a compact subset such that D ae intD 1 . In the proof of
Proposition 14 in [12] it is shown that if - x is a feasible point in intD 1 , and therefore in
D, we can find positive ffl( -
x), oe(-x) and ae 0 (-x) such that
More precisely, (10) derives from formula (24) in [12] and the discussion which follows
that formula. Indicate by M the maximum of krg(x)k for x
note that, by Assumption A, M ? 0. Then, recalling that krg(x) T rZ(x; ffl)k -
krg(x)kkrZ(x; ffl)k, we can easily deduce from (10) that
where we set
Suppose now that the theorem is not true. Then, sequences fx k g and fffl k g exist,
such that
and
Since
recalling the expression of rZ, gives
Thus, by Assumption B we have that -
x is feasible. But then, we get a contradiction
between (12) and (11), and this concludes the proof.
Note that Proposition 2.4 and Proposition 2.5 imply that, given a compact set D, if ffl is
sufficiently small, then every stationary point of Problem (P) in D is an unconstrained
stationary point of Z and, vice versa, every unconstrained stationary point of Z in D is
a stationary point of Problem (P). We refer the interested reader to the review paper [9]
and references therein for a more detailed discussion of the properties of differentiable
penalty functions.
3 Convergence to second order stationary points
In this section we consider a line search algorithm for the minimization of Z(x; ffl) which
yields second order stationary points of Problem (P). For the sake of clarity we break
the exposition in three parts. In Section 3.1 we first consider a line search algorithm
(Algorithm M) which converges, for a fixed value of the penalty parameter ffl, to an
unconstrained stationary point of the penalty function Z. By Proposition 2.4 we know
that if the penalty parameter were sufficiently small, we would have thus obtained a
first order stationary point of Problem (P). Therefore, in Section 3.2, we introduce
an algorithm (Algorithm SOC) where Algorithm M is embedded in a simple updating
scheme for the penalty parameter based on Proposition 2.5. We show that after a
finite number of reductions the penalty parameter stays fixed and every limit point of
Algorithm SOC is a first order stationary point of Problem (P). Finally, in Section 3.3
we refine the analysis of Algorithm SOC and we show that every limit point is actually
a second order stationary point of Problem (P).
In order to establish the results of Sections 3.1, 3.2 and 3.3 we assume that the
directions used in Algorithm M satisfy certain conditions. In Section 4 we will illustrate
possible ways for generating directions which fulfil these conditions.
In order to simplify the analysis we shall assume, from now on, that the following
assumption is satisfied.
Assumption C. The sequence fx k g of points generated by the algorithms considered
below is bounded.
3.1 Convergence for fixed ffl to unconstrained stationary points
of Z: Algorithm M
We first consider a line search algorithm for the unconstrained minimization of the
penalty function Z which generates, for a fixed value ffl of the penalty parameter, sequences
converging to unconstrained stationary points of the penalty function. In all
this section, ffl is understood to be a fixed positive constant.
The algorithm generates a sequence \{x_k\} according to the following rule:
Algorithm M
x_{k+1} = x_k + \alpha_k^{2t(x_k)} s_k + \alpha_k^{t(x_k)} d_k,
where s_k and d_k are, respectively, a descent direction and a direction of negative curvature for Z(\cdot; \varepsilon) at x_k,
and where \alpha_k is computed by the Linesearch procedure below.
Linesearch procedure
Step 1: Choose an initial tentative stepsize \alpha > 0.
Step 2: If the Armijo-type acceptance test on Z at the trial point x_k + \alpha^{2t(x_k)} s_k + \alpha^{t(x_k)} d_k is satisfied,
set \alpha_k = \alpha and stop.
Step 3: Choose \alpha \in [\sigma_1 \alpha, \sigma_2 \alpha], and go to Step 2.
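The mechanism of the Linesearch procedure can be illustrated by the classical curvilinear Armijo backtracking scheme below, in which the exponent t(x_k) is fixed to 1 and the acceptance test is a standard sufficient-decrease test of the unconstrained negative-curvature literature; it is an assumed simplification rather than a transcription of the procedure above.

```python
def curvilinear_linesearch(z, grad_z, x, s, d, H, gamma=1e-4,
                           sigma1=0.1, sigma2=0.5, alpha0=1.0, max_iter=50):
    """Backtracking Armijo search along the curvilinear path x(a) = x + a**2 * s + a * d.

    z(x): penalty function value; grad_z(x): its gradient;
    s: descent direction, d: negative curvature direction, H: curvature matrix.
    """
    z0 = z(x)
    slope = grad_z(x) @ s                 # <= 0 by Condition 1
    curv = d @ (H @ d)                    # < 0 when d is a negative curvature direction
    alpha = alpha0
    for _ in range(max_iter):
        trial = x + alpha**2 * s + alpha * d
        if z(trial) <= z0 + gamma * alpha**2 * (slope + 0.5 * min(curv, 0.0)):
            return alpha, trial           # sufficient decrease achieved
        alpha *= 0.5 * (sigma1 + sigma2)  # any reduction factor in [sigma1, sigma2] is allowed
    return 0.0, x                         # safeguard: no acceptable step found
```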
We assume that the matrices H_k depend on the sequence \{x_k\} and that the directions s_k, d_k
and the matrices H_k are bounded and satisfy the following conditions:
Condition 1. The directions s_k are such that \nabla Z(x_k; \varepsilon)^T s_k \le 0 and
\lim_{k\to\infty} \nabla Z(x_k; \varepsilon)^T s_k = 0 \ \Longrightarrow \ \lim_{k\to\infty} \nabla Z(x_k; \varepsilon) = 0.
Condition 2. The directions d_k are such that \nabla Z(x_k; \varepsilon)^T d_k \le 0 and, together with the
matrices H_k, they satisfy
d_k^T H_k d_k < 0 \ \mbox{if } H_k \mbox{ is not positive semidefinite}, \qquad d_k = 0 \ \mbox{otherwise}.
Condition 3. Let \{x_k\} and \{u_k\} be sequences converging to a first order stationary
point \bar{x} of Problem (P). Then, for every sequence of matrices \{Q_k\} with Q_k \in \partial^2 Z(u_k; \varepsilon),
the sequence d_k^T (Q_k - H_k) d_k converges to 0.
Algorithm M resembles classical line search algorithms using negative curvature
directions to force convergence to second order stationary points in unconstrained min-
imization. The only apparent difference is the presence of the exponent t(x_k),
while in corresponding unconstrained algorithms we usually have t(x_k) = 1 for
every k. We need this change in order to be able to tackle the fact that the penalty
function is not everywhere twice continuously differentiable (see, for example, the proof
of Proposition 3.1).
We also assume that the directions s k , d k and the matrices H k satisfy Conditions
1-3. Conditions 1 and 2 are fairly standard and similar to those employed in the
unconstrained case. Condition 3, on the sequence of matrices H k , is, again, related
to the nondifferentiability of the gradient of Z. In fact, the matrix H k is supposed
to convey some second order information on the penalty function; therefore Condition
3 imposes a certain relation between the matrices H k and the generalized Hessians
of Z. Note that if the function Z were twice continuously differentiable, the choice H_k = \nabla^2 Z(x_k; \varepsilon)
would satisfy, by continuity, Condition 3.
The following proposition shows that the linesearch procedure described above is
well defined in all the cases that, we shall see, are of interest for us.
Proposition 3.1 The linesearch procedure is well defined, namely at each iteration the
test of Step 2 is satisfied for every ff sufficiently small if the point x k either (a) is not
an unconstrained stationary point of the function Z or (b) is a first order stationary
point of Problem (P) and H k is not positive semidefinite.
Proof. Assume by contradiction the assertion of the proposition is false. Then there
exists a sequence fff j g such that ff
Z
and either the condition (a) or the condition (b) hold. By Theorem 2.1 and taking
into account that rZ(x k ; ffl) T d k - 0 by Condition 2, we can find a point
Z
where Q(u k ) is a symmetric matrix belonging to @ 2 Z(u k ; ffl).
Therefore, by (14) and (15), we have:
Now we consider two cases. If condition (a) holds, then krZ(x k ; ffl)k 6= 0. We have
that dividing both sides
of (16) by ff 2
, by taking into account that by making the limit for
and by recalling that the sequence fQ(u k )g is bounded, we obtain the contradiction
If condition (b) holds, then x k is a first order stationary point of Problem (P) and
H k is not positive semidefinite. We have, by Proposition 2.4 (b), that rZ(x k ;
so that rZ(x k ; ffl) T s dividing both sides of (16)
by ff 2t(x k )
by making the limit for recalling that the sequence fQ(u k )g is
bounded, and by recalling that Condition 3 implies
with we obtain from (16)
which, recalling that H k is not positive semidefinite, contradicts Condition 2.
Proposition 3.1 shows that Algorithm M can possibly fail to produce a new point
only if, for some k, \nabla Z(x_k; \varepsilon) = 0 and H_k is positive semidefinite; supposing that this trivial case does not
occur, the next theorem illustrates the behaviour of an infinite sequence generated by
Algorithm M.
Theorem 3.2 Let fx k g be an infinite sequence produced by Algorithm M. Then, every
limit point x^* of \{x_k\} is such that \nabla Z(x^*; \varepsilon) = 0.
Proof. Since the sequence fZ(x k ; ffl)g is monotonically decreasing, Z is continuous
and fx k g is bounded by Assumption C, it follows that fZ(x k ; ffl)g converges. Hence
lim
Then, by recalling the acceptability criterion of the line search, Condition 1 and Condition
2, we have:
Therefore, (17), (18), Condition 1 and Condition 2 yield:
The boundness of s k and d k , Condition 1, Condition 2, (19) and (20) imply in turn:
Suppose now, by contradiction, that there exists a converging subsequence fx k gK 1
whose limit point x is not a stationary point. For semplicity and without loss of
generality we can rename the subsequence fx k gK 1
by fx k g.
Condition 1, (19) and rZ(x ; ffl) 6= 0 imply
By (23) we have that there exists an index -
k such that, for all k -
Z
for some oe k 2 [oe Theorem 2.1 and taking into account that rZ(x k ; ffl) T d k - 0
by Condition 2, we can find, for any k -
k, a point
Z
with From (24) and (25) It follows that
Dividing both sides by
and by simple manipulations we obtain
By (21) and (22) we have Condition 1,
and since the sequence fQ(u k )g is bounded,
we have by (27)
lim
Condition 1 now implies that \nabla Z(x^*; \varepsilon) = 0, which contradicts the assumption that the subsequence
does not converge to an unconstrained stationary point, and this proves the theorem.
In the next sections, given x k , we indicate by M(x k ) the new point produced by the
Algorithm M described above.
3.2 Updating ffl to guarantee convergence to stationary points
of Problem (P): Algorithm SOC
In this section we show that it is possible to update in a simple way the value of the
penalty parameter ffl while minimizing the penalty function Z by Algorithm M, so that
every limit point of the sequence of points generated is a first order stationary point of
Problem (P). This is accomplished by the Algorithm SOC below. In the next section we
shall show that actually, under some additional conditions, the limit points generated by
Algorithm SOC are also second order stationary points of Problem (P). This motivates
the name SOC, which stands for Second Order Convergence.
Algorithm SOC
Step 0: Select x_0 and \varepsilon_0 > 0. Set k = 0.
Step 1: If \nabla Z(x_k; \varepsilon_k) = 0, go to Step 2; else go to Step 3.
Step 2: If max_i[g_i(x_k)] passes the feasibility test and H_k is not positive semidefinite, go to Step 4;
otherwise, if max_i[g_i(x_k)] fails the feasibility test, go to Step 5; else stop.
Step 3: If \nabla Z(x_k; \varepsilon_k) passes the test suggested by Proposition 2.5, go to Step 4; else go to Step 5.
Step 4: Compute x_{k+1} = M(x_k), set \varepsilon_{k+1} = \varepsilon_k and k = k + 1, and go to Step 1.
Step 5: Set x_{k+1} = x_k, reduce the penalty parameter (\varepsilon_{k+1} < \varepsilon_k), set k = k + 1, and go to Step 1.
Algorithm SOC is related to similar schemes already proposed in the literature (see,
e.g., the review paper [9] and references therein). The core step is Step 3, where, at
each iteration, the decision of whether to update \varepsilon is taken. This step is obviously
motivated by Proposition 2.5 and Proposition 2.4 (a).
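The interplay between the inner minimization and the updating of the penalty parameter can be summarized by the following skeleton; the decision tests of Steps 1-3 are deliberately represented by abstract callbacks, and the reduction factor delta is an assumed constant.

```python
def algorithm_soc(x0, eps0, m_step, keep_eps, stop_test, delta=0.1, max_iter=10_000):
    """Skeleton of Algorithm SOC: Algorithm M wrapped in a penalty-parameter update.

    m_step(x, eps)   : the new point M(x) produced by one iteration of Algorithm M on Z(., eps);
    keep_eps(x, eps) : True if the current eps passes the tests of Steps 1-3
                       (placeholder for the tests based on Proposition 2.5);
    stop_test(x, eps): True if x is accepted as a stationary point
                       (placeholder for the termination tests of Steps 1-2).
    """
    x, eps = x0, eps0
    for _ in range(max_iter):
        if stop_test(x, eps):
            break                       # x is returned as the computed stationary point
        if keep_eps(x, eps):
            x = m_step(x, eps)          # Step 4: take one step of Algorithm M
        else:
            eps = delta * eps           # Step 5: reduce the penalty parameter, keep x
    return x, eps
```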
Theorem 3.3 Algorithm SOC is well defined. Furthermore, let \{x_k\} and \{\varepsilon_k\} be the
sequences produced by Algorithm SOC. Then, either the algorithm terminates after p
iterations in a first order stationary point x_p of Problem (P), or there exist an index \bar{k}
and an \bar{\varepsilon} > 0 such that, for every k \ge \bar{k}, \varepsilon_k = \bar{\varepsilon}, and every limit point of the sequence
\{x_k\} is a first order stationary point of Problem (P).
Proof. The algorithm is well defined because every time we reach Step 4 Proposition
3.1 ensures that M(x_k) is well defined. If the algorithm stops after a finite number
of iterations, then, by the instructions of Steps 1 and 2, we have rZ(x
The thesis then follows by Proposition 2.4 (a). Therefore,
assume that an infinite sequence of points is generated. Assumption C and Proposition
2.5 guarantee that \varepsilon is updated only a finite number of times. So, after a finite number
of times ffl Algorithm SOC reduces to the application of Algorithm M to
Z(x; -ffl). Then, by Theorem 3.2, every limit point -
x of fx k g is such that rZ(-x;
Since the test at Step 3 is eventually always satisfied, this implies, in turn, that
The thesis now follows by Proposition 2.4 (a).
3.3 Algorithm SOC: Second order convergence
In this section we prove that under additional suitable conditions, every limit point
of the sequence fx k g generated by Algorithm SOC actually satisfies the KKT second
order necessary conditions. To establish this result we need the two further conditions
below.
Condition 4. Let \{x_k\} be a sequence converging to a first order stationary point of
Problem (P). Then the directions d_k and the matrices H_k satisfy
\lim_{k\to\infty} d_k^T H_k d_k = 0 \ \Longrightarrow \ \lim_{k\to\infty} \min\{0, \lambda_{\min}(H_k)\} = 0.
Condition 5. Let \{x_k\} be a sequence converging to a first order stationary point \bar{x} of
Problem (P), and let \varepsilon > 0 be given. Then every limit point of the sequence \{H_k\} belongs to \tilde{\partial}^2 Z(\bar{x}; \varepsilon).
Condition 4 mimics similar standard conditions in the unconstrained case, where H k
is the Hessian of the objective function. Roughly speaking, it requires the direction d k
to be a sufficiently good approximation to an eigenvector corresponding to the smallest
eigenvalue of H k . Condition 5, similarly to Condition 3, imposes a connection between
the matrices H k and the generalized Hessian of Z.
The following theorem establishes the main result of this paper.
Theorem 3.4 Let fx k g be the sequence produced by Algorithm SOC. Then, either the
algorithm terminates at a second order stationary point x p of Problem (P) or it produces
an infinite sequence fx k g such that every limit point x of fx k g is a second order
stationary point of Problem (P).
Proof. If Algorithm SOC terminates after a finite number of iterations we have, by
Theorem 3.3, that x p is a first order stationary point of Problem (P). On the other
hand, by the instructions of Step 2 and by Condition 5, we have that H p is positive
semidefinite and belongs to ~
Therefore, the assertion follows from Theorem
2.3. We then pass to the case in which an infinite sequence is generated.
We already know, by Theorem 3.3, that every limit point of the sequence is a
first order stationary point of Problem (P). We also know that eventually ffl k is not
updated, so that ffl Then, by Theorem 2.3 it will suffice to show that ~
contains a positive semidefinite element. Suppose the contrary. Let fx k g converge to
x . Reasoning as in the beginning of the proof of Theorem 3.2, we have that (21) and
still hold. Then, we can assume, renumbering if necessary, that
0: (30)
In fact, if this is not the case, (22), Conditions 4 and 5 imply the contradiction that
tends to a positive semidefinite element in ~
Then, by (30) and by repeating again the arguments used in the proof of Theorem
3.2, we have that there exists an index - k such that, for all k - k, (26) holds. From (26)
we get, recalling Condition 3:
which, taking into account that, by Condition 1, rZ(x and the fact that
dividing both sides by
we have:
By (21) and (22) we have so that fQ(u k )g is bounded, while by Condition 5 we
have, renumbering if necessary, H
by Condition 2, (31) implies :
lim
and hence, by recalling Condition 4, we have that - min (H Condition
5, contradicts the fact the subsequence fx k g converges to a KKT point where every
element in ~
-ffl) is not positive semidefinite.
4 Practical realization
In this section we show how we can calculate directions s k , d k and matrices H k satisfying
Conditions 1-5 required in the previous sections.
Let the matrix H k be defined as
and A(x) is any estimate of the active set with the
property that, in a neighborhood of a stationary point x̄, A(x) = I_0(x̄). In Section 2.3
we discussed in more detail some possible choices for A(x) and gave adequate references.
Note also that, in a stationary point -
x, the matrix H k belongs to ~
Given
this matrix we have a wide range of choices for s k and d k .
(a) A theoretically sound option is to take s_k to be −∇Z(x_k; ε_k).
(b) A more practical choice, however, could be that of taking s_k as the solution
of the linear system (H_k + D_k) s_k = −∇Z(x_k; ε_k),
where D_k is a diagonal matrix chosen so that H_k + D_k is positive definite and
the smallest eigenvalue of the sequence {H_k + D_k} is bounded away from 0. The
matrix D_k should be 0 if the matrix H_k is positive definite with smallest eigenvalue
greater than a (small) positive threshold value. Methods for automatically
constructing the matrix D_k while solving the system (H_k + D_k) s_k = −∇Z(x_k; ε_k) are well
known and used in the unconstrained case.
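As a concrete illustration (ours, not taken from the text above; the threshold \tau is an assumed parameter), one simple rule satisfying these requirements is the uniform eigenvalue shift
  D_k = \max\{0,\ \tau - \lambda_{\min}(H_k)\}\, I ,
which gives \lambda_{\min}(H_k + D_k) = \max\{\lambda_{\min}(H_k), \tau\} \ge \tau > 0 for every k, and D_k = 0 whenever H_k is positive definite with \lambda_{\min}(H_k) \ge \tau. In practice D_k would more likely be produced implicitly by a modified Cholesky factorization than by an explicit eigenvalue computation.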
(c) Another possible choice is to take s k as the direction employed in [10, 1, 15].
For the direction d_k, we have the following options. (a) d_k can be chosen to be an eigenvector associated
with the smallest eigenvalue of the matrix H_k, with the sign possibly changed in order to ensure ∇Z(x_k; ε)^T d_k ≤ 0.
(b) Suitable approximations of the direction of point (a) calculated as indicated,
for example, in [23] and [20] could also be employed.
The design of an algorithmically effective choice for s k and d k is beyond the scope
of this paper. Here we only wanted to illustrate that a wide range of options is available;
further choices are certainly possible.
In the sequel, for the sake of concreteness, we shall assume that both s k and d k are
chosen according to the options (a) listed above. With these choices, and since we are
supposing that {x_k} remains in a bounded set, it is easy to see that the sequences {s_k} and {d_k}
are also bounded. It is also standard to show that Conditions 1, 2 and
4 are satisfied. Furthermore, if we recall that, in a neighborhood of a stationary point x̄
of Problem (P), A(x) = I_0(x̄), it is easy to see that, by the very definition of the generalized
Hessian of Z, also Condition 5 is met by our choice for the matrix H_k. In the next proposition we
show that also the more cumbersome Condition 3 is satisfied.
Proposition 4.1 The sequence of matrices defined by (32) satisfies Condition 3.
Proof. Let sequences fx k g and fu k g converging to a stationary point of Problem (P)
be given. Let fQ k g be any sequence such that Q k 2 @ 2 Z(u k ; ffl) for every k. By Theorem
2.2 (c) we know that we can assume, without loss of generality, that eventually, for x k
sufficiently close to the point -
x, the matrix Q k has the following form, for some integer
where fae k g is a sequence converging to 0 and where, for each i and for each k, fi k
A. Since, for each i, A k
sufficiently large. We also recall that if A and B are two s × r matrices we can write
A B^T = \sum_{j=1}^{r} a_j b_j^T,
where a_j and b_j are the j-th columns of A and B respectively. By employing a Taylor
expansion we can write
r- A k
rg A k
r- I 0
rg I 0
r- A k
rg A k
r- I 0
rg I 0
fflB @
r- D k
. Now, if we take into account the previous
formula and we set
a k
we can write
From this relation the thesis of the proposition readily follows by setting
5 Remarks and conclusions
We have presented a negative curvature line search algorithm for the minimization of a
nonlinear function subject to nonlinear inequality constraints. The main novel feature
of this method is that every limit point of the sequence it generates satisfies both the
KKT first and second order necessary optimality conditions. The main tools employed
to obtain this result are a continuously differentiable penalty function and some results
from the theory of LC 1 functions.
For the sake of simplicity we did not include equality constraints in our analysis, but
they can be easily handled. All the results of this paper go through if one also considers
equality constraints; it is sufficient to use an analogue of the penalty function Z
in which equality constraints are included, see [12].
Another point which deserves attention is the set of Assumptions A, B and C that we
employ. These assumptions are mainly dictated by the penalty function considered;
however, they can be relaxed if a more sophisticated choice is made for the penalty
function. We chose to use the (relatively) simple function Z to concentrate on the
main issues related to second order convergence; however, if the continuously differentiable
function proposed in [7] is employed instead of Z, we can improve on
Assumptions A, B and C. For example, Assumption A can be relaxed to: for any feasible
x, the gradients ∇g_i(x), i ∈ I_0(x), are linearly independent. More significantly,
Assumptions B and C can also be considerably relaxed, but to illustrate this point we
would need to introduce some technical notation, and we prefer to omit this here and to refer
the reader to [7] for more details. We only point out that Assumption C can be replaced
by natural and mild assumptions on the problem data which guarantee that the level
sets of the penalty function are compact.
--R
Constrained Optimization and Lagrange Multiplier Methods.
A trust region algorithm for nonlinearly constrained optimization.
Optimization and Nonsmooth Analysis.
An interior trust region approach for nonlinear minimization subject to bounds.
A new trust-region algorithm for equality constrained optimization
Global convergence of a class of trust region algorithms for optimization with simple bounds.
A continuously differentiable exact penalty function for nonlinear programming problems with unbounded feasible set.
On the convergence theory of trust-region- based algorithms for equality-constrained optimization
"Algorithms for continuous optimization"
"System Modelling and Optimization"
A continuously differentiable exact penalty function for nonlinear programming problems with inequalty constraints.
Exact penalty functions in constrained optimiza- tion
Convergence to a second-order point for a trust-region algorithm with a nonmonotonic penalty parameter for constrained optimization
"La Sapienza"
Globally and quadratically convergent exact penalty based methods for inequality constrained problems.
Nonmonotone curvilinear line search methods for unconstrained optimization.
Newton methods for large-scale linear inequality constrained minimization
Generalized Hessian matrix and second-order optimality conditions for problems with C 1
New results on a continuously differentiable exact penalty function.
"La Sapienza"
A modification of Armijo's step-size rule for negative curva- ture
Nonlinear Programming: Theory
Newton's method with a model trust region modification.
--TR
--CTR
Giovanni Fasano , Massimo Roma, Iterative computation of negative curvature directions in large scale optimization, Computational Optimization and Applications, v.38 n.1, p.81-104, September 2007
Immanuel M. Bomze , Laura Palagi, Quartic Formulation of Standard Quadratic Optimization Problems, Journal of Global Optimization, v.32 n.2, p.181-205, June 2005
X. Q. Yang , X. X. Huang, Partially Strictly Monotone and Nonlinear Penalty Functions for Constrained Mathematical Programs, Computational Optimization and Applications, v.25 n.1-3, p.293-311 | inequality constrained optimization;penalty function;KKT second order necessary conditions;LC 1 function;negative curvature direction |
297706 | A Trace Cache Microarchitecture and Evaluation. | AbstractAs the instruction issue width of superscalar processors increases, instruction fetch bandwidth requirements will also increase. It will eventually become necessary to fetch multiple basic blocks per clock cycle. Conventional instruction caches hinder this effort because long instruction sequences are not always in contiguous cache locations. Trace caches overcome this limitation by caching traces of the dynamic instruction stream, so instructions that are otherwise noncontiguous appear contiguous. In this paper, we present and evaluate a microarchitecture incorporating a trace cache. The microarchitecture provides high instruction fetch bandwidth with low latency by explicitly sequencing through the program at the higher level of traces, both in terms of 1) control flow prediction and 2) instruction supply. For the SPEC95 integer benchmarks, trace-level sequencing improves performance from 15 percent to 35 percent over an otherwise equally sophisticated, but contiguous, multiple-block fetch mechanism. Most of this performance improvement is due to the trace cache. However, for one benchmark whose performance is limited by branch mispredictions, the performance gain is almost entirely due to improved prediction accuracy. | Introduction
High performance superscalar processor organizations
divide naturally into an instruction fetch mechanism and an
instruction execution mechanism. These two mechanisms
are separated by instruction issue buffers, for example, issue
queues or reservation stations. Conceptually, the instruction
fetch mechanism acts as a "producer" which fetches,
decodes, and dispatches instructions into the buffer. The instruction
execution engine is the "consumer" which issues
instructions from the buffer and executes them, subject to
data dependence and resource constraints.
The instruction issue buffers are collectively called the
instruction window. The window is the mechanism for exposing
instruction-level parallelism (ILP) in sequential pro-
grams: a larger window increases the opportunity for finding
data-independent instructions that may issue and execute
in parallel. Thus, the trend in superscalar design is to
construct larger instruction windows, and provide wider is-
sue/execution paths to exploit the corresponding increase in
available ILP.
These trends place increased demand on the instruction
supply mechanism. In particular, the peak instruction fetch
rate should match the peak instruction issue rate, or the benefit
of aggressive ILP techniques is diminished.
In this paper, we are concerned with instruction fetch
bandwidth becoming a performance bottleneck. Current
fetch units are limited to one branch prediction per cycle
and can therefore fetch no more than one basic block per
cycle. Previous studies have shown, however, that the average
size of basic blocks in integer codes is small, around
four to six instructions [30, 3]. While fetching a single basic
block each cycle is sufficient for implementations that
issue at most four instructions per cycle, it is not for processors
with higher peak issue rates. If multiple branch prediction
[30, 3, 4, 26] is used, then the fetch unit can at least
fetch multiple contiguous basic blocks in a cycle. As will
be shown in this paper, fetching multiple contiguous basic
blocks is important, but the upper bound on fetch band-width
is still limited due to the frequency of taken branches.
Therefore, if a taken branch is encountered, it is necessary
to fetch instructions down the taken path in the same cycle
that the branch is fetched.
1.1. The trace cache
The job of the fetch unit is to feed the dynamic instruction
stream to the decoder. A problem is that instructions
are placed in the cache in their compiled order. Storing
programs in this static form favors fetching code with infrequent
taken branches or with large basic blocks. Neither
of these cases is typical of integer programs.
Figure
1(a) shows an example dynamic sequence of basic
blocks as they are stored in the instruction cache. The
arrows indicate taken branches. Even with multiple branch
predictions per cycle, four cycles are required to fetch the
instructions in basic blocks ABCDE because the instructions
are stored in noncontiguous cache locations.
(a) Instruction cache. (b) Trace cache.
Figure 1. Storing a noncontiguous sequence of instructions.
It is for this reason that several researchers have proposed
a special instruction cache for capturing long dynamic
instruction sequences [15, 22, 23, 24, 21]. This structure
is called a trace cache because each line stores a snap-
shot, or trace, of the dynamic instruction stream. Referring
again to Figure 1, the same dynamic sequence of blocks that
appear noncontiguous in the instruction cache are contiguous
in the trace cache (Figure 1(b)).
The primary constraint on a trace is a maximum length,
determined by the trace cache line size. There may be
any number of other implementation-dependent constraints,
such as the number and type of embedded control transfer
instructions, or special terminating conditions for tuning
various performance factors [25].
A trace is fully specified by a starting address and a sequence
of branch outcomes which describe the path fol-
lowed. The first time a trace is encountered, it is allocated a
line in the trace cache. The line is filled as instructions are
fetched from the instruction cache. If the same trace is encountered
again in the course of executing the program, i.e.
the same starting address and predicted branch outcomes,
it will be available in the trace cache and is fed directly to
the decoder in a single cycle. Otherwise, fetching proceeds
normally from the instruction cache.
Other high bandwidth fetch mechanisms have been proposed
that are based on the conventional instruction cache
[30, 4, 3, 26]. Every cycle, instructions from noncontiguous
locations are fetched from the instruction cache and assembled
into the predicted dynamic sequence. This typically
requires multiple pipeline stages: (1) a level of indirection
through special branch target tables to generate pointers to
all of the noncontiguous instruction blocks, (2) a moderate
to highly interleaved instruction cache to provide simultaneous
access to multiple lines, with the possibility for bank
conflicts, and (3) a complex alignment network to shift and
align blocks into dynamic program order, ready for decod-
ing/renaming.
The trace cache approach avoids this complexity by
caching dynamic instruction sequences themselves, rather
than information for constructing them. If the predicted dynamic
sequence exists in the trace cache, it does not have
to be recreated on the fly from the instruction cache's static
representation. The cost of this approach is redundant instruction
storage: the same instructions may reside in both
the primary cache and the trace cache, and there is redundancy
among different lines in the trace cache.
1.2. Related prior work
Alternative High Bandwidth Fetch Mechanisms
Four previous studies have focused on mechanisms to
fetch multiple, possibly noncontiguous basic blocks each
cycle from the instruction cache. These are the branch address
cache [30], the subgraph predictor [4], the collapsing
buffer [3], and the multiple-block ahead predictor [26].
Trace Cache Development
Melvin, Shebanow, and Patt proposed the fill unit and
multinodeword cache [18, 16]. The first work qualitatively
describes the performance implications of smaller or
larger atomic units of work at the instruction-set architecture
(ISA), compiler, and hardware levels. The authors argue
for small compiler atomic units and large execution
atomic units to achieve highest performance. The fill unit
is proposed as the hardware mechanism for compacting the
smaller compiler units into the large execution units, which
are then stored for reuse in a decoded instruction cache. The
second work evaluates the performance potential of
large execution atomic units. Although this work only evaluates
sizes up to that of a single VAX instruction and a basic
block, it also suggests joining two consecutive basic blocks
if the intervening branch is "highly predictable".
In [17], software basic block enlargement is discussed.
In the spirit of trace scheduling [5] and trace selection
[11], the compiler uses profiling to identify candidate basic
blocks for merging into a single execution atomic unit. The
hardware sequences at the level of execution atomic units as
created by the compiler. The advantage of this approach is
the compiler can optimize and schedule across basic block
boundaries.
Franklin and Smotherman [6] extended the fill unit's
role to dynamically assemble VLIW-like instruction words
from a RISC instruction stream, which are then stored in a
shadow cache. This structure eases the issue complexity of
a wide issue processor. They further applied the fill unit and
a decoded instruction cache to improve the decoding performance
of a complex instruction-set computer (CISC) [27].
In both cases the cache lines are augmented to store trees to
improve the utilization of each line.
Four works have independently proposed the trace cache
as a complexity-effective approach to high bandwidth instruction
fetching. Johnson [15] proposed the expansion
cache, which addresses cache alignment, branch prediction
throughput, and instruction run merging. The expansion
process also predetermines the execution schedule of instructions
in a line. Unlike a pure VLIW cache, the schedule
may consist of multiple cycles via cycle tagging. Peleg
and Weiser [22] describe the design of a dynamic flow
instruction cache which stores instructions independent of
their virtual addresses, the defining characteristic of trace
caches. Rotenberg, Bennett, and Smith [23, 24] motivate
the concept with comparisons to other high bandwidth fetch
mechanisms proposed in the literature, and defines some of
the trace cache design space. Patel, Friendly, and Patt [21]
expand upon and present detailed evaluations of this design
space, arguing for a more prominent role of the trace cache.
The mispredict recovery cache proposed by Bondi,
Nanda, and Dutta [1] caches instruction threads from alternate
paths of mispredicted branches. The goal of this work
is to quickly bypass the multiple fetch and decode stages of
a long CISC pipeline following a branch mispredict. Nair
and Hopkins [19] employ dynamic instruction formatting to
cache large scheduled groups, similar in spirit to the cycle
tagging approach of the expansion cache.
There has also been recent work incorporating trace
caches into new processing models. Vajapeyam and Mitra
[29], Sundararaman and Franklin [28], and Rotenberg,
Jacobson, Sazeides, and Smith [25] exploit the data and
control hierarchy implied by traces to overcome complexity
and architectural hurdles of superscalar processors. Jacob-
son, Rotenberg, and Smith [14] propose a control prediction
model well suited to the trace cache called next trace pre-
diction, discussed in later sections. Friendly, Patel, and Patt
propose a new processing model called inactive issue for
reducing the effects of branch mispredictions [7], and dynamically
optimizing traces before storing them in the trace
cache, reducing their execution time significantly [8].
Microcode, VLIW, and Block-Structured ISAs
Clearly the concept of traces exists in the software
realm of instruction-level parallelism. Early work by Fisher
[5], Hwu and Chang [11], and others on trace scheduling
and trace selection for microcode recognized the problem
imposed by branches on code optimization. Subsequent
VLIW architectures and novel ISA techniques, for example
[12, 10], further promote the ability to schedule long
sequences of instructions containing multiple branches.
2. Trace cache microarchitecture
In Section 1.1 we introduced the concept of the trace
cache - an instruction cache which captures dynamic instruction
sequences, or traces. We now present a microarchitecture
organized around traces.
2.1. Trace-level sequencing
The premise of the proposed microarchitecture, shown
in
Figure
2, is to provide high instruction fetch bandwidth
with low latency. This is achieved by explicitly sequencing
through the program at the higher level of traces, both for
(1) control flow prediction and (2) supplying instructions.
Figure 2. Microarchitecture.
A next trace predictor [14] treats traces as basic units
and explicitly predicts sequences of traces. Because traces
are the unit of prediction, rather than individual branches,
high branch prediction throughput is implicitly achieved
with only a single trace prediction per cycle. Jacobson et
al [14] demonstrated that explicit trace prediction not only
removes fundamental constraints on the number of branches
in a trace (usually a consequence of adapting single branch
predictors to multiple branch predictor counterparts [23]),
but it also holds the potential for achieving higher overall
branch prediction accuracy than single branch predictors.
Details of next trace prediction are presented in Section 2.3.
The output of the trace predictor is a trace identifier: a
given trace is uniquely identified by its starting PC and the
outcomes of all conditional branches embedded in the trace.
The trace identifier is used to lookup the trace in the trace
cache. The index into the trace cache can be derived from
just the starting PC, or a combination of PC and branch out-
comes. Using branch outcomes in the index has the advantage
of providing path associativity - multiple traces emi-
nating from the same start PC can reside simultaneously in
the trace cache even if it is direct mapped [24].
The output of the trace cache is one or more traces, depending
on the cache associativity. A trace identifier is
stored with each trace in order to determine a trace cache
hit, analogous to the tag of conventional caches. The desired
trace is present in the cache if one of the cached trace
identifiers matches the predicted trace identifier.
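For concreteness, a trace cache line and its hit check might be laid out as follows; this C sketch is ours, and the field widths and names are assumptions rather than details taken from the paper:

  #include <stdint.h>
  #include <stdbool.h>

  #define MAX_TRACE_LEN 16  /* maximum trace length (Section 2.4) */

  /* A trace is uniquely identified by its starting PC plus the outcomes
     of all conditional branches embedded in the trace. */
  typedef struct {
      uint32_t start_pc;
      uint32_t branch_mask;   /* one bit per embedded branch, 1 = taken */
      uint8_t  num_branches;
  } trace_id_t;

  typedef struct {
      bool       valid;
      trace_id_t id;                    /* plays the role of a tag */
      uint32_t   insts[MAX_TRACE_LEN];  /* the cached dynamic sequence */
      uint8_t    num_insts;
  } trace_line_t;

  /* The desired trace is present if a cached identifier matches the
     predicted identifier. */
  static bool trace_hit(const trace_line_t *line, const trace_id_t *pred)
  {
      return line->valid &&
             line->id.start_pc     == pred->start_pc &&
             line->id.num_branches == pred->num_branches &&
             line->id.branch_mask  == pred->branch_mask;
  }

With an associative trace cache, trace_hit would simply be applied to every line in the selected set.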
The trace predictor and trace cache together provide fast
trace-level sequencing. Unfortunately, trace-level sequencing
does not always provide the required trace. This is particularly
true at the start of the program or when a new region
of code is reached - neither the trace predictor nor the
trace cache has "learned" any traces yet. Instruction-level
sequencing, discussed in the next section, is required to construct
non-existent traces or repair trace mispredictions.
2.2. Instruction-level sequencing
The outstanding trace buffers in Figure 2 are used to (1)
construct new traces that are not in the trace cache and (2)
track branch outcomes as they become available from the
execution engine, allowing detection of mispredictions and
repair of the traces containing them.
Each fetched trace is dispatched to both the execution
engine and an outstanding trace buffer. In the case of a
trace cache miss, only the trace prediction is received by
the allocated buffer. The trace prediction itself provides
enough information to construct the trace from the instruction
cache, although this typically requires multiple cycles
due to predicted-taken branches.
In the case of a trace cache hit, the trace is dispatched
to the buffer. This allows repair of a partially mispredicted
trace, i.e. when a branch outcome returned from execution
does not match the path indicated within the trace. In the
event of a branch misprediction, the trace buffer begins reconstructing
the tail of the trace (or all of the trace if the
start PC is incorrect) using the corrected branch target and
the instruction cache. For subsequent branches in the trace,
a second-level branch predictor is used to make predictions.
We advocate an aggressive instruction cache design for
providing robust performance over a broad range of trace
cache miss rates. The instruction cache is 2-way interleaved
so that up to a full cache line can be fetched each cycle,
independent of PC alignment [9]. The second-level branch
prediction mechanism is simple - a 2-bit counter and branch
target stored with each branch. Logically, the instructions,
counters, and targets are all stored in the instruction cache
(as opposed to a separate cache and branch target buffer) to
allow fast, parallel prediction of any number of not-taken
branches. We call this instruction fetch mechanism SEQ.n
in keeping with the terminology of [24] - any number (de-
noted n) of sequential basic blocks, up to the line size, can
be fetched in a single cycle.
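For reference, the per-branch state of this second-level predictor can be sketched as a standard 2-bit saturating counter plus a stored target; the encoding below is a common convention and is our assumption rather than a detail given in the paper:

  #include <stdint.h>

  typedef struct {
      uint8_t  counter;  /* 0..3; predict taken when counter >= 2 */
      uint32_t target;   /* branch target stored alongside the branch */
  } branch_info_t;

  static int predict_taken(const branch_info_t *b)
  {
      return b->counter >= 2;
  }

  static void update_branch(branch_info_t *b, int taken)
  {
      if (taken  && b->counter < 3) b->counter++;
      if (!taken && b->counter > 0) b->counter--;
  }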
When a trace buffer is through constructing its trace, it is
written into the trace cache and dispatched to the execution
engine. If the newly constructed trace is a result of misprediction
recovery, the trace identifier is also sent to the trace
predictor for repairing its path history.
2.3. Next trace prediction
The next trace predictor, shown in Figure 3, is based on
Jacobson's work on path-based, high-level control flow prediction
[13, 14].
An index into a correlated prediction table is formed
from the sequence of past trace identifiers. The hash function
used to generate the index is called a DOLC function:
'D'epth specifies the path history depth in terms of traces;
'O'ldest indicates the number of bits selected from each trace
identifier except the two most recent ones; 'L'ast and 'C'urrent
indicate the number of bits selected from the second-most
recent and most recent trace identifiers, respectively.
Figure 3. Jacobson's next trace predictor.
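As an illustration, the DOLC index computation could be coded as below; the bit-packing and the final XOR fold to 16 bits are our assumptions, since the exact hash used in [14] is not spelled out here:

  #include <stdint.h>

  /* DOLC parameters: Depth (traces of history), and bits taken from the
     Oldest identifiers, the Last (second-most recent) one, and the
     Current (most recent) one. */
  typedef struct { int depth, oldest, last, current; } dolc_t;

  /* history[0] is the most recent trace identifier, history[1] the second
     most recent, and so on; at least 'depth' entries are assumed. */
  static uint16_t dolc_index(const dolc_t *p, const uint32_t *history)
  {
      uint32_t h = history[0] & ((1u << p->current) - 1);
      h = (h << p->last) | (history[1] & ((1u << p->last) - 1));
      for (int i = 2; i < p->depth; i++)
          h = (h << p->oldest) | (history[i] & ((1u << p->oldest) - 1));
      return (uint16_t)(h ^ (h >> 16));   /* fold down to a 16-bit index */
  }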
Each entry in the correlated prediction table contains a
trace identifier and a 2-bit counter for replacement. The
predictor is augmented with several other mechanisms [14].
- Hybrid prediction. In addition to the correlated table,
a second, smaller table is indexed with only the most
recent trace identifier. This second table requires a
shorter learning time and suffers less aliasing pressure.
- Return history stack. At call instructions, the path history
is pushed onto a special stack. When the corresponding
return point is reached, path history before
the call is restored. This improves accuracy because
control flow following a subroutine is highly correlated
with control flow before the call.
- Alternate trace identifier. An entry in the correlated
table may be augmented with an alternate trace predic-
tion, a form of associativity in the predictor. If a trace
misprediction is detected, the outstanding trace buffer
responsible for repairing the trace can use the alternate
prediction if it is consistent with known branch
outcomes in the trace. If so, the trace buffer does
not have to resort to the second-level branch predictor;
instruction-level sequencing is avoided altogether
if the alternate trace also hits in the trace cache.
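In outline, the consistency test for the alternate prediction amounts to a masked comparison of branch outcomes; the following C fragment is our sketch, with the bookkeeping fields (resolved-branch mask and outcomes) as assumptions:

  #include <stdint.h>

  /* alt_outcomes: branch outcomes encoded in the alternate trace prediction.
     known_mask:   bit i set if embedded branch i has already been resolved.
     known_out:    the resolved outcomes (1 = taken) for those branches.    */
  static int alternate_usable(uint32_t alt_outcomes,
                              uint32_t known_mask,
                              uint32_t known_out)
  {
      /* Usable only if it agrees with every outcome already known. */
      return (alt_outcomes & known_mask) == (known_out & known_mask);
  }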
2.4. Trace selection
The performance of the trace cache is strongly dependent
on trace selection, the algorithm used to divide the dynamic
instruction stream into traces. Trace selection primarily affects
average trace length and trace cache hit rate, both of
which, in turn, affect fetch bandwidth. The interaction between
trace length and hit rate, however, is not well under-
stood. Preliminary studies indicate that longer traces result
in lower hit rates, but this may be an artifact of naive trace
selection policies. Sophisticated selection techniques that
are conscious of control flow constructs - loop back-edges,
loop fall-through points, call sites, and re-convergent points
in general - may lead to different conclusions. The reader
is referred to [21, 25, 20] for a few interesting control-flow-
conscious selection heuristics.
Trace selection in this paper is constrained only by
the maximum trace length of 16 instructions, and indirect
branches (returns and jump/call indirects) terminate traces.
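Expressed as code, this selection rule is simply the following loop; the sketch is ours, and is_return/is_indirect_jump_or_call stand for ISA-specific decode tests that are assumed rather than defined here:

  #include <stdint.h>

  #define MAX_TRACE_LEN 16

  extern int is_return(uint32_t inst);                /* assumed decode helpers */
  extern int is_indirect_jump_or_call(uint32_t inst);

  /* Copy dynamic instructions into a trace, stopping at the length limit
     or immediately after an indirect branch. Returns the trace length.   */
  static int select_trace(const uint32_t *dyn_stream, int avail, uint32_t *trace)
  {
      int n = 0;
      while (n < MAX_TRACE_LEN && n < avail) {
          uint32_t inst = dyn_stream[n];
          trace[n++] = inst;
          if (is_return(inst) || is_indirect_jump_or_call(inst))
              break;
      }
      return n;
  }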
2.5. Hierarchical sequencing
In
Figure
4(a), a portion of the dynamic instruction
stream is shown with a solid horizontal arrow from left to
right. The stream is divided into traces T1 through T5. This
sequence of traces is produced independent of where the
instructions come from - trace predictor/trace cache, trace
predictor/instruction cache, or branch predictor/instruction
cache.
Figure 4. Two sequencing models: (a) hierarchical; (b) non-hierarchical.
For example, if the trace predictor mispredicts T3, the
trace buffer assigned to T3 resorts to instruction-level se-
quencing. This is shown in the diagram as a series of steps,
depicting smaller blocks fetched from the instruction cache.
The trace buffer strictly adheres to the boundary between
T3 and T4, dictated by trace selection, even if the final instruction
cache fetch produces a larger block of sequential
instructions than is needed by T3 itself.
We call this process hierarchical sequencing because
there exists a clear distinction between inter-trace control
flow and intra-trace control flow. Inter-trace control flow,
i.e. trace boundaries, is effectively pre-determined by trace
selection and is unaffected by dynamic effects such as trace
cache misses and mispredictions.
A contrasting sequencing model is shown in Figure 4(b).
In this model, trace selection is "reset" at the point of the
mispredicted branch, producing the shifted traces T3', T4',
and T5'. This sequencing model does not work well with
path-based next trace prediction. After resolving the branch
misprediction, trace T3' and subsequent traces must somehow
be predicted. However, this requires a sequence of
traces leading to T3' and no such sequence is available (in-
dicated with question marks in the diagram).
A potential problem with hierarchical sequencing is mis-prediction
recovery latency. Explicit next trace prediction
uses a level of indirection: a trace is first predicted, and
then the trace cache is accessed. This implies an extra cycle
is added to the latency of misprediction recovery. How-
ever, this extra cycle is not exposed. First, consider the
case in which the alternate trace prediction is used. The
primary and alternate predictions are supplied by the trace
predictor at the same time, and stored together in the trace
buffer. Therefore, the alternate prediction is immediately
available for accessing the trace cache when the misprediction
is detected. Second, if the alternate is not used, then
the second-level branch predictor and instruction cache are
used to fetch instructions from the correct path. In this case,
the instruction cache is accessed immediately with the correct
branch target PC returned by the execution engine.
In our evaluation, we assume a trace must be fully constructed
before any of its instructions are dispatched to the
execution engine, because traces are efficiently renamed as
a unit [29, 25]. This aggravates both trace misprediction and
trace cache miss recovery latency. We want to make it clear,
however, that this is not due to any fundamental constraint
of the fetch model, only an artifact of our dispatch model.
3. Simulation methodology
3.1. Fetch models
To evaluate the performance of the trace cache microar-
chitecture, we compare it to several more constrained fetch
models. We first determine the performance advantage of
fetching multiple contiguous basic blocks per cycle over
conventional single block fetching. Then, the benefit of
fetching multiple noncontiguous basic blocks is isolated.
In all models a next trace predictor is used for control
prediction, for two reasons. First, next trace prediction
is highly accurate, and whether predicting one or many
branches at a time, it is comparable to or better than some of
the best single branch predictors in the literature. Second, it
is desirable to have a common underlying predictor for all
fetch models so we can separate performance due to fetch
bandwidth from that due to branch prediction (more on this
in Section 3.2).
What differentiates the following models is the trace selection
algorithm.
- SEQ.1 ("sequential, 1 block"): A "trace" is a single
basic block up to 16 instructions in length.
- SEQ.n ("sequential, n blocks"): A "trace" may contain
any number of sequential basic blocks up to the
instruction limit.
("trace cache"): A trace may contain any number
of conditional branches, both taken and not-taken, up
to 16 instructions or the first indirect branch.
The SEQ.1 and SEQ.n models do not use a trace cache
because an interleaved instruction cache is capable of supplying
a "trace" in a single cycle [9] - a consequence of the
sequential selection constraint. Therefore, one may view
the SEQ.1/SEQ.n fetch unit as identical to the trace cache
microarchitecture in Figure 2, except the trace cache block
is replaced with a conventional instruction cache. That is,
the next trace predictor drives a conventional instruction
cache, and the trace buffers are used to construct "traces"
from the L2 cache/main memory if not present in the cache.
Finally, to establish an upper bound on the performance
of noncontiguous instruction fetching, we introduce a fourth
model, TC-perfect, which is identical to TC but the trace
cache always hits.
3.2. Isolating trace predictor/trace cache performance
An interesting side-effect of trace selection is that it
significantly affects trace prediction accuracy. In general,
smaller traces (resulting from more constrained trace selec-
tion) result in lower accuracy. We have determined at least
two reasons for this. First, longer traces naturally capture
longer path history. This can be compensated for by using
more trace identifiers in the path history if the traces are
small; that is, a good DOLC function for one trace length is
not necessarily good for another. For the TC model, a DOLC function
with a depth of 7 traces consistently performs well
over all benchmarks [14]. For SEQ.1 and SEQ.n, a brief
search of the design space shows that a DOLC function with a
depth of 17 traces performs well.
We have observed, however, that tuning the DOLC parameters
is not enough - trace selection affects accuracy
in other ways. The graph in Figure 5 shows trace predictor
performance using an unbounded table, i.e. using
full, unhashed path history to make predictions. The graph
shows trace mispredictions per 1000 instructions for SEQ.1,
SEQ.n, and TC trace selection, as the history depth is var-
ied. For the go benchmark, trace mispredictions for the
SEQ.n model do not dip below 8.8 per 1000 instructions,
whereas the TC model reaches as few as 8.0 trace mispredictions
per 1000 instructions. Unconstrained trace selection
results in the creation of many unique traces. While
this trace explosion generally has a negative impact on trace
cache performance, we hypothesize it also creates many
more unique contexts for making predictions. A large prediction
table can exploit this additional context.
Figure 5. Impact of trace selection on unbounded trace predictor
performance (trace mispredictions per 1000 instructions versus
history depth in traces, for the go benchmark).
We conclude that it is difficult to separate the performance
advantage of the trace cache from that of the trace
predictor, because both show positive improvement with
longer traces. Nonetheless, when we compare TC to SEQ.n
or SEQ.1, we would like to know how much benefit is derived
from the trace cache itself.
To this end, we developed a methodology to statistically
"adjust" the overall branch prediction accuracy of a given
fetch model to match that of another model. The trace
predictor itself is not adjusted - it produces predictions in
the normal fashion. However, after making a prediction,
the predicted trace is compared with the actual trace, determined
in advance by a functional simulator running in
parallel with the timing simulator. If the prediction is in-
correct, the actual trace is substituted for the mispredicted
trace with some probability. In other words, some fraction
of mispredicted traces are corrected. The probability for injecting
corrections was chosen on a per-benchmark basis to
achieve the desired branch misprediction rate.
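In simulator code this adjustment reduces to a few lines like the following; the sketch is ours, with p_correct being the per-benchmark probability mentioned above and rand() used as a stand-in random source:

  #include <stdlib.h>

  /* pred_id/actual_id are opaque trace identifiers; the actual trace comes
     from a functional simulator running alongside the timing simulator. */
  static unsigned long adjust_prediction(unsigned long pred_id,
                                         unsigned long actual_id,
                                         double p_correct)
  {
      if (pred_id != actual_id && (double)rand() / RAND_MAX < p_correct)
          return actual_id;   /* statistically inject a correction */
      return pred_id;
  }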
This methodology introduces two additional fetch mod-
els, SEQ.1-adj and SEQ.n-adj, corresponding to the "ad-
justed" SEQ.1 and SEQ.n models. Clearly these models
are unrealizable, but they are useful for performance comparisons
because their adjusted branch misprediction rates
match that of the TC model.
3.3. Simulator and benchmarks
A detailed, fully-execution driven superscalar processor
simulator is used to evaluate the trace cache microarchitec-
ture. The simulator was developed using the simplescalar
platform [2]. This platform uses a MIPS-like instruction set
and a gcc-based compiler to create binaries.
The datapath of the fetch engine as shown in Figure 2
is faithfully modeled. The next trace predictor has 2^16
entries. The DOLC functions for compressing the path history
into a 16-bit index were described earlier in Section 3.2, for
both the TC and SEQ models. The trace cache configuration
- size, associativity, and indexing - is varied. There
are sufficient outstanding trace buffers to keep the instruction
window full. The trace buffers share a single port to the
combined instruction cache and second-level branch predic-
tor. The instruction cache is 64KB, 4-way set-associative,
and 2-way interleaved. The line size is 16 instructions and
the cache hit and miss latencies are 1 cycle and 12 cycles
respectively. The second-level branch predictor consists of
2-bit counters and branch targets, assumed to be logically
stored with each branch in the instruction cache.
An instruction window of 256 instructions is used in all
experiments. The processor is 16-way superscalar, i.e. the
processor can fetch and issue up to 16 instructions each
cycle. Five basic pipeline stages are modeled. Instruction
fetch and dispatch take 1 cycle each. Issue takes at
least 1 cycle, possibly more if the instruction must stall for
operands; any 16 instructions, including loads and stores,
may issue each cycle. Execution takes a fixed latency based
on instruction type, plus any time spent waiting for a result
bus. Instructions retire in order.
For loads and stores, address generation takes 1 cycle
and the cache access is 2 cycles for a hit. The data cache
is 64KB, 4-way set-associative with a line size of 64 bytes
and a miss penalty of 14 cycles. Realistic but aggressive
memory disambiguation is modeled. Loads may proceed
ahead of any unresolved stores, and any memory hazards
are detected as store addresses become available - recovery
is via selective reissuing of misspeculated loads and their
dependent instructions [25].
Seven of the SPEC95 integer benchmarks, shown in Table 1,
are simulated to completion.
Table 1. Benchmarks.
benchmark input dataset dynamic instr. count
gcc -O3 genrecog.i 117M
jpeg vigo.ppm 166M
li queens 7 202M
perl scrabbl.pl < scrabbl.in 108M
vortex persons.250 101M
4. Results
4.1. Performance of fetch models
Figure
6 shows the performance of the six fetch models
in terms of retired instructions per cycle (IPC). The
TC model in this section uses a 64KB (instruction storage only),
4-way set-associative trace cache. The trace cache is indexed
using only the PC (i.e. no explicit path associativity,
except that afforded by the 4 ways).
Figure 6. Performance of the fetch models.
We can draw several conclusions from the graph in Figure
6. First, comparing the SEQ.n models to the SEQ.1
models, it is apparent that predicting and fetching multiple
sequential basic blocks provides a significant performance
advantage over conventional single-block fetching.
The graph in Figure 7 shows that the performance advantage
of the SEQ.n model over the SEQ.1 model ranges from
about 5% to 25%, with the majority of benchmarks showing
greater than 15% improvement. Similar results hold
whether or not branch prediction accuracy is adjusted for
the SEQ.n and SEQ.1 models.
This first observation is important because the SEQ.n
model only requires a more sophisticated, high-level control
flow predictor, and retains a more-or-less conventional
instruction cache microarchitecture.
Figure 7. Speedup of SEQ.n over SEQ.1 (improvement in IPC:
SEQ.n over SEQ.1, and SEQ.n-adj over SEQ.1-adj).
Second, the ability to fetch multiple, possibly noncontiguous
basic blocks improves performance significantly
over sequential-only fetching. The graph in Figure 8 shows
that the performance advantage of the TC model over the
SEQ.n model ranges from 15% to 35%.
Figure 8. Speedup of TC over SEQ.n (improvement in IPC, broken
down into trace cache and trace predictor contributions).
Figure
8 also isolates the contributions of next trace prediction
and the trace cache to performance. The lower part
of each bar is the speedup of model SEQ.n-adj over SEQ.n.
And since the overall branch misprediction rate of SEQ.n-
adj is adjusted to match that of the TC model, this part of
the bar approximately isolates the impact of next trace prediction
on performance. The top part of the bar therefore
isolates the impact of the trace cache on performance.
For go, which suffers noticeably more branch mispredictions
than other benchmarks, most of the benefit of the
model comes from next trace prediction. In this case, the
longer traces of the TC model are clearly more valuable for
improving the context used by the next trace predictor than
for providing raw instruction bandwidth. For gcc, however,
both next trace prediction and the trace cache contribute
equally to performance. The other five benchmarks benefit
mostly from higher fetch bandwidth.
Finally, Figure 6 shows the moderately large trace cache
of the TC model very nearly reaches the performance upper
bound established by TC-perfect (within 4%).
Table 2 shows trace- and branch-related measures. Average
trace lengths for TC range from 12.4 (li) to 15.8 (jpeg)
instructions (1.6 to over 2 times longer than SEQ.n traces).
The table also shows predictor performance: primary
and alternate trace mispredictions per 1000 instructions, and
overall branch misprediction rates (the latter is computed by
checking each branch at retirement to see if it caused a mis-
prediction, whether originating from the trace predictor or
second-level branch predictor). In all cases prediction improves
with longer traces. TC has from 20% to 45% fewer
trace mispredictions than SEQ.1, resulting in 15% (jpeg)
to 41% (m88ksim) fewer total branch mispredictions. Note
that the adjusted branch misprediction rates for the SEQ
models are nearly equal to those of TC.
Shorter traces, however, generally result in better alternate
trace prediction accuracy. Shorter traces result in (1)
fewer total traces and thus less aliasing, and (2) fewer possible
alternative traces from a given starting PC. For all
benchmarks except gcc and go, the alternate trace prediction
is almost always correct given the primary trace prediction
is incorrect - both predictions taken together result
in fewer than 1 trace misprediction per 1000 instructions.
Trace caches introduce redundancy - the same instruction
can appear multiple times in one or more traces. Table 2
shows two redundancy measures. The overall redundancy
factor, RF overall , is computed by maintaining a table of all
unique traces ever retired. Redundancy is the ratio of total
number of instructions to total number of unique instructions
for traces collected in the table. RF overall is independent
of trace cache configuration and does not capture dynamic
behavior. The dynamic redundancy factor, RF dyn , is
computed similarly, but using only traces in the trace cache
in a given cycle; the final value is an average over all cycles.
RF dyn was measured using a 64KB, 4-way trace cache.
RF overall varies from 2.9 (vortex) to 14 (go). RF dyn
is less than RF overall and only ranges between 2 and 4,
because the fixed size trace cache limits redundancy, and
perhaps temporally there is less redundancy.
4.2. Trace cache size and associativity
In this section we measure performance of the TC model
as a function of trace cache size and associativity. Figure 9
shows overall performance (IPC) for 12 trace cache config-
urations: direct mapped, 2-way, and 4-way associativity for
each of four sizes, 16KB, 32KB, 64KB, and 128KB.
Associativity has a noticeable impact on performance for
Table 2. Trace statistics.
model measure gcc go jpeg li m88k perl vort
trace length 4.9 6.2 8.3 4.2 4.8 5.1 5.8
trace misp./1000 8.8 14.5 5.2 6.9 3.5 3.4 1.5
SEQ.1 alt. trace misp./1000 2.1 4.5
branch misp. rate 5.0% 11.0% 7.7% 3.7% 2.2% 2.2% 1.1%
adjusted misp. rate 3.6% 8.2% 6.6% 3.2% 1.3% 1.4% 0.8%
trace length 7.2 8.0 9.6 6.3 6.0 7.1 8.2
trace misp./1000 7.3 12.7 4.6 6.9 3.3 3.1 1.2
SEQ.n alt. trace misp./1000 2.7 5.4 0.5 0.9 0.6 0.3 0.3
branch misp. rate 4.4% 10.1% 7.0% 3.7% 2.1% 2.0% 0.9%
adjusted misp. rate 3.6% 8.1% 6.7% 3.1% 1.3% 1.4% 0.8%
trace length 13.9 14.8 15.8 12.4 13.1 13.0 14.4
trace misp./1000 5.4 9.6 4.2 5.5 2.0 2.1 1.0
alt. trace misp./1000 2.7 5.3 0.9 1.3 0.5 0.3 0.3
branch misp. rate 3.6% 8.2% 6.7% 3.1% 1.3% 1.5% 0.8%
control instr. per trace 2.8 2.3 1.3 2.9 2.5 2.5 2.3
RF overall 7.1 14.4 5.3 3.1 3.7 4.1 2.9
RF dyn 3.0 3.3 3.7 3.2 3.1 2.9 2.1
Figure 9. Performance (IPC) vs. trace cache size and associativity.
all of the benchmarks except go. Go has a particularly large
working set of unique traces [25], and total capacity is more
important than individual trace conflicts. The curves of jpeg
and li are fairly flat - size is of little importance, yet increasing
associativity improves performance. These two benchmarks
suffer few general conflict misses (otherwise size
should improve performance), yet conflicts among traces
with the same start PC are significant. Associativity allows
simultaneously caching these path-associative traces.
The performance improvement of the largest configuration
(128KB, 4-way) with respect to the smallest one
(16KB, direct mapped) ranges from 4% (go) to 10% (gcc).
Figure 10 shows trace cache performance in misses per
1000 instructions. Trace cache size is varied along the x-
axis, and there are six curves: direct mapped (DM), 2-
way (2W), and 4-way (4W) associative caches, both with
and without indexing for path associativity (PA). We chose
(somewhat arbitrarily) the following index function for
achieving path associativity: the low-order bits of the PC
form the set index, and then the high-order bits of this index
are XORed with the first two branch outcomes of the trace
identifier.
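In C this index function is just a few operations; the number of index bits below is an assumption, since only the construction of the index (not the cache geometry) is described above:

  #include <stdint.h>

  #define LOG2_SETS 8   /* assumed set count (e.g., 256 sets) */

  /* Low-order PC bits form the set index; the two high-order bits of that
     index are XORed with the first two branch outcomes of the trace id. */
  static uint32_t tc_index(uint32_t start_pc, uint32_t first_two_outcomes)
  {
      uint32_t idx = start_pc & ((1u << LOG2_SETS) - 1);
      idx ^= (first_two_outcomes & 0x3u) << (LOG2_SETS - 2);
      return idx;
  }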
Gcc and go are the only benchmarks that do not fit entirely
within the largest trace cache. As we observed earlier,
go has many heavily-referenced traces, resulting in no fewer
than 20 misses/1000 instructions.
Path associativity reduces misses substantially, particularly
for direct mapped caches. Except for vortex, path associativity
closes the gap between direct mapped and 2-way
associative caches by more than half, and often entirely.
Figure 10. Trace cache misses (misses per 1000 instructions vs. trace
cache size, for direct-mapped, 2-way, and 4-way caches, with and
without path-associative indexing).
5. Summary
It is important to design instruction fetch units capable of
fetching past multiple, possibly taken branches each cycle.
Trace caches provide this capability without the complexity
and latency of equivalent-bandwidth instruction cache
designs. We evaluated a microarchitecture incorporating a
trace cache, with the following major results.
- The trace cache improves performance from 15% to
35% over an otherwise equally-sophisticated, but contiguous
multiple-block fetch mechanism.
- Longer traces improve trace prediction accuracy. For
the misprediction-bound benchmark go, this factor contributes
almost entirely to the observed performance gain.
- A moderately large and associative trace cache performs
as well as a perfect trace cache. For go, however,
trace mispredictions mask poor trace cache performance.
- Overall performance is not as sensitive to trace cache
size and associativity as one might expect, due in part to robust
instruction-level sequencing. IPC varies no more than
10% over a wide range of configurations.
- The complexity advantage of the trace cache comes at
the price of redundant instruction storage: for gcc, a factor
of 7 redundancy among all traces created, corresponding to
a factor of 3 redundancy in the trace cache.
- An instruction cache combined with an aggressive
trace predictor can fetch any number of contiguous basic
blocks per cycle, yielding from 5% to 25% improvement
over single-block fetching.
Acknowledgments
Our research on trace caches had its genesis in stimulating
group discussions with Guri Sohi and his students
Todd Austin, Scott Breach, Andreas Moshovos, Dionisios
Pnevmatikatos, and T. N. Vijaykumar; their contribution is
gratefully acknowledged.
We would also like to give special thanks to Quinn Jacobson
for his valuable input and for providing access to
next trace prediction simulators.
This work was supported in part by NSF Grants MIP-
9505853 and MIP-9307830 and by the U.S. Army Intelligence
Center and Fort Huachuca under Contract DABT63-
95-C-0127 and ARPA order no. D346.
Eric Rotenberg is supported by an IBM Fellowship.
--R
Integrating a misprediction recovery cache (mrc) into a superscalar pipeline.
Evaluating future mi- croprocessors: The simplescalar toolset
Optimization of instruction fetch mechanisms for high issue rates.
Control flow prediction with tree-like subgraphs for superscalar processors
Trace scheduling: A technique for global microcode compaction.
Alternative fetch and issue policies for the trace cache fetch mechanism.
Putting the fill unit to work: Dynamic optimizations for trace cache microproces- sors
Branch and fixed-point instruction execution units
Increasing the instruction fetch rate via block-structured instruction set ar- chitectures
Trace selection for compiling large c application programs to microcode.
Control flow speculation in multiscalar processors.
Expansion caches for superscalar processors.
Performance benefits of large execution atomic units in dynamically scheduled machines.
Exploiting fine-grained parallelism through a combination of hardware and software techniques
Hardware support for large atomic units in dynamically scheduled machines.
Exploiting instruction level parallelism in processors by caching scheduled groups.
Improving trace cache effectiveness with branch promotion and trace packing.
Critical issues regarding the trace cache fetch mechanism.
Dynamic flow instruction cache memory organized around trace segments independent of virtual address line.
Trace cache: a low latency approach to high bandwidth instruction fetch- ing
Trace cache: a low latency approach to high bandwidth instruction fetch- ing
Trace processors.
Improving cisc instruction decoding performance using a fill unit.
Multiscalar execution along a single flow of control.
Improving superscalar instruction dispatch and issue by exploiting dynamic code se- quences
Increasing the instruction fetch rate via multiple branch prediction and a branch address cache.
--TR
--CTR
Emil Talpes , Diana Marculescu, Power reduction through work reuse, Proceedings of the 2001 international symposium on Low power electronics and design, p.340-345, August 2001, Huntington Beach, California, United States
S. Bartolini , C. A. Prete, A proposal for input-sensitivity analysis of profile-driven optimizations on embedded applications, ACM SIGARCH Computer Architecture News, v.32 n.3, p.70-77, June 2004
Emil Talpes , Diana Marculescu, Execution cache-based microarchitecture power-efficient superscalar processors, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.13 n.1, p.14-26, January 2005
Emil Talpes , Diana Marculescu, Increased Scalability and Power Efficiency by Using Multiple Speed Pipelines, ACM SIGARCH Computer Architecture News, v.33 n.2, p.310-321, May 2005
Michael Behar , Avi Mendelson , Avinoam Kolodny, Trace cache sampling filter, ACM Transactions on Computer Systems (TOCS), v.25 n.1, p.3-es, February 2007
Oliverio J. Santana , Ayose Falcn , Alex Ramirez , Mateo Valero, Branch predictor guided instruction decoding, Proceedings of the 15th international conference on Parallel architectures and compilation techniques, September 16-20, 2006, Seattle, Washington, USA
Oliverio J. Santana , Alex Ramirez , Josep L. Larriba-Pey , Mateo Valero, A low-complexity fetch architecture for high-performance superscalar processors, ACM Transactions on Architecture and Code Optimization (TACO), v.1 n.2, p.220-245, June 2004
Sang-Jeong Lee , Pen-Chung Yew, On Augmenting Trace Cache for High-Bandwidth Value Prediction, IEEE Transactions on Computers, v.51 n.9, p.1074-1088, September 2002
Xianglong Huang , Stephen M. Blackburn , David Grove , Kathryn S. McKinley, Fast and efficient partial code reordering: taking advantage of dynamic recompilatior, Proceedings of the 2006 international symposium on Memory management, June 10-11, 2006, Ottawa, Ontario, Canada
S. Bartolini , C. A. Prete, Optimizing instruction cache performance of embedded systems, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.4, p.934-965, November 2005
Yoav Almog , Roni Rosner , Naftali Schwartz , Ari Schmorak, Specialized Dynamic Optimizations for High-Performance Energy-Efficient Microarchitecture, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, p.137, March 20-24, 2004, Palo Alto, California
Michele Co , Dee A. B. Weikle , Kevin Skadron, Evaluating trace cache energy efficiency, ACM Transactions on Architecture and Code Optimization (TACO), v.3 n.4, p.450-476, December 2006
independence in trace processors, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.4-15, November 16-18, 1999, Haifa, Israel
Roni Rosner , Yoav Almog , Micha Moffie , Naftali Schwartz , Avi Mendelson, Power Awareness through Selective Dynamically Optimized Traces, ACM SIGARCH Computer Architecture News, v.32 n.2, p.162, March 2004
Roni Rosner , Micha Moffie , Yiannakis Sazeides , Ronny Ronen, Selecting long atomic traces for high coverage, Proceedings of the 17th annual international conference on Supercomputing, June 23-26, 2003, San Francisco, CA, USA
Alex Ramirez , Oliverio J. Santana , Josep L. Larriba-Pey , Mateo Valero, Fetching instruction streams, Proceedings of the 35th annual ACM/IEEE international symposium on Microarchitecture, November 18-22, 2002, Istanbul, Turkey | multiple branch prediction;instruction fetching;trace cache;instruction cache;superscalar processors |
297709 | Automatic Compiler-Inserted Prefetching for Pointer-Based Applications. | AbstractAs the disparity between processor and memory speeds continues to grow, memory latency is becoming an increasingly important performance bottleneck. While software-controlled prefetching is an attractive technique for tolerating this latency, its success has been limited thus far to array-based numeric codes. In this paper, we expand the scope of automatic compiler-inserted prefetching to also include the recursive data structures commonly found in pointer-based applications.We propose three compiler-based prefetching schemes, and automate the most widely applicable scheme (greedy prefetching) in an optimizing research compiler. Our experimental results demonstrate that compiler-inserted prefetching can offer significant performance gains on both uniprocessors and large-scale shared-memory multiprocessors. | Introduction
Software-controlled data prefetching [1], [2] offers
the potential for bridging the ever-increasing speed
gap between the memory subsystem and today's high-performance
processors. In recognition of this potential,
a number of recent processors have added support for
prefetch instructions [3], [4], [5]. While prefetching has enjoyed
considerable success in array-based numeric codes [6],
its potential in pointer-based applications has remained
largely unexplored. This paper investigates compiler-inserted
prefetching for pointer-based applications-in par-
ticular, those containing recursive data structures.
Recursive Data Structures (RDSs) include familiar objects
such as linked lists, trees, graphs, etc., where individual
nodes are dynamically allocated from the heap, and
nodes are linked together through pointers to form the over-all
structure. For our purposes, "recursive data structures"
can be broadly interpreted to include most pointer-linked
data structures (e.g., mutually-recursive data structures, or
even a graph of heterogeneous objects). From a memory
performance perspective, these pointer-based data structures
are expected to be an important concern for the following
reasons. For an application to suffer a large memory
penalty due to data replacement misses, it typically must
have a large data set relative to the cache size. Aside from
multi-dimensional arrays, recursive data structures are one
of the most common and convenient methods of building
large data structures (e.g., B-trees in database applications,
octrees in graphics applications, etc.). As we traverse a
C.-K. Luk is with the Department of Computer Science, University
of Toronto, Toronto, Ontario M5S 3G4, Canada. E-mail:
[email protected].
T. C. Mowry is with the Computer Science Department, Carnegie
Mellon University, Pittsburgh, PA 15213. E-mail: [email protected].
large RDS, we may potentially visit enough intervening
nodes to displace a given node from the cache before it is
revisited; hence temporal locality may be poor. Finally,
in contrast with arrays-where consecutive elements are at
contiguous addresses-there is little inherent spatial locality
between consecutively-accessed nodes in an RDS, since
they are dynamically allocated at arbitrary addresses.
To cope with the latency of accessing these pointer-based
data structures, we propose three compiler-based
schemes for prefetching RDSs, as described in Section II.
We implemented the most widely-applicable of these
schemes-greedy prefetching-in a modern research compiler
(SUIF [7]), as discussed in Section III. To evaluate
our schemes, we performed detailed simulations of their impact
on both uniprocessor and multiprocessor systems in
Sections IV and V, respectively. Finally, we present related
work and conclusions in Sections VI and VII.
II. Software-Controlled Prefetching for RDSs
A key challenge in successfully prefetching RDSs is
scheduling the prefetches sufficiently far in advance to
fully hide the latency, while introducing minimal runtime
overhead. In contrast with array-based codes, where the
prefetching distance can be easily controlled using software
pipelining [2], the fundamental difficulty with RDSs is that
we must first dereference pointers to compute the prefetch
addresses. Getting several nodes ahead in an RDS traversal
typically involves following a pointer chain. However,
the very act of touching these intermediate nodes along the
pointer chain means that we cannot tolerate the latency of
fetching more than one node ahead.
To overcome this pointer-chasing problem [8], we propose
three schemes for generating prefetch addresses without following
the entire pointer chain. The first two schemes-
greedy prefetching and history-pointer prefetching-use a
pointer within the current node as the prefetching address;
the difference is that greedy prefetching uses existing point-
ers, whereas history-pointer prefetching creates new point-
ers. The third scheme-data-linearization prefetching-
generates prefetch addresses without pointer dereferences.
A. Greedy Prefetching
In a k-ary RDS, each node contains k pointers to other
nodes. Greedy prefetching exploits the fact that when a node
is visited, only one of its k neighbors can be immediately
followed as the next node in the traversal, but there is often
a good chance that the other neighbors will be visited sometime
in the future. Therefore by prefetching all k pointers
when a node is first visited, we hope that enough of these
preorder(treeNode *t) {
  prefetch(t->left);
  prefetch(t->right);
  process(t->data);
  preorder(t->left);
  preorder(t->right);
}
(a) Code with Greedy Prefetching. (b) Cache Miss Behavior: the traversed
binary tree, with each node marked as a full cache miss, a partial
latency miss, or a cache hit.
Fig. 1. Illustration of greedy prefetching.
prefetches are successful that we can hide at least some
fraction of the miss latency.
To illustrate how greedy prefetching works, consider the
pre-order traversal of a binary tree (i.e., k = 2). Figure
1(a) shows the code with greedy prefetching added.
Assuming that the computation in process() takes half
as long as the cache miss latency L, we would want to
prefetch two nodes ahead to fully hide the latency. Figure
1(b) shows the caching behavior of each node. We
obviously suffer a full cache miss at the root node (node 1),
since there was no opportunity to fetch it ahead of time.
However, we would only suffer half of the miss penalty (L/2)
when we visit node 2, and no miss penalty when we eventually
visit node 3 (since the time to visit the subtree rooted
at node 2 is greater than L). In this example, the latency
is fully hidden for roughly half of the nodes, and reduced
by 50% for the other half (minus the root node).
Greedy prefetching offers the following advantages: (i)
it has low runtime overhead, since no additional storage
or computation is needed to construct the prefetch point-
ers; (ii) it is applicable to a wide variety of RDSs, regardless
of how they are accessed or whether their structure
is modified frequently; and (iii) it is relatively straightforward
to implement in a compiler-in fact, we have implemented
it in the SUIF compiler, as we describe later in
Section III. The main disadvantage of greedy prefetching
is that it does not offer precise control over the prefetching
distance, which is the motivation for our next algorithm.
B. History-Pointer Prefetching
Rather than relying on existing pointers to approximate
prefetch addresses, we can potentially synthesize more accurate
pointers based on the observed RDS traversal pat-
terns. To prefetch d nodes ahead under the history-pointer
prefetching scheme [8], we add a new pointer (called a
history-pointer) to a node n i to record the observed address
of n i+d (the node visited d nodes after n i ) on a recent
traversal of the RDS. On subsequent traversals of the
RDS, we prefetch the nodes pointed to by these history-
pointers. This scheme is most effective when the traversal
pattern does not change rapidly over time. To construct
the history-pointers, we maintain a FIFO queue of length
d which contains pointers to the last d nodes that have just
been visited. When we visit a new node n i , the oldest node
in the queue will be n i\Gammad (i.e. the node visited d nodes ear-
lier), and hence we update the history-pointer of n i\Gammad to
point to n i . After the first complete traversal of the RDS,
all of the history-pointers will be set.
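As a concrete illustration of this queue-based construction, consider the
following C sketch; it is not taken from the paper's implementation, and the
node layout, the prefetch distance D, and the use of the GCC/Clang
__builtin_prefetch intrinsic are assumptions made only for the example.

  #include <stddef.h>

  #define D 4                              /* assumed prefetch distance */

  typedef struct node {
      struct node *next;                   /* existing RDS link         */
      struct node *history;                /* added history-pointer     */
      int data;
  } node;

  void traverse(node *head)
  {
      node *fifo[D] = { NULL };            /* last D visited nodes      */
      int pos = 0;

      for (node *n = head; n != NULL; n = n->next) {
          if (n->history)                  /* set on an earlier traversal */
              __builtin_prefetch(n->history);

          if (fifo[pos])                   /* node visited D nodes earlier */
              fifo[pos]->history = n;      /* now points D nodes ahead     */
          fifo[pos] = n;
          pos = (pos + 1) % D;

          /* ... process n->data ... */
      }
  }

On the first traversal the history-pointers are only being filled in, so no
prefetches are issued; on later traversals each prefetch targets the node D
iterations ahead, as described above.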
In contrast with greedy prefetching, history-pointer
prefetching offers no improvement on the first traversal of
an RDS, but can potentially hide all of the latency on subsequent
traversals. While history-pointer prefetching offers
the potential advantage of improved latency tolerance, this
comes at the expense of (i) execution overhead to construct
the history-pointers, and (ii) space overhead for storing
these new pointers. To minimize execution overhead, we
can potentially update the history-pointers less frequently,
depending on how rapidly the RDS structure changes. In
one extreme, if the RDS never changes, we can set the
history-pointers just once. The problem with space overhead
is that it potentially worsens the caching behavior.
The desire to eliminate this space overhead altogether is
the motivation for our next prefetching scheme.
C. Data-Linearization Prefetching
The idea behind data-linearization prefetching [8] is to
map heap-allocated nodes that are likely to be accessed
close together in time into contiguous memory locations.
With this mapping, one can easily generate prefetch addresses
and launch them early enough. Another advantage
of this scheme is that it improves spatial locality. The major
challenge, however, is how and when we can generate
this data layout. In theory, one could dynamically remap
the data even after the RDS has been initially constructed,
but doing so may result in large runtime overheads and may
also violate program semantics. Instead, the easiest time to
map the nodes is at creation time, which is appropriate if
either the creation order already matches the traversal or-
der, or if it can be safely reordered to do so. Since dynamic
remapping is expensive (or impossible), this scheme obviously
works best if the structure of the RDS changes only
slowly (or not at all). If the RDS does change radically,
the program will still behave correctly, but prefetching will
not improve performance.
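A minimal sketch of the idea, assuming a bump-pointer arena, a look-ahead
constant AHEAD, and the __builtin_prefetch intrinsic (none of which appear in
the paper), is shown below: because nodes are created in the order in which
they will later be traversed, the prefetch address follows from pointer
arithmetic rather than from dereferencing links.

  #include <stddef.h>

  #define AHEAD 3                          /* assumed look-ahead, in nodes */

  typedef struct tnode { struct tnode *left, *right; int data; } tnode;

  static tnode *arena;                     /* assumed preallocated storage;
                                              nodes laid out in creation order */
  static size_t used;

  static tnode *alloc_node(void)           /* creation order == traversal order */
  {
      return &arena[used++];
  }

  static void visit(tnode *t)
  {
      if (t == NULL) return;
      /* consecutively visited nodes are adjacent in memory, so the node
         AHEAD visits away has a known address and needs no pointer chasing */
      if ((size_t)(t - arena) + AHEAD < used)
          __builtin_prefetch(t + AHEAD);
      /* ... process t->data ... */
      visit(t->left);
      visit(t->right);
  }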
III. Implementation of Greedy Prefetching
Of the three schemes that we propose, greedy prefetching
is perhaps the most widely applicable since it does not
rely on traversal history information, and it requires no additional
storage or computation to construct prefetch ad-
dresses. For these reasons, we have implemented a version
of greedy prefetching within the SUIF compiler [7], and
we will simulate the other two algorithms by hand. Our
implementation consists of an analysis phase to recognize
RDS accesses, and a scheduling phase to insert prefetches.
A. Analysis: Recognizing RDS Accesses
To recognize RDS accesses, the compiler uses both type
declaration information to recognize which data objects are
RDSs, and control structure information to recognize when
these objects are being traversed. An RDS type is a record
type r containing at least one pointer that points either
directly or indirectly to a record type s. (Note that r and
s are not restricted to be the same type, since RDSs may
(a) RDS type:      struct T { int data; struct T *left; struct T *right; };
(b) RDS type:      struct A { int data; struct B *kids[8]; };
(c) Not RDS type:  struct C { int j; double f; };
Fig. 2. Examples of which types are recognized as RDS types.
(a) while (l) { ...; l = l->next->next; ... }
(b) list *m; for (...) { list *n; ...; n = g(n); ... }
(c) f(tree *t) { ...; f(t->left); f(t->right); ... }
(d) k(tree tn) { ... }   (as in (c), but the tree node is passed by value)
Fig. 3. Examples of control structures recognized as RDS traversals.
be comprised of heterogeneous nodes.) For example, the
type declarations in Figure 2(a) and Figure 2(b) would be
recognized as RDS types, whereas Figure 2(c) would not.
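One way to picture the type test is the small predicate below; the type
descriptor, its field names, and the depth bound are assumptions made for
illustration and do not reflect SUIF's actual internal representation.

  typedef enum { TY_INT, TY_FLOAT, TY_PTR, TY_RECORD } kind_t;

  typedef struct type {
      kind_t        kind;
      struct type  *target;      /* TY_PTR: pointed-to type  */
      struct type **fields;      /* TY_RECORD: field types   */
      int           nfields;
  } type;

  /* does t reach a record type through a (possibly indirect) pointer chain? */
  static int points_to_record(const type *t, int depth)
  {
      if (t == NULL || t->kind != TY_PTR || depth > 8) return 0;
      if (t->target && t->target->kind == TY_RECORD) return 1;
      return points_to_record(t->target, depth + 1);
  }

  /* a record type is an RDS type if some field points, directly or
     indirectly, to a record type (not necessarily the same one)     */
  int is_rds_type(const type *t)
  {
      if (t == NULL || t->kind != TY_RECORD) return 0;
      for (int i = 0; i < t->nfields; i++)
          if (points_to_record(t->fields[i], 0))
              return 1;
      return 0;
  }

Applied to Figure 2, struct T and struct A satisfy the predicate because they
contain pointers that reach a record type, while struct C does not.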
After discovering data structures with the appropriate
types, the compiler then looks for control structures that
are used to traverse the RDSs. In particular, the compiler
looks for loops or recursive procedure calls such that during
each new loop iteration or procedure invocation, a pointer
p to an RDS is assigned a value resulting from a dereference
of p-we refer to this as a recurrent pointer update.
This heuristic corresponds to how RDS codes are typically
written. To detect recurrent pointer updates, the compiler
propagates pointer values using a simplified (but less pre-
cise) version of earlier pointer analysis algorithms [9], [10].
Figure
3 shows some example program fragments that
our compiler treats as RDS accesses. In Figure 3(a), l is
updated to l->next->next inside the while-loop. In Figure
3(b), n is assigned the result of the function call g(n)
inside the for-loop. (Since our implementation does not
perform interprocedural analysis, it assumes that g(n) results
in a value of the form n->...->next.) In Figure 3(c), two dereferences
of the function argument t are passed as the parameters
to two recursive calls. Figure 3(d) is similar to
Figure
3(c), except that a record (rather than a pointer) is
passed as the function argument.
Ideally, the next step would be to analyze data locality
across RDS nodes to eliminate unnecessary prefetches. Although
we have not automated this step in our compiler,
we evaluated its potential benefits in an earlier study [8].
B. Scheduling Prefetches
Once RDS accesses have been recognized, the compiler
inserts greedy prefetches as follows. At the point where
an RDS object is being traversed-i.e. where the recurrent
pointer update occurs-the compiler inserts prefetches
of all pointers within this object that point to RDS-type
objects at the earliest points where these addresses are
available within the surrounding loop or procedure body.
The availability of prefetch addresses is computed by prop-
(a) Loop: the list traversal contains the recurrent pointer update
l = l->next, and prefetch(l->next) is inserted at the top of the
while (l) loop body.
(b) Procedure: a recursive tree routine declares tree *q, selects a
child with if (test(t->data)) ... else ..., and recurses when
q != NULL; prefetch(t->left) and prefetch(t->right) are inserted at
procedure entry.
Fig. 4. Examples of greedy prefetch scheduling.
I
Benchmark characteristics.
Benchmark   Recursive Data Structures Used           Input Data Set                Memory Allocated to Nodes
BH          octree                                   -                             -
Bisort      Binary tree                              250,000 integers              1,535 KB
EM3D        Singly-linked lists                      2000 H-nodes, 100 E-nodes,    1,671 KB
                                                     75% local
Health      Four-way tree and doubly-linked lists    level = 5                     925 KB
MST         Array of singly-linked lists             512 nodes                     10 KB
Perimeter   A quadtree                               4Kx4K image                   6,445 KB
Power       Multi-way tree and singly-linked lists   10,000 customers              418 KB
TreeAdd     Binary tree                              1024K nodes                   12,288 KB
TSP         Binary tree and doubly-linked lists      100,000 cities                5,120 KB
Voronoi     Binary tree                              20,000 points                 10,915 KB
agating the earliest generation points of pointer values
along with the values themselves. Two examples of greedy
prefetch scheduling are shown in Figure 4. Further details
of our implementation can be found in Luk's thesis [11].
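For the loop case of Figure 4(a), the transformation can be sketched in C as
follows; the list type, the work() routine, and the use of __builtin_prefetch
in place of a compiler-emitted prefetch instruction are assumptions of this
example.

  typedef struct list { struct list *next; int data; } list;
  void work(int x);

  void scan(list *l)
  {
      while (l) {
          /* the recurrent pointer update is l = l->next, so l->next is the
             earliest available prefetch address; the prefetch is therefore
             placed at the top of the loop body                              */
          __builtin_prefetch(l->next);
          work(l->data);
          l = l->next;
      }
  }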
IV. Prefetching RDSs on Uniprocessors
In this section, we quantify the impact of our prefetching
schemes on uniprocessor performance. Later, in Section V,
we will turn our attention to multiprocessor systems.
A. Experimental Framework
We performed detailed cycle-by-cycle simulations of the
entire Olden benchmark suite [12] on a dynamically-
scheduled, superscalar processor similar to the MIPS
R10000 [5]. The Olden benchmark suite contains ten
pointer-based applications written in C, which are briefly
summarized in Table I. The rightmost column in Table I
shows the amount of memory dynamically allocated to
RDS nodes.
Our simulation model varies slightly from the actual
MIPS R10000 (e.g., we model two memory units, and we
II
Uniprocessor simulation parameters.
Pipeline Parameters
Issue Width 4
Functional Units 2 Int, 2 FP, 2 Memory, 1 Branch
Reorder Buffer Size
Integer Multiply 12 cycles
Integer Divide 76 cycles
All Other Integer 1 cycle
FP Divide 15 cycles
FP Square Root 20 cycles
All Other FP 2 cycles
Branch Prediction Scheme 2-bit Counters
Memory Parameters
Primary Instr and Data Caches 16KB, 2-way set-associative
Unified Secondary Cache 512KB, 2-way set-associative
Line Size 32B
Primary-to-Secondary Miss 12 cycles
Primary-to-Memory Miss 75 cycles
Data Cache Miss Handlers 8
Data Cache Banks 2
Data Cache Fill Time 4 cycles
(Requires Exclusive Access)
Main Memory Bandwidth 1 access per 20 cycles
assume that all functional units are fully-pipelined), but
we do model the rich details of the processor including the
pipeline, register renaming, the reorder buffer, branch pre-
diction, instruction fetching, branching penalties, the memory
hierarchy (including contention), etc. Table II shows
the parameters of our model. We use pixie [13] to instrument
the optimized MIPS object files produced by the com-
piler, and pipe the resulting trace into our simulator.
To avoid misses during the initialization of dynamically-
allocated objects, we used a modified version of the IRIX
mallopt routine [14] whereby we prefetch allocated objects
before they are initialized. Determining these prefetch addresses
is straightforward, since objects of the same size
are typically allocated from contiguous memory. This
optimization alone led to over twofold speedups relative
to using malloc for the majority of the applications-
particularly those that frequently allocate small objects.
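A minimal sketch of such allocation-time prefetching is given below; the
bump-pointer pool and its field names are assumptions for illustration (the
actual change was made inside the IRIX mallopt small-object allocator), and
__builtin_prefetch stands in for the emitted prefetch instruction.

  #include <stddef.h>

  typedef struct {
      char  *base;        /* contiguous storage for same-sized objects */
      size_t obj_size;
      size_t next;        /* index of the next object to hand out      */
      size_t count;
  } pool_t;

  void *pool_alloc(pool_t *p)
  {
      void *obj = p->base + p->next * p->obj_size;
      p->next++;
      /* objects of one size class are contiguous, so the address is known
         here; prefetch it before the caller starts initializing it         */
      __builtin_prefetch(obj, 1);          /* 1 = prefetch for writing      */
      return obj;
  }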
B. Performance of Greedy Prefetching
Figure
5 shows the results of our uniprocessor experi-
ments. The overall performance improvement offered by
greedy prefetching is shown in Figure 5(a), where the two
bars correspond to the cases without prefetching (N) and
with greedy prefetching (G). These bars represent execution
time normalized to the case without prefetching, and
they are broken down into four categories explaining what
happened during all potential graduation slots. (The number
of graduation slots is the issue width-4 in this case-
multiplied by the number of cycles.) The bottom section
(busy) is the number of slots when instructions actually
graduate, the top two sections are any non-graduating slots
that are immediately caused by the oldest instruction suffering
either a load or store miss, and the inst stall section
is all other slots where instructions do not graduate. Note
that the load stall and store stall sections are only a first-order
approximation of the performance loss due to cache
misses, since these delays also exacerbate subsequent data
dependence stalls.
As we see in Figure 5(a), half of the applications enjoy
a speedup ranging from 4% to 45%, and the other half are
within 2% of their original performance. For the applications
with the largest memory stall penalties-i.e. health,
perimeter, and treeadd-much of this stall time has been
eliminated. In the cases of bisort and mst, prefetching
overhead more than offset the reduction in memory stalls
(thus resulting in a slight performance degradation), but
this was not a problem in the other eight applications.
To understand the performance results in greater depth,
Figure
breaks down the original primary cache misses
into three categories: (i) those that are prefetched and
subsequently hit in the primary cache (pf hit), (ii) those
that are prefetched but remain primary misses (pf miss),
and (iii) those that are not prefetched (nopf miss). The
sum of the pf hit and pf miss cases is also known as the
coverage factor, which ideally should be 100%. For em3d,
power, and voronoi, the coverage factor is quite low (un-
der 20%) because most of their misses are caused by array
or scalar references-hence prefetching RDSs yields little
improvement. In all other cases, the coverage factor is
above 60%, and in four cases we achieve nearly perfect
coverage. If the pf miss category is large, this indicates
that prefetches were not scheduled effectively-either they
were issued too late to hide the latency, or else they were
too early and the prefetched data was displaced from the
cache before it could be referenced. This category is most
prominent in mst, where the compiler is unable to prefetch
early enough during the traversal of very short linked lists
within a hash table. Since greedy prefetching offers little
control over prefetching distance, it is not surprising that
scheduling is imperfect-in fact, it is encouraging that the
pf miss fractions are this low.
To help evaluate the costs of prefetching, Figure 5(c)
shows the fraction of dynamic prefetches that are unnecessary
because the data is found in the primary cache. For
each application, we show four different bars indicating the
total (dynamic) unnecessary prefetches caused by static
prefetch instructions with hit rates up to a given threshold.
Hence the bar labeled "100" corresponds to all unnecessary
prefetches, whereas the bar labeled "99" shows the total
unnecessary prefetches if we exclude prefetch instructions
with hit rates over 99%, etc. This breakdown indicates
the potential for reducing overhead by eliminating static
prefetch instructions that are clearly of little value. For
example, eliminating prefetches with hit rates over 99%
would eliminate over half of the unnecessary prefetches in
perimeter, thus decreasing overhead significantly. In con-
trast, reducing overhead with a flat distribution (e.g., bh)
is more difficult since prefetches that sometimes hit also
miss at least 10% of the time; therefore, eliminating them
may sacrifice some latency-hiding benefit. We found that
eliminating prefetches with hit rates above 95% improves
performance by 1-7% for these applications [8].
Finally, we measured the impact of greedy prefetching on
memory bandwidth consumption. We observe that on av-
Fig. 5. Performance impact of compiler-inserted greedy prefetching on a
uniprocessor, for bh, bisort, em3d, health, mst, perimeter, power, treeadd,
tsp, and voronoi: (a) Execution Time, normalized to the case without
prefetching and broken down into busy, inst stall, load stall, and store
stall slots; (b) Coverage Factor, i.e., the percentage of the original load
D-cache misses in the pf_hit, pf_miss, and nopf_miss categories; (c)
Unnecessary Prefetches, i.e., the percentage of prefetches that hit in the
cache, shown for static prefetch hit-rate thresholds of 100, 99, 95, and 90
percent.
erage, greedy prefetching increases the traffic between the
primary and secondary caches by 12.7%, and the traffic
between the secondary cache and main memory by 7.8%.
In our experiments, this has almost no impact on perfor-
mance. Hence greedy prefetching does not appear to be
suffering from memory bandwidth problems.
In summary, we have seen that automatic compiler-inserted
prefetching can result in significant speedups for
uniprocessor applications containing RDSs. We now investigate
whether the two more sophisticated prefetching
schemes can offer even larger performance gains.
C. Performance of History-Pointer Prefetching and Data-
Linearization Prefetching
We applied history-pointer prefetching and data-
linearization prefetching by hand to several of our applica-
tions. History-pointer prefetching is applicable to health
because the list structures that are accessed by a key procedure
remain unchanged across the over ten thousand times
that it is called. As a result, history-pointer prefetching
achieves a 40% speedup over greedy prefetching through
better miss coverage and fewer unnecessary prefetches.
Although history-pointer prefetching has fewer unnecessary
prefetches than greedy prefetching, it has significantly
higher instruction overhead due to the extra work required
to maintain the history-pointers.
Data-linearization prefetching is applicable to both
perimeter and treeadd, because the creation order is
identical to the major subsequent traversal order in both
cases. As a result, data linearization does not require
changing the data layout in these cases (hence spatial locality
is unaffected). By reducing the number of unnecessary
prefetches (and hence prefetching overhead) while maintaining
good coverage factors, data-linearization prefetching
results in speedups of 9% and 18% over greedy prefetching
for perimeter and treeadd, respectively. Overall, we
see that both schemes can potentially offer significant improvements
over greedy prefetching when applicable.
V. Prefetching RDSs on Multiprocessors
Having observed the benefits of automatic prefetching
of RDSs on uniprocessors, we now investigate whether
the compiler can also accelerate pointer-based applications
running on multiprocessors. In earlier studies, Mowry
demonstrated that the compiler can successfully prefetch
parallel matrix-based codes [2], [15], but the compiler used
in those studies did not attempt to prefetch pointer-based
access patterns. However, through hand-inserted prefetch-
ing, Mowry was able to achieve a significant speedup in
BARNES [15], which is a pointer-intensive shared-memory
parallel application from the SPLASH suite [16].
BARNES performs a hierarchical n-body simulation of the
evolution of galaxies. The main computation consists of
a depth-first traversal of an octree structure to compute
the gravitational force exerted by the given body on all
other bodies in the tree. This is repeated for each body in
the system, and the bodies are statically assigned to processors
for the duration of each time step. Cache misses
occur whenever a processor visits a part of the octree that
is not already in its cache, either due to replacements or
communication. To insert prefetches by hand, Mowry used
a strategy similar to greedy prefetching: upon first arriving
at a node, he prefetched all immediate children before
descending depth-first into the first child.
III
Memory latencies in multiprocessor simulations.
Destination of Access Read Write
Primary Cache 1 cycle 1 cycle
Secondary Cache 15 cycles 4 cycles
Remote Node 101 cycles 89 cycles
Dirty Remote, Remote Home 132 cycles 120 cycles
Fig. 6. Impact of compiler-inserted greedy prefetching on BARNES on a
multiprocessor (G = compiler-inserted greedy prefetching, H = hand-inserted
prefetching): (a) Execution Time, broken down into instructions,
synchronization, and memory stalls; (b) Coverage Factor (pf_hit, pf_miss,
nopf_miss); (c) Unnecessary Prefetches (percentage of prefetches that hit in
the D-cache).
To evaluate the performance of our compiler-based implementation
of greedy prefetching on a multiprocessor, we
compared it with hand-inserted prefetching for BARNES. For
the sake of comparison, we adopted the same simulation
environment used in Mowry's earlier study [15], which we
now briefly summarize. We simulated a cache-coherent,
shared-memory multiprocessor that resembles the DASH
multiprocessor [17]. Our simulated machine consists of 16
processors, each of which has two levels of direct-mapped
caches, both using 16 byte lines. Table III shows the latency
for servicing an access to different levels of the memory
hierarchy, in the absence of contention (our simulations
did model contention, however). To make simulations fea-
sible, we scaled down both the problem size and cache sizes
accordingly (we ran 8192 bodies through 3 time steps on
an 8K/64K cache hierarchy), as was done (and explained
in more detail) in the original study [2].
Figure
6 shows the impact of both compiler-inserted
greedy prefetching (G) and hand-inserted prefetching (H)
on BARNES. The execution times in Figure 6(a) are broken
down as follows: the bottom section is the amount of time
spent executing instructions (including any prefetching instruction
overhead), and the middle and top sections are
synchronization and memory stall times, respectively. As
we see in Figure 6(a), the compiler achieves nearly identical
performance to hand-inserted prefetching. The compiler
prefetches 90% of the original cache misses with only
15% of these misses being unnecessary, as we see in Figures
6(b) and 6(c), respectively. Of the prefetched misses,
the latency was fully hidden in half of the cases (pf hit),
and partially hidden in the other cases (pf miss). By eliminating
roughly half of the original memory stall time, the
compiler was able to achieve a 16% speedup.
The compiler's greedy strategy for inserting prefetches
is quite similar to what was done by hand, with the following
exception. In an effort to minimize unnecessary
prefetches, the compiler's default strategy is to prefetch
only the first 64 bytes within a given RDS node. In the
case of BARNES, the nodes are longer than 64 bytes, and we
discovered that hand-inserted prefetching achieves better
performance when we prefetch the entire nodes. In this
case, the improved miss coverage of prefetching the entire
nodes is worth the additional unnecessary prefetches,
thereby resulting in a 1% speedup over compiler-inserted
prefetching. Overall, however, we are quite pleased that
the compiler was able to do this well, nearly matching the
best performance that we could achieve by hand.
VI. Related Work
Although prefetching has been studied extensively for
array-based numeric codes [6], [18], relatively little work
has been done on non-numeric applications. Chen et al. [19]
used global instruction scheduling techniques to move address
generation back as early as possible to hide a small
cache miss latency (10 cycles), and found mixed results.
In contrast, our algorithms focus only on RDS accesses,
and can issue prefetches much earlier (across procedure
and loop iteration boundaries) by overcoming the pointer-chasing
problem. Zhang and Torrellas [20] proposed a
hardware-assisted scheme for prefetching irregular applications
in shared-memory multiprocessors. Under their
scheme, programs are annotated to bind together groups
of data (e.g., fields in a record or two records linked by a
pointer), which are then prefetched under hardware con-
trol. Compared with our compiler-based approach, their
scheme has two shortcomings: (i) annotations are inserted
manually, and (ii) their hardware extensions are not
likely to be applicable in uniprocessors. Joseph and Grunwald
[21] proposed a hardware-based Markov prefetching
scheme which prefetches multiple predicted addresses upon
a primary cache miss. While Markov prefetching can potentially
handle chaotic miss patterns, it requires considerably
more hardware support and has less flexibility in
selecting what to prefetch and controlling the prefetch distance
than our compiler-based schemes.
To our knowledge, the only compiler-based pointer
prefetching scheme in the literature is the SPAID scheme
proposed by Lipasti et al. [22]. Based on an observation
that procedures are likely to dereference any pointers
passed to them as arguments, SPAID inserts prefetches
for the objects pointed to by these pointer arguments at
the call sites. Therefore this scheme is only effective if the
interval between the start of a procedure call and its dereference
of a pointer is comparable to the cache miss latency.
In an earlier study [8], we found that greedy prefetching offers
substantially better performance than SPAID by hiding
more latency while paying less overhead.
VII. Conclusions
While automatic compiler-inserted prefetching has
shown considerable success in hiding the memory latency
of array-based codes, the compiler technology for successfully
prefetching pointer-based data structures has thus far
been lacking. In this paper, we propose three prefetching
schemes which overcome the pointer-chasing problem,
we automate the most widely applicable scheme (greedy
prefetching) in the compiler, and we evaluate its performance
on both a modern superscalar uniprocessor (sim-
ilar to the MIPS R10000) and on a large-scale shared-memory
multiprocessor. Our uniprocessor experiments
show that automatic compiler-inserted prefetching can accelerate
pointer-based applications by as much as 45%.
In addition, the more sophisticated algorithms (which we
currently simulate by hand) can offer even larger performance
gains. Our multiprocessor experiments demonstrate
that the compiler can potentially provide equivalent performance
to hand-inserted prefetching even on parallel ap-
plications. These encouraging results suggest that the latency
problem for pointer-based codes may be addressed
largely through the prefetch instructions that already exist
in many recent microprocessors.
Acknowledgments
This work is supported by a grant from IBM Canada's
Centre for Advanced Studies. Chi-Keung Luk is partially
supported by a Canadian Commonwealth Fellowship.
Todd C. Mowry is partially supported by a Faculty Development
Award from IBM.
--R
"Software prefetching,"
Tolerating Latency Through Software-Controlled Data Prefetching
"Com- piler techniques for data prefetching on the PowerPC,"
"Data prefetching on the HP PA8000,"
"The MIPS R10000 superscalar microprocessor,"
"Design and evaluation of a compiler algorithm for prefetching,"
"SUIF: An infrastructure for research on parallelizing and optimizing compilers,"
"Compiler-based prefetching for recursive data structures,"
"Context-sensitive interprocedural points-to analysis in the presence of function pointers,"
"Interprocedural modification side effect analysis with pointer aliasing,"
Optimizing the Cache Performance of Non-Numeric Applications
"Support- ing dynamic data structures on distributed memory machines,"
"Tracing with pixie,"
"Fast fits,"
"Tolerating latency in multiprocessors through compiler-inserted prefetching,"
"SPLASH: Stanford parallel applications for shared memory,"
"The Stanford DASH multiproces- sor,"
"An effective on-chip preloading scheme to reduce data access penalty,"
"Data access microarchitectures for superscalar processors with compiler-assisted data prefetching,"
"Speeding up irregular applications in shared-memory multiprocessors: Memory binding and group prefetching,"
"Prefetching using Markov predic- tors,"
"SPAID: Software prefetching in pointer- and call-intensive environments,"
--TR
--CTR
Subramanian Ramaswamy , Jaswanth Sreeram , Sudhakar Yalamanchili , Krishna V. Palem, Data trace cache: an application specific cache architecture, ACM SIGARCH Computer Architecture News, v.34 n.1, March 2006
Shimin Chen , Phillip B. Gibbons , Todd C. Mowry, Improving index performance through prefetching, ACM SIGMOD Record, v.30 n.2, p.235-246, June 2001
Tatsushi Inagaki , Tamiya Onodera , Hideaki Komatsu , Toshio Nakatani, Stride prefetching by dynamically inspecting objects, ACM SIGPLAN Notices, v.38 n.5, May
Evangelia Athanasaki , Nikos Anastopoulos , Kornilios Kourtis , Nectarios Koziris, Exploring the performance limits of simultaneous multithreading for memory intensive applications, The Journal of Supercomputing, v.44 n.1, p.64-97, April 2008
Chi-Keung Luk, Tolerating memory latency through software-controlled pre-execution in simultaneous multithreading processors, ACM SIGARCH Computer Architecture News, v.29 n.2, p.40-51, May 2001 | compiler optimization;prefetching;performance evaluation;caches;shared-memory multiprocessors;recursive data structures;pointer-based applications |
297713 | Analysis of Temporal-Based Program Behavior for Improved Instruction Cache Performance. | AbstractIn this paper, we examine temporal-based program interaction in order to improve layout by reducing the probability that program units will conflict in an instruction cache. In that context, we present two profile-guided procedure reordering algorithms. Both techniques use cache line coloring to arrive at a final program layout and target the elimination of first generation cache conflicts (i.e., conflicts between caller/callee pairs). The first algorithm builds a call graph that records local temporal interaction between procedures. We will describe how the call graph is used to guide the placement step and present methods that accelerate cache line coloring by exploring aggressive graph pruning techniques. In the second approach, we capture global temporal program interaction by constructing a Conflict Miss Graph (CMG). The CMG estimates the worst-case number of misses two competing procedures can inflict upon one another and reducing higher generation cache conflicts. We use a pruned CMG graph to guide cache line coloring. Using several C and C++ benchmarks, we show the benefits of letting both types of graphs guide procedure reordering to improve instruction cache hit rates. To contrast the differences between these two forms of temporal interaction, we also develop new characterization streams based on the Inter-Reference Gap (IRG) model. | Introduction
CACHE memories are found on most microprocessors
designed today. Caching the instruction stream can
be very beneficial since instruction references exhibit a high
degree of spatial and temporal locality. Still, cache misses
will occur for one of three reasons [1]:
1. first time reference,
2. finite cache capacity, or
3. memory address conflict.
Our work here is focused on reducing memory address
conflicts by rearranging a program on the available memory
space. Analysis of program interaction can be performed
at a range of granularities, the coarsest being an individual
procedure [2]. We begin by considering the procedure
Call Graph Ordering (CGO) associated with a program.
The CGO captures local temporal interaction by weighting
its edges with the number of times one procedure follows
another during program execution.
We also consider the interaction of basic blocks contained
within procedures by identifying the number of cache lines
touched by each basic block in our Conflict Miss Graph
(CMG). We do not attempt to move basic blocks or split
procedures [3] though. We weight CMG edges by measuring
global temporal interaction between procedures occurring
in a finite window, containing as many entries as there
are cache lines. Program interaction outside this window
is not of interest because of the finite cache effect. We use
these graphs as input to our coloring algorithm to produce
an improved code layout for instruction caches.
To characterize the temporal behavior captured by these
graphs, we extend the Inter-Reference Gap (IRG) model
[4]. We define three new IRG-based streams that describe
different levels of procedure-based temporal interaction.
We show how we can use them to compare the temporal
content between the CGO, CMG and the Temporal Relationship
Graph (TRG) (as described by Gloy et al. in [5]).
There has been a considerable amount of work done on
code repositioning for improved instruction cache performance
[3], [5], [6], [7], [8], [9], [10]. In the following section
we discuss some of this work, as it relates to our work here.
A. Related Work
Pettis and Hansen [3] employ procedure and basic block
reordering as well as procedure splitting based on frequency
counts to minimize instruction cache conflicts. The layout
of a program is directed by traversing call graph edges in
decreasing edge weight order using a closest-is-best placement
strategy. Chains are formed by merging nodes, laying
them out next to each other until the entire graph is processed.
A number of related techniques have been proposed, focusing
on mapping loops [6], operating system code [10],
traces [9], and activity sets [7]. Two other approaches discussed
in [8] and [11] reorganize code based on compile-time
information.
The profile-guided algorithms described above use calling
frequencies to weight a graph and guide placement [3],
[6], [9], [10]. Our first approach also uses calling frequencies
but improves performance by intelligently placing procedures
in the cache by coloring cache lines. The second
algorithm described in this paper captures global temporal
information and attempts to minimize conflicts present between
procedures that do not immediately follow each other
during execution. It is similar in spirit with the approach
described in [5], with some differences that will be high-lighted
in Section V. Also, our graph coloring algorithm
works at a finer level of granularity (cache line size instead
of cache size [6], [11]), and can avoid conflicts encountered
when either forming chains with the closest-is-best heuristic
[3] or dealing with subgraphs having a size larger than
the cache.
This paper is organized as follows. In Sections II and III
we describe our graph construction algorithms. Section II
describes an improved graph pruning technique. In Section
IV we report simulation results. Section V reviews
the IRG temporal analysis model and presents new methods
for characterizing program interaction.
II. Call Graph Ordering
Program layout may involve two steps: 1) constructing
a graph-based representation of the program and 2) using
the graph to perform layout of the program on the available
memory space. A Call Graph is a procedure graph
having edges between procedures that call each other. The
edges are weighted with the call/return frequency captured
in the program profile. Each procedure is mapped to a single
vertex, with all call paths between any two procedures
condensed into a single edge between the two vertices in
the graph. Edge weights can be derived from profiling information
or estimated from the program control flow [8],
[12]. In this paper we concentrate on profile-based edge
weights.
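Accumulating such a profile-based call graph is straightforward; the C sketch
below assumes procedures are numbered below MAXP and that the profiler
delivers one event per call, neither of which is specified by the paper.

  #define MAXP 1024                        /* assumed bound on procedures */

  /* weight[i][j] (i < j) counts how many times procedures i and j
     call each other, in either direction, during the profiled run   */
  static unsigned long weight[MAXP][MAXP];

  void record_call(int caller, int callee) /* one call event from the profile */
  {
      int a = caller < callee ? caller : callee;
      int b = caller < callee ? callee : caller;
      weight[a][b]++;
  }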
After constructing the Call Graph we lay out the program
using cache line coloring. We start by dividing the
cache into a set of colors, one color for each cache line. For
each procedure, we count the number of cache lines needed
to hold the procedure, record the cache colors used to map
the procedure, and keep track of the unavailable-set of colors
(i.e., the cache lines where the procedure should not be
mapped to).
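In code, the per-procedure bookkeeping amounts to something like the following
C sketch; the 256-line, 32-byte-line geometry mirrors the 8KB direct-mapped
cache simulated later, and the byte-array representation of the color sets is
an assumption of this example.

  #define LINE_SIZE    32
  #define CACHE_LINES  256                   /* 8KB / 32-byte lines */

  /* number of cache lines (colors) a procedure of `bytes` bytes spans */
  int proc_colors(int bytes)
  {
      return (bytes + LINE_SIZE - 1) / LINE_SIZE;
  }

  /* record the colors covered by a procedure mapped starting at cache line
     `start`, wrapping around the cache; `used` has CACHE_LINES entries      */
  void mark_colors(int start, int ncolors, unsigned char *used)
  {
      for (int i = 0; i < ncolors; i++)
          used[(start + i) % CACHE_LINES] = 1;
  }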
We define the popular procedure set as those procedures
which are frequently visited. The popular edge set contains
the frequently traversed edges. The rest of the procedures
(and edges) will be called unpopular. Unpopular
procedures are pruned from the graph. Pruning reduces
the amount of work for placement, and allows us to focus
on the procedures most likely to encounter misses. A discussion
of the base pruning algorithm used can be found
in [2].
Note, there is a difference between popular procedures
and procedures that consume a noticeable portion of a pro-
gram's overall execution time. A time consuming procedure
may be labeled unpopular because it rarely switches
control flow to another procedure. If a procedure rarely
switches control flow, it causes a small number of conflict
misses with the rest of the procedures.
The algorithm sorts the popular edges in descending edge
weight order. We then traverse the sorted popular edge list,
inspect the state (i.e., mapped or unmapped) of the two
procedures forming the edge and map the procedures using
heuristics. Figure 1 provides a pseudo code description
of color mapping. A more complete description can be
found in [2]. This process is repeated until all of the edges
in the popular set have been processed. The unpopular
procedures fill the holes left from coloring using a simple
Input: graph G(P,E);
Sort edges E in descending order based on weight;
Eliminate all edges with weight below threshold T;
FOR-EACH (remaining edge between procedures Pi and Pj) -
SELECT(state of Pi, state of Pj)
CASE I: (Pi is unmapped and Pj is unmapped) -
Arbitrarily map Pi and Pj, forming a
compound node Pij
CASE II: (both Pi and Pj are mapped, but they reside
in two different compound nodes Ci and Cj) -
Concatenate the two compound nodes Ci and Cj,
minimizing the distance between Pi and Pj;
If there is a color conflict, shift Ci in the
color space until there is no conflict;
If a conflict can not be avoided, return Ci to
its original position;
CASE III: (either Pi or Pj is mapped, but not both) -
same as CASE II
CASE IV: (both Pi and Pj are mapped and they belong
to the same compound node Ck) -
If there is no conflict, return them in position;
Else -
Move the procedure closest to an end of the
compound node Ck (Pi) to that end of Ck (outside
of the compound node);
If there is still a conflict, shift Pi in the color
space until no conflict occurs with Pj;
If conflict can not be avoided, leave Pi to its
original position;
Update the unavailable set of colors;
Fill in holes created by CASES II, III, and IV;
Fig. 1. Pseudo code for our cache line coloring algorithm.
depth-first traversal of the unpopular edges joining them.
The algorithm in Fig.1 assumes a direct-mapped cache
organization. For associative caches our algorithm breaks
up the address space into chunks, equal in size to
(number of cache sets times the cache line size). Therefore, the
number of sets represents the number of available colors in
the mapping. The modified color mapping algorithm keeps
track of the number of times each color (set) appears in the
procedure's unavailable-set of colors. Mapping a procedure
to a color (set) does not cause any conflicts as long as the
number of times that color (set) appears in the unavailable-
set of colors is less than the degree of associativity of the
cache.
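A sketch of the modified conflict test is shown below; the NSETS and ASSOC
constants and the per-set counter array are assumptions introduced for
illustration.

  #define NSETS 128                          /* assumed number of cache sets */
  #define ASSOC 2                            /* assumed associativity        */

  /* unavail[s] counts how many conflicting procedures already occupy set s.
     Mapping a procedure onto sets [start, start+ncolors) is conflict-free
     only if no touched set already appears ASSOC times in the
     unavailable-set of colors.                                              */
  int has_conflict(int start, int ncolors, const int *unavail)
  {
      for (int i = 0; i < ncolors; i++)
          if (unavail[(start + i) % NSETS] >= ASSOC)
              return 1;
      return 0;
  }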
Next, we look at how to efficiently eliminate a majority
of the work spent on coloring by using an aggressive graph
pruning algorithm.
A. Pruning Rules For Procedure Call Graphs
Pruning a call graph is done using a fixed threshold value
(selected edge weight) [2] . In this section we present pruning
rules that can reduce the size of the graph that is used
in cache coloring. They are specifically designed to reduce
C = number of lines in the cache;
N = number of procedures P (nodes) in graph G;
NodeWeight_i = sum of all incident edges on
  procedure Pi in graph G;
Sort procedures based on increasing NodeWeight_i;
DO-WHILE (at least one procedure is pruned) -
  FOR-EACH unpruned procedure Pi -
    num_i = number of neighbors of Pi;
    size_i = number of cache lines comprising Pi;
    S_i = sum of the sizes of the num_i neighbors of Pi;
    IF (Pi can always be mapped to size_i contiguous
        cache lines without conflicting with its neighbors)
      Prune procedure Pi and all edges incident on Pi;
Fig. 2. Pseudo code for our C pruning rule.
the number of first-generation cache conflicts.
We assume we are using a direct-mapped cache containing
C cache lines. The program is represented as an undirected
graph G(P,E) where each node represents a procedure
and each edge (i, j) represents a procedure call in the
program. The number of cache lines spanned by each procedure
i is size[i]. For each edge (i, j), weight[i, j] is the number
of times procedures i and j follow one another in the
control flow (in either order).
A procedure mapping M is an assignment of each procedure
i to size[i] adjacent cache lines within the cache
(with wraparound). The cost of a procedure mapping is
the sum of all weight[i, j] for all procedures i and j, such
that (i, j) is an edge and i and j overlap in the cache. An optimal
mapping is one that is less costly than any other mapping.
Note that the cost of a mapping depends only on the
number of immediately adjacent procedures whose mappings
in the cache conflict. Conflicts between procedures
that do not call one another are not considered. Further-
more, the cost of assigning two adjacent procedures i and
j to conflicting cache lines is a constant, equal to the number
of conflicts, even though the actual number of replaced
cache lines may be smaller.
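The cost function just defined translates directly into the following C
sketch; the edge-list representation and the brute-force overlap test are
assumptions of this example.

  typedef struct { int i, j; unsigned long w; } edge;

  /* do procedures placed at cache lines si and sj, spanning ni and nj
     lines (with wraparound over a C-line cache), share any line?       */
  static int overlap(int si, int ni, int sj, int nj, int C)
  {
      for (int a = 0; a < ni; a++)
          for (int b = 0; b < nj; b++)
              if ((si + a) % C == (sj + b) % C)
                  return 1;
      return 0;
  }

  unsigned long mapping_cost(const edge *e, int ne, const int *start,
                             const int *size, int C)
  {
      unsigned long cost = 0;
      for (int k = 0; k < ne; k++)
          if (overlap(start[e[k].i], size[e[k].i],
                      start[e[k].j], size[e[k].j], C))
              cost += e[k].w;    /* constant penalty per conflicting pair */
      return cost;
  }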
B. C Pruning Rule
Consider a cache mapping problem P. It is possible to
determine in some cases that a particular node i will be
able to be mapped to the cache without causing any con-
flicts, regardless of where in the cache all adjacent nodes
are eventually mapped. In this case i, and all edges connected
to i, can be deleted from the graph, creating a new
cache mapping problem P 0 with one less node. Figure 2
provides pseudo code for our pruning algorithm.
This pruning rule is a generalization of the rules described
in [13] to perform graph coloring. The graph coloring
problem is to assign one of K colors to the nodes of
the graph such that adjacent nodes are not assigned the
same color. If a node has fewer than K neighbors, it can be deleted
because, regardless of how its neighbors are eventually col-
ored, there will definitely be at least one color left over that
can be assigned to it. The deleted nodes are then colored
in the reverse order of their deletion.
The remaining (non-prunable) graph is passed to our coloring
algorithm. Once coloring has been performed, each
pruned node must be mapped. The nodes are laid out in
the opposite order of their deletion.
III. Conflict Miss Graphs
Next we consider cache misses which can occur between
procedures many procedures away in the call graph, as well
as on different call chains [14]. We capture temporal information
by weighting the edges of a procedure graph with
an estimation of the worst case number of conflict misses
that can occur between any two procedures. We then use
the graph to apply cache line coloring to place procedures
in the cache address space. We call this graph a Conflict
Miss Graph (CMG).
The complete algorithm is described in [14]. We summarize
it here and will contrast it with the CGO in Section V
using Inter-Reference Gap analysis.
A. Conflict Miss Graph Construction
The CMG is built using profile data. We assume a worst-case
scenario where procedures completely overlap in the
cache address space every time they interact. Given a cache
configuration we determine the size of a procedure P i in
cache lines. We also compute the number of unique cache
lines spanned by every basic block executed by a procedure,
l i
. We identify the first time a basic block is executed, and
label those references as globally unique accesses, gl i
The CMG is an undirected procedure graph with edges
being weighted according to our worst case miss model [14].
The edge weights are updated based on the contents of an
N-entry table, where N is the number of cache lines. The
table is fully-associative and uses an LRU replacement pol-
icy. Every entry (i.e., cache line) in the table is called
live. A procedure that has at least one live cache line is
also called live. When P i
is activated, we update the edge
weights between P i
and all procedures that have at least
one live cache line and were activated since the last activation
of P i
(if this last activation is captured in the LRU
table). The LRU table allows us to estimate the finite cache
effect.
We increment the CMG edge weight between P i
and a
live procedure P j
by the minimum of: (i) the accumulated
number of unique live cache lines of P j (since its last oc-
currence) and (ii) the number of unique cache lines of P i 's
current activation (excluding cold-start misses). A detailed
example of updating CMG edge weights can be found in
[14].
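The sketch below gives the general shape of this update in C. It is a
deliberate simplification: the LRU table is modeled as an array that is
shifted on every insertion, the "activated since the last activation of P i"
condition is omitted, and all sizes and names are assumptions rather than the
implementation described in [14].

  #include <string.h>

  #define NLINES 256                 /* LRU table entries = cache lines (assumed) */
  #define MAXP  1024                 /* assumed bound on procedures               */

  static int lru_owner[NLINES];      /* owning procedure per entry, -1 if empty */
  static int live_lines[MAXP];       /* live table entries per procedure        */
  static unsigned long cmg[MAXP][MAXP];

  void cmg_init(void) { memset(lru_owner, -1, sizeof lru_owner); }

  /* called once per activation of procedure p touching `lines` unique
     cache lines (globally unique, cold-start lines excluded by the caller) */
  void cmg_activate(int p, int lines)
  {
      /* worst-case conflict: min(live lines of the other, lines of this one) */
      for (int q = 0; q < MAXP; q++)
          if (q != p && live_lines[q] > 0)
              cmg[p < q ? p : q][p < q ? q : p] +=
                  (unsigned long)(live_lines[q] < lines ? live_lines[q] : lines);

      /* insert p's lines at the head of the LRU table, evicting the oldest */
      for (int k = 0; k < lines && k < NLINES; k++) {
          int victim = lru_owner[NLINES - 1];
          if (victim >= 0)
              live_lines[victim]--;
          memmove(&lru_owner[1], &lru_owner[0],
                  (NLINES - 1) * sizeof lru_owner[0]);
          lru_owner[0] = p;
          live_lines[p]++;
      }
  }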
CMG edge weights are more accurate than CGO edge
weights because (i) CGO edge weights do not record the
number of cache lines that may conflict per call, and (ii)
interaction between procedures that do not directly call
each other is not captured.
IV. Experimental Results
We use trace-driven simulation to quantify the instruction
cache performance of the resulting layouts. Traces are
generated using ATOM [15] on a DEC 3000 AXP workstation
running Digital Unix V4.0. All applications are
compiled with the DEC C V5.2 and DEC C++ V5.5 com-
pilers. The same input is used to train the algorithm
and gather performance results. We simulated an 8KB,
direct-mapped, instruction cache with a 32-byte line size
(similar in design to the DEC Alpha 21064 and 21164
instruction caches). Our benchmark suite includes perl
from SPECINT95, flex (generator of lexical analyzers), gs
(ghostscript postscript viewer) and bison (a C parser gen-
erator). It also includes PC++2dep (C++ front-end written
in C/C++), f2dep (Fortran front-end written in C/C++),
dep2C++ (C/C++ program translating Sage internal representation
to C++ code) and ixx (IDL parser written in
C++).
Table
I presents the static and dynamic characteristics of
the benchmarks. Column 2 shows the input used to both
test and train our algorithms. Columns 3-5 list the total
number of instructions executed, the static size of the application
in kilobytes and the number of static procedures in
the program. Column 6 presents the percentage of the program
that contains popular procedures in the CMG while
column 7 contains the percentage of procedures that were
found to be popular (for CMG). The last column presents
the percentage of unactivated procedures used to fill in the
gaps left from the color mapping.
To prune the CMG graph, we form the popular set from
those procedures that are connected by edges that contribute
up to 80% of the total sum of edge weights in the
CMG [14]. Notice that the pruning algorithm reduces the
size of the CMG by 80-97%, and reduces the amount of the executable
that must be carefully placed by 77.7-94.5%. This allows us to
concentrate on the important procedures in the program.
A. Simulation Results
We compare simulation results against the ordering produced
by the DEC compiler (static DFS ordering of proce-
dures) and CGO using a fixed threshold value for pruning
(no aggressive pruning was employed).
Table
II shows the instruction cache miss rates. In all
cases, the same inputs were used for both training and
testing. The first column denotes the application while
columns 2-4 (7-9) show the instruction cache miss rates
(number of cache misses) for DFS, CGO and CMG respec-
tively. Columns 5 and 6 show the relative improvement of
CMG over DFS and CGO respectively.
As we can see from Table II, the average instruction
cache miss rate for CMG is reduced by 30% on average over
the DFS ordering, and by 21% on average over the CGO
ordering. CMG improves performance against both static
DFS and CGO over all benchmarks except bison, flex and
gs. Bison and flex already have a very low miss rate and
no further improvement can be achieved. Gs has a large
number of popular procedures that cannot be mapped in
the cache in a way that significantly reduces the miss rate.
Next, we apply the C pruning rule to CGO for
four benchmarks, bison, flex, gs, and perl. We have also
tried to apply this approach to CMG, but found we were
unable to significantly reduce the size of the graph. As
shown in Tables III and IV, the pruning rule deletes all
125 nodes from bison and completely eliminates all first-generation
conflict misses. Similarly, most of the nodes
are pruned from the other benchmarks, accompanied by a
significant drop in the number of first-generation conflict
misses. However, this drop is usually followed
by an increase in the total number of misses. By deleting
nodes and edges that do not contribute to first-generation
conflict misses, the coloring algorithm is deprived of information
that can be used to prevent higher order misses.
In the case of the bison benchmark, the nodes are inserted
into the mapping with complete disregard for higher order
conflicts.
These results suggest that node pruning rules such as C
can be useful as part of a cache conflict reduction strategy,
but only when paired with other techniques that prevent
higher order cache conflicts from canceling out the benefits
of reducing first-order conflicts.
B. Input training sensitivity
Since our procedure reordering algorithm is profile-
driven, we tried different training and test input files as
shown in Table V. Column 2 (3) has the training (test) input
while column 4 shows the size of the traces in millions
of instructions for the test and the training inputs (the last
one in parentheses). The last three columns present the
miss ratios for each of the algorithms simulated.
As we can see from Table V, although the performance of
both the CGO and the CMG approach drops compared to
the simulations using the same inputs, the relative advantage
of CMG against CGO and static DFS still remains.
In fact the performance gain is of the same order for all
benchmarks, i.e. CGO and CMG achieve similar performance
for bison and gs, while CMG improves significantly
the miss ratio of ixx.
V. Temporal locality and procedure-based IRG
Next we characterize the temporal interaction exposed
by CGO, CMG and the Temporal Relationship Graph
using an extended version of the Inter-Reference
Gap (IRG) model [4].
A. Procedure-based IRG
In [4], Phalke and Gopinath define the IRG for an address
as the number of memory references between successive
references to that address. An IRG stream for an address
in a trace is the sequence of successive IRG values for
that address and can be used to characterize its temporal
locality. Similarly, we can measure the temporal locality
of larger program granules such as basic blocks, cache lines
or procedures. The accuracy of the newly generated IRG
stream depends on the interval granularity. In this work we
set the program unit under study to be a procedure while
we vary the interval definition.
Program Input Instr. Exe Size # Static Pop Procs Unpop Procs
in million in KB Procs % Exe Size % Procs % Exe Size
perl primes 12 512 671 4.9 (25.1K) 5.2 3.5 (18.1K)
flex fixit 19 112 170 14.8 (16.6K) 17.6 6.5 (7.3K)
bison objparse 56 112 158 22.4 (25.1K) 22.1 6.1 (6.9K)
pC++2dep sample 19 480 665 9.5 (45.7K) 16.3 10.3 (49.5K)
dep2C++ sample 31 560 1338 4.8 (27.1K) 1.7 1.7 (9.5K)
gs tiger 34 496 1410 12.9 (64.0K) 11.2 9.3 (46.3K)
ixx layout 48 472 1581 5.7 (27.2K) 5.1 2.4 (11.7K)
I
Attributes of traced applications. The attributes include the number of executed instructions, the application executable
size, the number of static procedures, the percentage of program's size occupied by popular procedures, the percentage of
procedures that were found to be popular and the percentage of unactivated procedures that were used to fill memory
gaps left after applying coloring.
I-Cache Miss Rate Reduction # I-Cache Misses
Program DFS CGO CMG DFS CGO DFS CGO CMG
perl 4.72% 4.60% 3.77% .95 .83 588,123 572,650 469,329
flex 0.53% 0.45% 0.45% .08 .00 100,488 85,538 85,478
bison 0.04% 0.04% 0.05% -.01 -.01 21,798 21,379 26,792
pC++2dep 4.72% 5.46% 3.68% 1.04 1.78 895,261 1,035,639 698,003
dep2C++ 3.92% 3.46% 3.11% .81 .35 1,205,076 1,063,682 957,102
gs 3.45% 2.09% 2.08% 1.37 .01 1,176,335 712,230 709,643
ixx 5.83% 4.42% 2.57% 3.26 1.85 2,843,330 2,154,747 1,251,022
Avg. 3.52% 3.15% 2.48%
II
Instruction cache performance for static DFS, CGO and CMG-based reordering. Column 1 lists the application. Columns
2-4 show the instruction miss rates. The next two show the percent improvement over each by our algorithm. The last
three columns show the number of instruction cache misses.
Program Input Visited Pruned Pruned Pruned
Procs. 1st pass 2nd pass 3rd pass
bison objparse 125 125 0 0
flex fixit 97 61 5 2
gs tiger 532 524 5 0
perl primes 209 113 4 0
III
The results of applying the C pruning rules to Call Graphs for four applications. Passes refer to pruning iterations over
the graph. The algorithm is finished when no more nodes can be pruned in the graph.
           CGO                                      C*
Program    Miss Rate  first order  higher order     Miss Rate  first order  higher order
bison      .04        1316         19812            .14        0            79888
flex       .45        55004        30280            .51        42467        55233
gs         2.09       530908       658066           2.39       25663        791567
perl       4.60       99327        473070           4.45       92914        462056
TABLE IV
The results of applying the C pruning rules to Call Graphs for four applications. Cache parameters are the same as
those used above.
Program    Training   Test     Trace          Static DFS  CGO        CMG
           input      input    instr.         Miss rate   Miss rate  Miss rate
bison      objparse   cparse   35.6M (56.1)   0.05%       0.04%      0.06%
flex       fixit      scan     23.2M (19.1)   0.49%       0.43%      0.38%
gs         tiger      golfer   15.5M (34.1)   3.39%       2.49%      2.51%
ixx        layout     widget   52.7M (48.7)   5.89%       4.45%      2.54%
perl       primes     jumble   71.9M (12.4)   4.36%       4.61%      4.16%
TABLE V
Miss ratios when using different test and training inputs. The inputs are described in columns 2-3, while the sizes of their
corresponding traces are presented in column 4. Columns 5-7 give the instruction cache miss ratios.
The original IRG model exploits the temporal locality
of a single procedure, but not the temporal interaction between
procedures. Therefore, we redefine the IRG value for
a procedure pair A; B as the number of unique activated
procedures between invocations of A and B. We will refer
to this value as the Inter-Reference Procedure Gap (IRPG).
The CGO edge weights record part of the IRPG stream
since they capture the IRPG values of length 1.
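A minimal sketch of the IRPG computation for a single procedure pair follows, under the same assumed trace format as above; the exact counting convention (whether the pair members themselves are included in the gap) is our reading, not necessarily the authors' definition.

    def irpg_stream(trace, a, b):
        # IRPG stream for the pair (a, b): whenever an invocation of one of
        # them follows an invocation of the other, record the number of
        # unique procedures activated strictly in between.
        stream = []
        last = None          # which of a/b was seen most recently
        between = set()      # unique procedures seen since then
        for proc in trace:
            if proc == a or proc == b:
                if last is not None and proc != last:
                    stream.append(len(between))
                last = proc
                between = set()
            elif last is not None:
                between.add(proc)
        return stream

    # irpg_stream(["A", "x", "y", "x", "B", "A"], "A", "B")  ->  [2, 0]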
In the TRG every node represents a procedure and every
edge (A, B) is weighted by the number of times procedure A
follows procedure B, or vice versa, but only when both of them are found
inside a moving time window. The window includes previously
invoked procedures and its size is proportional to the
size of the cache. The window's contents are managed as a
LRU queue. The temporal interaction recorded by a TRG
can be characterized by the Inter-Reference Intermediate
Line Gap (IRILG) whose elements are equal to the number
of unique cache lines activated between successive A
and B invocations. The decision of when to update a TRG
edge depends on the size of the window, or equivalently on
the values present in the IRILG stream. The edge weight
is simply the count of all IRILG elements with a value
less than the predefined window size. The TRG captures
temporal interaction at a more detailed level than CGO
because the IRILG stream is richer in content than the
IRPG stream.
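One way the IRILG-gated edge update could be expressed is sketched below. Here lines_of() is a hypothetical helper giving the set of cache lines a procedure occupies, and the incremental LRU management of the real TRG window is abstracted away: the sketch simply counts the IRILG elements that fall below the window size, which yields the same weight.

    def trg_edge_weight(trace, lines_of, a, b, window_lines):
        # TRG-style weight for edge (a, b): count the IRILG elements (the
        # number of unique cache lines activated between successive
        # invocations of a and b) that are smaller than the window size.
        weight = 0
        last = None
        lines_between = set()
        for proc in trace:
            if proc == a or proc == b:
                if last is not None and proc != last:
                    if len(lines_between) < window_lines:   # IRILG < window
                        weight += 1
                last = proc
                lines_between = set()
            elif last is not None:
                lines_between |= lines_of(proc)
        return weight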
The CMG edge weight between procedures A and B is
updated only when A and B follow one another inside a
moving time window proportional in size to the cache size.
A CMG edge weight is updated whenever the IRILG value
is less than the window size. In both the TRG and the
CMG, procedures interact as long as they are found inside
the time window. A CMG, however, replaces procedures
in the time window based on the age of individual lines in
a procedure [14], while a TRG manages replacement on an
entire procedure basis [5].
In addition, a CMG edge weight between A and B is
incremented by the minimum of the unique live cache lines
of the successive invocations of A and B. The TRG simply
counts the number of times A and B follow each other. We
define the Inter-Reference Active Line Set (IRALS) for a
procedure pair as the sequence of the number of unique live
cache lines referenced between any successive occurrences
of A and B. Each IRALS element value is computed according
to the Worst Case Miss analysis presented in Section
III. A CMG edge weight is equal to the sum of the
IRALS values whose corresponding IRILG values are less
than the window size.
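A corresponding sketch of the CMG weighting differs from the TRG version above only in what is added per qualifying occurrence: the IRALS value rather than one. The helper live_lines() stands in for the Worst Case Miss analysis of Section III and is purely hypothetical here.

    def cmg_edge_weight(trace, lines_of, live_lines, a, b, window_lines):
        # CMG-style weight for edge (a, b): for each successive (a, b)
        # occurrence whose IRILG value is below the window size, add the
        # corresponding IRALS value (number of unique live cache lines for
        # the pair, here delegated to live_lines) instead of 1.
        weight = 0
        last = None
        lines_between = set()
        for proc in trace:
            if proc == a or proc == b:
                if last is not None and proc != last:
                    if len(lines_between) < window_lines:
                        weight += live_lines(last, proc)    # IRALS element
                last = proc
                lines_between = set()
            elif last is not None:
                lines_between |= lines_of(proc)
        return weight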
Table
VI shows the IRPG, IRILG, and IRALS element
frequency distribution for two edges in the ixx benchmark.
The selected edges have the 4th and 12th highest calling
frequency in the CGO popular edge list and are labeled as
e 4 and e 12 respectively. We classify the stream values in
the ranges shown in columns 1, 4, and 7, and present per-stream frequency distributions in columns 2-3, 5-6, and
8-9. The numbers in parentheses indicate how closely CGO
approximates the temporal information captured by the
stream under consideration. For example, while 66.1% of
IRILG elements for e 12 lie in the range between 2 and 10
unique cache lines, only 62.4% of them are recorded in the
CGO.
The three different approaches to edge weighting have a significant
impact on the global edge ordering and the final
procedure placement.
Fig. 3. Relative edge ordering for the intersection of the CMG and
CGO popular edge orderings in ixx (x-axis: edge index in the CGO ordering; y-axis: edge index in the CMG ordering).
Fig. 3 compares the ordering of the common popular
edges of CGO and CMG, in ixx. A point at location (3,4)
means that the popular edge under consideration was found
3rd in CGO and 4th in CMG ordering. Points on the
straight line correspond to edges with the same relative
position in both edge lists. Points lying above (below) the
straight line indicate edges with a higher priority in the
CGO (CMG) edge list. Notice that very few edges fall below
the straight line due to the artificially inflated edge
index in the CMG edge list (which is much larger than
IRPG value  4th  12th    IRILG value  4th  12th    IRALS value  4th  12th
TABLE VI
Frequency distribution of the IRPG, IRILG, and IRALS sequences for two procedure pairs (4th and 12th in calling
frequency) of the ixx benchmark, grouped into value ranges for each stream.
Program    CGO pop   CMG pop   CGO ∩ CMG   CGO pop   CMG pop   CGO ∩ CMG
           Procs     Procs     Procs       Edges     Edges     Edges
bison      43
flex       28        28        26          94        23
gs         94        158       94          105       1478      105
perl       20
TABLE VII
Intersections of the CMG and CGO popular procedure and edge sets.
the CGO one) and the different pruning algorithms used.
Although a lot of highly weighted edges maintained their
relative positions, the significant performance improvement
for CMG came from edges that were promoted higher in
the edge list ordering.
Table
VII shows the intersection between the CMG and
CGO popular procedure and edge sets. Columns 2-4 (5-7)
list the CGO and CMG popular procedure (edge) sets
along with their intersections. The numbers shown in Table
VII are sensitive to the pruning algorithm, but we compare
them to better illustrate the differences between the
CMG and CGO approaches. Although one procedure set
is always the superset of the other, the CMG edge list is
always larger than the CGO edge list.
VI.
Acknowledgments
We would like to acknowledge the contributions of H.
Hashemi and B. Calder to this work. This research was
supported by the National Science Foundation CAREER
Award Program, by IBM Research, and by Microsoft Research.
VII. Conclusions
The performance of cache-based memory systems is critical
in today's processors. Research has shown that compiler
optimizations can significantly reduce memory la-
tency, and every opportunity should be taken by the compiler
to do so.
In this paper we presented two profile-guided algorithms
for procedure reordering which take into consideration not
only the procedure size but the cache organization as well.
While CGO attempts to minimize first-generation conflicts,
CMG targets higher generation misses. Both approaches
use pruned graph models to guide procedure placement via
cache line coloring. The CMG algorithm improved instruction
cache miss rates on average by 30% over a static depth
first search of procedures, and by 21% over CGO.
We also introduced three new sequences (IRPG, IRILG
and IRALS) based on the IRG model, to better characterize
the contents of each graph model.
--R
"Evaluating associativity in cpu caches,"
"Efficient Procedure Mapping using Cache Line Coloring,"
"Profile-Guided Code Positioning,"
"An Inter-Reference Gap Model for Temporal Locality in Program Behavior,"
"Procedure Placement using Temporal Ordering Information,"
"Program Optimization for Instruction Caches,"
"Code Reorganization for Instruction Caches,"
"Procedure Mapping using Static Call Graph Estimation,"
"Achieving High Instruction Cache Performance with an Optimizing Compiler,"
"Optimizing instruction cache performance for operating system intensive workloads,"
"Compile-Time Instruction Cache Optimizations,"
"Predicting program behavior using real or estimated profiles,"
"Register allocation and spilling via graph coloring,"
"Temporal-based Procedure Reordering for improved Instruction Cache Performance,"
--TR
--CTR
Altman , David Kaeli , Yaron Sheffer, Guest Editors' Introduction: Welcome to the Opportunities of Binary Translation, Computer, v.33 n.3, p.40-45, March 2000
S. Bartolini , C. A. Prete, Optimizing instruction cache performance of embedded systems, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.4, p.934-965, November 2005
S. Bartolini , C. A. Prete, A proposal for input-sensitivity analysis of profile-driven optimizations on embedded applications, ACM SIGARCH Computer Architecture News, v.32 n.3, p.70-77, June 2004
Mohsen Sharifi , Behrouz Zolfaghari, YAARC: yet another approach to further reducing the rate of conflict misses, The Journal of Supercomputing, v.44 n.1, p.24-40, April 2008 | temporal locality;instruction caches;conflict misses;program reordering;graph pruning;graph coloring |
297715 | Randomized Cache Placement for Eliminating Conflicts. | AbstractApplications with regular patterns of memory access can experience high levels of cache conflict misses. In shared-memory multiprocessors conflict misses can be increased significantly by the data transpositions required for parallelization. Techniques such as blocking which are introduced within a single thread to improve locality, can result in yet more conflict misses. The tension between minimizing cache conflicts and the other transformations needed for efficientparallelization leads to complex optimization problems for parallelizing compilers. This paper shows how the introduction of a pseudorandom element into the cache index function can effectively eliminate repetitive conflict misses and produce a cache where miss ratio depends solely on working set behavior. We examine the impact of pseudorandom cache indexing on processor cycle times and present practical solutions to some of the major implementation issues for this type of cache. Our conclusions are supported by simulations of a superscalar out-of-order processor executing the SPEC95 benchmarks, as well as from cache simulations of individual loop kernels to illustrate specific effects. We present measurements of Instructions committed Per Cycle (IPC) when comparing the performance of different cache architectures on whole-program benchmarks such as the SPEC95 suite. | Introduction
If the upward trend in processor clock frequencies during
the last ten years is extrapolated over the next ten years,
we will see clock frequencies increase by a factor of twenty
during that period [1]. However, based on the current 7%
per annum reduction in DRAM access times [2], memory
latency can be expected to reduce by only 50% in the next
ten years. This potential ten-fold increase in the distance
to main memory has serious implications for the design of
future cache-based memory hierarchies as well as for the
architecture of memory devices themselves.
Each block of main memory can be placed in exactly
one set of blocks in cache. The chosen set is determined
by the indexing function. Conventional caches typically
extract a field of m bits from the address and use this to
select one block from a set of 2 m . Whilst easy to imple-
ment, this indexing function is not robust. The principal
weakness is its susceptibility to repetitive conflict misses.
For example, if C is the number of cache sets and B is
the block size, then addresses A1 and A2 map to the same
cache set if (A1 div B) mod C = (A2 div B) mod C, where div denotes integer division. If A1 and A2 collide
on the same cache set, then addresses A1 + k and
A2 + k also collide in cache, for any integer k, except when
A1 mod B differs from A2 mod B and adding k carries only one of the two
addresses across a block boundary. There are two common
cases when this happens. Firstly, when accessing a stream
of addresses {A1, ..., Am}, if Ai collides with Ai+k,
then there may be up to m - k conflict misses in this stream.
Secondly, when accessing elements of two distinct arrays, if b0[i]
collides with b1[j], then b0[i+k] collides with b1[j+k]
under the conditions outlined above. Set-associativity
can help to alleviate such conflicts, but is not
an effective solution for repetitive and regular conflicts.
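To make the failure mode concrete, the following sketch (parameters chosen only for illustration, roughly matching the 8 KB, 32-byte-line cache simulated later) shows how a power-of-two stride collapses an entire address stream onto a single set under conventional indexing.

    def cache_set(addr, block=32, sets=256):
        # Conventional indexing: the middle address bits select the set.
        return (addr // block) % sets

    # With a stride equal to the cache capacity (sets * block = 8 KB here),
    # every reference in the stream falls into the same set, so a
    # direct-mapped cache thrashes even though only 16 blocks are live.
    stream = [i * 8192 for i in range(16)]
    print({cache_set(a) for a in stream})      # -> {0}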
One of the best ways to control locality in dense matrix
computations with large data structures is to use a tiled (or
blocked) algorithm. This is effectively a re-ordering of
the iteration space which increases temporal locality. How-
ever, previous work has shown that the conflicts introduced
by tiling can be a serious problem [3]. In practice, until
now, this has meant that compilers which tile loop nests
really ought to compute the maximal conflict-free tile size
for given values of B, major array dimension N and cache
capacity C. Often this will be too small to make it worth-while
tiling a loop, or perhaps the value of N will not be
known at compile time. Gosh et al. [4] present a framework
for analyzing cache misses in perfectly-nested loops with
affine references. They develop a generic technique for determining
optimum tile sizes, and methods for determining
array padding sizes to avoid conflicts. These methods require
solutions to sets of linear Diophantine equations and
depend upon there being sufficient information at compile
time to find such solutions.
Table
I highlights the problem of conflict misses with
reference to the SPEC95 benchmarks. The programs were
compiled with the maximum optimization level and instrumented
with the atom tool [5]. A data cache similar to
the first-level cache of the Alpha 21164 microprocessor was
simulated: 8 KB capacity, 32-byte lines, write-through and
no write allocate. For each benchmark we simulated the
first 2 × 10^9 operations. Because of the no-write-allocate
feature, the tables below refer only to load operations.
Table
I shows the miss ratio for the following cache or-
ganizations: direct-mapped, two-way associative, column-
associative [6], victim cache with four victim lines [7], and
two-way skewed-associative [8], [9].
Of these schemes, only the two-way skewed-associative
cache uses an unconventional indexing scheme, as proposed
by its author. For comparison, the miss ratio of a fully-associative
cache is shown in the penultimate column. The
miss ratio difference between a direct-mapped cache and
that of a fully-associative cache is shown in the right-most
column of table I, and represents the direct-mapped conflict
miss ratio (CMR) [2]. In the case of hydro2d and apsi
some organizations exhibit lower miss ratios than a fully-associative
cache, due to sub-optimality of lru replacement
in a fully-associative cache for these particular programs.
Effectively, the direct-mapped conflict miss ratio represents
Program     DM    2W    CA    VC    SA    FA    CMR
tomcatv     53.8  48.1  47.0  26.6  22.1  12.5  41.3
su2cor      11.0  9.1   9.3   9.5   9.6   8.9   2.1
hydro2d     17.6  17.1  17.2  17.0  17.1  17.5  0.1
mgrid       3.8   3.6   4.2   3.7   4.1   3.5   0.3
applu       7.6   6.4   6.5   6.9   6.7   5.9   1.7
turb3d      7.5   6.5   6.4   7.0   5.4   2.8   4.7
apsi        15.5  13.3  13.4  10.7  11.5  12.5  3.0
fpppp       8.5   2.7   2.7   7.5   2.2   1.7   6.8
wave        31.8  31.7  30.7  20.1  16.8  13.9  17.9
go          13.4  8.2   8.6   10.9  7.5   4.8   8.6
gcc         10.6  7.2   7.3   8.6   6.6   5.3   5.3
compress    17.1  15.8  16.3  16.2  14.3  13.0  4.1
li          8.6   5.4   5.5   7.2   4.9   3.8   4.8
ijpeg       4.1   3.3   3.1   2.3   1.9   1.2   2.9
perl        10.7  7.3   7.5   9.3   6.9   5.2   5.5
vortex      5.3   2.7   2.7   3.8   1.8   1.4   3.9
Ave
Ave (Int)   9.22  6.44  6.67  7.67  5.66  4.42  4.80
Average     15.9  13.8  13.6  11.3  8.66  6.80  9.14
TABLE I
Cache miss ratios (%) for direct-mapped (DM), 2-way
set-associative (2W), column-associative (CA), victim cache
(VC), 2-way skewed-associative (SA), and fully-associative (FA)
organizations. The direct-mapped conflict miss ratio (CMR) is also shown.
the target reduction in miss ratio that we hope to achieve
through improved indexing schemes. The other type of
misses, compulsory and capacity, will remain unchanged
by the use of randomized indexing schemes.
As expected, the improvement of a 2-way set-associative
cache over a direct-mapped cache is rather low. The
column-associative cache provides a miss ratio similar to
that of a two-way set-associative cache. Since the former
has a lower access time but requires two cache probes to
satisfy some hits, any choice between these two organizations
should take into account implementation parameters
such as access time and miss penalty. The victim
cache removes many conflict misses and outperforms a four-way
set-associative cache. Finally, the two-way skewed-
associative cache offers the lowest miss ratio. Previous
work has shown that it can be significantly more effective
than a four-way conventionally-indexed set-associative
cache [10].
In this paper we investigate the use of alternative index
functions for reducing conflicts and discuss some practical
implementation issues. Section II introduces the alternative
index functions, and section III evaluates their conflict
avoidance properties. In section IV we discuss a number
of implementation issues, such as the effect of novel indexing
functions on cache access time. Then, in section V,
we evaluate the impact of the proposed indexing scheme
on the performance of a dynamically-scheduled processor.
Finally, in section VI, we draw conclusions from this study.
II. Alternative indexing functions
The aim of this paper is to show how alternative cache
organizations can eliminate repetitive conflict misses. This
is analogous to the problem of finding an efficient hashing
function. For large secondary or tertiary caches it may
be possible to use the virtual address mapping to adjust
the location of pages in cache, as suggested by Bershad et
al. [11], thus avoiding conflicts dynamically. However, for
small first-level caches this effect can only be achieved by
using an alternative cache index function.
In the field of interleaved memories it is well known that
bank conflicts can be reduced by using bank selection functions
other than the simple modulo-power-of-two. Lawrie
and Vora proposed a scheme using prime-modulus functions
[12], while Harper and Jump [13] and Sohi [14] proposed
skewing functions.
memory systems was proposed by Frailong et al. [15], and
other pseudo-random functions were proposed by Raghavan
and Hayes [16], and Rau et al. [17], [18]. These schemes
each yield a more or less uniform distribution of requests
to banks, with varying degrees of theoretical predictability
and implementation cost. In principle each of these
schemes could be used to construct a conflict-resistant
cache by using them as the indexing function. However,
in cache architectures two factors are critical; firstly, the
chosen indexing function must have a logically simple im-
plementation, and secondly we would like to be able to
guarantee good behavior on all regular address patterns -
even those that are pathological under a conventional index
function.
In the commercial domain, the IBM 3033 [19] and the
Amdahl 470 [20] made use of xor-mapping functions in
order to index the TLB. The first generation HP Precision
Architecture processors [21] also used a similar technique.
The use of pseudo-random cache indexing has been suggested
by other authors. For example, Smith [22] compared
a pseudo-random placement against a set-associative place-
ment. He concluded that random indexing had a small advantage
in most cases, but that the advantages were not
significant. In this paper we show that for certain workloads
and cache organizations, the advantages can be very
large.
Hashing the process id with the address bits in order
to index the cache was evaluated in a multiprogrammed
environment by Agarwal in [23]. Results showed that this
scheme could reduce the miss ratio.
Perhaps the most well-known alternative cache indexing
scheme is the class of bitwise exclusive-OR functions proposed
for the skewed associative cache [8]. The bitwise xor
mapping computes each bit of the cache index as either one
bit of the address or the xor of two bits. Where two such
mappings are required different groups of bits are chosen
for xor-ing in each case. A two-way skewed-associative
cache consists of two banks of the same size that are accessed
simultaneously with two different hashing functions.
Not only does the associativity help to reduce conflicts but
the skewed indexing functions help to prevent repetitive
conflicts from occurring.
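For illustration, two bitwise-XOR index functions of this kind might look as follows; the particular bit pairings are invented for the example, whereas the actual skewed-associative design chooses carefully decorrelated bit groups per bank.

    def xor_index(addr, bit_groups, block_bits=5):
        # Each cache-index bit is either one address bit or the XOR of a
        # small group of address bits (above the block offset);
        # bit_groups[i] lists the positions XORed to form index bit i.
        bits = addr >> block_bits
        index = 0
        for i, group in enumerate(bit_groups):
            b = 0
            for p in group:
                b ^= (bits >> p) & 1
            index |= b << i
        return index

    # Two different pairings give the two banks of a skewed-associative
    # cache different, decorrelated mappings:
    bank0_index = lambda a: xor_index(a, [(i, i + 8) for i in range(8)])
    bank1_index = lambda a: xor_index(a, [(i, (i + 4) % 8 + 8) for i in range(8)])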
The polynomial modulus function was first applied to
cache indexing in [10]. It is best described by first considering
the unsigned integer address A in terms of its binary
representation A = (a_{n-1}, ..., a_1, a_0).
This is interpreted as the polynomial
A(x) = a_{n-1}x^{n-1} + ... + a_1x + a_0, defined over the field GF(2). The binary
representation of the m-bit cache index R is similarly
defined by the GF(2) polynomial R(x) of order less than
m such that R(x) = A(x) mod P(x). Effectively, R(x) is
the remainder of dividing A(x) by P(x), where P(x) is an irreducible polynomial
of order m and P(x) is such that x^i mod P(x) generates
all polynomials of order lower than m. The polynomials
that fulfil the previous requirements are called Ipoly polynomials.
Rau showed how the computation of R(x) can be
accomplished by the vector-matrix product of the address
and an n × m matrix H of single-bit coefficients derived
from P (x) [18]. In GF(2), this product is computed by
a network of and and xor gates, and if the H-matrix is
constant the and gates can be omitted and the mapping
then requires just m xor gates with fan-in from 2 to n. In
practice we may reduce the number of input address bits
to the polynomial mapping function by ignoring some of
the upper bits in A. This does not seriously degrade the
quality of the mapping function.
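In software form the mapping amounts to reducing the address polynomial modulo P(x) over GF(2); the sketch below is a bit-serial version of what the m XOR gates compute in a single level of logic. The degree-8 polynomial used in the example is merely one irreducible polynomial chosen for illustration, not necessarily one of the Ipoly polynomials used in the experiments reported here.

    def ipoly_index(addr, poly, m):
        # R(x) = A(x) mod P(x) over GF(2): addr and poly are bit vectors
        # packed into integers, and P(x) has degree m (bit m of poly set).
        r = addr
        for bit in range(addr.bit_length() - 1, m - 1, -1):
            if r & (1 << bit):
                r ^= poly << (bit - m)
        return r                     # the m-bit cache index

    # Example: P(x) = x^8 + x^4 + x^3 + x^2 + 1 (0x11D), an irreducible
    # degree-8 polynomial, reduces a block address to an 8-bit set index.
    # ipoly_index(0x12345, 0x11D, 8)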
Ipoly mapping functions have been studied previously
in the context of stride-insensitive interleaved memories
(see [17], [18]), and have certain provable characteristics of
significant value for cache indexing. In [24] it was demonstrated
that a skewed Ipoly cache indexing scheme shows a
higher degree of conflict resistance than that exhibited by
conventional set-associativity or other (non-Ipoly) xor-based
mapping functions. Overall, the skewed-associative
cache using Ipoly mapping and a pure lru replacement
policy achieved a miss ratio within 1% of that achieved by
a fully-associative cache. Given the advantage of an Ipoly
function over the bitwise xor function, all results presented
in this paper use the Ipoly indexing scheme.
III. Evaluation of Conflict Resistance
The performance of both the integer and floating-point
SPEC95 programs has been evaluated for column-
associative, two-way set-associative (2W) and two-way
skewed-associative organizations using Ipoly indexing
functions. In all cases a single-level cache is assumed. The
miss ratios of these configurations are shown in table II.
Given a conventional indexing function, the direct-mapped
(DM) and fully-associative organizations display
respectively the lowest and the highest degrees of
conflict-resistance of all possible cache architectures. As
such they define the bounds within which novel indexing
schemes should be evaluated. Their miss ratios are shown
in the right-most two columns of table II.
The column-associative cache has access-time characteristics
similar to a direct-mapped cache but has some degree
of pseudo-associativity - each address can map to one of
             Ipoly indexing                                      mod 2^k indexing
             col. assoc.       2-way     2-way skewed
Program      spl      lru      2W        plru     lru           FA       DM
su2cor       10.5     9.1      9.9       9.4      9.4           8.9      11.0
hydro2d      17.6     17.2     17.1      17.0     17.1          17.5     17.6
mgrid        5.1      4.2      3.8       4.5      4.1           3.5      3.8
applu        7.3      6.5      6.9       6.8      6.4           5.9      7.6
turb3d       8.1      6.0      4.8       4.5      4.2           2.8      7.5
apsi         12.2     11.2     11.4      11.0     10.6          12.5     15.5
fpppp        4.0      2.7      2.8       2.1      2.3           1.7      8.5
wave ?       14.6     13.8     14.2      13.9     13.7          13.9     31.8
go           9.6      6.6      8.6       7.5      6.7           4.8      13.4
gcc          8.2      6.3      7.2       6.7      6.1           5.3      10.6
compress     14.5     13.5     13.7      13.9     13.4          13.0     17.1
li           5.5      4.5      6.1       4.9      4.5           3.8      8.6
ijpeg        1.8      1.3      1.7       1.5      1.4           1.2      4.1
perl         8.5      6.7      8.8       7.1      6.4           5.2      10.7
vortex       2.7      1.7      2.0       1.8      1.6           1.4      5.3
Ave
Ave (Int)    6.68     5.22     6.26      5.55     5.09          4.42     9.22
Ave ?        13.2     11.4     12.3      11.6     11.3          11.4     47.3
Average      8.77     7.39     7.99      7.47     7.14          6.80     15.9
TABLE II
Miss ratios (%) for Ipoly indexing on SPEC95 benchmarks.
two locations in the cache, but initially only one is probed.
The column labelled spl represents a cache which swaps
data between the two locations to increase the percentage
of a hit on the first probe. It also uses a realistic pseudo-lru
replacement policy. The cache reported in the column
labelled lru does not swap data between columns and uses
an unrealistic pure lru replacement policy [10].
It is to be expected that a two-way set-associative cache
will be capable of eliminating many random conflicts. How-
ever, a conventionally-indexed set-associative cache is not
able to eliminate pathological conflict behavior as it has
limited associativity and a naive indexing function. The
performance of a two-way set-associative cache can be improved
by simply replacing the index function, whilst retaining
all other characteristics. Conventional lru replacement
can still be used, as the indexing function has no impact
on replacement for this cache organization. For two
programs the two-way Ipoly cache has a lower miss ratio
than a fully-associative cache. This is again due to the
sub-optimality of lru replacement in the fully-associative
cache, and is a common anomaly in programs with negligible
conflict misses.
The final cache organization shown in table II is the
two-way skewed-associative cache proposed originally by
Seznec [8]. In its original form it used two bitwise
xorindexing functions. Our version uses Ipoly indexing
functions, as proposed in [10] and [24]. In this case two
distinct Ipoly functions are used to construct two distinct
cache indices from each address. Pure lru is difficult to implement
in a skewed-associative cache, so here we present
results for a cache which uses a realistic pseudo-lru policy
(labelled plru) and a cache which uses an unrealistic
pure lru policy (labelled lru). This organization produces
the lowest conflict miss ratio, down from 4.8% to 0.67% for
SPECint, and from 12.61% to 0.07% for SPECfp.
It is striking that the performance improvement is dominated
by three programs (tomcatv, swim and wave). These
effectively exhibit pathological conflict miss ratios under
conventional indexing schemes. Studies by Olukotun et
al. [25], have shown that the data cache miss ratio in tomcatv
wastes 56% and 40% of available IPC in 6-way and
2-way superscalar processors respectively.
Tiling will often introduce extra cache conflicts, the elimination
of which is not always possible through software.
Now that we have alternative indexing functions that exhibit
conflict avoidance properties we can use these to avoid
these induced conflicts. The effectiveness of Ipoly indexing
for tiled loops was evaluated by simulating the cache
behavior of a variety of tiled loop kernels. Here we present
a small sample of results to illustrate the general outcome.
Figures 1 and 2 show the miss ratios observed in two tiled
matrix multiplication kernels where the original matrices
were square and of dimensions 171 and 256, respectively.
Tile sizes were varied from 2 × 2 up to 16 × 16 to show
the effect of conflicts occurring in caches that are direct-mapped
(a1), 2-way set-associative (a2), fully-associative
(fa) and skewed 2-way Ipoly (Hp-Sk). The tiled working
set divided by cache capacity measures the fraction of the
cache occupied by a single tile. Cache capacity is 8 KBytes,
with 32-byte lines.
For dimension 171 the miss ratio initially falls for all
caches as tile size increases. This is due to increasing
spatial locality, up to the point where self conflicts begin
to occur in the conventionally-indexed direct-mapped and
two-way set-associative caches. The fully-associative cache
suffers no self-conflicts and its miss ratio decreases monotonically
to less than 1% at 50% loading. The behavior of
the skewed 2-way Ipoly cache tracks the fully-associative
cache closely. The qualitative difference between the Ipoly
cache and a conventional two-way cache is clearly visible.
For dimension 256 the product array and the multiplicand
array are positioned in memory so that cross-conflicts
occur in addition to self-conflicts. Hence the direct-mapped
and 2-way set associative caches experience little spatial
locality. However, the Ipoly cache is able to eliminate
cross-conflicts as well as self-conflicts, and it again tracks
the fully-associative cache.
IV. Implementation Issues
The logic of the GF(2) polynomial modulus operation
presented in section II defines a class of hash functions
which compute the cache placement of an address by combining
subsets of the address bits using xor gates. This
means that, for example, bit 0 of the cache index may be
Fig. 1. Miss ratio versus cache loading (tiled working set as a percentage of cache capacity, 0%-50%) for 171 × 171 matrix multiply; curves: a1, a2, fa, and Hp-Sk.
Fig. 2. Miss ratio versus cache loading for 256 × 256 matrix multiply (same axes and cache configurations as Fig. 1).
computed as the xor of bits 0, 11, 14, and 19 of the original
address. The choice of polynomial determines which
bits are included in each set. The implementation of such
a function for a cache with an 8-bit index would require
just eight xor gates with fan-in of 3 or 4.
Whilst this appears remarkably simple, there is more
to consider than just the placement function. Firstly, the
function itself uses address bits beyond the normal limit
imposed by typical minimum page size restriction. Sec-
ondly, the use of pseudo-random placement in a multi-level
memory hierarchy has implications for the maintenance of
Inclusion. In [24] we explain these two issues in more depth,
and show how the virtual-real two-level cache hierarchy
proposed by Wang et al. [26] provides a clean solution to
both problems.
A cache memory access in a conventional organization
normally computes its effective address by adding two reg-
isters, or a register plus a displacement. Ipoly indexing
implies additional circuitry to compute the index from the
effective address. This circuitry consists of several xor
gates that operate in parallel and therefore the total delay
is just the delay of one gate. Each xor gate has a number
of inputs that depend on the particular polynomial being
used. For the experiments reported in this paper the number
of inputs is never higher than 5. The xor gating required
by the Ipoly mapping may increase the critical path
length within the processor pipeline. However, any delay
will be short since all bits of the index can be computed
in parallel. Moreover, we show later that even if this additional
delay induces a full cycle penalty in the cache access
time, the Ipoly mapping provides a significant overall performance
improvement. Memory address prediction can also be
used to avoid the penalty introduced by the xor delay
when it lengthens the critical path. Memory addresses have
been shown to be highly predictable. For instance, in [27]
it was shown that the addresses of about 75% of the dynamically
executed memory instructions from the SPEC95
suite can be predicted with a simple tabular scheme which
tracks the last address produced by a given instruction and
its last stride. A similar scheme, that could be used to give
an early prediction of the line that is likely to be accessed
by a given load instruction, is outlined below.
The processor incorporates a table indexed by the instruction
address. Each entry stores the last address and
the predicted stride for some recently executed load in-
struction. In the fetch stage, this table is accessed with
the program counter. In the decode stage, the predicted
address is computed and the xor functions are performed
to compute the predicted cache line. This can be done in
one cycle since the xor can be performed in parallel with
the computation of the most-significant bits of the effective
address. When the instruction is subsequently issued
to the memory unit it uses the predicted line number to access
the cache in parallel with the actual address and line
computation. If the predicted line turns out to be incor-
rect, the cache access is repeated with the actual address.
Otherwise, the data provided by the speculative access can
be loaded into the destination register.
A number of previous papers have suggested address prediction
as a means to reduce memory latency [28], [29],
[30], or to execute memory instructions and their dependent
instructions speculatively [31], [27], [32]. In the case
of a miss-speculation, a recovery mechanism similar to that
used by branch prediction schemes is then used to squash
miss-speculated instructions.
V. Effect of Ipoly indexing on IPC
In order to verify the impact of polynomial mapping
on realistic microprocessor architectures we have developed
a parametric simulator for a four-way superscalar
processor with out-of-order execution. Table III summarizes
the functional units and their latencies used in these
experiments. The reorder buffer contained 32 entries, and
there were two separate physical register files (FP and In-
teger), each with 64 physical registers. The processor had
a lockup-free data cache [33] that allowed 8 outstanding
misses to different cache lines. Cache capacities of 8 KB
and 16 KB were simulated with 2-way associativity and
32-byte lines. The cache was write-through and no-write-
allocate. The cache had two ports, each with a two-cycle
time and a miss penalty of 20 cycles. This was connected
by a 64-bit data bus to an infinite level-two cache.
Data dependencies through memory were speculated using
a mechanism similar to the arb of the Multiscalar [34]
and the HP PA-8000 [35]. A branch history table with 2K
entries and 2-bit saturating counters was used for branch
prediction.
Functional unit   Latency   Repeat rate
simple FP         4         1
TABLE III
Functional units and instruction latencies.
The memory address prediction scheme was implemented
by a direct-mapped table with 1K entries, indexed
by instruction address. To reduce cost the entries were
not tagged, although this increases interference in the ta-
ble. Each entry contained the last effective address of the
most recent load instruction to index into that table entry,
together with the last observed stride. In addition, each
entry contained a 2-bit saturating counter to assign confidence
to the prediction. Only when the most-significant
bit of the counter is set would the prediction be considered
correct. The address field was updated for each new reference
regardless of the prediction. However, the stride field
was updated only when the counter dropped below its confidence threshold, i.e.,
after two consecutive mispredictions.
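A behavioral sketch of this prediction table follows. The table geometry matches the description above, but the precise stride-update rule is our reading of the text rather than a definitive specification, and indexing directly by the instruction address is a simplification.

    class AddressPredictor:
        # Untagged, direct-mapped table indexed by the load's instruction
        # address; each entry holds [last address, stride, 2-bit counter].
        def __init__(self, entries=1024):
            self.entries = entries
            self.table = [[0, 0, 0] for _ in range(entries)]

        def predict(self, pc):
            last, stride, ctr = self.table[pc % self.entries]
            # Use the prediction only when the counter's MSB is set.
            return last + stride if ctr >= 2 else None

        def update(self, pc, actual):
            entry = self.table[pc % self.entries]
            last, stride, ctr = entry
            if actual == last + stride:
                entry[2] = min(ctr + 1, 3)        # correct prediction
            else:
                entry[2] = max(ctr - 1, 0)        # misprediction
                if entry[2] == 0:                 # confidence exhausted:
                    entry[1] = actual - last      # learn the new stride
            entry[0] = actual                     # last address always updated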
Table
IV shows the IPC and miss ratios for six configurations
1 . All IPC averages are computed using an equally-weighted
harmonic mean. The baseline configuration is an
8 KB cache with conventional indexing and no address prediction
(np, 3rd column). The average IPC for this configuration
is 1.27 from an average miss ratio of 16.53%. With
Ipoly indexing the average miss ratio falls to 9.68%. If the
xor gates are not in the critical path IPC rises to 1.33 (nx,
5th column). Conversely, if the xor gates are in the critical
path, and a one cycle penalty in the cache access time is
assumed, the resulting IPC is 1.29 (wx, 6th column). How-
ever, if memory address prediction is then introduced (wp,
7th column), IPC is the same as for a cache without the
xor gates in the critical path (nx). Hence, the memory address
prediction scheme can offset the penalty introduced
by the additional delay of the xor gates when they are in
the critical path, even under the conservative assumption
that a whole cycle of latency is added to each load instruc-
tion. Finally, table IV also shows the performance of a
16 KB set-associative cache without Ipoly indexing
1 For each configuration we simulated 10^8 instructions after skipping
the first 2 × 10^9.
(2nd column). Notice that the addition of Ipoly indexing
to an 8 KB cache yields over 60% of the IPC increase that
can be obtained by doubling the cache size.
             conventional indexing             Ipoly indexing (8 KB)
Program      16 KB    8 KB np   8 KB wp        nx       wx       wp
su2cor y     1.28     1.24      1.26           1.24     1.21     1.25
hydro2d y    1.14     1.13      1.15           1.13     1.11     1.15
mgrid y      1.63     1.61      1.63           1.57     1.55     1.59
applu y      1.51     1.50      1.53           1.50     1.46     1.52
turb3d y     1.85     1.80      1.82           1.81     1.78     1.82
apsi y       1.13     1.08      1.09           1.08     1.07     1.09
fpppp y      2.14     2.00      2.00           1.98     1.93     1.94
wave ?       1.37     1.26      1.28           1.51     1.48     1.54
go y         1.00     0.87      0.88           0.87     0.83     0.84
compress y   1.13     1.12      1.13           1.11     1.07     1.10
li y         1.40     1.30      1.32           1.33     1.26     1.31
ijpeg y      1.31     1.28      1.28           1.29     1.28     1.30
perl y       1.45     1.26      1.27           1.24     1.19     1.21
vortex y     1.39     1.27      1.28           1.30     1.25     1.27
Ave
Ave (Int)    1.29     1.19      1.20           1.20     1.15     1.17
Ave ?        1.28     1.11      1.13           1.46     1.42     1.49
Ave y        1.38     1.30      1.32           1.30     1.27     1.30
Average      1.36     1.27      1.28           1.33     1.29     1.33
TABLE IV
Comparative IPC measurements (simulated).
These IPC measurements exhibit small absolute differ-
ences, but this is because the benefit of Ipoly indexing
is perceived by a only small subset of the benchmark pro-
grams. Most programs in SPEC95 exhibit low conflict miss
ratios. In fact the SPEC95 conflict miss ratio for an 8 KB
2-way set-associative cache is less than 4% for all programs
except tomcatv, swim and wave5. The two penultimate
rows of table IV show independent IPC averages for the
benchmarks with high conflict miss ratios (Ave ?), and
those with low conflict miss ratios (Ave y). This highlights
the ability of polynomial mapping to reduce the miss ratio
and significantly boost the performance of problem cases.
One can see that the polynomial mapping provides a significant
27% improvement in IPC for the three bad programs
even if the xor gates are in the critical path and memory
address prediction is not used. With memory address
prediction Ipoly indexing yields an IPC improvement of
33% compared with that of a conventional cache of the
same capacity, and 16% higher than that of a conventional
cache with twice the capacity. Notice that the polynomial
mapping scheme with prediction is even better than the
organization without prediction where the xor gates do
not extend the critical path. This is due to the fact that
the memory address prediction scheme reduces by one cycle
the effective cache hit time when the predictions are
correct, since the address computation is overlapped with
the cache access (the computed address is used to verify
that the prediction was correct). However, the main benefits
observed come from the reduction in conflict misses.
To isolate the different effects we have also simulated an
organization with the memory address prediction scheme
and conventional indexing for an 8 KB cache (wp, column
4). If we compare the IPC of this organization with that in
column 3, we see that the benefits of the memory address
prediction scheme due solely to the reduction of the hit time
are almost negligible. This confirms that the improvement
observed in the Ipoly indexing scheme with address prediction
derives from the reduction in conflict misses. The
averages for the fifteen programs which exhibit low levels
of conflict misses show a small (1.7%) deterioration in average
IPC when Ipoly indexing is used and the xor gates
are in the critical path. This is due to a slight increase in
the average hit time rather than an overall increase in miss
ratio (which on average falls by 2%). For these programs
the reduction in aggregated miss penalty does not outweigh
the slight extension in critical path length.
VI. Conclusions
In this paper we have discussed the problem of cache conflict
misses and surveyed the options for reducing or eliminating
those conflicts. We have described pseudo-random
indexing schemes based on polynomial modulus functions,
and have shown them to be robust enough to virtually eliminate
the repetitive cache conflicts caused by bad strides
inherent in some SPEC95 benchmarks, as well as eliminating
those introduced into an application by the tiling of
loop nests.
We have highlighted the major implementation issues
that arise from the use of such novel indexing schemes.
For example, Ipoly indexing uses more address bits than
a conventional cache to compute the cache index. Also,
the use of different indexing functions at level-1 and level-2
caches results in the occasional eviction at level-1 simply
to maintain Inclusion. We have explained that both of
these problems can be solved using a two-level virtual-real
cache hierarchy. Finally, we have proposed a memory address
prediction scheme to avoid the penalty due to the
small potential delay in the critical path introduced by the
pseudo-random indexing function.
Detailed simulations of an out-of-order superscalar processor
have demonstrated that programs with significant
numbers of conflict misses in a conventional 8 KB 2-way
skewed-associative cache perceive IPC improvements of
33% (with address prediction) or 27% (without address
prediction). This is up to 16% higher than the IPC improvements
obtained simply by doubling the cache capac-
ity. Furthermore, from the programs we analyzed, those
that do not experience significant conflict misses on average
see only a 1.7% reduction in IPC when Ipoly indexing
appears on the critical path for computing the effective ad-
dress, and address prediction is used. If the indexing logic
does not appear on the critical path no deterioration in
overall average performance is experienced by those programs
We believe the key contribution of pseudo-random indexing
is the resulting predictability of cache behavior. In
our experiments we found that Ipoly indexing reduces the
standard deviation of miss ratios across SPEC95 from 18.49
to 5.16. This could be beneficial in real-time systems where
unpredictable timing, caused by the possibility of pathological
miss ratios, presents problems. If conflict misses
are eliminated, the miss ratio depends solely on compulsory
and capacity misses, which in general are easier to
predict and control. Conflict avoidance could also be beneficial
when iteration-space tiling is used to improve data
locality.
VII.
Acknowledgments
This work was supported in part by the European Commission
esprit project 24942, by the British Council
(grant 1016), and by the Spanish Ministry of Education
(Acci'on Integrada Hispano-Brit'anica 202 CYCIT TIC98-
0511). The authors would like to express their thanks to
Jose Gonz'alez and Joan Manuel Parcerisa for their help
with the simulation software, and to the anonymous referees
for their helpful comments.
--R
"The national technology roadmap for semiconductors,"
Computer Architecture: A Quantitative Approach.
"The cache performance and optimization of blocked algorithms,"
"Cache miss equations: An analytic representation of cache misses,"
"ATOM: A system for building customized program analysis tools,"
"Column-associative caches: A technique for reducing the miss rate of direct-mapped caches,"
"Improving direct-mapped cache performance by the addition of a small fully-associative cache and prefetch buffers,"
"A case for two-way skewed associative caches,"
"Skewed-associative caches,"
"Elimi- nating cache conflict misses through xor-based placement func- tions,"
"Avoiding cache conflict misses dynamically in large direct-mapped caches,"
"The prime memory system for array access,"
"Vector access performance in parallel memories,"
"Logical data skewing schemes for interleaved memories in vector processors,"
"xor-schemes: A flexible data organization in parallel memories,"
"On randomly interleaved memo- ries,"
"The Cydra 5 stride- insensitive memory system,"
"Pseudo-randomly interleaved memories,"
Amdahl Corp.
"Hardware design of the first HP Precision Architecture computers,"
"Cache memories,"
Analysis of Cache Performance for Operating Systems and Multiprogramming
"The design and performance of a conflict-avoiding cache,"
"The case for a single-chip multiprocessor,"
"Organization and performance of a two-level virtual-real cache hierarchy,"
"Speculative execution via address prediction and data prefetching,"
"Hardware support for hiding cache latency,"
"Streamlining data cache access with fast address calculation,"
"Zero-cycle loads: Microarchitecture support for reducing load latency,"
"Memory address prediction for data speculation,"
"The performance potential of data dependence speculation and collapsing,"
"Lockup-free instruction fetch/prefetch cache organi- zation,"
"ARB: A mechanism for dynamic reordering of memory references,"
"Advanced performance features of the 64-bit PA- 8000,"
--TR
--CTR
S. Bartolini , C. A. Prete, A proposal for input-sensitivity analysis of profile-driven optimizations on embedded applications, ACM SIGARCH Computer Architecture News, v.32 n.3, p.70-77, June 2004
K. Patel , E. Macii , L. Benini , M. Poncino, Reducing cache misses by application-specific re-configurable indexing, Proceedings of the 2004 IEEE/ACM International conference on Computer-aided design, p.125-130, November 07-11, 2004
Hans Vandierendonck , Philippe Manet , Jean-Didier Legat, Application-specific reconfigurable XOR-indexing to eliminate cache conflict misses, Proceedings of the conference on Design, automation and test in Europe: Proceedings, March 06-10, 2006, Munich, Germany
Hans Vandierendonck , Koen De Bosschere, XOR-Based Hash Functions, IEEE Transactions on Computers, v.54 n.7, p.800-812, July 2005
Mathias Spjuth , Martin Karlsson , Erik Hagersten, Skewed caches from a low-power perspective, Proceedings of the 2nd conference on Computing frontiers, May 04-06, 2005, Ischia, Italy
Wang , Nelson Passos, Improving cache hit ratio by extended referencing cache lines, Journal of Computing Sciences in Colleges, v.18 n.4, p.118-123, April
G. E. Suh , L. Rudolph , S. Devadas, Dynamic Partitioning of Shared Cache Memory, The Journal of Supercomputing, v.28 n.1, p.7-26, April 2004
Hans Vandierendonck , Koen De Bosschere, Highly accurate and efficient evaluation of randomising set index functions, Journal of Systems Architecture: the EUROMICRO Journal, v.48 n.13-15, p.429-452, May
G. Edward Suh , Srinivas Devadas , Larry Rudolph, Analytical cache models with applications to cache partitioning, Proceedings of the 15th international conference on Supercomputing, p.1-12, June 2001, Sorrento, Italy
Rui Min , Yiming Hu, Improving Performance of Large Physically Indexed Caches by Decoupling Memory Addresses from Cache Addresses, IEEE Transactions on Computers, v.50 n.11, p.1191-1201, November 2001
S. Bartolini , C. A. Prete, Optimizing instruction cache performance of embedded systems, ACM Transactions on Embedded Computing Systems (TECS), v.4 n.4, p.934-965, November 2005 | cache architectures;performance evaluation;conflict avoidance |
297718 | The Impact of Exploiting Instruction-Level Parallelism on Shared-Memory Multiprocessors. | AbstractCurrent microprocessors incorporate techniques to aggressively exploit instruction-level parallelism (ILP). This paper evaluates the impact of such processors on the performance of shared-memory multiprocessors, both without and with the latency-hiding optimization of software prefetching. Our results show that, while ILP techniques substantially reduce CPU time in multiprocessors, they are less effective in removing memory stall time. Consequently, despite the inherent latency tolerance features of ILP processors, we find memory system performance to be a larger bottleneck and parallel efficiencies to be generally poorer in ILP-based multiprocessors than in previous generation multiprocessors. The main reasons for these deficiencies are insufficient opportunities in the applications to overlap multiple load misses and increased contention for resources in the system. We also find that software prefetching does not change the memory bound nature of most of our applications on our ILP multiprocessor, mainly due to a large number of late prefetches and resource contention. Our results suggest the need for additional latency hiding or reducing techniques for ILP systems, such as software clustering of load misses and producer-initiated communication. | Introduction
Shared-memory multiprocessors built from commodity
microprocessors are being increasingly used to provide high
performance for a variety of scientific and commercial ap-
plications. Current commodity microprocessors improve
performance by using aggressive techniques to exploit high
levels of instruction-level parallelism (ILP). These techniques
include multiple instruction issue, out-of-order (dy-
namic) scheduling, non-blocking loads, and speculative ex-
ecution. We refer to these techniques collectively as ILP
techniques and to processors that exploit these techniques
as ILP processors.
Most previous studies of shared-memory multiproces-
This work is supported in part by an IBM Partnership award,
Intel Corporation, the National Science Foundation under Grant
No. CCR-9410457, CCR-9502500, CDA-9502791, and CDA-9617383,
and the Texas Advanced Technology Program under Grant No.
003604-025. Sarita Adve is also supported by an Alfred P. Sloan
Research Fellowship, Vijay S. Pai by a Fannie and John Hertz Foundation
Fellowship, and Parthasarathy Ranganathan by a Lodieska
Stockbridge Vaughan Fellowship.
This paper combines results from two previous conference papers
[11], [12], using a common set of system parameters, a more
aggressive MESI (versus MSI) cache-coherence protocol, a more aggressive
compiler (the better of SPARC SC 4.2 and gcc 2.7.2 for
each application, rather than gcc 2.5.8), and full simulation of private
memory references.
sors, however, have assumed a simple processor with single-
issue, in-order scheduling, blocking loads, and no specu-
lation. A few multiprocessor architecture studies model
state-of-the-art ILP processors [2], [7], [8], [9], but do not
analyze the impact of ILP techniques.
To fully exploit recent advances in uniprocessor technology
for shared-memory multiprocessors, a detailed analysis
of how ILP techniques affect the performance of such systems
and how they interact with previous optimizations for
such systems is required. This paper evaluates the impact
of exploiting ILP on the performance of shared-memory
multiprocessors, both without and with the latency-hiding
optimization of software prefetching.
For our evaluations, we study five applications using detailed
simulation, described in Section II.
Section III analyzes the impact of ILP techniques on
the performance of shared-memory multiprocessors without
the use of software prefetching. All our applications
see performance improvements from the use of current ILP
techniques, but the improvements vary widely. In partic-
ular, ILP techniques successfully and consistently reduce
the CPU component of execution time, but their impact
on the memory stall time is lower and more application-
dependent. Consequently, despite the inherent latency tolerance
features integrated within ILP processors, we find
memory system performance to be a larger bottleneck and
parallel efficiencies to be generally poorer in ILP-based
multiprocessors than in previous-generation multiproces-
sors. These deficiencies are caused by insufficient opportunities
in the application to overlap multiple load misses
and increased contention for system resources from more
frequent memory accesses.
Software-controlled non-binding prefetching has been
shown to be an effective technique for hiding memory
latency in simple processor-based shared memory systems
[6]. Section IV analyzes the interaction between software
prefetching and ILP techniques in shared-memory
multiprocessors. We find that, compared to previous-generation
systems, increased late prefetches and increased
contention for resources cause software prefetching to be
less effective in reducing memory stall time in ILP-based
systems. Thus, even after adding software prefetching,
most of our applications remain largely memory bound on
the ILP-based system.
Our results suggest that, compared to previous-generation
shared-memory systems, ILP-based systems
have a greater need for additional techniques to tolerate
or reduce memory latency. Specific techniques motivated
by our results include clustering of load misses in the applications
to increase opportunities for load misses to overlap
with each other, and techniques such as producer-initiated
communication that reduce latency to make prefetching
more effective (Section V).
II. Methodology
A. Simulated Architectures
To determine the impact of ILP techniques on multiprocessor
performance, we compare two systems - ILP and
Simple - equivalent in every respect except the processor
used. The ILP system uses state-of-the-art ILP processors
while the Simple system uses simple processors (Section II-
A.1). We compare the ILP and Simple systems not to suggest
any architectural tradeoffs, but rather, to understand
how aggressive ILP techniques impact multiprocessor per-
formance. Therefore, the two systems have identical clock
rates, and include identical aggressive memory and network
configurations suitable for the ILP system (Section II-A.2).
Figure
1 summarizes all the system parameters.
A.1 Processor models
The ILP system uses state-of-the-art processors that include
multiple issue, out-of-order (dynamic) scheduling,
non-blocking loads, and speculative execution. The Simple
system uses previous-generation simple processors with single
issue, in-order (static) scheduling, and blocking loads,
and represents commonly studied shared-memory systems.
Since we did not have access to a compiler that schedules
instructions for our in-order simple processor, we assume
single-cycle functional unit latencies (as also assumed by
most previous simple-processor based shared-memory stud-
ies). Both processor models include support for software-controlled
non-binding prefetching to the L1 cache.
A.2 Memory Hierarchy and Multiprocessor Configuration
We simulate a hardware cache-coherent, non-uniform
memory access (CC-NUMA) shared-memory multiprocessor
using an invalidation-based, four-state MESI directory
coherence protocol [4]. We model release consistency because
previous studies have shown that it achieves the best
performance [9].
The processing nodes are connected using a two-dimensional
mesh network. Each node includes a proces-
sor, two levels of caches, a portion of the global shared-memory
and directory, and a network interface. A split-transaction
bus connects the network interface, directory
controller, and the rest of the system node. Both caches
use a write-allocate, write-back policy. The cache sizes
are chosen commensurate with the input sizes of our ap-
plications, following the methodology described by Woo et
al. [14]. The primary working sets for our applications fit
in the L1 cache, while the secondary working sets do not fit
in the L2 cache. Both caches are non-blocking and use miss
Processor parameters
Clock rate 300 MHz
Fetch/decode/retire rate 4 per cycle
Instruction window (reorder buffer) size
Memory queue size
Outstanding branches 8
Functional units 2 ALUs, 2 FPUs, 2 address generation units; all 1-cycle latency
Memory hierarchy and network parameters
L1 cache MSHRs, 64-byte line
L2 cache 64 KB, 4-way associative, 1 port, 8
MSHRs, 64-byte line, pipelined
Memory 4-way interleaved, ns access time,
Bus 100 MHz, 128 bits, split transaction
Network 2D mesh, 150MHz, 64 bits, per hop
flit delay of 2 network cycles
Nodes in multiprocessor 8
Resulting contentionless latencies (in processor cycles)
Local memory 45 cycles
Remote memory 140-220 cycles
Cache-to-cache transfer 170-270 cycles
Fig. 1. System parameters.
status holding registers (MSHRs) [3] to store information
on outstanding misses and to coalesce multiple requests to
the same cache line. All multiprocessor results reported in
this paper use a configuration with 8 nodes.
B. Simulation Environment
We use RSIM, the Rice Simulator for ILP Multipro-
cessors, to model the systems studied [10]. RSIM is
an execution-driven simulator that models the processor
pipelines, memory system, and interconnection network
in detail, including contention at all resources. It takes
application executables as input. To speed up our
simulations, we assume that all instructions hit in the instruction
cache. This assumption is reasonable since all our
applications have very small instruction footprints.
C. Performance Metrics
In addition to comparing execution times, we also report
the individual components of execution time - CPU, data
memory stall, and synchronization stall times - to characterize
the performance bottlenecks in our systems. With
ILP processors, it is unclear how to assign stall time to specific
instructions since each instruction's execution may be
overlapped with both preceding and following instructions.
We use the following convention, similar to previous work
(e.g., [5]), to account for stall cycles. At every cycle, we
calculate the ratio of the instructions retired from the instruction
window in that cycle to the maximum retire rate
of the processor and attribute this fraction of the cycle to
the busy time. The remaining fraction of the cycle is attributed
as stall time to the first instruction that could not
be retired that cycle. We group the busy time and functional
unit (non-memory) stall time together as CPU time.
Henceforth, we use the term memory stall time to denote
the data memory stall component of execution time.
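As a concrete illustration of this accounting convention, the following sketch attributes each cycle's busy fraction and charges the remainder to the first instruction that could not retire. It is a minimal sketch, not code from RSIM; the cycle trace and instruction classes are hypothetical.

MAX_RETIRE_RATE = 4  # maximum retire rate of the processor

def attribute_time(retired_per_cycle, first_blocked_kind):
    # retired_per_cycle: instructions retired in each cycle (hypothetical trace)
    # first_blocked_kind: class of the first non-retired instruction in that cycle
    busy = 0.0
    stall = {"memory": 0.0, "fu": 0.0, "sync": 0.0}
    for retired, kind in zip(retired_per_cycle, first_blocked_kind):
        frac = retired / MAX_RETIRE_RATE      # fraction of the cycle counted as busy
        busy += frac
        if retired < MAX_RETIRE_RATE:         # remainder is stall time, charged to the
            stall[kind] += 1.0 - frac         # first instruction that could not retire
    return busy, stall

busy, stall = attribute_time([4, 2, 0, 0, 3], ["memory", "memory", "memory", "sync", "fu"])
print(busy, stall)  # CPU time = busy + stall["fu"]; memory stall time = stall["memory"]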
In the first part of the study, the key metric used to
evaluate the impact of ILP is the ratio of the execution
time with the Simple system relative to that achieved by
the ILP system, which we call the ILP speedup. For detailed
analysis, we analogously define an ILP speedup for each
component of execution time.
Fig. 2. Applications and input sizes.
LU, LUopt: 256x256 matrix, block 8
FFT, FFTopt: 65536 points
Mp3d: 50000 particles
Water: 512 molecules
D. Applications
Figure
2 lists the applications and the input sets used in
this study. Radix, LU, and FFT are from the SPLASH-2
suite [14], and Water and Mp3d are from the SPLASH
suite [13]. These five applications and their input sizes were
chosen to ensure reasonable simulation times. (Since RSIM
models aggressive ILP processors in detail, it is about 10
times slower than simple-processor-based shared-memory
simulators.) LUopt and FFTopt are versions of LU and
FFT that include ILP-specific optimizations that can potentially
be implemented in a compiler. Specifically, we use
function inlining and loop interchange to move load misses
closer to each other so that they can be overlapped in the
ILP processor. The impact of these optimizations is discussed
in Sections III and V. Both versions of LU are also
modified slightly to use flags instead of barriers for better
load balance.
Since a SPARC compiler for our ILP system does not ex-
ist, we compiled our applications with the commercial Sun
SC 4.2 or the gcc 2.7.2 compiler (based on better simulated
ILP system performance) with full optimization turned on.
The compilers' deficiencies in addressing the specific instruction
grouping rules of our ILP system are partly hidden
by the out-of-order scheduling in the ILP processor. 2
III. Impact of ILP Techniques on Performance
This section analyzes the impact of ILP techniques on
multiprocessor performance by comparing the Simple and
ILP systems, without software prefetching.
A. Overall Results
Figures
3 and 4 illustrate our key overall results. For
each application, Figure 3 shows the total execution time
and its three components for the Simple and ILP systems
(normalized to the total time on the Simple system). Ad-
ditionally, at the bottom, the figure also shows the ILP
speedup for each application. Figure 4 shows the parallel
efficiency 3 of the ILP and Simple systems expressed as a
percentage. These figures show three key trends:
ffl ILP techniques improve the execution time of all our applications. However, the ILP speedup shows a wide variation (from 1.29 in Mp3d to 3.54 in LUopt). The average ILP speedup for the original applications (i.e., not including LUopt and FFTopt) is only 2.05.
ffl The memory stall component is generally a larger part of the overall execution time in the ILP system than in the Simple system.
ffl Parallel efficiency for the ILP system is less than that for the Simple system for all applications.
2 To the best of our knowledge, the key compiler optimization identified in this paper (clustering of load misses) is not implemented in any current superscalar compiler.
3 The parallel efficiency for an application on a system with N processors is defined as (Execution time on uniprocessor / Execution time on multiprocessor) x (1/N).
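As a concrete illustration of how these two metrics are computed (the execution times below are hypothetical placeholders, not measurements from this study), consider:

def ilp_speedup(t_simple, t_ilp):
    # execution time on Simple divided by execution time on ILP (same processor count)
    return t_simple / t_ilp

def parallel_efficiency(t_uni, t_multi, n_procs):
    # (uniprocessor time / multiprocessor time) / N
    return (t_uni / t_multi) / n_procs

t_simple_mp, t_ilp_mp = 100.0, 40.0      # multiprocessor times (hypothetical)
t_simple_uni, t_ilp_uni = 700.0, 250.0   # uniprocessor times (hypothetical)

print(ilp_speedup(t_simple_mp, t_ilp_mp))                 # 2.5
print(parallel_efficiency(t_simple_uni, t_simple_mp, 8))  # 0.875 for Simple
print(parallel_efficiency(t_ilp_uni, t_ilp_mp, 8))        # ~0.78 for ILP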
We next investigate the reasons for the above trends.
B. Factors Contributing to ILP Speedup
Figure
3 indicates that the most important components
of execution time are CPU time and memory stalls. Thus,
ILP speedup will be shaped primarily by CPU ILP speedup
and memory ILP speedup. Figure 5 summarizes these
speedups (along with the total ILP speedup). The figure
shows that the low and variable ILP speedup for our applications
can be attributed largely to insufficient and variable
memory ILP speedup; the CPU ILP speedup is similar
and significant among all applications (ranging from 2.94
to 3.80). More detailed data shows that for most of our
applications, memory stall time is dominated by stalls due
to loads that miss in the L1 cache. We therefore focus on
the impact of ILP on (L1) load misses below.
The load miss ILP speedup is the ratio of the stall time
due to load misses in the Simple and ILP systems, and
is determined by three factors, described below. The first
factor increases the speedup, the second decreases it, while
the third may either increase or decrease it.
ffl Load miss overlap. Since the Simple system has blocking
loads, the entire load miss latency is exposed as stall
time. In ILP, load misses can be overlapped with other useful
work, reducing stall time and increasing the ILP load
miss speedup. The number of instructions behind which a
load miss can overlap is, however, limited by the instruction
window size; further, load misses have longer latencies
than other instructions in the instruction window. There-
fore, load miss latency can normally be completely hidden
only behind other load misses. Thus, for significant load
miss ILP speedup, applications should have multiple load
clustered together within the instruction window to
enable these load misses to overlap with each other.
Contention. Compared to the Simple system, the ILP
system can see longer latencies from increased contention
due to the higher frequency of misses, thereby negatively
affecting load miss ILP speedup.
ffl Change in the number of misses. The ILP system
may see fewer or more misses than the Simple system
because of speculation or reordering of memory ac-
cesses, thereby positively or negatively affecting load miss
ILP speedup.
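The interplay of the first two factors can be seen in a toy calculation, given below, in which overlapped misses reduce the exposed stall time while contention inflates the latency of every miss. All latencies, miss counts and cluster sizes are invented for illustration; this is not a model used in the paper.

def load_miss_stall(n_misses, latency, cluster_size=1, contention=0):
    # Simple (blocking loads): cluster_size=1 and contention=0 expose every miss fully.
    # ILP: misses in the same instruction window overlap, so roughly one latency is
    # exposed per cluster, but contention adds extra latency to each miss.
    per_miss = latency + contention
    exposed_clusters = -(-n_misses // cluster_size)   # ceiling division
    return exposed_clusters * per_miss

simple        = load_miss_stall(n_misses=100, latency=150)                      # 15000
ilp_clustered = load_miss_stall(100, 150, cluster_size=4, contention=40)        # 4750
ilp_isolated  = load_miss_stall(100, 150, cluster_size=1, contention=40)        # 19000

print(simple / ilp_clustered)  # > 1: overlap outweighs the added contention
print(simple / ilp_isolated)   # < 1: contention without overlap gives a slowdown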
All of our applications except LU see a similar number
of cache misses in both the Simple and ILP case. LU sees
2.5X fewer misses in ILP because of a reordering of accesses
that otherwise conflict. When the number of misses
does not change, the ILP system sees (> 1) load miss ILP
speedup if the load miss overlap exploited by ILP outweighs
any additional latency from contention. We illustrate the
effects of load miss overlap and contention using the two applications
that best characterize them: LUopt and Radix.
Fig. 3. Impact of ILP on multiprocessor performance. For each application the figure shows execution time on the Simple and ILP systems (normalized to Simple and broken into CPU, memory stall, and synchronization stall components), with the ILP speedup given below the bars: FFT 41.5 (2.41X), FFTopt 39.0 (2.56X), LU 34.0 (2.94X), LUopt 28.2 (3.54X), Mp3d 78.2 (1.29X), Radix 75.3, Water 44.9 (2.30X).
Fig. 4. Impact of ILP on parallel efficiency: parallel efficiency (in percent) of the Simple and ILP systems for each application.
Fig. 5. ILP speedup for total execution time, CPU time, and memory stall time in the multiprocessor system.
Figure
6(a) provides the average load miss latencies for
LUopt and Radix in the Simple and ILP systems, normalized
to the Simple system latency. The latency shown is
the total miss latency, measured from address generation
to data arrival, including the overlapped part (in ILP) and
the exposed part that contributes to stall time. The difference
in the bar lengths of Simple and ILP indicates the
additional latency added due to contention in ILP. Both
of these applications see a significant latency increase from
resource contention in ILP. However, LUopt can overlap
all its additional latency, as well as a large portion of the
base (Simple) latency, thus leading to a high memory ILP
speedup. On the other hand, Radix cannot overlap its additional
latency; thus, it sees a load miss slowdown in the
ILP configuration.
We use the data in Figures 6(b) and (c) to further investigate
the causes for the load miss overlap and contention-
related latencies in these applications.
Causes for load miss overlap. Figure 6(b) shows
the ILP system's L1 MSHR occupancy due to load misses
for LUopt and Radix. Each curve shows the fraction of
total time for which at least N MSHRs are occupied by
load misses, for each possible N (on the X axis). This figure
shows that LUopt achieves significant overlap of load
misses, with up to 8 load miss requests outstanding simultaneously
at various times. In contrast, Radix almost never
has more than 1 outstanding load miss at any time. This
difference arises because load misses are clustered together
in the instruction window in LUopt, but typically separated
by too many instructions in Radix.
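Occupancy curves of this kind can be computed from a trace of miss lifetimes. The sketch below does this for a hypothetical list of (issue, completion) times of outstanding load misses; the trace and MSHR count are illustrative only.

def mshr_occupancy(misses, total_cycles, max_mshrs=8):
    # Fraction of time during which at least N MSHRs are occupied by load misses.
    occupied = [0] * total_cycles
    for start, end in misses:
        for t in range(start, min(end, total_cycles)):
            occupied[t] += 1
    return [sum(1 for o in occupied if o >= n) / total_cycles
            for n in range(1, max_mshrs + 1)]

trace = [(0, 150), (5, 160), (10, 170), (200, 350)]   # clustered misses overlap
print(mshr_occupancy(trace, total_cycles=400))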
Causes for contention. Figure 6(c) extends the data
of
Figure
6(b) by displaying the total MSHR occupancy for
both load and store misses. The figure indicates that Radix
has a large amount of store miss overlap. This overlap does
not contribute to an increase in memory ILP speedup since
store latencies are already hidden in both the Simple and
ILP systems due to release consistency. The store miss
overlap, however, increases contention in the memory hi-
erarchy, resulting in the ILP memory slowdown in Radix.
In LUopt, the contention-related latency comes primarily
from load misses, but its effect is mitigated since overlapped
load misses contribute to reducing memory stall time.
C. Memory stall component and parallel efficiency
Using the above analysis, we can see why the ILP system generally sees a larger relative memory stall time component (Figure 3) and a generally poorer parallel efficiency (Figure 4) than the Simple system.
Since memory ILP speedup is generally less than CPU
ILP speedup, the memory component becomes a greater
fraction of total execution time in the ILP system than in
the Simple system. To understand the reduced parallel effi-
ciency, Figure 7 provides the ILP speedups for the uniprocessor
configuration for reference. The uniprocessor also
generally sees lower memory ILP speedups than CPU ILP
speedups. However, the impact of the lower memory ILP
speedup is higher in the multiprocessor because the longer
latencies of remote misses and increased contention result
in a larger relative memory component in the execution
time (relative to the uniprocessor). Additionally, the dichotomy
between local and remote miss latencies in a multiprocessor
often tends to decrease memory ILP speedup
(relative to the uniprocessor), because load misses must be
overlapped not only with other load misses but with load
misses with similar latencies 4 . Thus, overall, the multiprocessor
system is less able to exploit ILP features than the
corresponding uniprocessor system for most applications.
4 FFT and FFTopt see better memory ILP speedups in the multiprocessor
than in the uniprocessor because they overlap multiple
load misses with similar multiprocessor (remote) latencies. The section
of the code that exhibits overlap has a greater impact in the
multiprocessor because of the longer remote latencies incurred in this
section.
Fig. 6. Load miss overlap and contention in the ILP system. (a) Effect of ILP on average L1 miss latency: total miss latency for LUopt and Radix on the Simple and ILP systems, normalized to the Simple latency and split into overlapped and stall components. (b) L1 MSHR occupancy due to loads. (c) L1 MSHR occupancy due to loads and stores.
Fig. 7. ILP speedup for total execution time, CPU time, and memory
stall time in the uniprocessor system.
Consequently, the ILP multiprocessor generally sees lower
parallel efficiency than the Simple multiprocessor.
IV. Interaction of ILP Techniques with
Software Prefetching
The previous section shows that the ILP system sees a
greater bottleneck from memory latency than the Simple
system. Software-controlled non-binding prefetching has
been shown to effectively hide memory latency in shared-memory
multiprocessors with simple processors. This section
evaluates how software prefetching interacts with ILP
techniques in shared-memory multiprocessors. We followed
the software prefetch algorithm developed by Mowry et
al.[6] to insert prefetches in our applications by hand,
with one exception. The algorithm in [6] assumes that
locality is not maintained across synchronization, and so
does not schedule prefetches across synchronization ac-
cesses. We removed this restriction when beneficial. For a
consistent comparison, the experiments reported are with
prefetches scheduled identically for both Simple and ILP;
the prefetches are scheduled at least 200 dynamic instructions
before their corresponding demand accesses. The impact
of this scheduling decision is discussed below, including
the impact of varying this prefetch distance.
A. Overall Results
Figure 8 graphically presents the key results from our
experiments (FFT and FFTopt have similar performance,
so only FFTopt appears in the figure). The figure shows
the execution time (and its components) for each application
on Simple and ILP, both without and with software
prefetching ( +PF indicates the addition of software
prefetching). Execution times are normalized to the time
for the application on Simple without prefetching. Figure 9
summarizes some key data.
Software prefetching achieves significant reductions in
execution time on ILP (13% to 43%) for three cases (LU,
Mp3d, and Water). These reductions are similar to or
greater than those in Simple for these applications. How-
ever, software prefetching is less effective at reducing memory
stalls on ILP than on Simple (average reduction of 32%
in ILP, ranging from 7% to 72%, vs. average 59% and range
of 21% to 88% in Simple). The net effect is that even after
prefetching is applied to ILP, the average memory stall
time is 39% on ILP with a range of 11% to 65% (vs. average
of 16% and range of 1% to 29% for Simple). For most ap-
plications, the ILP system remains largely memory-bound
even with software prefetching.
B. Factors Contributing to the Effectiveness of Software
Prefetching
We next identify three factors that make software
prefetching less successful in reducing memory stall time
in ILP than in Simple, two factors that allow ILP additional
benefits in memory stall reduction not available in
Simple, and one factor that can either help or hurt ILP. We
focus on issues that are specific to ILP systems; previous
work has discussed non-ILP specific issues [6]. Figure 10
summarizes the effects that were exhibited by the applications
we studied. Of the negative effects, the first two are
the most important for our applications.
Increased late prefetches. The last column of Figure
9 shows that the number of prefetches that are too
late to completely hide the miss latency increases in all
our applications when moving from Simple to ILP. One
reason for this increase is that multiple-issue and out-of-
order scheduling speed up computation in ILP, decreasing
the computation time with which each prefetch is over-
lapped. Simple also stalls on any load misses that are not
prefetched or that incur a late prefetch, thereby allowing
other outstanding prefetched data to arrive at the cache.
ILP does not provide similar leeway.
Increased resource contention. As shown in Section
III, ILP processors stress system resources more than
Simple. Prefetches further increase demand for resources,
resulting in more contention and greater memory latencies.
The resources most stressed in our configuration were cache
ports, MSHRs, ALUs, and address generation units.
Negative interaction with clustered misses. Optimizations
to cluster load misses for the ILP system, as in
LUopt, can potentially reduce the effectiveness of software
prefetching. For example, the addition of prefetching re-
duces the execution time of LU by 13% on the ILP system;
in contrast, LUopt improves by only 3%. (On the Simple
system, both LU and LUopt improve by about 10% with
prefetching.) LUopt with prefetching is slightly better than
LU with prefetching on ILP (by 3%). The clustering optimization
used in LUopt reduces the computation between
successive misses, contributing to a high number of late
prefetches and increased contention with prefetching.
Fig. 8. Interaction between software prefetching and ILP: normalized execution time (broken into its components) for LU, LUopt, FFTopt, Mp3d, Radix, and Water, without and with software prefetching on the Simple and ILP systems.
Overlapped accesses. In ILP, accesses that are difficult
to prefetch may be overlapped because of non-blocking
loads and out-of-order scheduling. Prefetched lines in LU
and LUopt often suffer from L1 cache conflicts, resulting
in these lines being replaced to the L2 cache before being
used by the demand accesses. This L2 cache latency results
in stall time in Simple, but can be overlapped by the processor
in ILP. Since prefetching in ILP only needs to target
those accesses that are not already overlapped by ILP, it
can appear more effective in ILP than in Simple.
Fewer early prefetches. Early prefetches are those
where the prefetched lines are either invalidated or replaced
before their corresponding demand accesses. Early
prefetches can hinder demand accesses by invalidating or
replacing needed data from the same or other caches without
providing any benefits in latency reduction. In many
of our applications, the number of early prefetches drops
in ILP, improving the effectiveness of prefetching for these
applications. This reduction occurs because the ILP system
allows less time between a prefetch and its subsequent
demand access, decreasing the likelihood of an intervening
invalidation or replacement.
Speculative prefetches. In ILP, prefetch instructions
can be speculatively issued past a mispredicted branch.
Speculative prefetches can potentially hurt performance by
bringing unnecessary lines into the cache, or by bringing
needed lines into the cache too early. Speculative prefetches
can also help performance by initiating a prefetch for a
needed line early enough to hide its latency. In our appli-
cations, most prefetches issued past mispredicted branches
were to lines also accessed on the correct path.
Columns (Simple/ILP pairs): % reduction in execution time; % reduction in memory stall time; % remaining memory stall time; % prefetches that are late.
Mp3d: 43 43, 78 59, 29 62, 1 12
Water:
Average: 14 14, 59
Fig. 9. Detailed data on effectiveness of software prefetching. For the
average, from LU and LUopt, only LUopt is considered since it
provides better performance than LU with prefetching and ILP.
Fig. 10. Factors affecting the performance of prefetching for ILP: late prefetches, resource contention, clustered load misses, overlapped accesses, early prefetches, and speculative prefetches, for LU, LUopt, FFTopt, Mp3d, Water, and Radix.
C. Impact of Software Prefetching on Execution Time
Despite its reduced effectiveness in addressing memory
stall time, software prefetching achieves significant execution
time reductions with ILP in three cases (LU, Mp3d,
and Water) for two main reasons. First, memory stall time
contributes a larger portion of total execution time in ILP.
Thus, even a reduction of a small fraction of memory stall
time can imply a reduction in overall execution time similar
to or greater than that seen in Simple. Second, ILP
systems see less instruction overhead from prefetching compared
to Simple systems, because ILP techniques allow the
overlap of these instructions with other computation.
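A small back-of-the-envelope calculation (with invented numbers, not measured data) shows why a weaker relative reduction in memory stall time can still yield a comparable reduction in total execution time on ILP:

# Hypothetical breakdown of execution time (CPU, memory stall) before prefetching.
simple = {"cpu": 70.0, "mem": 30.0}
ilp    = {"cpu": 20.0, "mem": 25.0}

reduction_simple = 0.60   # prefetching removes 60% of memory stall time on Simple
reduction_ilp    = 0.35   # but only 35% on ILP (late prefetches, contention)

for name, t, r in [("Simple", simple, reduction_simple), ("ILP", ilp, reduction_ilp)]:
    total = t["cpu"] + t["mem"]
    saved = r * t["mem"]
    print(name, "execution time reduced by", round(100 * saved / total, 1), "%")
# Simple: 18.0 %; ILP: about 19.4 %. The overall benefit is similar despite the
# smaller relative memory-stall reduction, because memory stalls dominate ILP time.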
D. Alleviating Late Prefetches and Contention
Our results show that late prefetches and resource contention
are the two key limitations to the effectiveness of
prefetching on ILP. We tried several straightforward modifications
to the prefetching algorithm and the system to
address these limitations [12]. Specifically, we doubled and
quadrupled the prefetch distance (i.e., the distance between
a prefetch and the corresponding demand access), and increased
the number of MSHRs. However, these modifications
traded off benefits among late prefetches, early
prefetches, and contention, without improving the combination
of these factors enough to improve overall per-
formance. We also tried varying the prefetch distance for
each access according to the expected latency of that access
(versus a common distance for all accesses), and prefetching
only to the L2 cache. These modifications achieved their
purpose, but did not provide a significant performance benefit
for our applications [12].
V. Discussion
Our results show that shared-memory systems are limited
in their effectiveness in exploiting ILP processors due
to limited benefits of ILP techniques for the memory sys-
tem. The analysis of Section III implies that the key reasons
for the limited benefits are the lack of opportunity
for overlapping load misses and/or increased contention in
the system. Compiler optimizations akin to the loop interchanges
used to generate LUopt and FFTopt may be able to
expose more potential for load miss overlap in an applica-
tion. The simple loop interchange used in LUopt provides
a 13% reduction in execution time compared to LU on an
ILP multiprocessor. Hardware enhancements can also increase
load miss overlap; e.g., through a larger instruction
window. Targeting contention requires increased hardware
resources, or other latency reduction techniques.
The results of Section IV show that while software
prefetching improves memory system performance with
ILP processors, it does not change the memory-bound
nature of these systems for most of the applications because
the latencies are too long to hide with prefetching
and/or because of increased contention. Our results motivate
prefetching algorithms that are sensitive to increases
in resource usage. They also motivate latency-reducing
(rather than tolerating) techniques such as producer-
initiated communication, which can improve the effectiveness
of prefetching [1].
VI. Conclusions
This paper evaluates the impact of ILP techniques supported
by state-of-the-art processors on the performance of
shared-memory multiprocessors. All our applications see
performance improvements from current ILP techniques.
However, while ILP techniques effectively address the CPU
component of execution time, they are less successful in improving
data memory stall time. These applications do not
see the full benefit of the latency-tolerating features of ILP
processors because of insufficient opportunities to overlap
multiple load misses and increased contention for system
resources from more frequent memory accesses. Thus, ILP-based
multiprocessors see a larger bottleneck from memory
system performance and generally poorer parallel efficiencies
than previous-generation multiprocessors.
Software-controlled non-binding prefetching is a latency
hiding technique widely recommended for previous-generation
shared-memory multiprocessors. We find that
while software prefetching results in substantial reductions
in execution time for some cases on the ILP system, increased
late prefetches and increased contention for resources
cause software prefetching to be less effective in
reducing memory stall time in ILP-based systems. Even
after the addition of software prefetching, most of our applications
remain largely memory bound.
Thus, despite the latency-tolerating techniques integrated
within ILP processors, multiprocessors built from
ILP processors have a greater need for additional techniques
to hide or reduce memory latency than previous-generation
multiprocessors. One ILP-specific technique
discussed in this paper is the software clustering of load
misses. Additionally, latency-reducing techniques such as
producer-initiated communication that can improve the effectiveness
of prefetching appear promising.
--R
An Evaluation of Fine-Grain Producer- Initiated Communication in Cache-Coherent Multiprocessors
Adaptive and Integrated Data Cache Prefetching for Shared-Memory Multiprocessors
The SGI Origin
Tolerating Latency through Software-controlled Data Prefetching
Evaluation of Design Alternatives for a Multi-processor Microprocessor
The Case for a Single-Chip Multiprocessor
An Evaluation of Memory Consistency Models for Shared-Memory Systems with ILP Processors
RSIM Reference Manual
The Impact of Instruction Level Parallelism on Multiprocessor Performance and Simulation Methodology.
The Interaction of Software Prefetching with ILP Processors in Shared-Memory Systems
SPLASH: Stanford Parallel Applications for Shared-Memory
The SPLASH-2 Programs: Characterization and Methodological Considerations
--TR
--CTR
Vijay S. Pai , Sarita Adve, Code transformations to improve memory parallelism, Proceedings of the 32nd annual ACM/IEEE international symposium on Microarchitecture, p.147-155, November 16-18, 1999, Haifa, Israel
Manuel E. Acacio , Jos Gonzlez , Jos M. Garca , Jos Duato, Owner prediction for accelerating cache-to-cache transfer misses in a cc-NUMA architecture, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-12, November 16, 2002, Baltimore, Maryland
Christopher J. Hughes , Praful Kaul , Sarita V. Adve , Rohit Jain , Chanik Park , Jayanth Srinivasan, Variability in the execution of multimedia applications and implications for architecture, ACM SIGARCH Computer Architecture News, v.29 n.2, p.254-265, May 2001
Christopher J. Hughes , Sarita V. Adve, Memory-side prefetching for linked data structures for processor-in-memory systems, Journal of Parallel and Distributed Computing, v.65 n.4, p.448-463, April 2005
Xian-He Sun , Surendra Byna , Yong Chen, Server-based data push architecture for multi-processor environments, Journal of Computer Science and Technology, v.22 n.5, p.641-652, September 2007 | instruction-level parallelism;performance evaluation;shared-memory multiprocessors;software prefetching |
297811 | Bayesian Function Learning Using MCMC Methods. | AbstractThe paper deals with the problem of reconstructing a continuous one-dimensional function from discrete noisy samples. The measurements may also be indirect in the sense that the samples may be the output of a linear operator applied to the function (linear inverse problem, deconvolution). In some cases, the linear operator could even contain unknown parameters that are estimated from a second experiment (joint identification-deconvolution problem). Bayesian estimation provides a unified treatment of this class of problems, but the practical calculation of posterior densities leads to analytically intractable integrals. In the paper it is shown that a rigourous Bayesian solution can be efficiently implemented by resorting to a MCMC (Markov chain Monte Carlo) simulation scheme. In particular, it is discussed how the structure of the problem can be exploited in order to improve computational and convergence performances. The effectiveness of the proposed scheme is demonstrated on two classical benchmark problems as well as on the analysis of IVGTT (IntraVenous Glucose Tolerance Test) data, a complex identification-deconvolution problem concerning the estimation of the insulin secretion rate following the administration of an intravenous glucose injection. | Introduction
The problem of reconstructing (learning) an unknown function from a set of experimental
data plays a fundamental role in engineering and science. In the present paper,
the attention is restricted to scalar functions of a scalar variable
the main ideas may apply to more general maps as well. In the most favourable case, it is
possible to directly sample the function. Rather frequently, however, only indirect measurements
are available which are obtained by sampling the output of a linear operator
applied to the function. For instance, in deconvolution problems what is sampled is the
convolution of the unknown function with a known kernel, see e.g. [1], [2], [3], [4], [5].
The approaches used to solve the function learning problem can be classified according
to three major strategies. The parametric methods assume that the unknown function
belongs to a set of functions which are parameterized by a finite-dimensional parameters
vector. For instance, the function can be modelled as the output of a multi-layer
perceptron which, for a given topology, is completely characterized by the values of its
weights [6]. Another possibility is to use a polynomial spline with fixed knots. In both
cases, function learning reduces to the problem of estimating the model parameters, a task
that can be performed by solving a (possibly nonlinear) least squares problem. If it were
true that the unknown function belongs to the given function space, statistical estimation
theory could be invoked in order to find minimum variance estimators and compute confidence
intervals [7]. Moreover, it would also be possible to compare parametric models
of increasing complexity using statistical tests (F-test) or complexity criteria (Akaike's
criterion).
The second strategy, namely regularization [8], [9], [10], [11], avoids introducing heavy
assumptions on the nature of F(\Delta) but rather classifies the potential solutions according
to their "regularity" (typically by using an index of smoothness such as the integral
of the squared k-th derivative of the function). The relative importance of the sum of
squared residuals against the regularity index is controlled by the so-called regularization
parameter. The key problem is finding an optimal criterion for the selection of the
regularization parameter, although empirical criteria such as ordinary cross validation
and generalized cross validation [12], [13] perform satisfactorily in many practical cases.
Moreover, regularization does not provide confidence intervals so that it is not possible
to assess the reliability of the reconstructed function.
The present paper deals with the third strategy which is based on Bayesian estimation.
The unknown function is seen as an element of a probability space whose probability
distribution reflects the prior knowledge. For instance, the prior knowledge that F(\Delta)
is smooth is translated in a probability distribution that assigns higher probabilities to
functions whose derivatives have "small" absolute values. A practical way to do that is
to describe F(\Delta) as a Gaussian stochastic process whose k-th derivative is a white noise
process with intensity - 2 . Provided that both - 2 and the variance oe 2 of the (Gaussian)
measurement noise are known, the Bayes formula can be used to work out the posterior
distribution of F(\Delta) given the data [2], [14], [15], [16]. The posterior provides a complete
description of our state of knowledge. In particular, the mean of the posterior can be used
as a point estimate (Bayes estimate) whereas the variance helps assessing the accuracy.
It is notable that, if the regularization parameter is taken equal to the ratio oe 2 =- 2 the
regularized estimate coincides with the Bayes one.
The main advantage of the Bayesian approach is the possibility to address the selection
of the regularization parameter in a rigourous probabilistic framework. In fact, when - 2
is not known, it can be modelled as a random variable and two different approaches are
possible. The simpler one is based on the following observation: if the prior distribution
of - 2 is very flat, the maximum of its posterior given the data is close to the maximum
likelihood estimate - 2
ML . Then, if the posterior of - 2 is very peaked around its max-
imum, it is reasonable to estimate F(\Delta) as if - 2
ML were the true value of - 2 [14], [17], [4].
However, if the posterior of - 2 is not "very peaked" (which is likely to happen especially
for medium and small data sets), neglecting the uncertainty on - 2 would lead to under-estimated
confidence margins for F(\Delta). The truly Bayesian approach, conversely, calls for
the computation of the posterior of F(\Delta) taking into account also the random nature of
. Since the involved integrals are analytically intractable, one has to resort to Monte
Carlo methods.
A first purpose of the present paper is to show how a truly Bayesian solution of the
function learning problem can be efficiently worked out. We discuss the various stages of
the procedure starting from the discretization of F(\Delta) to arrive at the practical implementation
of Markov chain Monte Carlo (MCMC) methods [18]. Our approach is similar to the
one proposed in [15] where, however, the case of indirect measurements (deconvolution
problem) is not treated.
A further issue addressed in the paper is the joint identification-deconvolution problem,
which arises when the convolution kernel is not a priori known but is to be identified by a
separate experiment. The standard (suboptimal) approach is to identify the convolution
kernel and then use it as if it were perfectly known in order to learn the unknown function
F(\Delta). As shown in the paper, the use of MCMC methods allows to learn both functions
jointly.
The paper is organized as follows. Section II contains the statement of the problem. In
Section III, after a concise review of MCMC methods, a numerical procedure for solving
the Bayesian function learning problem is worked out. In section Section IV the proposed
method is illustrated by means of simulated as well as real-world data coming from the
analysis of metabolic systems. Some conclusions (Section V) end the paper.
II. Problem statement
In this paper we consider the problem of reconstructing a function \tilde{f}(\cdot) from discrete and noisy samples y_k such that

  y_k = \tilde{L}_k[\tilde{f}] + v_k,   k = 1, ..., n        (1)

where v_k denotes the measurement error and \tilde{L}_k is a linear functional. In increasing order of generality, we have:

  \tilde{L}_k[\tilde{f}] = \tilde{f}(t_k)        (2)
  \tilde{L}_k[\tilde{f}] = \int_{t_0}^{t_k} \tilde{h}(t_k - \tau) \tilde{f}(\tau) d\tau        (3)
  \tilde{L}_k[\tilde{f}] = \int_{t_0}^{t_k} \tilde{h}(t_k, \tau) \tilde{f}(\tau) d\tau        (4)

where t_1, ..., t_n denote the sampling instants and t_0 is the initial time.
The first definition of ~
corresponds to the function approximation problem based on
samples of the function itself. Using the second and third definitions, the problem (1)
becomes a deconvolution problem, or an integral equation of the first kind (also called
Fredholm equation), respectively. As for the noise v_k, letting v := [v_1 ... v_n]^T, it is assumed that E[v] = 0 and E[v v^T] = \sigma^2 \Psi, where \Psi is a known matrix and \sigma^2 is a (possibly unknown) scalar. In the following, y := [y_1 ... y_n]^T. A non parametric estimator based on Tychonov regularization is given by

  \tilde{f}_\gamma = arg min_{\tilde{f}} \sum_{k=1}^{n} (y_k - \tilde{L}_k[\tilde{f}])^2 + \gamma \| \tilde{P} \tilde{f} \|^2        (5)

where \gamma > 0, \tilde{P} is a suitable operator and \| \cdot \| is a norm in a suitable function space. A typical choice for \tilde{P} is:

  \tilde{P} \tilde{f} = d^2 \tilde{f} / dt^2        (6)

Then, if in addition \tilde{L}_k is as in (2) and the L_2 norm is used, \tilde{f}_\gamma turns out to be a smoothing spline [13]. The basic idea behind (5) is to find a balance between data fit and smooth-
ness of the solution, the relative weight being controlled by the so-called regularization parameter \gamma. Various criteria have been proposed to tune \gamma. Among them, we may mention ordinary cross-validation [1], generalized cross-validation [12], and the L-curve [19]. It is worth noting that without the aid of the smoothness penalty \gamma \| \tilde{P} \tilde{f} \|^2 it would be impossible to learn the function \tilde{f} (belonging to an infinite dimensional space) from the finite data set {y_k}.
Rather interestingly, (5) can also be interpreted as a Bayesian estimator. To this purpose, assume that \tilde{f} is a stochastic process such that \tilde{P}\tilde{f} is a white noise with intensity \tilde{\lambda}^2, i.e., letting \tilde{w} := \tilde{P}\tilde{f}, E[\tilde{w}(t)] = 0, \forall t, and E[\tilde{w}(t)\tilde{w}(\tau)] = \tilde{\lambda}^2 \delta(t - \tau). Then, assuming that both v_k and \tilde{f} are normally distributed and letting \gamma = \sigma^2 / \tilde{\lambda}^2, the regularized estimate \tilde{f}_\gamma coincides with the conditional expectation of \tilde{f} given the observations y_k, i.e. E[\tilde{f} | y] = \tilde{f}_\gamma. According to the Bayesian paradigm, the probabilistic assumptions on the unknown function \tilde{f} should reflect our prior knowledge. For instance, if the operator \tilde{P} is as in (6), then \tilde{f} is the double integral of a white noise: as such it is a relatively smooth signal (the smaller \tilde{\lambda}^2, the smoother \tilde{f} will be).
The above considerations suggest that \tilde{f}_\gamma is the "optimal" estimator provided that \gamma = \sigma^2 / \tilde{\lambda}^2. Unfortunately, \tilde{\lambda}^2 (and sometimes also \sigma^2) is very unlikely to be known a priori. In the literature [20], [14], [4] it has been proposed to regard \tilde{\lambda}^2 as an unknown parameter and compute its maximum likelihood estimate:

  \tilde{\lambda}^2_{ML} = arg max_{\tilde{\lambda}^2} p(y | \tilde{\lambda}^2)

In this way \tilde{f}_\gamma with \gamma = \sigma^2 / \tilde{\lambda}^2_{ML} becomes a sub-optimal estimator.
In this paper, conversely, we pursue a rigourous Bayesian approach. The unknown parameter \tilde{\lambda}^2 is modelled as a hyper-parameter having a suitable prior distribution, which is taken into account in the computation of the posterior density

  p(\tilde{f} | y) = \int p(\tilde{f} | y, \tilde{\lambda}^2) p(\tilde{\lambda}^2 | y) d\tilde{\lambda}^2
Since the analytic evaluation of the posterior is made intractable by the presence of
the hyper-parameter, the calculations are carried out by means of Monte Carlo sampling
algorithms. In this context it is convenient to discretize the original problem. Without
loss of generality, it is assumed that t_0 = 0. Then, given a sufficiently small discretization interval T, consider the vector f := [\tilde{f}(T) \tilde{f}(2T) ... \tilde{f}(NT)]^T, with NT \geq t_n. For simplicity, it is assumed that the sampling instants t_k are multiples of T, i.e. t_k = j_k T. Moreover, the operator \tilde{P} is suitably discretized. For instance, with \tilde{P} as in (6), \tilde{P}\tilde{f} is approximated by the second difference of the discretized signal. Moreover, \tilde{L}_k[\tilde{f}] is approximated by Lf, where L is a suitable n \times N matrix:

  if \tilde{L}_k is as in (2) then L_{kj} = 1 for j = j_k and L_{kj} = 0 otherwise;
  if \tilde{L}_k is as in (3) then L_{kj} = \int_{(j-1)T}^{jT} \tilde{h}(t_k - \tau) d\tau for jT \leq t_k, and L_{kj} = 0 otherwise;
  if \tilde{L}_k is as in (4) then L_{kj} = \int_{(j-1)T}^{jT} \tilde{h}(t_k, \tau) d\tau for jT \leq t_k, and L_{kj} = 0 otherwise.

These approximations are obtained under the assumption that the unknown function \tilde{f} is constant in between the sampling instants. After the discretization, (5) is approximated by:

  f_\gamma = arg min_f \| y - Lf \|^2 + \gamma \| Pf \|^2

(where \| \cdot \| denotes the usual Euclidean norm), whose closed form solution is:

  f_\gamma = (L^T L + \gamma P^T P)^{-1} L^T y
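As an illustration of this discretization, the sketch below fills L for the convolution case under the piecewise-constant assumption and computes the closed-form regularized estimate. The kernel, grid, data, regularization value and first-difference penalty are arbitrary illustrative choices (with unit noise covariance for simplicity), not those used in the paper.

import numpy as np

T, N = 0.5, 40                                   # discretization step and grid size
t_samples = np.array([2.0, 5.0, 10.0, 15.0])     # sampling instants (hypothetical)
h = lambda tau: np.exp(-tau)                     # example convolution kernel

# L[k, j] approximates the integral of h(t_k - tau) over the j-th interval
# ((j-1)T, jT], here via the midpoint rule; it is zero for intervals beyond t_k.
L = np.zeros((len(t_samples), N))
for k, tk in enumerate(t_samples):
    for j in range(1, N + 1):
        if j * T <= tk:
            L[k, j - 1] = T * h(tk - (j - 0.5) * T)

P = np.eye(N) - np.eye(N, k=-1)                  # first-difference penalty matrix
gamma = 1.0
y = np.array([1.0, 0.8, 0.3, 0.1])               # hypothetical noisy measurements
f_gamma = np.linalg.solve(L.T @ L + gamma * P.T @ P, L.T @ y)
print(f_gamma.shape)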
In the Bayesian setting it is assumed that the entries of the vector w are taken from the realization of a (discrete-time) white noise with variance \lambda^2, that is E[w] = 0, E[w w^T] = \lambda^2 I.
A first purpose of the present paper is to investigate the applicability of Markov chain
Monte Carlo [18] sampling methods to estimate p(f j y) assuming that - 2 is a parameter
with suitable prior distribution. Another issue concerns the Bayesian solution of the
joint identification-deconvolution problem. In fact, in many deconvolution problems the
impulse response ~ h(-) that enters the convolution integral (3), depends on one or more
parameters -, that are estimated by a separate experiment: \tilde{h}(\tau) = \tilde{h}(\tau; -). Putting together the two experiments, we obtain:

  y = L(-) f + v        (9)
  z = M(-) + ffl        (10)

where M(-) is a known function of -, z := [z_1 ... z_m]^T is the data vector used to identify - and ffl := [ffl_1 ... ffl_m]^T is the corresponding measurement error.
The standard approach is to estimate -
using (10) and then estimate f from using -
as if it were the true value of -. On the other hand, a truly Bayesian approach describes
- as a random variable. Then, p(f j y) should be evaluated by considering (9) and (10)
simultaneously. As a particular case, it is possible to consider (9) alone with - modelled
as a random variable to allow for its uncertainty. Again, the standard "suboptimal"
approach is to compute f fl using the nominal value of - [4] and then assess the sensitivity
of the estimate with respect to parameters uncertainty.
III. Markov chain Monte Carlo Methods in Bayesian estimation
problems
Probabilistic inference involves the integration over possibly high-dimensional probability
distributions. Since this operation is often analytically intractable, it is common to
resort to Monte Carlo techniques, that requires sampling from the probability distribution
to be integrated. Unfortunately, sometimes it is impossible to extract samples directly
from that distribution. Markov chain Monte Carlo (MCMC) methods [18] provide a
unified framework to solve this problem.
MCMC methods are based on two steps: a Markov chain and a Monte Carlo integra-
tion. By sampling from suitable probability distributions, it is generated a Markov chain
that converges (in distribution) to the target distribution, i.e. the distribution to be in-
tegrated. Then, the expectation value is calculated through Monte Carlo integration over
the obtained samples.
The MCMC methods differ from each other in the way the Markov chain is generated.
However, all the different strategies proposed in the literature, are special cases of the
Metropolis-Hastings [21], [22] framework. Also the well-known Gibbs sampler [23] fits in
the Metropolis-Hastings scheme.
In the following sub-sections, we will describe the application of the Metropolis-Hastings
algorithm to Bayesian function learning as well as discuss about its possible variants.
A. The method
In order to describe the Metropolis-Hastings algorithm we will use the following notation:
ffl ' - the vector of the model parameters
ffl y - the vector of the data (i.e. the observations)
ffl \Theta_i - the i-th sample drawn
ffl p_{';y}(') - the target distribution (proportional to the posterior distribution), which is proportional to the product p(') p_y(y | '), where p(') is the prior distribution of the model parameters and p_y(y | ') is the likelihood.
The Markov chain derived by the Metropolis-Hastings method is obtained through the
following steps:
1. at each time t, a candidate sample \Theta^* is drawn from a proposal distribution q(\cdot | \Theta_t);
2. the candidate point \Theta^* is accepted with probability
   \alpha(\Theta_t, \Theta^*) = min{ 1, [ p_{';y}(\Theta^*) q(\Theta_t | \Theta^*) ] / [ p_{';y}(\Theta_t) q(\Theta^* | \Theta_t) ] };
3. if the candidate point \Theta^* is accepted, the next sample of the Markov chain is \Theta_{t+1} = \Theta^*, else the chain does not move and \Theta_{t+1} = \Theta_t.
It is important to remark that the stationary distribution of the chain (i.e. the distribution
to which the chain converges) is independent of the proposal distribution [18], and
coincides with the target distribution p ';y (').
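The following Python sketch implements these steps for a generic target and proposal; the one-dimensional Gaussian target and random-walk proposal used in the demonstration are placeholders, not the model considered in this paper.

import numpy as np

def metropolis_hastings(log_target, propose, log_q, theta0, n_samples, rng=None):
    # log_target: unnormalized log of the target density
    # propose(theta, rng): draws a candidate from q(.|theta); log_q(a, b) = log q(a|b)
    rng = rng or np.random.default_rng(0)
    theta = theta0
    chain = [theta0]
    for _ in range(n_samples):
        cand = propose(theta, rng)
        log_alpha = (log_target(cand) + log_q(theta, cand)
                     - log_target(theta) - log_q(cand, theta))
        if np.log(rng.uniform()) < min(0.0, log_alpha):   # accept with probability alpha
            theta = cand
        chain.append(theta)                               # otherwise the chain stays put
    return np.array(chain)

# Example: standard normal target with a symmetric Gaussian random-walk proposal,
# so the q terms cancel.
chain = metropolis_hastings(
    log_target=lambda x: -0.5 * x**2,
    propose=lambda x, rng: x + rng.normal(scale=1.0),
    log_q=lambda x_to, x_from: 0.0,
    theta0=0.0, n_samples=2000)
print(chain.mean(), chain.std())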
Although any proposal distribution, on long-run, will deliver samples from the target
distribution, the rate of convergence to the stationary distribution of the generated
Markov chain crucially depends on the relationships between the proposal and the target
distributions; moreover, the number of samples necessary to perform the Monte Carlo
steps depends on the speed with which the algorithm "mixes" (i.e. spans the support of
the target distribution).
When the vector of the model parameters is large, it is often convenient to divide '
into K components and update the samples \Theta of these components one by one [24]. This
scheme is called Single-Component Metropolis-Hastings
Let \Theta (i)
t the i th component of \Theta at time t and \Theta (\Gammai)
\Theta (K)
t g, the Metropolis-Hastings scheme turns to:
1. at each time t, the next sample \Theta (i)
t+1 is derived by sampling a candidate point \Theta (i)
from a proposal distribution q (i)
2. the candidate point \Theta (i)
is accepted with probability:
ff(\Theta (i)
3. if the candidate point \Theta (i)
is accepted, the next sample of the Markov chain is \Theta (i)
\Theta (i)
, else the chain does not move and \Theta (i)
t .
The Gibbs sampler (GS) is just a special case of the single-component Metropolis-
Hastings. The GS scheme exploits the full conditional (the product of the prior distribution
and the likelihood) as the proposal distribution. In this case, it is easy to verify
that the candidate point is always accepted, so that the Markov chain moves at every
step. When the full conditionals are standard distributions (easy to sample from), the
GS represents a suitable choice. On the contrary, when it is not possible to draw samples
directly from the full-conditional distributions, it is convenient to resort to mixed schemes
Metropolis-Hastings). In this setting, a portion of the model parameters
is estimated using the Gibbs Sampler, while the other ones are treated using "ad-hoc"
proposal distributions.
These algorithms have been extensively used in the field of probabilistic graphical modelling
[25]. Using this kind of models a suitable partition (blocking) of the vector of model
parameters is naturally obtained and it is also easy to derive the full conditional distribu-
tion. The convergence rate and the strategies for choosing the proposal distribution are
described in [26], [27].
B. MCMC in Function Reconstruction
In this sub-section we will describe how the problems defined in Section II can be tackled
using MCMC methods. In order to explain the probabilistic models used in the different
sampling schemes, we will resort to a Bayesian Network (BN) representation. BNs are
Directed Acyclic Graphs (DAGs) in which nodes represent variables, while arcs express
direct dependencies between variables. These models are quantified by specifying the
conditional probability distribution of each node given its parents. They will help us in
expressing the conditional independence assumptions underlying the different function
reconstruction problems. For further details see [28], [29].
B.1 Function approximation based on direct sampling
Consider the function approximation problem based on samples of the function itself
(smoothing problem). Its formal definition is given in equations (1) and (2). Our goal is
to provide a Bayesian estimate of the vector f (i.e. the discretized unknown function).
As shown in Section II, the discretized form of the problem may be written as:
Referring to Section II, f :=P \Gamma1 w with E[w]=0, E[ww T
To apply the MCMC strategy described in Section III-A, we must assign a suitable
probabilistic model to the parameter set g. By exploiting a set of standard
choices in Bayesian estimation problems, it is assumed that w is normally distributed
given - 2 and that the precision parameter 1
has a Gamma distribution. More formally,
and
. Moreover, we suppose that the noise v has a Normal distribution,
covariance matrix oe 2 \Psi. This implies that the data model can
be written as: p(y
In this setting the target distribution becomes:
The model is described by the simple BN of Fig. 1
It is easy to see that, in order to apply MCMC integration, it is useful to adopt the
partition =wg. In this context, it is convenient to adopt the Gibbs
Sampler, since the full conditional distributions assume the following standard form:
y
I
Y
Fig. 1. Function approximation: probabilistic model of the smoothing problem, deconvolution problem
and Fredholm equation problem.
where
From the point estimate -
derived by the MCMC algorithm, it is trivial to
reconstruct the unknown function f as -
Moreover, having samples from the
joint posterior distribution, it is possible to derive any statistics of interest, including
confidence intervals (or more appropriately: Bayes intervals).
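A compact sketch of such a Gibbs sampler is given below. It relies on the standard conjugate full conditionals for a model of the form y = Lf + v, f = P^{-1}w, w ~ N(0, lambda^2 I), v ~ N(0, sigma^2 I), 1/lambda^2 ~ Gamma(a, b); the matrices, data and hyper-parameter values are invented for illustration, and the parameterization may differ from the one adopted in the paper.

import numpy as np

rng = np.random.default_rng(1)
N, n = 50, 50
L = np.eye(n)                                   # direct sampling (smoothing) case
P = np.eye(N) - np.eye(N, k=-1)                 # first-difference operator
A = L @ np.linalg.inv(P)                        # y = A w + v
t = np.linspace(0, 1, n)
y = np.sin(2 * np.pi * t) + rng.normal(scale=0.2, size=n)
sigma2, a, b = 0.2**2, 1e-3, 1e-3               # noise variance and Gamma prior

w, lam2 = np.zeros(N), 1.0
samples_f = []
for it in range(2000):
    # full conditional of w: Gaussian with precision A'A/sigma2 + I/lambda2
    prec = A.T @ A / sigma2 + np.eye(N) / lam2
    cov = np.linalg.inv(prec)
    mean = cov @ (A.T @ y / sigma2)
    w = rng.multivariate_normal(mean, cov)
    # full conditional of 1/lambda2: Gamma(a + N/2, b + w'w/2) (shape, rate)
    lam2 = 1.0 / rng.gamma(shape=a + N / 2, scale=1.0 / (b + w @ w / 2))
    if it >= 500:                               # discard burn-in
        samples_f.append(np.linalg.solve(P, w))

f_hat = np.mean(samples_f, axis=0)              # point estimate of the function
lo, hi = np.percentile(samples_f, [2.5, 97.5], axis=0)   # 95% Bayes interval
print(f_hat[:5], lo[:5], hi[:5])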
B.2 Function approximation in inverse problems
In deconvolution problems the unknown function has to be reconstructed on the basis
of indirect measurements: a convolution integral expresses the relationships between the
samples and the unknown function, see (1) and (3). The structure of the problem is
analogous when considering integral equations of the first kind (Fredholm equations), see
(1) and (4)
Again, our goal is to provide a Bayesian estimate of the (discretized) unknown f func-
tion. The discretized form of the problem still has the form (11). Since the functions ~ h(-)
and ~ h(t; -) that enter in the definitions (3) and (4) are assumed to be known, the matrix
L is completely specified. Also in this case, f := P \Gamma1 w with
so that the parameter set to be estimated is the same as in the smoothing problem:
g. Thus, the probabilistic model for the inverse problems (1),(3) and (1),(4) is
again described by the BN of Fig. 1. In fact, in our setting, the smoothing problem, the
deconvolution problem and the Fredholm equation problem differ only in the computation
of the matrix L.
B.3 Deconvolution problems with uncertain impulse response
An interesting extension of the problem described in the previous section is to relax the
assumption of complete knowledge of the impulse response ~ h(-) appearing in equation
(3). As anticipated in Section II, we suppose that ~ is a function of a set
of unknown parameters -, which have to be estimated from experimental data. In this
way, the problem becomes the simultaneous estimation of the unknown function f and
the parameter set -, given the model described by (9) and (10).
Once again, f := P \Gamma1 w with I. In this case, however, the
probabilistic model to be specified becomes more complex. The parameter set to be
estimated is now -g. The corresponding probabilistic model is assumed as:
- 0 and \Sigma - . Moreover, we suppose that the noise signals v and ffl are independent and
normally distributed with known covariance matrices, namely v - N(0; oe 2 \Psi) and ffl -
Then, the data model can be written as: p(y j w;
In this setting the target distribution becomes:
The resulting model is described by the BN of Fig. 2.
y
I
x
x
x
z ~ N(M(
1/ l
YFig. 2. Function approximation in deconvolution problems in presence of uncertainty on the impulse
response: probabilistic model.
In order to devise an MCMC scheme for estimating ', it is convenient to partition ' as
=-g. The full conditional distributions are as follows:
where
and
The full conditional (16) is clearly a non-standard distribution, so that it cannot be
sampled directly. In order to use the GS strategy it is necessary to apply sampling
algorithms for general distributions, like rejection sampling or adaptive rejection sampling
[30]. Unfortunately, such algorithms may impair the overall efficiency of the stochastic
simulation machinery.
A valuable alternative is to resort to a mixed MCMC scheme in which different proposal
distributions are used to extract samples from the different partitions of '.
ffl The proposal for ' (1) ; ' (2) are the full conditional distributions, as in the GS. In this
way the candidate point for these partitions is always accepted.
ffl The proposal distribution for ' (3) can be chosen as the prior distribution
In this case, the proposal distribution is independent from the past sample
drawn by the Markov chain, so that This scheme is also known as
independence sampler [31]. The acceptance probability for the candidate sample \Theta (3)
simplifies as:
ff(\Theta (3)
so that the acceptance rate depends only on the ratio of the likelihood in the candidate
point to the likelihood for the current one. On the basis of the specific problem, other
proposal distributions can be chosen in order to obtain the best performance in the
computational speed. For example, when possible, a good choice can be the use of a
standard distribution that is a good approximation of the full conditional distribution.
This is a way to preserve the advantages of the single-component Metropolis-Hastings,
avoiding the additional computational burden that would be entailed by the GS in
presence of non-standard full conditional distributions.
IV. Bayesian function learning at work
In this section we will show how the above presented methodology is able to cope with
three different benchmark problems taken from the literature.
A. Function approximation based on direct sampling
To test the performance of the MCMC function approximator in the smoothing problem,
we consider an example proposed by Wahba [13]. The function to be approximated is:
~
and the noisy samples y k are:
Since we are interested in reconstructing the function only in correspondence of the
measurement times, in equation (11) L is the identity matrix. Following Section III-B.1
we take the following prior
where I is the identity matrix. The choice of the prior distribution parameters for the- 2
reflects the absence of reliable prior information on the regularization parameter. In
fact with this choice - with probability of 0.9.
As in [13], we assume that the second derivative of the function is regular; taking into
account the discretization, the operator P is chosen as:
Fig. 3. Function approximation using the MCMC smoother: noisy data (stars), true function (dash-dot line) and reconstructed function (continuous line); t and f are in arbitrary units.
The starting point of the Markov chain was extracted from the prior distribution
of the parameters fw; - 2 g. After a 1000 samples run of the MCMC scheme, convergence
of the estimates was verified by the method described in [26]. In particular, after
choosing the quantiles 0:975g to be estimated with precision
respectively with probability burn-in (N) of
samples and a number of required samples (M) of 870 were calculated. The results
are shown in Fig. 3.
Although the samples are rather noisy, the smoothed signal is close to the true function.
The RMSE (root mean square error) obtained is 0.073. Moreover, the performances of the
MCMC smoother are similar to the ones obtained in [13], where a cubic smoothing spline
was used and the regularization parameter was tuned through ordinary cross-validation
(OCV). Hence, the MCMC smoother is as good as OCV-tuned smoothing splines in
avoiding under- and over-smoothing problems. The main advantage of MCMC smoother is
that it provides also the a-posteriori sampling distribution of the regularization parameter
and the confidence intervals for the reconstructed function in a rigourous Bayesian setting.
B. Function approximation in deconvolution problems
In order to test the performances of the MCMC deconvolution scheme, we consider a
well-known benchmark problem [32], [2], [4]. The input signal given by:
~
has been convoluted with the impulse response:
Then, by adding measurement errors v k simulated as a zero-mean white Gaussian noise
sequence with variance equal to 9, 52 noisy samples are generated at time t
Fig. 4(a) shows the true function ~
while Fig. 4(b) depicts the convoluted
function together with the 52 noisy samples.
Our goal is to reconstruct the unknown function with a sufficiently fine resolution. This
means that we are interested in the function estimates not only in correspondence of the
measurement times, but also in other "out of samples" time points. In particular, we
consider a 208 points on an evenly-spaced time grid in the interval [0 1035]. Then, the
entries of L are
R T~
with
We take the following prior distributions, similar to the ones used in the previous section:- 2 - \Gamma(0:25; 5e \Gamma 7)
Fig. 4. Simulated deconvolution problem. Panel (a): the function to be reconstructed. Panel (b): the
convoluted function (dash-dot line) and the noisy samples (stars).
Moreover, the operator P is selected as a first-difference matrix. This choice corresponds to a penalty on the squared norm of the first derivative (which is approximated by the first difference of the discretized signal).
The bottleneck of the MCMC scheme is the computation of B^{-1} in equation (15) (this matrix inverse has to be performed at each step of the MCMC scheme). However, by a proper change of coordinates, it is possible to reduce the size of the matrix that has to be inverted from N \times N to n \times n (i.e. from 208 \times 208 to 52 \times 52 in our problem). This goal can be
achieved through the following steps:
1. Let hence an n \Theta N matrix), and compute the
SVD (singular value n) and V (N \Theta N)
are orthogonal matrices (UU and D is an n \Theta N diagonal matrix
(D
2. In view of (11)
It is easy to verify that v is
distributed as N(0; I n ), and w as N(0; - 2 I N ).
3. We apply the same MCMC scheme described in the previous section to the reformulated
problem (17). In the new coordinates B is a block diagonal matrix, in which
the first one is an N \Theta N block and the other ones are 1 \Theta 1 blocks.
4. The final estimate is obtained by re-transforming the variables in the original co-
ordinates; in particular we need to compute:
The starting point of the Markov chain was extracted from the prior distribution of
the parameters fw; - 2 g. After 5000 steps of the MCMC scheme, the convergence of
the estimates was verified by using the method described in [26]. In particular after
choosing the quantiles 0:975g to be estimated with precision
respectively with probability burn-in (N) of
132 samples and a number of required samples (M) of 3756 were calculated. The results
are shown in Fig. 5.
The performance of our approach is comparable with the one proposed in [4], where the
regularization parameter is estimated according to a maximum likelihood criterion (see
Fig. 6, Fig. 7).
Fig. 5. Simulated deconvolution problem. Panel (a): the true function (dash-dot line) and the re-constructed
one (continuous line), with its 95% confidence interval (dashed line), obtained with the
MCMC scheme. Panel (b): the true noiseless output (dash-dot line), the estimated output (continu-
ous line) and the noisy samples (*).
The RMSE obtained with our approach is 0.065 while the one obtained by the method
of [4] is 0.059. Again, the advantage of the MCMC scheme is its ability to provide
the a-posteriori sampling distribution of the regularization parameter and the confidence
intervals for the reconstructed function in a rigourous Bayesian setting.
C. Deconvolution with uncertain impulse response
In this subsection, the MCMC scheme is applied to a real-world problem taken from
[33]; in particular, we demonstrate that deconvolution and impulse response identification
can be addressed jointly, as described in Section III-B.3.
The goal is to quantify the Insulin Secretion Rate (ISR) in humans after a glucose stim-
ulus; the experimental setting is related to the so-called IntraVenous Glucose Tolerance
Fig. 6. Simulated deconvolution problem. Comparison of the MCMC scheme with the maximum
likelihood one (see text). True function (dashed line), MCMC estimate (continuous line), maximum
likelihood estimate (thick line).
Test (IVGTT), where an impulse dose of glucose is administered in order to assess the
subject's capability of bringing Blood Glucose Levels within normal ranges through the
endogenous release of Insulin, the main glucoregulatory hormone.
The ISR cannot be directly measured since insulin is secreted by the pancreas into the
portal vein, which is not accessible in vivo. It is possible to measure only the effect of the
secretion in the circulation (the plasma concentration of insulin). However, because of the large
liver extraction, the plasma insulin concentration reflects only the post-hepatic delivery
rate into the circulation. This problem can be circumvented by measuring C-peptide (CP)
concentration in plasma. The CP is co-secreted with insulin on an equimolar basis, but is
not extracted by liver, so that it directly reflects the pancreatic ISR. Thus, the problem
turns into the estimation of the ISR on the basis of the (noisy) measurements of CP in
plasma. Since the CP kinetics can be described by a linear model, we obtain the following
Fig. 7. Simulated deconvolution problem. Panel (a): Markov chain of the λ² parameter. Panel (b):
frequency histogram proportional to the posterior distribution of the estimated parameter λ². The
value of λ² derived with the maximum likelihood approach (see text) was 0.00665.
measurement model:
y_k = ∫_0^{t_k} h̃(t_k − τ) f̃(τ) dτ + v_k,
where y_k are the CP plasma measurements (pmol/ml), f̃(·) is the ISR (pmol/min), h̃(·)
is the CP impulse response (ml⁻¹) and v_k is the measurement error (pmol/ml). The CP
impulse response is [33]:
h̃(τ) = Σ_{i=1}^{3} A_i e^{−α_i τ},
where the parameters A_i and α_i have to be estimated through an ad-hoc experiment; in
particular an intravenous bolus of biosynthetic CP is delivered to the patient and a number
of plasma measurements of CP are collected (a somatostatin infusion is administered in
order to avoid the endogenous pancreatic secretion). The impulse response parameters
depend on the single patient but are considered constant over time for a specific patient.
We can apply the MCMC scheme of Section III-B.3 in order to jointly perform the CP
impulse response identification and the ISR reconstruction.
Following Section II, the problem can be written in terms of the unknown input f and the impulse
response parameters θ = (A_1, α_1, A_2, α_2, A_3, α_3): y = L(θ) f + v for the deconvolution experiment
and z = M(θ) + ε for the identification experiment, where L(θ) is the matrix obtained from the discretization of
the deconvolution integral, z are the noisy measurements of the CP plasma concentration
during the impulse response identification experiment, v and ε are the measurement
errors, and M(θ) is the model response of the identification experiment at its sampling instants.
L(θ) has elements
L(θ)(k, j) = ∫_{τ_{j-1}}^{τ_j} h̃(t_k − τ; θ) dτ.
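As an illustration, a small numpy sketch of how such a discretized convolution matrix can be built for a sum-of-exponentials impulse response; the midpoint quadrature rule and the handling of the (possibly nonuniform) grid are assumptions, not the paper's exact discretization.

```python
import numpy as np

def impulse_response(tau, A, alpha):
    """Sum-of-exponentials kernel h(tau) = sum_i A_i * exp(-alpha_i * tau)."""
    tau = np.atleast_1d(tau)
    return sum(a * np.exp(-al * tau) for a, al in zip(A, alpha))

def convolution_matrix(t_meas, tau_grid, A, alpha):
    """L[k, j] approximates the integral of h(t_k - tau) over the j-th
    grid interval [tau_{j-1}, tau_j], using the midpoint rule; entries
    with tau beyond t_k are zero (causality)."""
    mid = 0.5 * (tau_grid[:-1] + tau_grid[1:])     # interval midpoints
    width = np.diff(tau_grid)
    lag = t_meas[:, None] - mid[None, :]           # t_k - tau
    h = np.zeros_like(lag)
    pos = lag >= 0
    h[pos] = impulse_response(lag[pos], A, alpha)
    return h * width[None, :]
```

With A and alpha resampled at each MCMC step, L(θ) can be rebuilt in this way whenever θ changes.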
In this case, some prior knowledge on the signal is available [33]: the ISR is known
to exhibit a spike just after the external glucose stimulus and a more regular profile
thereafter. This knowledge can be modelled by considering two different regularization
parameters for the two phases.
For computational reasons, a nonuniform discretization for f has been adopted. Accordingly,
the regularization operator P has been chosen so that the penalty still approximates the squared
norm of the first derivative on the nonuniform grid.
Fig. 8. Probabilistic model for the reconstruction of Insulin Secretion Rate
Moreover, the variances σ_v² and σ_ε² of the measurement errors v and ε are only imprecisely
known and must be estimated as well. The complete model is shown in Fig. 8.
The derived sampling distributions for the MCMC scheme described in Section III-B.3
are:
In the above formulas, N_1 is the number of points used in the regularization of the first
region (the spike) while N_2 is the number of points used for the second region;
the vector w is such that w ∼ N(0, Λ), and q denotes
the number of measurements in the identification experiment. Λ is a diagonal matrix with
the first N_1 elements equal to λ_1² and the other N_2 elements equal to λ_2². Finally,
Ψ and Ψ_ε are diagonal matrices with elements y_k² and z_k² respectively; in this way, the
parameters σ_v and σ_ε represent the CV (coefficient of variation) of the measurement
errors in the two experiments.
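A short numpy sketch of these diagonal building blocks, assuming the partition sizes and symbols just described (the function and variable names are illustrative).

```python
import numpy as np

def build_prior_and_noise_scales(lam1_sq, lam2_sq, N1, N2, y, z):
    """Diagonal matrices used in the sampling distributions:
    Lambda has lam1_sq on the first N1 entries (the spike region) and
    lam2_sq on the remaining N2 entries; Psi and Psi_eps carry the
    squared measurements so that sigma_v and sigma_eps act as CVs."""
    Lambda = np.diag(np.concatenate([np.full(N1, lam1_sq),
                                     np.full(N2, lam2_sq)]))
    Psi = np.diag(np.asarray(y) ** 2)        # CV-type error model for y
    Psi_eps = np.diag(np.asarray(z) ** 2)    # CV-type error model for z
    return Lambda, Psi, Psi_eps
```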
To completely specify the MCMC scheme, the following prior distributions must be assigned. A
Gaussian prior is placed on the impulse response parameters, θ ∼ N(θ_0, Σ_θ), with
θ_0 = […, 3, 0.25, 0.125, 0.05]^T and Σ_θ a diagonal matrix whose elements Σ_θ(i, i) reflect the
uncertainty on the corresponding components of θ_0.
The value of θ_0 was derived on the basis of the prior knowledge of the dynamics of the
impulse response, while the values of the hyper-parameters for σ_v and σ_ε were assessed by
knowing that the measurement error has a CV that ranges from 4% to 6%. The priors
for the λ's reflect the knowledge of the signal shape in the two response phases.
We perform our test on the data set described in [34], [33]. The data set used for the
impulse response identification was collected after a CP bolus of 49650 pmol, while the
data set used for the ISR reconstruction was taken after an intravenous glucose bolus of
0.5 g/kg. The basal value of the ISR is estimated on the basis of the five CP measurements
taken before the glucose bolus. Our goal is to reconstruct the ISR at
the sampling instants in which CP measurements are taken (in this case N = n).
As in [33], we take N_1 points for the first phase (the spike just after the bolus) and
N_2 = 16 points for the second phase. The data of the two experiments are shown in Fig. 9.
After a 2500-sample run of the MCMC scheme (convergence was verified by using the
method described in [26], with the same assumptions on q, r and s as previously reported), the results
shown in Fig. 10 were obtained.
Fig. 10(a) shows the estimate of the CP plasma levels in the IVGTT experiment: the
estimated curve is slightly smoother than the measurements. Fig. 10(b) depicts the ISR
curve as estimated after deconvolution by the MCMC scheme: the reconstructed ISR
reproduces the expected physiological shape, characterized by two regions with different
regularities. The results obtained are comparable with those obtained in [33], where
deconvolution and impulse response identification are treated separately. Fig. 10(c) shows
the estimated CP impulse response identification. It is easy to notice the good quality
of the fit. In Fig. 11 the frequency histograms of the samples generated by the MCMC
estimator for the six impulse response parameters are reported.
The proposed MCMC scheme is able to jointly perform the identification of the impulse
response and the deconvolution of the ISR. In the classical approach [33], the two
Fig. 9. The data set for the ISR (Insulin Secretion Rate) reconstruction. Panel (a): impulse response
identification experiment, consisting of 32 noisy samples of the CP (C-Peptide concentration) in
plasma, collected after a CP intravenous bolus at time 0 min. Panel (b): Intravenous Glucose
Tolerance Test, consisting of 27 noisy samples of CP in plasma, collected around an intravenous
glucose bolus at time 0 min.
experiments are treated in a separate fashion: in the first step, the impulse response is identified,
using the measurements of the "identification set", and only in the second step, the data
of the "deconvolution set" is used to reconstruct the unknown function. The uncertainty
on the impulse response is possibly taken into account only after the deconvolution step.
On the contrary, our scheme combines together the information coming from the two ex-
periments, and uses it in order to provide "optimal" point estimates as well as posterior
moments and confidence intervals.
V. Conclusions
MCMC methods constitute a set of emerging techniques, that have been successfully
applied in a variety of contexts, from statistical physics to image analysis [23], and from
Fig. 10. Solution of the joint deconvolution and impulse response identification problem. Panel (a):
estimated level of CP in plasma during the IVGTT (continuous line) and the collected samples
(stars). Panel (b): the ISR reconstructed (with 95% confidence interval) by the MCMC-estimator.
Panel (c): the estimated impulse response h̃(·) (continuous line) and the samples (stars) collected
during the identification experiment.
medical monitoring [25] to genetics [35] and archaeology [36]. The power of MCMC
methodologies lies in their inherent generality, which enables the analyst to deal with the
full complexity of real world problems.
In this paper, we have exploited such generality to propose a unified Bayesian framework
for the reconstruction of functions from direct or indirect measurements. In particular,
by using the same conceptual scheme we easily coped with problems that had been previously
solved with ad-hoc methods. The obtained results are, in all cases, at least as
good as the previously proposed solutions. In addition, since our approach is able to
soundly estimate the posterior probability distribution of the reconstructed function, the
information provided at the end of the estimation procedure is richer than in all other
methods: first and second moments, confidence intervals and posterior distributions are
obtained as a by-product. Finally, our framework has been exploited to implement a new
strategy for the joint estimation of a deconvoluted signal and its impulse response. The
previous approaches were based on a two-step procedure, which is not able to optimally
combine all the information available in the data.
Fig. 11. Frequency histograms proportional to the posterior distributions of the estimated impulse response
parameters (A_1, A_2, A_3, α_1, α_2, α_3).
The main limits of the MCMC approach are the time required to converge to the posterior
distribution and the difficulty of choosing the best sampling scheme. These limitations restrict
MCMC methods to off-line reconstruction.
In summary, MCMC methods have been shown to play a crucial role in the off-line
function learning problem, since they provide a flexible and relatively simple strategy,
able to provide optimal results in a Bayesian sense.
Acknowledgements
The authors would like to thank Antonietta Mira for her methodological support in
designing the MCMC scheme and Claudio Cobelli and Giovanni Sparacino for having
provided the experimental data for the Insulin Secretion Rate reconstruction problem.
They thank also the anonymous reviewers for their useful suggestions.
--R
"Practical approximate solutions to linear operator equations when the data are noisy,"
"The deconvolution problem: Fast algorithms including the preconditioned conjugate-gradient to compute a map estimator,"
"Linear inverse problems and ill-posed problems,"
"Nonparametric input estimation in physiological systems: Problems, methods, case studies,"
"Blind deconvolution via sequential imputations,"
Bayesian Learning for Neural Networks
Parameter Estimation in Engineering and Science
"A technique for the numerical solution of certain integral equations of the first kind,"
"On the numerical solution of Fredholm integral equations of the first kind by the inversion of the linear system produced by quadrature,"
Solutions of Ill-Posed Problems
"Networks for approximation and learning,"
"Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation,"
Spline Models for Observational Data
"Bayesian interpolation,"
"Gaussian processes for regression,"
"Automatic bayesian curve fitting,"
"Nonparametric spline regression with prior information,"
Markov Chain Monte Carlo in Practice
"Numerical tools for analysis and solution of Fredholm integral equation of the first kind,"
"A time series approach to numerical differenti- ation,"
"Equations of state calculations by fast computing machine,"
"Monte Carlo sampling methods using Markov Chain and their applications,"
"Stochastic relaxation, Gibbs distributions, and the bayesian restoration of images,"
"Likelihood analysis of non-gaussian measurment time series,"
"A unified approach for modeling longitudinal and failure time data, with application in medical monitoring,"
"Implementing MCMC,"
"Inference and monitoring convergence,"
Probabilistic Reasoning in Intelligent Systems
"Dynamic probabilistic networks for modelling and identifying dynamic systems: a MCMC approach,"
"Adaptive rejection sampling for Gibbs sampling,"
"Markov Chains for posterior distributions (with discussion),"
"The inverse problem of radiography,"
"A stochastic deconvolution method to reconstruct insulin secretion rate after a glucose stimulus,"
"Peripheral insulin parallels changes in insulin secretion more closely than C-peptide after bolus intravenous glucose administration,"
"Censored survival models for genetic epidemi- ology: a Gibbs sampling approach,"
"An archaeological example: radiocarbon dating,"
--TR
--CTR
Gianluigi Pillonetto , Claudio Cobelli, Brief paper: Identifiability of the stochastic semi-blind deconvolution problem for a class of time-invariant linear systems, Automatica (Journal of IFAC), v.43 n.4, p.647-654, April, 2007
Xudong Jiang , Wee Ser, Online Fingerprint Template Improvement, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.8, p.1121-1126, August 2002
Ranjan K. Dash , Erkki Somersalo , Marco E. Cabrera , Daniela Calvetti, An efficient deconvolution algorithm for estimating oxygen consumption during muscle activities, Computer Methods and Programs in Biomedicine, v.85 n.3, p.247-256, March, 2007 | system identification;inverse problems;Markov chain Monte Carlo methods;dynamic systems;smoothing;bayesian estimation |
297817 | Bayesian Classification With Gaussian Processes. | AbstractWe consider the problem of assigning an input vector to one of m classes by predicting P(c|${\schmi x}$) for m. For a two-class problem, the probability of class one given ${\schmi x}$ is estimated by (y(${\schmi x}$)), where Gaussian process prior is placed on y(${\schmi x}$), and is combined with the training data to obtain predictions for new ${\schmi x}$ points. We provide a Bayesian treatment, integrating over uncertainty in y and in the parameters that control the Gaussian process prior; the necessary integration over y is carried out using Laplace's approximation. The method is generalized to multiclass problems (m > 2) using the softmax function. We demonstrate the effectiveness of the method on a number of datasets. | Introduction
We consider the problem of assigning an input vector x to one out of m classes by predicting P(c|x) for
c = 1, …, m. A classic example of this method is logistic regression. For a two-class problem, the probability
of class 1 given x is estimated by σ(w^T x + b), where σ(y) = 1/(1 + e^{−y}). However, this method is not at all
"flexible", i.e. the discriminant surface is simply a hyperplane in x-space. This problem can be overcome, to
some extent, by expanding the input x into a set of basis functions fOE(x)g, for example quadratic functions of
the components of x. For a high-dimensional input space there will be a large number of basis functions, each
one with an associated parameter, and one risks "overfitting" the training data. This motivates a Bayesian
treatment of the problem, where the priors on the parameters encourage smoothness in the model.
Putting priors on the parameters of the basis functions indirectly induces priors over the functions that
can be produced by the model. However, it is possible (and we would argue, perhaps more natural) to put
priors directly over the functions themselves. One advantage of function-space priors is that they can impose
a general smoothness constraint without being tied to a limited number of basis functions. In the regression
case where the task is to predict a real-valued output, it is possible to carry out non-parametric regression
using Gaussian Processes (GPs); see, e.g. [25], [28]. The solution for the regression problem under a GP
prior (and Gaussian noise model) is to place a kernel function on each training data point, with coefficients
determined by solving a linear system. If the parameters ' that describe the Gaussian process are unknown,
Bayesian inference can be carried out for them, as described in [28].
The Gaussian Process method can be extended to classification problems by defining a GP over y, the
input to the sigmoid function. This idea has been used by a number of authors, although previous treatments
typically do not take a fully Bayesian approach, ignoring uncertainty in both the posterior distribution of y
given the data, and uncertainty in the parameters '. This paper attempts a fully Bayesian treatment of the
problem, and also introduces a particular form of covariance function for the Gaussian process prior which,
we believe, is useful from a modelling point of view.
The structure of the remainder of the paper is as follows: Section 2 discusses the use of Gaussian processes
for regression problems, as this is essential background for the classification case. In Section 3 we describe
the application of Gaussian processes to two-class classification problems, and extend this to multiple-class
problems in section 4. Experimental results are presented in section 5, followed by a discussion in section 6.
This paper is a revised and expanded version of [1].
2 Gaussian Processes for regression
It will be useful to first consider the regression problem, i.e. the prediction of a real-valued output y_*
for a new input value x_*, given a set of training data D = {(x_i, t_i), i = 1, …, n}. This is of relevance because
our strategy will be to transform the classification problem into a regression problem by dealing with the
input values to the logistic transfer function.
A stochastic process prior over functions allows us to specify, given a set of inputs x_1, …, x_n, the distribution
over their corresponding outputs y = (y_1, …, y_n)^T. We denote this prior over functions as P(y), and similarly, P(y_*, y) for the
joint distribution including y_*. If we also specify P(t|y), the probability of observing the particular values
t = (t_1, …, t_n)^T given the actual values y (i.e. a noise model), then we have that
P(y_*|t) = ∫ P(y_*, y|t) dy                              (1)
         = (1/P(t)) ∫ P(y_*, y) P(t|y_*, y) dy            (2)
         = (1/P(t)) ∫ P(y_*, y) P(t|y) dy.                (3)
Hence the predictive distribution for y_* is found from the marginalization of the product of the prior and
the noise model. Note that in order to make predictions it is not necessary to deal directly with priors over
function space, only n- or n + 1-dimensional joint densities. However, it is still not easy to carry out these
calculations unless the densities involved have a special form.
If P(t|y) and P(y_*, y) are Gaussian then P(y_*|t) is a Gaussian whose mean and variance can be calculated
using matrix computations involving matrices of size n × n. Specifying P(y_*, y) to be a multidimensional
Gaussian (for all values of n and placements of the points x_1, …, x_n, x_*) means that the prior over functions
is a Gaussian process. More formally, a stochastic process is a collection of random variables {Y(x) | x ∈ X}
indexed by a set X. In our case X will be the input space with dimension d, the number of inputs.
A GP is a stochastic process which can be fully specified by its mean function μ(x) = E[Y(x)] and its
covariance function C(x, x') = E[(Y(x) − μ(x))(Y(x') − μ(x'))]; any finite set of Y-variables will have a
joint multivariate Gaussian distribution. Below we consider GPs which have μ(x) ≡ 0.
If we further assume that the noise model P(t|y) is Gaussian with mean zero and covariance σ² I, then the
predicted mean and variance at x_* are given by
ŷ(x_*) = k^T(x_*) (K + σ² I)^{-1} t,
σ²_{ŷ}(x_*) = C(x_*, x_*) − k^T(x_*) (K + σ² I)^{-1} k(x_*),
where K is the n × n covariance matrix of the training inputs and k(x_*) is the vector of covariances between
x_* and the training inputs (see, e.g. [25]).
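As a concrete illustration, a minimal numpy sketch of these two formulas; the covariance function is passed in as a generic callable, so any valid kernel (such as the one in equation 4 below) could be used.

```python
import numpy as np

def gp_regression_predict(X, t, x_star, cov, sigma2):
    """Predictive mean and variance at x_star for GP regression with
    Gaussian noise of variance sigma2:
    mean = k^T (K + sigma2 I)^{-1} t,
    var  = C(x*, x*) - k^T (K + sigma2 I)^{-1} k."""
    n = len(X)
    K = np.array([[cov(xi, xj) for xj in X] for xi in X])
    k = np.array([cov(x_star, xi) for xi in X])
    A = K + sigma2 * np.eye(n)
    mean = k @ np.linalg.solve(A, np.asarray(t))
    var = cov(x_star, x_star) - k @ np.linalg.solve(A, k)
    return mean, var
```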
2.1 Parameterizing the covariance function
There are many reasonable choices for the covariance function. Formally, we are required to specify functions
which will generate a non-negative definite covariance matrix for any set of points From a
modelling point of view we wish to specify covariances so that points with nearby inputs will give rise to
similar predictions. We find that the following covariance function works well:
l
where x l is the lth component of x and is the vector of parameters
that are needed to define the covariance function. Note that ' is analogous to the hyperparameters in a
neural network. We define the parameters to be the log of the variables in equation (4) since these are
positive scale-parameters. This covariance function can be obtained from a network of Gaussian radial basis
functions in the limit of an infinite number of hidden units [27].
The w l parameters in equation 4 allow a different length scale on each input dimension. For irrelevant
inputs, the corresponding w l will become small, and the model will ignore that input. This is closely related
to the Automatic Relevance Determination (ARD) idea of MacKay [10] and Neal [15]. The v 0 variable
specifies the overall scale of the prior. v 1 specifies the variance of a zero-mean offset which has a Gaussian
distribution.
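A short numpy sketch of this covariance function, with the parameters stored as their logarithms as described above (the variable names are illustrative).

```python
import numpy as np

def ard_cov(x, x_prime, theta):
    """Covariance of equation (4): v0 * exp(-0.5 * sum_l w_l (x_l - x'_l)^2) + v1,
    with theta = (log v0, log v1, log w_1, ..., log w_d)."""
    v0, v1 = np.exp(theta[0]), np.exp(theta[1])
    w = np.exp(np.asarray(theta[2:]))
    d2 = np.sum(w * (np.asarray(x) - np.asarray(x_prime)) ** 2)
    return v0 * np.exp(-0.5 * d2) + v1
```

A function like this could be passed directly as the cov argument of the regression sketch shown earlier.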
The Gaussian process framework allows quite a wide variety of priors over functions. For example, the
Ornstein-Uhlenbeck process (with covariance function C(x, x') ∝ exp(−|x − x'|)) has very rough sample paths
which are not mean-square differentiable. On the other hand the squared exponential covariance function of
equation 4 gives rise to an infinitely m.s. differentiable process. In general we believe that the GP method
is a quite general-purpose route for imposing prior beliefs about the desired amount of smoothness. For
reasonably high-dimensional problems, this needs to be combined with other modelling assumptions such as
ARD. Another modelling assumption that may be used is to build up the covariance function as a sum of
covariance functions, each one of which may depend on only some of the input variables (see section 3.3 for
further details).
2.2 Dealing with parameters
Given a covariance function it is straightforward to make predictions for new test points. However, in
practical situations we are unlikely to know which covariance function to use. One option is to choose
a parametric family of covariance functions (with a parameter vector ') and then either to estimate the
parameters (for example, using the method of maximum likelihood) or to use a Bayesian approach where a
posterior distribution over the parameters is obtained.
These calculations are facilitated by the fact that the log likelihood l = log P(D|θ) can be calculated
analytically as
l = −(1/2) log |K̃| − (1/2) t^T K̃^{-1} t − (n/2) log 2π,      (5)
where K̃ = K + σ² I and |K̃| denotes the determinant of K̃. It is also possible to express analytically the
partial derivatives of the log likelihood with respect to the parameters:
∂l/∂θ_i = −(1/2) tr( K̃^{-1} ∂K̃/∂θ_i ) + (1/2) t^T K̃^{-1} (∂K̃/∂θ_i) K̃^{-1} t      (6)
(see, e.g. [11]).
Given l and its derivatives with respect to θ it is straightforward to feed this information to an optimization
package in order to obtain a local maximum of the likelihood.
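A compact numpy sketch of equations (5) and (6); the gradient is written for a single generic parameter via a user-supplied derivative matrix ∂K̃/∂θ_i, and the Cholesky-based evaluation is an implementation choice rather than anything prescribed in the text.

```python
import numpy as np

def gp_log_marginal_likelihood(K_tilde, t):
    """Equation (5): l = -0.5 log|K~| - 0.5 t^T K~^{-1} t - (n/2) log 2pi."""
    n = len(t)
    L = np.linalg.cholesky(K_tilde)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, t))
    log_det = 2.0 * np.sum(np.log(np.diag(L)))
    return -0.5 * log_det - 0.5 * t @ alpha - 0.5 * n * np.log(2.0 * np.pi)

def gp_log_marginal_gradient(K_tilde, dK_dtheta, t):
    """Equation (6): dl/dtheta_i = -0.5 tr(K~^{-1} dK~) + 0.5 t^T K~^{-1} dK~ K~^{-1} t."""
    Kinv = np.linalg.inv(K_tilde)
    alpha = Kinv @ t
    return -0.5 * np.trace(Kinv @ dK_dtheta) + 0.5 * alpha @ dK_dtheta @ alpha
```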
In general one may be concerned about making point estimates when the number of parameters is large
relative to the number of data points, or if some of the parameters may be poorly determined, or if there
may be local maxima in the likelihood surface. For these reasons the Bayesian approach of defining a prior
Figure 1: π(x) is obtained from y(x) by "squashing" it through the sigmoid function σ.
distribution over the parameters and then obtaining a posterior distribution once the data D has been seen is
attractive. To make a prediction for a new test point x one simply averages over the posterior distribution
Z
For GPs it is not possible to do this integration analytically in general, but numerical methods may be used.
If ' is of sufficiently low dimension, then techniques involving grids in '-space can be used.
If ' is high-dimensional it is very difficult to locate the regions of parameter-space which have high
posterior density by gridding techniques or importance sampling. In this case Markov chain Monte Carlo
methods may be used. These work by constructing a Markov chain whose equilibrium distribution
is the desired distribution P ('jD); the integral in equation 7 is then approximated using samples from the
Markov chain.
Two standard methods for constructing MCMC methods are the Gibbs sampler and Metropolis-Hastings
algorithms (see, e.g., [5]). However, the conditional parameter distributions are not amenable to Gibbs
sampling if the covariance function has the form given by equation 4, and the Metropolis-Hastings algorithm
does not utilize the derivative information that is available, which means that it tends to have an inefficient
random-walk behaviour in parameter-space. Following the work of Neal [15] on Bayesian treatment of neural
networks, Williams and Rasmussen [28] and Rasmussen [17] have used the Hybrid Monte Carlo (HMC)
method of Duane et al [4] to obtain samples from P ('jD). The HMC algorithm is described in more detail
in
Appendix
D.
3 Gaussian Processes for two-class classification
For simplicity of exposition we will first present our method as applied to two-class problems; the extension
to multiple classes is covered in section 4.
By using the logistic transfer function to produce an output which can be interpreted as π(x), the
probability of the input x belonging to class 1, the job of specifying a prior over functions π can be transformed
into that of specifying a prior over the input to the transfer function, which we shall call the activation, and
denote by y (see Figure 1). For the two-class problem we can use the logistic function
π(x) = σ(y(x)) = 1/(1 + e^{−y(x)}). We will denote the probability and activation corresponding to input x_i by π_i and y_i
respectively. Fundamentally, the GP approaches to classification and regression problems are similar, except
that the error model, which is t ∼ N(y, σ²) in the regression case, is replaced by t ∼ Bern(σ(y)). The choice
of v_0 in equation 4 will affect how "hard" the classification is, i.e. whether π(x) hovers around 0.5 or takes on the
extreme values of 0 and 1.
Previous and related work to this approach is discussed in section 3.3.
As in the regression case there are now two problems to address (a) making predictions with fixed
parameters and (b) dealing with parameters. We shall discuss these issues in turn.
3.1 Making predictions with fixed parameters
To make predictions when using fixed parameters we would like to compute π̄(x_*) = ∫ π_* P(π_*|t) dπ_*, which
requires us to find P(π_*|t) for a new input x_*. This can be done by finding the distribution P(y_*|t)
(y_* is the activation of π_*) and then using the appropriate Jacobian to transform the distribution.
Formally the equations for obtaining P(y_*|t) are identical to equations 1, 2, and 3. However, even if we use a
GP prior so that P(y_*, y) is Gaussian, the usual expression for P(t|y) = ∏_i π_i^{t_i}(1 − π_i)^{1−t_i} for classification
data (where the t's take on values of 0 or 1) means that the marginalization to obtain P(y_*|t) is no longer
analytically tractable.
Faced with this problem there are two routes that we can follow: (i) to use an analytic approximation
to the integral in equations 1-3 or (ii) to use Monte Carlo methods, specifically MCMC methods, to approximate
it. Below we consider an analytic approximation based on Laplace's approximation; some other
approximations are discussed in section 3.3.
In Laplace's approximation, the integrand P(y_*, y|t, θ) is approximated by a Gaussian distribution
centered at a maximum of this function with respect to y_*, y, with an inverse covariance matrix given by
−∇∇ log P(y_*, y|t, θ). Finding a maximum can be carried out using the Newton-Raphson iterative method
on y, which then allows the approximate distribution of y to be calculated. Details of the maximization
procedure can be found in Appendix A.
3.2 Integration over the parameters
To make predictions we integrate the predicted probabilities over the posterior P(θ|t) ∝ P(t|θ)P(θ), as we
saw in 2.2. For the regression problem P(t|θ) can be calculated exactly using P(t|θ) = ∫ P(t|y)P(y|θ) dy,
but this integral is not analytically tractable for the classification problem. Let Ψ(y) = log P(t|y) + log P(y|θ).
Using log P(t|y) = Σ_i [ t_i y_i − log(1 + e^{y_i}) ] we obtain
Ψ(y) = t^T y − Σ_i log(1 + e^{y_i}) − (1/2) y^T K^{-1} y − (1/2) log |K| − (n/2) log 2π.      (8)
By using Laplace's approximation about the maximum ỹ we find that
log P(t|θ) ≃ Ψ(ỹ) − (1/2) log |K^{-1} + W| + (n/2) log 2π,      (9)
where W = diag(π̃_1(1 − π̃_1), …, π̃_n(1 − π̃_n)).
We denote the right-hand side of this equation by log P_a(t|θ) (where a stands for approximate).
The integration over θ-space also cannot be done analytically, and we employ a Markov chain Monte
Carlo method. Following Neal [15] and Williams and Rasmussen [28] we have used the Hybrid Monte Carlo
(HMC) method of Duane et al [4] as described in Appendix D. We use log P_a(t|θ) as an approximation for
log P(t|θ), and use broad Gaussian priors on the parameters.
3.3 Previous and related work
Our work on Gaussian processes for regression and classification developed from the observation in [15]
that a large class of neural network models converge to GPs in the limit of an infinite number of hidden
units. The computational Bayesian treatment of GPs can be easier than for neural networks. In the
regression case an infinite number of weights are effectively integrated out, and one ends up dealing only
with the (hyper)parameters. Results from [17] show that Gaussian processes for regression are comparable
in performance to other state-of-the-art methods.
Non-parametric methods for classification problems can be seen to arise from the combination of two
different strands of work. Starting from linear regression, McCullagh and Nelder [12] developed generalized
linear models (GLMs). In the two-class classification context, this gives rise to logistic regression. The other
strand of work was the the development of non-parametric smoothing for the regression problem. Viewed
as a Gaussian process prior over functions this can be traced back at least as far as the work of Kolmogorov
and Wiener in the 1940s. Gaussian process prediction is well known in the geostatistics field (see, e.g. [3])
where it is known as "kriging". Alternatively, by considering "roughness penalties" on functions, one can
obtain spline methods; for recent overviews, see [25] and [8]. There is a close connection between the GP
and roughness penalty views, as explored in [9]. By combining GLMs with non-parametric regression one
obtains what we shall call a non-parametric GLM method for classification. Early references to this method
include [21] and [16], and discussions can also be found in texts such as [8] and [25].
There are two differences between the non-parametric GLM method as it is usually described and a
Bayesian treatment. Firstly, for fixed parameters the non-parametric GLM method ignores the uncertainty
in y and hence the need to integrate over this (as described in section 3.1).
The second difference relates to the treatment of the parameters '. As discussed in section 2.2, given
parameters ', one can either attempt to obtain a point estimate for the parameters or to carry out an
integration over the posterior. Point estimates may be obtained by maximum likelihood estimation of ',
or by cross-validation or generalized cross-validation (GCV) methods, see e.g. [25, 8]. One problem with
CV-type methods is that if the dimension of ' is large, then it can be computationally intensive to search
over a region/grid in parameter-space looking for the parameters that maximize the criterion. In a sense
the HMC method described above are doing a similar search, but using gradient information 1 , and carrying
out averaging over the posterior distribution of parameters. In defence of (G)CV methods, we note Wahba's
comments (e.g. in [26], referring back to [24]) that these methods may be more robust against an unrealistic
prior.
One other difference between the kinds of non-parametric GLM models usually considered and our method
is the exact nature of the prior that is used. Often the roughness penalties used are expressed in terms of a
penalty on the kth derivative of y(x), which gives rise to a power law power spectrum for the prior on y(x).
There can also be differences over parameterization of the covariance function; for example it is unusual to
find parameters like those for ARD introduced in equation 4 in non-parametric GLM models. On the other
hand, Wahba et al [26] have considered a smoothing spline analysis of variance (SS-ANOVA) decomposition.
In Gaussian process terms, this builds up a prior on y as a sum of priors on each of the functions in the
decomposition
ff
y ff
The important point is that functions involving all orders of interaction (from univariate functions, which
on their own give rise to an additive model) are included in this sum, up to the full interaction term which
is the only one that we are using. From a Bayesian point of view questions as to the kinds of priors that are
appropriate is an interesting modelling issue.
There has also been some recent work which is related to the method presented in this paper. In section
3.1 we mentioned that it is necessary to approximate the integral in equations 1-3 and described the use of
Laplace's approximation.
Following the preliminary version of this paper presented in [1], Gibbs and MacKay [7] developed an alternative
analytic approximation, by using variational methods to find approximating Gaussian distributions
that bound the marginal likelihood P (tj') above and below. These approximate distributions are then used
to predict P (y jt; ') and thus -
-(x ). For the parameters, Gibbs and MacKay estimated ' by maximizing
their lower bound on P (tj').
It is also possible to use a fully MCMC treatment of the classification problem, as discussed in the recent
paper of Neal [14]. His method carries out the integrations over the posterior distributions of y and '
simultaneously. It works by generating samples from P (y; 'jD) in a two stage process. Firstly, for fixed ',
each of the n individual y i 's are updated sequentially using Gibbs sampling. This ``sweep'' takes time O(n 2 )
once the matrix K \Gamma1 has been computed (in time O(n 3 )), so it actually makes sense to perform quite a few
Gibbs sampling scans between each update of the parameters, as this probably makes the Markov chain mix
faster. Secondly, the parameters are updated using the Hybrid Monte Carlo method. To make predictions,
one averages over the predictions made by each sample of (y, θ) from the Markov chain.
It would be possible to obtain derivatives of the CV-score with respect to ', but this has not, to our knowledge, been used
in practice.
4 GPs for multiple-class classification
The extension of the preceding framework to multiple classes is essentially straightforward, although notationally
more complex.
Throughout we employ a one-of-m class coding scheme 2 , and use the multi-class analogue of the logistic
function-the softmax function-to describe the class probabilities. The probability that an instance labelled
by i is in class c is denoted by π_c^i, so that an upper index denotes the example number, and a lower index
the class label. Similarly, the activations associated with the probabilities are denoted by y_c^i. Formally, the
link function relates the activations and probabilities through
π_c^i = exp(y_c^i) / Σ_{c'} exp(y_{c'}^i),      (10)
which automatically enforces the constraint Σ_c π_c^i = 1. The targets are similarly represented by t_c^i, and are
specified using a one-of-m coding.
The log likelihood takes the form Σ_i Σ_c t_c^i log π_c^i, which for the softmax link function gives
Σ_i Σ_c t_c^i ( y_c^i − log Σ_{c'} exp(y_{c'}^i) ).      (11)
As for the two-class case, we shall assume that the GP prior operates in activation space; that is, we specify
the correlations between the activations y_c^i.
One important assumption we make is that our prior knowledge is restricted to correlations between the
activations of a particular class. Whilst there is no difficulty in extending the framework to include inter-class
correlations, we have not yet encountered a situation where we felt able to specify such correlations.
Formally, the activation correlations take the form
⟨ y_c^i y_{c'}^{i'} ⟩ = K_c^{i,i'} δ_{c,c'},      (12)
where K_c^{i,i'} is the (i, i') element of the covariance matrix for the cth class. Each individual correlation matrix
K c has the form given by equation 4 for the two-class case. We shall use a separate set of parameters for
each class. The use of m independent processes to perform the classification is redundant, but forcing the
activations of one process to be (say) zero would introduce an arbitrary asymmetry into the prior.
For simplicity, we introduce the augmented vector notation, in which y_+ collects, class by class, the
activations of the n training points together with that of the test point; as in the two-class case, y_c^* denotes
the activation corresponding to input x_* for class c. This notation is also used to define t_+ and π_+. In a
similar manner, we define y, t and π by excluding the values corresponding to the test point x_*, and we let
y_* = (y_1^*, …, y_m^*)^T. With this definition of the augmented vectors, the GP prior takes the form
P(y_+) ∝ exp{ −(1/2) y_+^T K_+^{-1} y_+ },      (13)
where, from equation 12, the covariance matrix K_+ is block diagonal in the matrices K_+^1, …, K_+^m. Each
individual matrix K_+^c expresses the correlations of activations within class c.
As in the two-class case, to use Laplace's approximation we need to find the mode of P(y_+|t). The
procedure is described in Appendix C. As for the two-class case, we make predictions for π(x_*) by averaging
the softmax function over the Gaussian approximation to the posterior distribution of y_*. At present, we
simply estimate this integral using 1000 draws from a Gaussian random vector generator.
That is, the class is represented by a vector of length m with zero entries everywhere except for the correct component
which contains 1.
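A small numpy sketch of this Monte Carlo averaging step, assuming the mean and covariance of the Gaussian approximation to P(y_*|t) have already been computed (the default of 1000 draws follows the text; everything else is illustrative).

```python
import numpy as np

def softmax(a, axis=-1):
    a = a - np.max(a, axis=axis, keepdims=True)   # numerical stability
    e = np.exp(a)
    return e / np.sum(e, axis=axis, keepdims=True)

def predict_class_probs(mean, cov, n_draws=1000, rng=None):
    """Estimate E[softmax(y_*)] under y_* ~ N(mean, cov) by simple
    Monte Carlo averaging over n_draws Gaussian samples."""
    rng = np.random.default_rng() if rng is None else rng
    samples = rng.multivariate_normal(mean, cov, size=n_draws)  # (n_draws, m)
    return softmax(samples, axis=1).mean(axis=0)
```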
5 Experimental results
When using the Newton-Raphson algorithm, π was initialized each time with entries 1/m, and iterated until
the mean relative difference of the elements of W between consecutive iterations was less than 10⁻⁴.
For the HMC algorithm, the same step size " is used for all parameters, and should be as large as possible
while keeping the rejection rate low. We have used a trajectory made up of leapfrog steps, which gave
a low correlation between successive states. The priors over parameters were set to be Gaussian with a mean
of −3 and a standard deviation of 3. In all our simulations the chosen step size produced a low rejection rate
(< 5%). The parameters corresponding to the w_l's were initialized to −2 and that for v_0 to 0. The sampling
procedure was run for 200 iterations, and the first third of the run was discarded; this "burn-in" is intended
to give the parameters time to come close to their equilibrium distribution. Tests carried out using the
R-CODA package 3 on the examples in section 5.1 suggested that this was indeed effective in removing the
transients, although we note that it is widely recognized (see, e.g. [2]) that determining when the equilibrium
distribution has been reached is a difficult problem. Although the number of iterations used is much less
than typically used for MCMC methods it should be remembered that (i) each iteration involves
leapfrog steps and (ii) that by using HMC we aim to reduce the "random walk" behaviour seen in methods
such as the Metropolis algorithm. Autocorrelation analysis for each parameter indicated, in general, that
low correlation was obtained after a lag of a few iterations.
The MATLAB code which we used to run our experiments is available from
ftp://cs.aston.ac.uk/neural/willicki/gpclass/.
5.1 Two classes
We have tried out our method on two well known two class classification problems, the Leptograpsus crabs
and Pima Indian diabetes datasets 4 . We first rescale the inputs so that they have mean of zero and unit
variance on the training set. Our Matlab implementations for the HMC simulations for both tasks each
take several hours on a SGI Challenge machine (200MHz R10000), although good results can be obtained in
much less time. We also tried a standard Metropolis MCMC algorithm for the Crabs problem, and found
similar results, although the sampling by this method is somewhat slower than that for HMC.
The results for the Crabs and Pima tasks, together with comparisons with other methods (from [20] and
[18]) are given in Tables 1 and 2 respectively. The tables also include results obtained for Gaussian processes
using (a) estimation of the parameters by maximizing the penalised likelihood (found using 20 iterations of
a scaled conjugate gradient optimiser) and (b) Neal's MCMC method. Details of the set-up used for Neal's
method are given in Appendix E.
In the Leptograpsus crabs problem we attempt to classify the sex of crabs on the basis of five anatomical
attributes, with an optional additional colour attribute. There are 50 examples available for crabs of each sex
and colour, making a total of 200 labelled examples. These are split into a training set of 20 crabs of each sex
and colour, making 80 training examples, with the other 120 examples used as the test set. The performance
of our GP method is equal to the best of the other methods reported in [20], namely a 2 hidden unit
neural network with direct input to output connections, a logistic output unit and trained with maximum
likelihood (Network(1) in Table 1). Neal's method gave a very similar level of performance. We also found
that estimating the parameters using maximum penalised likelihood (MPL) gave similar performance with
less than a minute of computing time.
For the Pima Indians diabetes problem we have used the data as made available by Prof. Ripley, with
his training/test split of 200 and 332 examples respectively [18]. The baseline error obtained by simply
classifying each record as coming from a diabetic gives rise to an error of 33%. Again, ours and Neal's
GP methods are comparable with the best alternative performance, with an error of around 20%. It is
encouraging that the results obtained using Laplace's approximation and Neal's method are similar 5 . We
also estimated the parameters using maximum penalised likelihood, rather than Monte Carlo integration.
The performance in this case was a little worse, with 21.7% error, but for only 2 minutes computing time.
3 Available from the Comprehensive R Archive Network at http://www.ci.tuwien.ac.at.
4 Available from http://markov.stats.ox.ac.uk/pub/PRNN.
5 The performance obtained by Gibbs and MacKay in [7] was similar. Their method made 4 errors in the crab task (with
colour given), and 70 errors on the Pima dataset.
Method Colour given Colour not given
Neural Network(1) 3 3
Neural Network(2) 5 3
Linear Discriminant 8 8
Logistic regression 4 4
PP regression (4 ridge functions)
Gaussian Process (Laplace 3 3
Approximation, HMC)
Gaussian Process (Laplace 4 3
Approximation, MPL)
Gaussian Process (Neal's method) 4 3
Table
1: Number of test errors for the Leptograpsus crabs task. Comparisons are taken from Ripley (1996)
and Ripley (1994) respectively. Network(2) used two hidden units and the predictive approach (Ripley, 1993) which
uses Laplace's approximation to weight each network local minimum.
Method Pima Indian diabetes
Neural Network 75+
Linear Discriminant 67
Logistic Regression 66
PP regression (4 ridge functions) 75
Gaussian Mixture 64
Gaussian Process (Laplace 68
Approximation, HMC)
Gaussian Process (Laplace 69
Approximation, MPL)
Gaussian Process (Neal's method) 68
Table
2: Number of test errors on the Pima Indian diabetes task. Comparisons are taken from Ripley
(1996) and Ripley (1994) respectively. The neural network had one hidden unit and was trained with
maximum likelihood; the results were worse for nets with two or more hidden units (Ripley, 1996).
Analysis of the posterior distribution of the w parameters in the covariance function (equation 4) can
be informative. Figure 2 plots the posterior marginal mean and 1 standard deviation error bars for each
of the seven input dimensions. Recalling that the variables are scaled to have zero mean and unit variance,
it would appear that variables 1 and 3 have the shortest lengthscales (and therefore the most variability)
associated with them.
5.2 Multiple classes
Due to the rather long time taken to run our code, we were only able to test it on relatively small problems,
by which we mean only a few hundred data points and several classes. Furthermore, we found that a full
Bayesian integration over possible parameter settings was beyond our computational means, and we therefore
had to be satisfied with a maximum penalised likelihood approach. Rather than using the potential and its
gradient in a HMC routine, we now simply used them as inputs to a scaled conjugate gradient optimiser
(based on [13]) instead, attempting to find a mode of the class posterior, rather than to average over the
posterior distribution.
We tested the multiple class method on the Forensic Glass dataset described in [18]. This is a dataset
of 214 examples with 9 inputs and 6 output classes. Because the dataset is so small, the performance is
Figure 2: Plot of the log w parameters for the Pima dataset. The circle indicates the posterior marginal mean
obtained from the HMC run (after burn-in), with one standard deviation error bars. The square symbol
shows the log w-parameter values found by maximizing the penalized likelihood. The variables are 1. the
number of pregnancies, 2. plasma glucose concentration, 3. diastolic blood pressure, 4. triceps skin fold
thickness, 5. body mass index, 6. diabetes pedigree function, 7. age. For comparison, Wahba et al (1995)
using generalized linear regression, found that variables 1, 2, 5 and 6 were the most important.
estimated from using 10-fold cross validation. Computing the penalised maximum likelihood estimate of our
multiple GP method took approximately 24 hours on our SGI Challenge and gave a classification error rate
of 23.3%. As we see from Table 3, this is comparable to the best of the other methods. The performance of
Neal's method is surprisingly poor; this may be due to the fact that we allow separate parameters for each
of the y processes, while these are constrained to be equal in Neal's code. There are also small but perhaps
significant differences in the specification of the prior (see Appendix E for details).
6 Discussion
In this paper we have extended the work of Williams and Rasmussen [28] to classification problems, and have
demonstrated that it performs well on the datasets we have tried. We believe that the kinds of Gaussian
Method Forensic Glass
Neural Network (4HU) 23.8%
Linear Discriminant 36%
PP regression (5 ridge functions) 35%
Gaussian Mixture 30.8%
Decision Tree 32.2%
Gaussian Process (LA, MPL) 23.3%
Gaussian Process (Neal's method) 31.8%
Table
3: Percentage of test error for the Forensic Glass problem. See Ripley (1996) for details of the methods.
process prior we have used are more easily interpretable than models (such as neural networks) in which
the priors are on the parameterization of the function space. For example, the posterior distribution of the
ARD parameters (as illustrated in Figure 2 for the Pima Indians diabetes problem) indicates the relative
importance of various inputs. This interpretability should also facilitate the incorporation of prior knowledge
into new problems.
There are quite strong similarities between GP classifiers and support-vector machines (SVMs) [23]. The
SVM uses a covariance kernel, but differs from the GP approach by using a different data fit term (the
maximum margin), so that the optimal y is found using quadratic programming. The comparison of these
two algorithms is an interesting direction for future research.
A problem with methods based on GPs is that they require computations (trace, determinants and linear
solutions) involving n \Theta n matrices, where n is the number of training examples, and hence run into problems
on large datasets. We have looked into methods using Bayesian numerical techniques to calculate the trace
and determinant [22, 6], although we found that these techniques did not work well for the (relatively) small
size problems on which we tested our methods. Computational methods used to speed up the quadratic
programming problem for SVMs may also be useful for the GP classifier problem. We are also investigating
the use of different covariance functions and improvements on the approximations employed.
Acknowledgements
We thank Prof. B. Ripley for making available the Leptograpsus crabs, Pima Indian diabetes and Forensic
Glass datasets. This work was partially supported by EPSRC grant GR/J75425, Novel Developments in
Learning Theory for Neural Networks, and much of the work was carried out at Aston University. The
authors gratefully acknowledge the hospitality provided by the Isaac Newton Institute for Mathematical
Sciences (Cambridge, UK) where this paper was written up. We thank Mark Gibbs, David MacKay and
Radford Neal for helpful discussions, and the anonymous referees for their comments which helped improve
the paper.
Appendix A: Maximizing P(y_+|t): two-class case
We describe how to find iteratively the vector y_+ so that P(y_+|t) is maximized. This material is also
covered in [8] §5.3.3 and [25] §9.2. We provide it here for completeness and so that the terms in equation 9
are well-defined.
Let y_+ = (y_1, …, y_n, y_*)^T denote the complete set of activations. By Bayes' theorem,
log P(y_+|t) = log P(t|y) + log P(y_+) − log P(t). As P(t) does not depend on y_+ (it is just
a normalizing factor), the maximum of P(y_+|t) is found by maximizing Ψ_+ = log P(t|y) + log P(y_+) with
respect to y_+. Using log P(t|y) = t^T y − Σ_i log(1 + e^{y_i}), we obtain
Ψ_+ = t^T y − Σ_i log(1 + e^{y_i}) − (1/2) y_+^T K_+^{-1} y_+ − (1/2) log |K_+| − ((n+1)/2) log 2π,      (14)
where K_+ is the covariance matrix of the GP evaluated at x_1, …, x_n, x_*. Ψ is defined similarly in equation
8. K_+ can be partitioned in terms of an n × n matrix K, an n × 1 vector k and a scalar k_*, viz.
K_+ = [ K   k ; k^T   k_* ].
As y_* only enters into equation 14 in the quadratic prior term and has no data point associated with it,
maximizing with respect to y_+ can be achieved by first maximizing Ψ with respect to y and then doing
the further quadratic optimization to determine y_*. To find a maximum of Ψ we use the Newton-Raphson
iteration y^{new} = y − (∇∇Ψ)^{-1} ∇Ψ. Differentiating equation 8 with respect to y we find
∇Ψ = (t − π) − K^{-1} y,
∇∇Ψ = −W − K^{-1},      (17)
where the 'noise' matrix is given by W = diag(π_1(1 − π_1), …, π_n(1 − π_n)). This results in the iterative equation
y^{new} = (K^{-1} + W)^{-1} ( W y + (t − π) ).
To avoid unnecessary inversions, it is usually more convenient to rewrite this in the form
y^{new} = K (I + W K)^{-1} ( W y + (t − π) ).      (18)
Note that −∇∇Ψ is always positive definite, so that the optimization problem is convex.
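A compact numpy sketch of this Newton-Raphson mode search in the form of equation 18; the initialization at y = 0 and the convergence tolerance are illustrative choices.

```python
import numpy as np

def sigmoid(y):
    return 1.0 / (1.0 + np.exp(-y))

def laplace_mode(K, t, max_iter=100, tol=1e-6):
    """Find y maximizing Psi (equation 8) by the iteration
    y_new = K (I + W K)^{-1} (W y + (t - pi)), W = diag(pi (1 - pi))."""
    n = len(t)
    y = np.zeros(n)
    for _ in range(max_iter):
        pi = sigmoid(y)
        W = np.diag(pi * (1.0 - pi))
        y_new = K @ np.linalg.solve(np.eye(n) + W @ K, W @ y + (t - pi))
        if np.max(np.abs(y_new - y)) < tol:
            return y_new
        y = y_new
    return y
```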
Given a converged solution ỹ for y, y_* can easily be found using ȳ_* = k^T(x_*) K^{-1} ỹ = k^T(x_*)(t − π̃),
since K^{-1} ỹ = (t − π̃) at the maximum. The variance of y_* is obtained from the (n+1, n+1) element of
(K_+^{-1} + W_+)^{-1}, where W_+ is the matrix W with a
zero appended in the (n+1)th diagonal position. Given the mean and variance of y_* it is then easy to find
π̄_* = ∫ π_* P(π_*|t) dπ_*, the mean of the distribution P(π_*|t). In order to calculate the Gaussian integral over
the logistic sigmoid function, we employ an approximation based on the expansion of the sigmoid function
in terms of the error function. As the Gaussian integral of an error function is another error function, this
approximation is fast to compute. Specifically, we use a basis set of five scaled error functions to interpolate
the logistic sigmoid at chosen points 6. This gives an accurate approximation to the desired integral
with a small computational cost.
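The five-term error-function basis used by the authors is not reproduced here, but the following sketch illustrates the same idea in its simplest form, using the well-known single scaled-probit approximation to the logistic sigmoid; it is an illustrative stand-in, not the authors' exact scheme.

```python
import numpy as np
from scipy.special import erf

def sigmoid_gaussian_integral(mu, var):
    """Approximate E[sigmoid(y)] for y ~ N(mu, var) using the standard
    approximation sigmoid(x) ~ Phi(lambda x) with lambda^2 = pi / 8;
    the Gaussian integral of a probit is again a probit."""
    lam2 = np.pi / 8.0
    z = mu / np.sqrt(1.0 / lam2 + var)
    return 0.5 * (1.0 + erf(z / np.sqrt(2.0)))
```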
The justification of Laplace's approximation in our case is somewhat different from the argument usually
put forward, e.g. for asymptotic normality of the maximum likelihood estimator for a model with a finite
number of parameters. This is because the dimension of the problem grows with the number of data points.
However, if we consider the "infill asymptotics" (see, e.g. [3]), where the number of data points in a bounded
region increases, then a local average of the training data at any point x will provide a tightly localized
estimate for -(x) and hence y(x) (this reasoning parallels more formal arguments found in [29]). Thus we
would expect the distribution P (y) to become more Gaussian with increasing data.
Appendix B: Derivatives of log P_a(t|θ) with respect to θ
For both the HMC and MPL methods we require the derivative of l_a = log P_a(t|θ) with respect to components
of θ, for example θ_k. This derivative will involve two terms, one due to the explicit dependence of l_a
on θ_k, and another because a change in θ will cause a change in ỹ. However,
as ỹ is chosen so that ∇Ψ(y)|_{y=ỹ} = 0, the terms involving ∂Ψ/∂y vanish at ỹ, and we obtain
∂l_a/∂θ_k = (1/2) ỹ^T K^{-1} (∂K/∂θ_k) K^{-1} ỹ − (1/2) tr( K^{-1} ∂K/∂θ_k ) − (1/2) ∂ log |K^{-1} + W| / ∂θ_k.
The dependence of |K^{-1} + W| on ỹ arises through the dependence of W on π̃, and hence ỹ. By differentiating
the relation ỹ = K(t − π̃), one obtains an expression for ∂ỹ/∂θ_k,
and hence the required derivative can be calculated.
Appendix C: Maximizing P(y_+|t): multiple-class case
The GP prior and likelihood, defined by equations 13 and 11, define the posterior distribution of activations
P(y_+|t). As in Appendix A we are interested in a Laplace approximation to this posterior, and therefore
need to find the mode with respect to y_+. Dropping unnecessary constants, the multi-class analogue of
equation 14 is
Ψ_+ = Σ_i Σ_c t_c^i y_c^i − Σ_i log Σ_c exp(y_c^i) − (1/2) y_+^T K_+^{-1} y_+.
6 In detail, we used a set of five basis functions of the form erf(λ_i x). These were used to interpolate σ(x) at chosen points.
By the same principle as in Appendix A, we define Ψ by analogy with equation 8, and first optimize Ψ with
respect to y, afterwards performing the quadratic optimization of Ψ_+ with respect to y_*.
In order to optimize Ψ with respect to y, we make use of the Hessian given by
∇∇Ψ = −W − K^{-1},
where K is the mn × mn block-diagonal matrix with blocks K_c, c = 1, …, m. Although this is in the same
form as for the two-class case, equation 17, there is a slight change in the definition of the 'noise' matrix
W. A convenient way to define W is by introducing the matrix Π, an mn × n matrix obtained by stacking,
class by class, the diagonal matrices diag(π_c^1, …, π_c^n). Using this notation, we can write the noise matrix in the form of a
diagonal matrix and an outer product,
W = diag(π) − Π Π^T.      (23)
As in the two-class case, we note that −∇∇Ψ is again positive definite, so that the optimization problem is
convex.
The update equation for iterative optimization of Ψ with respect to the activations y then follows the
same form as that given by equation 18. The advantage of the representation of the noise matrix in equation
23 is that we can then invert matrices and find their determinants using the identities
(A − Π Π^T)^{-1} = A^{-1} + A^{-1} Π (I − Π^T A^{-1} Π)^{-1} Π^T A^{-1}      (24)
and
|A − Π Π^T| = |A| · |I − Π^T A^{-1} Π|,      (25)
with A = K^{-1} + diag(π). As A is block-diagonal, it can be inverted blockwise. Thus, rather than
requiring determinants and inverses of an mn × mn matrix, we only need to carry out expensive matrix
computations on n × n matrices.
Essentially, these are all the results needed to generalize the method to the multiple-class problem.
Although, as we mentioned above, the time complexity of the problem does not scale with m³, but
rather with m (due to the identities in equations 24 and 25), calculating the function and its gradient is still rather
expensive. We have experimented with several methods of mode finding for the Laplace approximation. The
advantage of the Newton iteration method is its fast quadratic convergence. An integral part of each Newton
step is the calculation of the inverse of a matrix M acting upon a vector, i.e. M^{-1} b. In order to speed up
this particular step, we used a conjugate gradient (CG) method to solve iteratively the corresponding linear
system M x = b. As we repeatedly need to solve the system (because W changes as y is updated), it saves
time not to run the CG method to convergence each time it is called. In our experiments the CG algorithm
was terminated when the normalized residual fell below a small fixed threshold.
The calculation of the derivative of log P_a(t|θ) with respect to θ in the multiple-class case is analogous to the two-class
case described in Appendix B.
Appendix D: Hybrid Monte Carlo
HMC works by creating a fictitious dynamical system in which the parameters are regarded as position
variables, and augmenting these with momentum variables p. The purpose of the dynamical system is to
give the parameters "inertia" so that random-walk behaviour in θ-space can be avoided. The total energy,
H, of the system is the sum of the kinetic energy, K = p^T p / 2, and the potential energy, E. The potential
energy is defined such that P(θ|D) ∝ exp(−E), i.e. E = −log P_a(t|θ) − log P(θ). We sample from the joint
distribution for θ and p given by P(θ, p) ∝ exp(−E − K); the marginal of this distribution for θ is the
required posterior. A sample of parameters from the posterior can therefore be obtained by simply ignoring
the momenta.
Sampling from the joint distribution is achieved in two steps: (i) finding new points in phase space
with near-identical energies H by simulating the dynamical system using a discretised approximation to
Hamiltonian dynamics, and (ii) changing the energy H by Gibbs sampling the momentum variables.
Hamilton's first-order differential equations for H are approximated using the leapfrog method, which
requires the derivatives of E with respect to θ. Given a Gaussian prior on θ, log P(θ) is straightforward to
differentiate. The derivative of log P_a(t|θ) is also straightforward, although implicit dependencies of ỹ (and
hence π̃) on θ must be taken into account, as described in Appendix B. The calculation of the energy can be
quite expensive, as for each new θ we need to perform the maximization required for Laplace's approximation,
equation 9. The proposed state is then accepted or rejected using the Metropolis rule, depending on the
final energy H (which is not necessarily equal to the initial energy H because of the discretization).
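A generic sketch of this scheme follows; the step size eps, trajectory length L and energy function E are placeholders to be supplied by the user, not values from the experiments.

import numpy as np

def leapfrog(theta, p, grad_E, eps, L):
    # one trajectory of the leapfrog discretisation of Hamiltonian dynamics
    p = p - 0.5 * eps * grad_E(theta)
    for _ in range(L - 1):
        theta = theta + eps * p
        p = p - eps * grad_E(theta)
    theta = theta + eps * p
    p = p - 0.5 * eps * grad_E(theta)
    return theta, p

def hmc_step(theta, E, grad_E, eps, L, rng):
    p = rng.standard_normal(theta.shape)           # Gibbs-sample the momenta
    H_old = E(theta) + 0.5 * np.dot(p, p)
    theta_new, p_new = leapfrog(theta, p, grad_E, eps, L)
    H_new = E(theta_new) + 0.5 * np.dot(p_new, p_new)
    if rng.random() < np.exp(H_old - H_new):       # Metropolis accept/reject
        return theta_new
    return theta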
Appendix E: Simulation set-up for Neal's code
We used the fbm software available from
http://www.cs.utoronto.ca/~radford/fbm.software.html. For example, the commands used to run the
Pima example are
model-spec pima1.log binary
gp-gen pima1.log fix 0.5 1
mc-spec pima1.log repeat 4 scan-values 200 heatbath hybrid 6 0.5
gp-mc pima1.log 500
which follow closely the example given in Neal's documentation.
The gp-spec command specifies the form of the Gaussian process, and in particular the priors on the
parameters v_0 and the w's (see equation 4). The expression 0.05:0.5 specifies a Gamma-distribution prior
on v_0, and x0.2:0.5:1 specifies a hierarchical Gamma prior on the w's. Note that a "jitter" of 0.1 is also
specified on the prior covariance function; this improves conditioning of the covariance matrix.
The mc-spec command gives details of the MCMC updating procedure. It specifies 4 repetitions of 200
scans of the y values followed by 6 HMC updates of the parameters (using a step-size adjustment factor of
0.5). gp-mc specifies that this sequence is carried out 500 times.
We aimed for a rejection rate of around 5%. If this was exceeded, the stepsize reduction factor was
reduced and the simulation run again.
--R
Statistics for Spatial Data.
Hybrid Monte Carlo.
Bayesian Data Analysis.
Efficient Implementation of Gaussian Processes.
Variational Gaussian Process Classifiers.
Nonparametric regression and generalized linear models.
A correspondence between Bayesian estimation of stochastic processes and smoothing by splines.
Bayesian Methods for Backpropagation Networks.
Maximum likelihood estimation for models of residual covariance in spatial regression.
Generalized Linear Models.
A scaled conjugate gradient algorithm for fast supervised learning.
Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification.
Bayesian Learning for Neural Networks.
Automatic Smoothing of Regression Functions in Generalized Linear Models.
Evaluation of Gaussian Processes and Other Methods for Non-linear Regres- sion
Pattern Recognition and Neural Networks.
Statistical aspects of neural networks.
Flexible Non-linear Approaches to Classification
Density Ratios
Bayesian numerical analysis.
The Nature of Statistical Learning Theory.
A Comparison of GCV and GML for Choosing the Smoothing Parameter in the Generalized Spline Smoothing Problem.
Spline Models for Observational Data.
Classification
Computing with infinite networks.
Gaussian processes for regression.
A Comparison of Kriging with Nonparametric Regression Methods.
--TR
--CTR
Christopher K. I. Williams, On a Connection between Kernel PCA and Metric Multidimensional Scaling, Machine Learning, v.46 n.1-3, p.11-19, 2002
S. S. Keerthi , K. B. Duan , S. K. Shevade , A. N. Poo, A Fast Dual Algorithm for Kernel Logistic Regression, Machine Learning, v.61 n.1-3, p.151-165, November 2005
Mário A. T. Figueiredo, Adaptive Sparseness for Supervised Learning, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.9, p.1150-1159, September
Koji Tsuda, Propagating distributions on a hypergraph by dual information regularization, Proceedings of the 22nd international conference on Machine learning, p.920-927, August 07-11, 2005, Bonn, Germany
W. D. Addison , R. H. Glendinning, Robust image classification, Signal Processing, v.86 n.7, p.1488-1501, July 2006
Hyun-Chul Kim , Daijin Kim , Zoubin Ghahramani , Sung Yang Bang, Appearance-based gender classification with Gaussian processes, Pattern Recognition Letters, v.27 n.6, p.618-626, 15 April 2006
Yasemin Altun , Alex J. Smola , Thomas Hofmann, Exponential families for conditional random fields, Proceedings of the 20th conference on Uncertainty in artificial intelligence, p.2-9, July 07-11, 2004, Banff, Canada
Wei Chu , Zoubin Ghahramani, Preference learning with Gaussian processes, Proceedings of the 22nd international conference on Machine learning, p.137-144, August 07-11, 2005, Bonn, Germany
Balaji Krishnapuram , Alexander J. Hartemink , Lawrence Carin , Mario A. T. Figueiredo, A Bayesian Approach to Joint Feature Selection and Classifier Design, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.9, p.1105-1111, September 2004
Wei Chu , S. Sathiya Keerthi , Chong Jin Ong, Bayesian trigonometric support vector classifier, Neural Computation, v.15 n.9, p.2227-2254, September
Yasemin Altun , Thomas Hofmann , Alexander J. Smola, Gaussian process classification for segmenting and annotating sequences, Proceedings of the twenty-first international conference on Machine learning, p.4, July 04-08, 2004, Banff, Alberta, Canada
Hyun-Chul Kim , Jaewook Lee, Clustering Based on Gaussian Processes, Neural Computation, v.19 n.11, p.3088-3107, November 2007
Bart Bakker , Tom Heskes, Task clustering and gating for bayesian multitask learning, The Journal of Machine Learning Research, 4, p.83-99, 12/1/2003
Mark Girolami , Simon Rogers, Variational Bayesian multinomial probit regression with Gaussian process priors, Neural Computation, v.18 n.8, p.1790-1817, August 2006
Liefeng Bo , Ling Wang , Licheng Jiao, Feature Scaling for Kernel Fisher Discriminant Analysis Using Leave-One-Out Cross Validation, Neural Computation, v.18 n.4, p.961-978, April 2006
Volker Tresp, The generalized Bayesian committee machine, Proceedings of the sixth ACM SIGKDD international conference on Knowledge discovery and data mining, p.130-139, August 20-23, 2000, Boston, Massachusetts, United States
Malte Kuss , Carl Edward Rasmussen, Assessing Approximate Inference for Binary Gaussian Process Classification, The Journal of Machine Learning Research, 6, p.1679-1704, 12/1/2005
Lehel Csató , Manfred Opper, Sparse on-line Gaussian processes, Neural Computation, v.14 n.3, p.641-668, March 2002
Manfred Opper , Ole Winther, Gaussian Processes for Classification: Mean-Field Algorithms, Neural Computation, v.12 n.11, p.2655-2684, November 2000
Michael Lindenbaum , Shaul Markovitch , Dmitry Rusakov, Selective Sampling for Nearest Neighbor Classifiers, Machine Learning, v.54 n.2, p.125-152, February 2004
Volker Tresp, Scaling Kernel-Based Systems to Large Data Sets, Data Mining and Knowledge Discovery, v.5 n.3, p.197-211, July 2001
Charles A. Micchelli , Massimiliano A. Pontil, On Learning Vector-Valued Functions, Neural Computation, v.17 n.1, p.177-204, January 2005
Michael E. Tipping, Sparse bayesian learning and the relevance vector machine, The Journal of Machine Learning Research, 1, p.211-244, 9/1/2001
Balaji Krishnapuram , Lawrence Carin , Mario A. T. Figueiredo , Alexander J. Hartemink, Sparse Multinomial Logistic Regression: Fast Algorithms and Generalization Bounds, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.6, p.957-968, June 2005
T. Van Gestel , J. A. K. Suykens , G. Lanckriet , A. Lambrechts , B. De Moor , J. Vandewalle, Bayesian framework for least-squares support vector machine classifiers, Gaussian processes, and kernel fisher discriminant analysis, Neural Computation, v.14 n.5, p.1115-1147, May 2002
Zhihua Zhang , James T. Kwok , Dit-Yan Yeung, Model-based transductive learning of the kernel matrix, Machine Learning, v.63 n.1, p.69-101, April 2006
Gavin C. Cawley , Nicola L. C. Talbot, Preventing Over-Fitting during Model Selection via Bayesian Regularisation of the Hyper-Parameters, The Journal of Machine Learning Research, 8, p.841-861, 5/1/2007
Matthias Seeger, Pac-bayesian generalisation error bounds for gaussian process classification, The Journal of Machine Learning Research, 3, p.233-269, 3/1/2003
Arnulf B. A. Graf , Felix A. Wichmann , Heinrich H. Bülthoff , Bernhard H. Schölkopf, Classification of Faces in Man and Machine, Neural Computation, v.18 n.1, p.143-165, January 2006
Ralf Herbrich , Thore Graepel , Colin Campbell, Bayes point machines, The Journal of Machine Learning Research, 1, p.245-279, 9/1/2001
Liam Paninski , Jonathan W. Pillow , Eero P. Simoncelli, Maximum Likelihood Estimation of a Stochastic Integrate-and-Fire Neural Encoding Model, Neural Computation, v.16 n.12, p.2533-2561, December 2004
Alexander J. Smola , Bernhard Schlkopf, Bayesian kernel methods, Advanced lectures on machine learning, Springer-Verlag New York, Inc., New York, NY, | gaussian processes;Markov chain Monte Carlo;classification problems;hybrid Monte Carlo;bayesian classification;parameter uncertainty |
297877 | A Positive Acknowledgment Protocol for Causal Broadcasting. | AbstractCausal broadcasting has been introduced to reduce the asynchrony of communication channels inside groups of processes. It states that if two broadcast messages are causally related by the happened-before relation, these messages are delivered in their sending order to each process of the group. Even though protocols implementing causal broadcasting do not add control messages, they suffer from the typical pitfall of the timestamping technique: To ensure causal ordering, application messages have to piggyback a vector time of counters whose range of variation is unbounded. In this paper, we investigate such a range and define the concept of causal window of a process in which all counters of a vector time of a just arrived message at that process fall. We prove that, by using a causal broadcasting (one-to-all) protocol that follows a positive acknowledgment method, the width of the causal window of each process is limited. This allows a modulo k implementation of vector times when considering k greater than the width of the causal window of each process. The protocol is applicable to data link or transport layers using acknowledge messages to ensure reliable transfer of data. The paper also proposes two variants of the protocol based on causal windows. Both of them increase the concurrency of the protocol at the expense of wider causal windows. | Introduction
Asynchrony of communication channels is one of the major causes of nondeterminism in distributed
systems. The concept of causal ordering of messages has been introduced in the context of broadcasting
communication by Birman and Joseph [8] in order to reduce such an asynchrony. Causal
ordering means if two broadcast messages are causally related [14], they are delivered in their sending
order to each process. In light of this, when a message is delivered to a process, all messages
that causally precede it have been already delivered to that process.
To master asynchrony, other communication modes have been defined such as FIFO, Rendezvous
and logical instantaneous ordering [20]. From the user viewpoint, causal ordering increases
the control of a distributed application compared to a simple FIFO ordering, at the cost of a reduction
of the potential concurrency of the distributed application. Compared with logically instantaneous
communication, causal ordering provides more concurrency and simplicity of implementation.
Moreover, causal ordering is not prone to deadlock as Rendezvous, being an asynchronous paradigm
of communication. Actually, causal ordering extends the concept of FIFO channels connecting one
sender and one receiver to systems connecting several senders and one receiver.
Causal ordering has been proved to be very useful in taking snapshots of distributed applications,
in controlling distributed applications, in managing replicated data and in allowing consistent
observations of distributed computations [10, 18]. Recently, extensions of causal ordering have
been proposed to cope with mobile computing environments [16] and with unreliable networks and
distributed applications whose messages have limited time validity [5, 6]. Moreover, the concept of
causal ordering is not limited to message-passing environments. In the context of shared-memory
systems, a causal memory has been introduced by Ahamad et al. in [1] as a consistency criterion.
Such a criterion does not introduce latencies while executing read and write operations.
Even though several interesting protocols implementing causal ordering appeared in the literature
[8, 9, 18, 21], this communication mode is not yet widely used in commercial platforms because
such protocols suffer from the typical pitfall of the timestamping (logical or physical) technique:
to ensure causal ordering, in the context of broadcasting, application messages have to piggyback
a vector time of unbounded integers (counters) whose size is given by the number of processes [15],
this vector represents actually the control information of a protocol.
However, data-link and transport layers of communication systems use messages, called acknowledgments
(acks, for short), to indicate the successful reception of data. Ack messages, produced
by such layers, are actually a source of information about the causal relations among application
messages that could be used to reduce the amount of control information of causal ordering pro-
tocols. In this paper we introduce the notion of causal window and propose a causal broadcasting
protocol which exploits the implicit information provided by ack messages 1 . A causal window of
a process represents the range of variation in which all counters of a vector time of a just arrived
message at that process fall. We prove that, by using a causal broadcasting protocol that follows
a positive acknowledgment (PAK) method [17, 23], the width of the causal window of each process
is bounded. This allows a modulo k implementation of vector times when considering k greater
than the width of the causal window of each process. We first propose a PAK causal broadcasting
protocol in which a process can send a message only when acks of the previous message, sent by
the same process, have been received. We then analyze the general case in which a credit ct ≥ 1
is associated with each sending process; in this case, a process can send ct consecutive messages
before receiving the corresponding acks. Finally, we investigate the case in which a process employs
a positive/negative (PAK/NAK) scheme, i.e., a process can send a sequence of ct messages
without being acknowledged and then an ack message is required from the other processes after the
receipt of the ct-th message sent by the same process. Credits and the use of PAK/NAK scheme
allow an increase of the concurrency of the protocol (decreasing the number of the internal protocol
synchronizations), but enlarge the dimension of the causal window.
The protocol we propose could be employed as a part of the flow control of a transport layer 2
providing causal communication to the above layer. For example, current group communication systems
(e.g., ISIS [10]) implement causal protocols on the top of a FIFO flow control by piggybacking
on each application message a vector of unbounded integers.
The remainder of this paper is organized as follows. In Section 2 the general model of a
distributed computation, the concept of causal relation among events, vector times and the causal
ordering communication mode are introduced. Section 3 presents the causal window notion. Section
4 shows a causal broadcasting protocol based on causal windows when considering the credit of the
sender equal to one. At this end, in the same section, the positive acknowledgment method and
the modulo k implementation of vector times are introduced. Section 5 proposes the two variations
of the protocol of Section 4 based on a credit and a PAK/NAK scheme respectively.
1 Some interesting algorithms that exploit the implicit information provided by ack messages to guarantee FIFO
and reliable channels can be found in [7, 11, 22].
2 Examples of transport layers that use ack messages for data transferring are, among others, TCP, OSI/TP4,
VTMP and Delta-t [13].
2 Model of Distributed Computations
2.1 Distributed System
A distributed system is a finite set P of n processes {P_1, …, P_n} that communicate only by
broadcasting messages 3. The underlying system, where processes execute, is composed of n processors
(for simplicity's sake, we assume one process per processor) that can exchange messages.
We assume that each pair of processes is connected by a reliable 4 , asynchronous and FIFO logical
channel (transmission delays are unpredictable). Processors do not have a shared memory and
there is no bound for their relative speeds.
2.2 Distributed executions
Execution of a process P i produces a sequence of events which can be classified as: broadcast (bcast)
events, deliver (dlv) events and internal events. An internal event may change only local variables,
broadcast or delivery events involve communication. In particular, each broadcast event produces
delivery events, one for each process. Let a and b be two events that occurred in a process P_i; a
precedes b in P_i, denoted a ≺_i b, iff a has been produced before b. Let m be a message and a and b
two events; a precedes b, denoted a ≺_m b, iff a is the bcast(m) event and b is the dlv(m) event.
A distributed computation can be represented as a partial order of events Ê = (E, →), where
E is the set of all events and → is the happened-before relation [14]. This relation is the transitive
closure of the union of ≺_i (i = 1, …, n) and ≺_m.
Hereafter, we call M(Ê) the set of all messages exchanged in Ê, and we do not consider internal
events as they do not affect the interprocess ordering of events. Let us, finally, introduce the notion
of the immediate predecessor of a message m.
Definition 2.1 A message m_1, sent by P_i, is an immediate predecessor of a message m_2 ∈ M(Ê) if
bcast(m_1) → bcast(m_2) and there is no message m_x, sent by P_i, such that bcast(m_1) → bcast(m_x) → bcast(m_2).
3 In this paper, we consider a message the atomic unit of data movement in the system. Results of the following
Sections apply even though we consider packets or byte streams as the atomic data unit.
4 A detailed description of the protocol proposed in Section 4 in the case of unreliable channels is outside the aims
of this paper. The interested reader can refer to [3] for such a description.
It is to be noted that a message m can have n immediate predecessors, one for each process.
As an example, the message m_1 depicted in Figure 4 is not an immediate predecessor of m_2 due to
message m_x.
2.3 Vector Times
To capture the causality relation between relevant events of a distributed computation, vector times
were introduced simultaneously and independently by Fidge [12] and Mattern [15]. A vector time
for a process P_i, denoted V T_i, is a vector of counters whose dimension is equal to the number of
processes; V T_i[j] represents P_i's knowledge of the number of relevant events produced by P_j.
Each relevant event a is associated with a vector time (V T_a), and a process P_i updates its vector
time according to the following rules:
1. When P_i starts its execution, each component of V T_i is initialized to zero;
2. When a relevant event is produced by P_i, it increments its own entry: V T_i[i] := V T_i[i] + 1;
3. When a message m is sent by P_i, a copy of V T_i is piggybacked on message m (denoted V T_m);
4. When a message m, sent by P_j, arrives at P_i, it updates its vector time in the following way:
∀ h ∈ {1, …, n}: V T_i[h] := max(V T_i[h], V T_m[h]).
Let a and b be two relevant events and V T_a and V T_b the vector times associated with them; according
to the properties of vector times [12, 15], we have a → b iff V T_a < V T_b, where V T_a < V T_b means
V T_a[h] ≤ V T_b[h] for every h when considering V T_a ≠ V T_b.
As we are interested in a broadcast environment, in the following we assume broadcast events to be
the only relevant events, so that V T_i[j] is actually P_i's knowledge of the number of messages
broadcast by P_j. In this particular setting, and when considering causal communication, the above
protocol can be simplified as shown in the next section.
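A direct Python transcription of these rules is sketched below (a sketch only: here the broadcast itself is the only relevant event and the timestamp is taken after the increment, whereas the protocols of Figures 1 and 7 below copy the clock before incrementing it).

class VectorClock:
    def __init__(self, n, i):
        self.i, self.VT = i, [0] * n          # rule 1: all counters start at zero

    def broadcast(self):
        self.VT[self.i] += 1                  # rules 2-3: the bcast event is relevant
        return list(self.VT)                  # V T_m, piggybacked on the message

    def on_arrival(self, VTm):
        for h in range(len(self.VT)):         # rule 4: component-wise maximum
            self.VT[h] = max(self.VT[h], VTm[h])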
2.4 Causal Ordering
Causal ordering states that the order in which messages are delivered to the application cannot
violate the happened-before relation of the corresponding broadcast events [8]. More formally,
Definition 2.2 A distributed computation Ê respects causal ordering if for any two broadcast messages
m_1, m_2 ∈ M(Ê) we have:
bcast(m_1) → bcast(m_2) ⇒ ∀ i ∈ {1, …, n}: dlv(m_1) ≺_i dlv(m_2).
A first implementation of such an abstraction has been embedded in the ISIS system [8, 10].
It consists of adding a protocol over a reliable underlying system such that events of a distributed
computation are causally ordered at the process level 5. To this end, deliveries are done by delaying,
by means of a delivery condition, those messages that arrived too early at the underlying system.
A simple broadcast protocol, similar to the one presented by Birman et al. in [9], is shown in
Figure 1. It shows the behavior of process P_i when sending and upon the arrival of a message.
The algorithm includes some vector time management rules (lines S1, S2, S3 and R2) plus a delivery
condition DC(m) (line R1) associated with a message m. A message m is delivered to a process P_i
as soon as the vector time it carries (V T_m) does not contain knowledge of messages sent to, but
not delivered by, process P_i 6. Formally, for a message m sent by P_j and arriving at P_i,
DC(m) ≡ (V T_m[j] = V T_i[j]) ∧ (∀ h ∈ {1, …, n}, h ≠ j: V T_m[h] ≤ V T_i[h]).
3 The Causal Window
In this section we investigate the range of variation of the values stored in the counters of vector
times during the evaluation of the delivery condition of a generic destination process. Upon the
arrival of a message m at process P i , the value of vector time counters, V Tm and V T i , involved in
the delivery condition DC(m) generate three cases:
1. V T_m[j] − V T_i[j] = CP^j_i ≥ 1. There are CP^j_i consecutive messages, sent by process P_j, that
causally precede m and that have not arrived at P_i, as shown in Figure 2.a. Message m, if
delivered, violates causal ordering.
5 Other interesting point-to-point implementations of causal ordering can be found in [18, 21].
6 This fact makes the component-wise maximum over the entries h ≠ j in the arrival rule of Section 2.3 useless in
a causal broadcasting protocol.
init for each h ∈ {1, …, n} do V T_i[h] := 0 od;
procedure BCAST(m, i) % m is the message, P_i is the sender %
begin
V T_m := V T_i; % (S1)
for each h ∈ {1, …, n} do send (m, V T_m) to P_h; od % event bcast(m) % (S2)
V T_i[i] := V T_i[i] + 1; % (S3)
end.
when (m, V T_m) arrives at P_i from P_j
begin
wait (DC(m)); % (R1)
V T_i[j] := V T_i[j] + 1; % (R2)
deliver m; % event dlv(m) % (R3)
end.
Figure 1: A simple causal broadcasting protocol
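A small Python sketch of the receiver side of Figure 1 follows (the buffering of early arrivals is simplified, and the predicate DC is the delivery condition given above, not a verbatim transcription of the original listing).

class CausalReceiver:
    def __init__(self, n, i):
        self.i, self.VT = i, [0] * n
        self.pending = []                       # messages that arrived too early

    def DC(self, j, VTm):
        # deliverable iff all messages from P_j preceding m are already delivered,
        # and P_i has seen everything the sender had delivered when it broadcast m
        return VTm[j] == self.VT[j] and all(
            VTm[h] <= self.VT[h] for h in range(len(self.VT)) if h != j)

    def on_arrival(self, j, VTm, m):
        self.pending.append((j, VTm, m))
        progress = True
        while progress:                         # one delivery may unblock others
            progress = False
            for entry in list(self.pending):
                j2, VT2, m2 = entry
                if self.DC(j2, VT2):
                    self.pending.remove(entry)
                    self.VT[j2] += 1            # line R2: update the vector time
                    self.deliver(m2)            # line R3: event dlv(m2)
                    progress = True

    def deliver(self, m):
        print("delivered", m)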
2. V T_i[j] − V T_m[j] = CO^j_i ≥ 1. There are CO^j_i consecutive messages, sent by process P_j and
delivered to P_i, that are concurrent to message m, as shown in Figure 2.b. Message m can
be delivered to P_i without violating causal ordering.
3. V T_m[j] = V T_i[j]. The last message sent by P_j and delivered to P_i (if any) is an immediate
predecessor of m, as shown in Figure 2.c. Message m can then be delivered to P_i without
violating causal ordering.
Figure 2: Values of vector times in the delivery condition.
Figure 3: The Causal Window CW^j_i.
If all counters of V T_m fall either in case 3 or in case 2, message m can be delivered.
Hence, upon the arrival of message m at process P_i, each counter V T_m[j] falls in a range of variation
that spans between V T_i[j] − CO^j_i and V T_i[j] + CP^j_i.
A causal window CW_i is composed of a set of windows CW^j_i, one for each process P_j. The
number W^j_i represents the width of the window CW^j_i (Figure 3). To implement a
causal broadcasting protocol employing modulo k vector times, we have to show the boundedness
of the width of the causal windows. In the general setting, as the one described in the previous
section, where transmission delays are unpredictable, the widths of the CW_i are not bounded.
4 A PAK Protocol based on Causal Windows
4.1 Positive Acknowledgement Method
To get a limited causal window, we assume processes follow a stop-and-wait approach. A process
broadcasts a message (bcast(m) event) and waits for all the acks (n a.ack(m) events) before
executing any other broadcast event. Once all acks have arrived, such a broadcast message is said
to be "fully acknowledged" (f.ack(m) event). On the other side, each time a process receives a
broadcast message (arr(m) and then dlv(m) events), it sends an ack (s.ack(m)). Hence, at the
underlying system level six types of events occur and only bcast and dlv events are visible to the
application. So the processing of a broadcast message m, sent by P_i, produces the following poset
of events, denoted PO(m):
bcast(m) ≺_m arr(m) ≺_1 dlv(m) ≺_1 s.ack(m) ≺_m a.ack(m) ≺_i f.ack(m)
…
bcast(m) ≺_m arr(m) ≺_n dlv(m) ≺_n s.ack(m) ≺_m a.ack(m) ≺_i f.ack(m)
The stop-and-wait approach implies a send condition SC_1 between any two successive messages sent
by the same process P_i. Formally, if m′ is the message broadcast by P_i immediately after m,
SC_1(m, m′): f.ack(m) ≺_i bcast(m′).
This synchronization is local and can easily be implemented by a boolean variable
processing_broadcast (initialised to FALSE) in each process. The value TRUE indicates that the
process has broadcast a message m and is waiting for the f.ack(m) event. As soon as the event
occurs, processing_broadcast toggles, enabling further broadcasts of messages.
A remark on group communication. In a group communication system (e.g. ISIS [10],
TRANSIS [2] etc.) the occurrence of the event fully:ack of a message m corresponds to the
notion of stability of that message [4, 9], i.e., the sender of m learns that all the members of the
group have delivered m. In fact the use of ack messages is one of the methods to diffuse stability
information in a group of processes (other methods employ "gossiping" and piggybacking). The
notion of stability is a key point in many group communication problems such as security [19] and
large scale settings [4] just to name a few. So, informally, the condition SC 1 can be restated as
follows: a process cannot broadcast a message in a group till the previous one, it sent, is declared
stable. This condition is very conservative and implies a strong synchronization between each pair
of successive messages sent by the same process. In Section 5 we present two variations that weaken
that synchronization.
4.2 Causal Windows with Limited Width
In this subsection we prove that, the causal window of a PAK causal broadcasting protocol based
on the send condition SC 1 is limited:
Lemma 4.1 Let Ê be a distributed computation and m_1, m_2 ∈ M(Ê) be messages such that
bcast(m_1) → bcast(m_2). A PAK protocol based on SC_1 ensures that
if there is a causal ordering violation between m_1 and m_2, then m_1 is an immediate predecessor of m_2.
Proof (by contradiction) As shown in Figure 4, suppose there is a causal ordering violation between
two messages m_1 and m_2, sent by P_i and P_j (i ≠ j) respectively (i.e., bcast(m_1) → bcast(m_2) and
m_2 is delivered before m_1 at some process), and suppose that m_1 is not an immediate predecessor of m_2;
that is, there exists a message m_x sent by P_i such
that bcast(m_1) → bcast(m_x) → bcast(m_2).
From PO(m_1), PO(m_x) and the send condition SC_1(m_1, m_x), m_1 has been delivered to every process
before bcast(m_x). Due to bcast(m_x) → bcast(m_2), every process that delivers m_2 has therefore already
delivered m_1, so no causal ordering violation between m_1 and m_2 is possible: a contradiction. 2
Figure 4: Proof of Lemma 4.1.
From the previous Lemma and from the definition of the width of a causal window given in
Section 3 we have:
Theorem 4.2 In a PAK causal broadcasting protocol based on SC_1, CP^j_i of CW^j_i is equal to 1 for
any P_i and P_j.
Proof CP^j_i represents the number of consecutive messages m^x, …, m^{x+CP^j_i−1}, sent by process P_j
(i.e., bcast(m^x) ≺_j … ≺_j bcast(m^{x+CP^j_i−1})), which have not arrived at process P_i and which
causally precede a message m just arrived at P_i. Message m, if delivered before them,
would violate causal ordering with each message of the
sequence m^x, …, m^{x+CP^j_i−1}. From Lemma 4.1, if there is a causal ordering violation between m′
and m, then m′ is an immediate predecessor of m, and so the number of consecutive messages, sent by P_j, that may violate causal
ordering is at most one. Hence the claim follows. 2
Now, by considering the FIFO property of channels, we have the following Lemma:
Lemma 4.3 Let Ê be a distributed computation and m ∈ M(Ê) be a message sent by process P_i.
For each process P_j (i ≠ j), there is at most one message m_x ∈ M(Ê), sent by P_j, concurrent to m.
Proof If m and m_x are concurrent, then neither bcast(m) → bcast(m_x) nor bcast(m_x) → bcast(m).
From PO(m_x), PO(m) and the channel FIFO property, the sequence of events shown in Figure 5.a occurs
in P_j and P_i.
Suppose there is another message m_x′ sent by P_j and concurrent to m. From the send condition
SC_1(m_x, m_x′) and from the definition of concurrent messages given above, we obtain the situation
Figure 5: Proof of Lemma 4.3.
shown in Figure 5.b. From the FIFO property and PO(m_x′),
on process P_j we have dlv(m) ≺_j a.ack(m_x′), from which a precedence between bcast(m_x′) and the
events of PO(m_x) follows that contradicts the send condition SC_1(m_x, m_x′). Hence at most one message, sent by each distinct
process, can be concurrent to m. 2
From the previous Lemma and the definition of the width of a causal window given in Section
3 we have:
Theorem 4.4 In a PAK causal broadcasting protocol based on SC_1, CO^j_i is equal to 1 for any P_i
and P_j.
Proof CO^j_i represents the number of consecutive messages, sent by process P_j,
which have been delivered to P_i and are concurrent to a message m just arrived at P_i. From
Lemma 4.3, there is at most one message m_x concurrent to m for each process P_j (i ≠ j), so
CO^j_i is equal to one and the claim follows. 2
Hence, the range of variation of all the causal windows is limited and the step by which each
vector time counter increases is 1 (due to the FIFO property of channels). So we have the following
invariant: for every message m sent by P_j and arriving at P_i, V T_i[j] − 1 ≤ V T_m[j] ≤ V T_i[j] + 1.
This allows a modulo k implementation of the counters of vector times by choosing k greater than
the maximum difference between any two counters, i.e., k ≥ 3. An example of such a window is
shown in Figure 6 with k equal to 4.
Figure 6: An example of a causal window with k = 4 (legend: message delivered, message delayed).
A remark on sliding windows. From Figure 6 it can be seen that a causal window is a
particular type of sliding window. The sliding window is a technique widely used for flow control
in point-to-point data transfer protocols to avoid loss of messages and to ensure FIFO deliveries
over asynchronous and unreliable communication systems (e.g., TCP) [11, 17, 23]. This technique
induces a closed loop between sender and receiver which allows not to overload buffer spaces of the
receiver and to avoid network congestion by controlling the transmission rate of the sender. So the
interest of a causal window lies also in the fact that it could be used as a part of a flow-control
layer of a group communication system to provide causal communication to the above layer.
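As a small numerical illustration, the comparison V T_m[j] ∈ {V T_i[j] − 1, V T_i[j], V T_i[j] + 1} can be carried out with counters stored modulo k ≥ 3 as sketched below (a sketch only; the exact predicate used by the protocol is the delivery condition DC(m) of Section 4.3).

def window_position(vt_m_j, vt_i_j, k):
    """Classify an incoming counter relative to the local one, modulo k (width-3 window)."""
    d = (vt_m_j - vt_i_j) % k
    if d == 0:
        return "case 3: immediate predecessor delivered -> deliverable"
    if d == 1:
        return "case 1: one causally preceding message missing -> delay"
    if d == k - 1:
        return "case 2: one concurrent message already delivered -> deliverable"
    raise ValueError("outside the causal window: cannot happen under SC_1")

# e.g. with k = 4 (Figure 6): window_position(3, 0, 4) -> case 2, since (3 - 0) mod 4 = k - 1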
4.3 The Protocol
The behavior of a process P_i when executing the PAK protocol is described in Figure 7. When
requesting to broadcast a message m, process P_i first waits until a previous broadcast message, if any,
is fully acknowledged (line S1) and then sets the variable processing_broadcast to TRUE, stores
the current vector time V T_i in V T_m and sends m with attached V T_m as an atomic action (lines
S2-S3). Afterwards, it waits until message m is fully acknowledged (i.e., an ack_m message arrives
from each member of P) and, then, the local timestamp V T_i[i] is increased by one modulo k (line
S5); finally, successive broadcast messages are enabled by resetting processing_broadcast (line
S6).
Upon arrival at process P_i of a message m sent by P_j, its delivery is determined by its delivery condition
DC(m), the modulo k counterpart of the condition of Section 2.4: for every h ∈ {1, …, n}, the difference
(V T_m[h] − V T_i[h]) mod k must not indicate a missing causally preceding message, i.e., it must fall outside
{1, …, CP^h_i}. In this particular case, CP^h_i is equal to one. Message m is delivered as soon as the predicate
DC(m) is true (line R1). When a broadcast message m, sent by P_j, is delivered (line R2) to P_i,
the vector time V T_i[j] is updated (line R3) and an ack_m message is sent to P_j (line R4). Lines
R1 up to R4 are executed atomically.
7 Note that (−h) mod k = k − h and that, by definition of the causal window, k > h.
init processing_broadcast := FALSE; for each h ∈ {1, …, n} do V T_i[h] := 0 od;
procedure BCAST(m, i) % m is the message, P_i is the sender %
begin
wait (¬processing_broadcast); (S1)
processing_broadcast := TRUE; V T_m := V T_i; (S2)
for each h ∈ {1, …, n} do send (m, V T_m) to P_h; od % event bcast(m) % (S3)
wait (for each h ∈ {1, …, n} do (ack_m) arrives from P_h; od); (S4)
V T_i[i] := (V T_i[i] + 1) mod k; (S5)
processing_broadcast := FALSE; % event f.ack(m) % (S6)
end.
when (m, V T_m) arrives at P_i from P_j % event arr(m) %
begin
wait (DC(m)); (R1)
deliver m; % event dlv(m) % (R2)
V T_i[j] := (V T_i[j] + 1) mod k; (R3)
send (ack_m) to P_j; % event s.ack(m) % (R4)
end.
Figure 7: The PAK broadcasting protocol based on causal windows
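For concreteness, a compact Python sketch of Figure 7 is given below. It is only a sketch: the network object and its methods (send_to_all, wait_all_acks, send_ack) are hypothetical placeholders for the underlying layer, the blocking waits are simplified, and the modulo-k delivery test is our reading of DC(m) derived from the width-3 causal window of Section 3.

class PakProcess:
    def __init__(self, n, i, k):
        self.n, self.i, self.k = n, i, k
        self.VT = [0] * n
        self.processing_broadcast = False
        self.pending = []

    # ---- sender side (S1-S6) ----
    def bcast(self, m, network):
        assert not self.processing_broadcast               # S1: wait(not processing_broadcast)
        self.processing_broadcast = True
        VTm = list(self.VT)                                 # S2
        network.send_to_all(self.i, m, VTm)                 # S3: event bcast(m)
        network.wait_all_acks(self.i, m)                    # S4
        self.VT[self.i] = (self.VT[self.i] + 1) % self.k    # S5
        self.processing_broadcast = False                   # S6: event f.ack(m)

    # ---- receiver side (R1-R4) ----
    def DC(self, VTm):
        # inside a window of width 3, a difference of +1 (mod k) signals a missing
        # causally preceding message; everything else is deliverable
        return all((VTm[h] - self.VT[h]) % self.k != 1 for h in range(self.n))

    def on_arrival(self, j, VTm, m, network):
        self.pending.append((j, VTm, m))
        progress = True
        while progress:
            progress = False
            for entry in list(self.pending):
                j2, VT2, m2 = entry
                if self.DC(VT2):                                  # R1
                    self.pending.remove(entry)
                    self.deliver(m2)                              # R2: event dlv(m2)
                    self.VT[j2] = (self.VT[j2] + 1) % self.k      # R3
                    network.send_ack(self.i, j2, m2)              # R4: event s.ack(m2)
                    progress = True

    def deliver(self, m):
        print(f"P{self.i} delivered", m)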
4.4 Correctness Proof
Theorem 4.5 Delivery events respect causal ordering (Safety).
Proof Let us consider two messages m_1 and m_2, sent by processes P_i and P_j respectively, and
delivered to P_h out of causal order (i.e., bcast(m_1) → bcast(m_2) and dlv(m_2) ≺_h dlv(m_1)). From
Lemma 4.1, m_1 is an immediate predecessor of m_2; hence, by line S5 of the protocol, we get
V T_{m_2}[i] = (V T_{m_1}[i] + 1) mod k.
Upon the delivery of m_2 to P_h, the delivery condition DC(m_2) (line R1) requires one of the
following conditions to be true:
1. V T_{m_2}[i] = V T_h[i], that is, V T_{m_1}[i] = (V T_h[i] − 1) mod k;
2. V T_{m_2}[i] falls in the concurrent part of the causal window CW^i_h, that is, V T_{m_1}[i] falls strictly below V T_h[i] modulo k.
By definition and line R3, V T_h[i] contains the number of messages sent by P_i and delivered
to P_h (modulo k). Successive messages sent by P_i are delivered in FIFO order, by the assumption on FIFO
channels and by the send condition SC_1.
As m_1 is delivered (by hypothesis), in both cases, by considering conditions 1 and 2, m_1 has already been
delivered before m_2; this contradicts the hypothesis that m_1 was delivered after m_2. 2
Theorem 4.6 Each message will be eventually delivered (Liveness).
Proof Let m^x be the x-th message sent by P_i and arrived at P_j but never delivered. Given the
delivery condition DC(m^x) of line R1, it follows that some entry of V T_{m^x} blocks the delivery forever (P1).
Two cases have to be considered:
(a) The blocking entry is that of the sender P_i. From the send condition SC_1, upon the arrival of message
m^x at process P_j, all messages m^1, …, m^{x−1} sent by P_i were delivered to P_j. So, by line
R3, after the delivery of m^{x−1} the entry V T_j[i] is equal to x − 1 (modulo k), which contradicts (P1).
(b) The blocking entry is that of another process P_k.
There must be at least one message m, causally preceding m^x and sent by P_k, that either has
not arrived at process P_j or has arrived and been delayed, so that the delivery of m^x would violate
causal ordering. From Lemma 4.1, m is an immediate predecessor of m^x. Considering the reliable and broadcast
nature of channels and messages respectively, sooner or later m arrives at P_j. Now, two cases
are possible:
8 This part of the proof is similar to the one in [9].
1. m is delivered. This causes, by line R2, the delivery of m^x.
2. m is delayed. The same argument can be applied to a message m^{x′}, sent by P_w (with
w ≠ k), such that m^{x′} causally precedes m. Due to the finite number of processes and messages,
sooner or later we fall either in case 1 or in case (a); hence we have a contradiction. 2
5 Variants of the Protocol
This section shows two variants of the previous protocol. The first one allows a process to have a
certain number of outstanding unacknowledged messages at any time (credit). The second assumes
that only a subset of messages be acknowledged (positive/negative acknowledgement approach).
The aim of both variants is to reduce the number of local synchronizations of the protocol due to
the send condition.
5.1 A PAK Broadcast Protocol using Credits
Here we suppose that processes have a credit of ct ≥ 1, i.e., a process P_i can send up to ct
consecutive broadcast messages m^1, …, m^{ct} before receiving the corresponding acks. So, the send
condition SC_1 can be extended as follows:
SC_ct(m^x, m^{x+ct}): f.ack(m^x) ≺_i bcast(m^{x+ct}).
Credits potentially reduce the number of synchronizations in the send condition, but increase
the width of the causal window 9. Indeed, as shown in Figure 8.a, upon the arrival at P_i of a message m
sent by P_j, there can be at most ct consecutive messages m^0, …, m^{ct−1}, sent by P_k, that
causally precede m and
such that dlv(m) ≺_i dlv(m^x) (with x ∈ {0, …, ct − 1}). Let us assume that a further message m^{ct}, sent
by P_k, causally precedes m and that dlv(m) ≺_i dlv(m^{ct}). By the send condition SC_ct, m^0 is fully
acknowledged before bcast(m^{ct}), so dlv(m^0) → bcast(m^{ct}) → bcast(m), hence dlv(m^0) ≺_i dlv(m),
which contradicts dlv(m) ≺_i dlv(m^0) — an absurdity. So upon the
arrival of message m at process P_i, we have V T_m[k] − V T_i[k] ≤ ct. In Figure 8.a, for clarity's
sake, only the ack messages that produce the f.ack events are depicted.
9 The local synchronization SC ct only weakens SC1 . In fact the number of local synchronizations is the same. How-
ever, if the credit is appropriately chosen (as a function of the network latency), broadcast of very few messages will
be prohibited because acks will be received before the credit is exhausted. So the number of "real" synchronizations
is actually reduced.
Figure 8: An example of message scheduling of a protocol with credits.
On the other hand, as shown in Figure 8.b, upon the arrival at P_i of a message m sent by P_j, there can
be at most ct consecutive messages m^0, …, m^{ct−1}, sent by P_k, that are concurrent to m and already
delivered to P_i. Indeed, due to the FIFO property, message m will be delivered to P_k
before the arrival at P_j of the ack message related to m^0, so bcast(m) → bcast(m^{ct}). Hence, upon
the arrival of message m at process P_i, we have V T_i[k] − V T_m[k] ≤ ct.
Hence, concerning the causal windows, the width of each window CW^j_i is at most 2ct.
So a modulo k implementation of vector times with k ≥ 2ct is allowed, and the size of such
vectors is n⌈log k⌉ bits.
To manage credits, the protocol of Figure 7 needs some modification. In particular, to implement
the send condition SC_ct, the boolean variable processing_broadcast becomes an integer one
(initialized to zero) and lines S1 and S2 should be replaced with the following lines:
wait (processing_broadcast < ct);
processing_broadcast := processing_broadcast + 1; V T_m := V T_i;
and line S6 becomes:
processing_broadcast := processing_broadcast − 1; % event f.ack(m) %
Finally, for the delivery condition, line R1 uses the same predicate DC(m), now evaluated with
CP^h_i = ct for each h ∈ {1, …, n}.
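As a small sketch, the changed sender-side bookkeeping and the widened delivery test can be written as follows (p is assumed to carry the same fields as the PakProcess sketch of Section 4.3, and the network methods remain hypothetical placeholders).

def try_bcast(p, m, network, ct):
    # the boolean processing_broadcast becomes an integer credit counter (initially 0)
    assert p.processing_broadcast < ct                  # new S1: wait(processing_broadcast < ct)
    p.processing_broadcast += 1                         # new S2, first half
    VTm = list(p.VT)                                    # new S2, second half
    network.send_to_all(p.i, m, VTm)                    # S3 unchanged
    # new S6, executed when m becomes fully acknowledged:
    #   p.processing_broadcast -= 1

def DC_credit(p, VTm, ct):
    # delivery condition with CP^h_i = ct: the band of "missing" differences grows to 1..ct (mod k);
    # k must be large enough for it not to collide with the concurrent band (the paper takes k >= 2*ct)
    return all((VTm[h] - p.VT[h]) % p.k not in range(1, ct + 1) for h in range(p.n))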
Figure 9: Impossibility of a causal violation with more than ct consecutive messages.
A remark on memory requirements. Up to now we have considered the causal window width
as a function of the credit of the sender. Thus, the protocol delays all messages arrived too early
at a process (the maximum number of pending messages is ct(n − 1)). If the buffer of the receiver
has not enough space, it overflows, dropping incoming messages. This situation can be mastered by
associating a credit with the receiver as well. Let us define for a process P_i a receiver credit wr,
with 1 ≤ wr ≤ ct, which bounds the part of the causal window it is willing to buffer. An arriving
message m whose V T_m[j] falls inside this receiver window will be stored, delayed, delivered and
acknowledged by P_i. If its V T_m[j] falls outside it (but still inside the sender window determined by ct), it will be
discarded by P_i without sending the ack message. Managing a credit associated with the receiver
then requires that each receiver has mechanisms to remove message duplication and that each sender
has a timer which triggers retransmission of messages if no ack is received within a deadline. A
discussion about the use of the previous mechanisms to support causal windows can be found in [3].
5.2 A PAK/NAK Broadcast Protocol
The protocol of Section 4.3 can easily be adapted to a solution using a PAK/NAK acknowledgment
scheme, reducing the number of synchronizations among messages due to the send condition compared to
a pure PAK one. We assume that a process P_i can send a sequence of ct messages without being
acknowledged, and then an ack message is required from the other processes after the receipt of the
ct-th message sent by P_i. A process is allowed to send the (ct + 1)-st message only when the ct-th
message has been fully acknowledged. In this case, the send condition becomes:
if x is a multiple of ct, then f.ack(m^x) ≺_i bcast(m^{x+1}); otherwise no synchronization is required.
As for the protocol using credits, the width of the causal windows is 2ct. A violation of causal
ordering will always fall in the current range of the causal window, since a message m sent by P_i
can create a causality violation in process P_k with at most ct consecutive messages sent by P_j. The
acks required after every ct-th message avoid a causality violation involving more than
ct consecutive messages. In particular, Figure 9 shows that if a message m, sent by P_j, created a
causality violation in process P_k with more than ct consecutive messages sent by P_i, then an absurdity
would follow (depicted by thick arrows).
Using the same argument as in Section 5.1, no more than ct messages can be concurrent to any
message of the computation, due to the FIFO property of channels. To implement a PAK/NAK protocol,
the delivery condition is the same as that of the protocol of Section 5.1, and the protocol of Figure
7 needs the following modifications. Lines S1 and S2 should be replaced with the following ones:
wait (¬processing_broadcast);
if the message being sent is the ct-th since the last fully acknowledged one then processing_broadcast := TRUE; fi; V T_m := V T_i;
and line S6 becomes:
processing_broadcast := FALSE;
6 Conclusion
In this paper a PAK causal broadcasting protocol based on causal windows has been proposed.
A causal window actually represents the range of variation of the vector time counters in the delivery
condition of a causal ordering protocol. The protocol allows a modulo k implementation of vector
times when considering k greater than the width of any causal window. This has been achieved by
exploiting the causal information implicitly carried by ack messages.
Compared with protocols that do not use control messages [5, 9, 18], the cost we pay is a small
computational overhead and the presence of local synchronizations between messages, sent by the
same process, due to the send condition (which reduces the potential concurrency of the protocol).
To reduce the number of local synchronizations, we have discussed two variations of the protocol.
In both variations the reduction of the number of local synchronizations is paid for with wider
causal windows. The first variation allows a process to transmit a certain number of successive
messages before receiving the corresponding acknowledgements (credit). This solution only potentially
reduces the number of local synchronizations. However, if the credit is appropriately chosen,
broadcast of very few messages will be prohibited because acks will be received before the credit
is exhausted. At the same time, if the credit is not too high, the difference between sending n
integers as a vector clock and n⌈log k⌉ bits as a vector clock will be significant. So a proper choice
of the credit value will lead to overhead reduction and an insignificant loss of concurrency. The second
variation employs a positive/negative method, i.e., it requires a local synchronization between
a message m^x and the successive message, sent by the same process, only if x is a multiple of a
predefined parameter. This solution reduces the number of local synchronizations and the message
traffic generated by the protocol. Compared to the first variation, this seems to be well suited for
high-latency networks.
In the paper we also showed how the notion of causal window is related to the one of sliding
windows used for FIFO flow control, a local synchronization due to the send condition is strictly
connected to the concept of stability of a message in a group of processes and how this protocol
can be adapted to avoid buffer overflow.
The interested reader can find a causal broadcasting protocol based on causal windows well suited
for unreliable networks in [3]. The description includes the additional data structures and mechanisms
a process has to be endowed with in order to avoid loss of messages and message duplications.
Acknowledgments
The author would like to thank Ken Birman, Bruno Ciciani, Roy Friedman, Achour Mostefaoui,
Michel Raynal, Ravi Prakash, Mukesh Singhal and Robbert Van Renesse for comments and many
useful conversations on the work described herein. The author also thanks the anonymous referees
for their detailed comments and suggestions that improved the content of the paper.
--R
"Causal Memory: Definitions, Implementation, and Programming"
"Transis; a Communication Subsystem for High Availability"
"A Positive Acknowledgement Protocol for Causal Broadcasting"
"The Hierarchical Daisy Architecture for Causal Delivery"
"Causal Deliveries of Messages with Real-Time Data in Unreliable Networks"
"Efficient \Delta-causal Broadcasting"
"A Note on Reliable Full-Duplex Transmission Over Half-Duplex Links"
"Reliable Communication in the Presence of Failures"
"Lightweight Causal Order and Atomic Group Multicast"
"Reliable Distributed Computing with the ISIS Toolkit"
"A Protocol for Packet Network Interconnection"
"Logical Time in Distributed Computing Systems"
"A Survey of Light-Weight Protocols for High-Speed Networks"
"Time, Clocks and the Ordering of Events in a Distributed System"
"Virtual Time and Global States of Distributed Systems"
"AnAdaptive Causal Ordering Algorithm Suited to Mobile Computing Environments"
"Networks and Distributed Computation"
"The Causal Ordering Abstraction and a Simple Way to Implement It"
"Securing Causal Relationship in Distributed Systems"
"Logically Instantaneous Message-Passing in Asynchronous Distributed Systems"
"A New Algorithm Implementing Causal Ordering"
"A Data Transfer Protocol"
"Computer Networks"
--TR
--CTR
Roberto Baldoni, Response to Comment on "A Positive Acknowledgment Protocol for Causal Broadcasting", IEEE Transactions on Computers, v.53 n.10, p.1358, October 2004
Giuseppe Anastasi , Alberto Bartoli , Giacomo Giannini, On Causal Broadcasting with Positive Acknowledgments and Bounded-Length Counters, IEEE Transactions on Computers, v.53 n.10, p.1355-1358, October 2004 | sliding windows;causal broadcasting;happened-before relation;distributed systems;group communication;asynchrony;vector times |
298300 | Algebraic and Geometric Tools to Compute Projective and Permutation Invariants. | AbstractThis paper studies the computation of projective invariants in pairs of images from uncalibrated cameras and presents a detailed study of the projective and permutation invariants for configurations of points and/or lines. Two basic computational approaches are given, one algebraic and one geometric. In each case, invariants are computed in projective space or directly from image measurements. Finally, we develop combinations of those projective invariants which are insensitive to permutations of the geometric primitives of each of the basic configurations. | Introduction
Various visual or visually-guided robotics tasks may be carried out using only a
projective representation which show the importance of projective informations
at different steps in the perception-action cycle. We can mention here the obstacle
detection and avoidance [14], goal position prediction for visual servoing
or 3D object tracking [13]. More recently it has been shown, both theoretically
and experimentally, that under certain conditions an image sequence taken
with an uncalibrated camera can provide 3-D Euclidean structure as well. The
latter paradigm consists in recovering projective structure first and then upgrading
it into Euclidean structure [16,4]. Additionally, we believe that computing
structure without explicit camera calibration is more robust than using calibration
because we need not make any (possibly incorrect) assumptions about the
Euclidean geometry (remembering that calibration is itself often erroneous).
All these show the importance of the projective geometry as well in computer
vision than in robotics, and the various applications show that projective informations
can be useful at different steps in the perception-action cycle. Still the
study of every geometry is based on the study of properties which are invariant
under the corresponding group of transformations, the projective geometry is
characterized by the projective invariants.
This paper is dedicated to the study of the various configurations of points
and/or lines in 3D space; it gives algebraic and geometric methods to compute
projective invariants in the space and/or directly from image measurements. In
the 3D case, we will suppose that we have an arbitrary three-dimensional projective
representation of the object obtained by explicit projective reconstruction.
In the image case, we will suppose that the only information we have is the image
measurements and the epipolar geometry of the views and we will compute
three-dimensional invariants without any explicit reconstruction.
First we show that arbitrary configurations of points and/or lines can be decomposed
into minimal sub-configurations with invariants, and these invariants
characterize the the original configuration. This means that it is sufficient to
study only these minimal configurations (six points, four points and a line, three
points and two lines, and four lines). For each configuration we show how to compute
invariants in 3D projective space and in the images, using both algebraic
and geometric approaches.
As these invariants generally depend on the order of the points and lines, we
will also look for features which are both projective and permutation invariants.
Projective Invariants
Definition 1. Suppose that p is a vector of parameters characterizing a geometric
configuration and T is a group of linear transformations acting on it as y′ = ρ T y,
where y and y′ are vectors of homogeneous coordinates and ρ is an arbitrary non-zero scale factor.
A function I(p) is invariant under the action of the group T if I(p′) = I(p), where I(p′)
is the value of I after the transformation T.
If I_1, …, I_n are n invariants, any f(p) = f(I_1, …, I_n) is also an
invariant. So if we have several invariants, it is possible that not all of them are
functionally independent. The maximum number of independent invariants for
a configuration is given by the following proposition [5,6]:
Proposition 2. If S is the space parameterizing a given geometric configuration
(for example six points, four lines) and T is a group of linear transformations,
the number of functionally independent invariants of a configuration p of S under
the transformations of T is:
dim(S) − dim(T) + min_{p∈S}(dim(T_p)),
where T_p is the isotropy sub-group of the configuration p, defined as
T_p = {T ∈ T : T(p) = ρ p}.
Generally, min_{p∈S}(dim(T_p)) = 0, but certain types of configurations have
non-trivial isotropy sub-groups. For example, for Euclidean transformations,
the distance between two 3D points is an invariant even though dim(S) − dim(T) = 6 − 6 = 0.
Consequently, min_{p∈S}(dim(T_p)) ≠ 0. Indeed, the sub-group of
rotations about the axis defined by the two points leaves both of the points fixed.
The most commonly studied transformation groups in computer vision are
the Euclidean, affine and projective transformations. Since we want to work with
weakly calibrated cameras, we will use projective transformations. As projective
transformations of 3D space have 15 parameters, points 3
and lines 4, using Proposition 2 we can easily see that we need at least six points, four points
and a line, two points and three lines, three points and two lines or four lines to
produce some invariants. We will say that these configurations are minimal.
Taking a non-minimal configuration of points and lines, we can decompose it
into several minimal configurations. It is easy to show that the invariants of the
sub configurations characterize the invariants of the original configuration. This
means that we only need to be able to compute invariants for minimal configu-
rations. For example, consider a configuration of seven points denoted M_{i, i=1..7}.
From Proposition 2 there are 3 × 7 − 15 = 6 independent invariants. To obtain a
set of six independent invariants characterizing the configuration, it is sufficient to
compute the three independent invariants λ_{i, i=1..3} of the configuration M_{i, i=1..6}
and the three independent invariants λ′_{i, i=1..3} of the configuration M_{i, i=2..7}.
We only discuss configurations of points and lines, not planes. Invariants of
configurations of planes and/or lines can be computed in the same way as those
of points and/or lines, by working in the dual space [12,2]. For the same reasons,
we do not need to consider in detail configurations of two points and three lines.
Indeed, these configurations define six planes which correspond to configurations
of six points in the dual space [2]. The other four minimal configurations we will
study in detail are: six points, four points and a line , three points and two lines
and four lines.
1.1 Projective Invariants Using Algebraic Approach
Consider eight points in space, represented by their homogeneous coordinates M_{k, k=1..8}, and
compute the following ratio of determinants:
where k̄ denotes the value σ(k) for an arbitrary permutation σ of {1, …, 8}.
The invariant I can also be computed from a pair of images, using only image
measurements and the fundamental matrix, by means of the Grassmann-Cayley (also
called double) algebra, as below [1,3]:
where the quantities are built from the image points α_i, β_i and the fundamental matrix F, and (β ⊥ …)
stands for the expression sign(β ⊥ …).
Note that if we change the bases in the images, so that α̃ and β̃
are the image coordinates in the new bases, the corresponding fundamental
matrix changes accordingly, and the quantities above
are independent of the bases chosen.
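The exact expression of equation (2) is not reproduced above, but the general recipe is a ratio of 4 × 4 determinants arranged so that the unknown scale of each homogeneous point (and the determinant of the projective transformation) cancels. The Python sketch below shows one such ratio for illustration; the particular bracket combination is an illustrative choice, not claimed to be that of equation (2).

import numpy as np

def bracket(X, i, j, k, l):
    """[i j k l]: determinant of four homogeneous 3D points (rows of X, 0-based indices)."""
    return np.linalg.det(np.stack([X[i], X[j], X[k], X[l]]))

def det_ratio_invariant(X):
    # each point index occurs equally often in numerator and denominator, and both contain
    # the same number of brackets, so per-point scales and det(T) cancel under X -> scale * X T^T
    return (bracket(X, 0, 1, 2, 4) * bracket(X, 0, 1, 3, 5)) / \
           (bracket(X, 0, 1, 2, 5) * bracket(X, 0, 1, 3, 4))

# quick check of projective invariance on random data
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))
T = rng.standard_normal((4, 4))
scales = rng.uniform(0.5, 2.0, size=(6, 1))
X2 = scales * (X @ T.T)                     # projectively transformed, arbitrarily rescaled points
assert np.allclose(det_ratio_invariant(X), det_ratio_invariant(X2))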
1.2 Six Point Configurations
Now consider a configuration of six points A i;i=1::6 in 3D projective space. From
proposition 2, there are 3 independent invariants for this configuration. Using
(2) and (3) we can deduce three such invariants, I_1, I_2 and I_3, each a ratio of 4 × 4 determinants
built from the six points.
To show that they are independent, change coordinates so that A_{i, i=1..5} becomes
a standard basis, and denote the homogeneous coordinates of A_6 in this basis by (x, y, z, t).
Computing I_{j, j=1..3} in this basis, we obtain three simple rational expressions in these
coordinates. These invariants are clearly independent.
Alternatively, one can also take a geometric approach to compute six point
invariants. The basic idea is to construct cross ratios using the geometry of
the configuration. In this case we will give two different methods that compute
independent projective invariants.
The first method constructs a pencil of planes using the six points. Taking
two of the points, for example A_1 and A_2, the four planes defined by A_1, A_2 and
A_{k, k=3..6} belong to the pencil of planes through the line A_1A_2. Their cross ratio
is an invariant of the configuration. Taking other pairs of points for the axis of
the pencil gives further cross ratios, which are rational functions of the I_{j, j=1..3}.
The second method [5] consists in constructing six coplanar points from the
six general ones and computing invariants in the projective plane. For example,
take the plane A_1A_2A_3 and cut it by the three lines A_4A_5, A_4A_6 and
A_5A_6, obtaining the intersections M_1, M_2 and M_3, coplanar with A_{i, i=1..3}. Five
coplanar points, for example A_1, A_2, A_3, M_1, M_2, give two cross ratios.
Any other set of five, for example A_1, A_2, A_3, M_1, M_3, gives two further cross ratios, but
only three of the four cross ratios are independent; indeed, one relation holds among them.
The resulting cross ratios λ_{i, i=1..3} are rational functions of the
I_{j, j=1..3}.
So we have several methods of computing geometric invariants in 3D space.
To do this, we need an arbitrary projective reconstruction of the points. However,
the invariants can be also be computed directly from the images by using the
fact that the cross ratio is preserved under perspective projections.
First consider the case of the pencil of planes. We know that the cross ratio
of four planes of a pencil is equal to the cross ratio of their four points of
intersection with an arbitrary transversal line. So if we are able to find the image
of the intersection point of a line and a plane we can also compute the required
cross ratio in the image. The coplanar point method uses the same principle,
computing the intersection of a line and a plane from image measurements.
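Once four image points on a common transversal line are available, the cross ratio itself is elementary to compute. The sketch below uses 3 × 3 determinants with an auxiliary point q off the line (a standard formulation, assumed here); the q-dependence cancels for collinear points.

import numpy as np

def cross_ratio(p1, p2, p3, p4, q=None):
    """Cross ratio of four collinear homogeneous image points p1..p4 (3-vectors)."""
    if q is None:
        q = np.array([0.123, 0.456, 1.0])      # arbitrary auxiliary point, assumed off the line
    d = lambda a, b: np.linalg.det(np.stack([a, b, q]))
    return (d(p1, p3) * d(p2, p4)) / (d(p1, p4) * d(p2, p3))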
We want to compute the images of the intersection of the line A_3A_4 and the plane A_1A_2A_5, knowing
only the projections of the five points in two images, a_{i, i=1..5} and
a′_{i, i=1..5}, and the fundamental matrix between the two images. Take a point p in
the first image and a point p′ in the second one. These points are the projections
we are looking for if and only if they verify two constraints involving the a_i, a′_i and the fundamental
matrix F (details in [2]).
The first equation is linear. The second one is quadratic, but it is shown in [2]
that it can be decomposed into two linear components, one of which is irrelevant
(being zero only when p belongs to the epipolar line of one of the a′_i);
finally we obtain two linear
equations which give the solution for p, and p′ follows as the intersection of the epipolar line Fp with
the image line a′_3 × a′_4.
Another way to compute the intersection of a plane and a line from image
measurements is to use the homography induced between the images by the
plane. Let us denote the homography induced by the 3D plane Π by H. The
image n′ of the intersection N of a line L and the plane Π is then obtained by transferring the
image of L from the first view with H and intersecting the result with the image of L in the second view.
To compute the homography H of the plane A_1A_2A_5, we use the fact that
ρ_j a′_j = H a_j for the image points of the plane, and that H maps the first epipole e onto the second one.
Denoting the coefficients of the matrix H by h_ij, each correspondence j ∈ {1, 2, 5, e} yields two linear
equations in the h_ij (5).
As H is defined only up to a scale factor, it has only eight independent degrees
of freedom and can be computed from the eight equations of (5).
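A minimal Python sketch of this estimation is the standard direct linear transform over four correspondences (here they would be a_1, a_2, a_5 and the epipoles; the function name is an illustrative choice).

import numpy as np

def homography_from_4(points1, points2):
    """H with rho_j * x'_j = H x_j for four homogeneous correspondences (x_j, x'_j)."""
    A = []
    for x, xp in zip(points1, points2):
        # each correspondence gives two independent linear equations in the h_ij
        A.append(np.concatenate([np.zeros(3), -xp[2] * x, xp[1] * x]))
        A.append(np.concatenate([xp[2] * x, np.zeros(3), -xp[0] * x]))
    _, _, Vt = np.linalg.svd(np.asarray(A))     # 8 x 9 system, solved up to scale
    return Vt[-1].reshape(3, 3)

With H in hand, the image of L in the first view can be transferred to the second view and intersected with the image of L there to obtain n′.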
1.3 Configurations of One Line and Four Points
Denoting the four points by A_{i, i=1..4} and the line by L, we obtain (cf. (2) and
(3)) an invariant I, which can be written either with the help of two arbitrary distinct points on
the line L in space or, in the images, with the projections of A_{i, i=1..4} and L in the two views.
Using the geometric approach, the four planes LA_{i, i=1..4} belong to a pencil,
so they define an invariant cross ratio λ. Another approach, given by Gros in [5],
is to consider the four planes defined by the four possible triplets of points and
cut the line L with them. This gives another cross ratio λ′ for the configuration.
Of course, we only have one independent invariant, so there are relations between
I, λ and λ′: each is a rational function of the others.
The method of computing λ and λ′ directly from the images is basically the
same as for the configuration of six points.
1.4 Configurations of Three Points and Two Lines
The following two ratios are independent invariants for configurations of three
points A i;i=1::3 and two lines L k;k=1;2 in 3D:
I
I
These invariants can also be obtained as follows. Cut the two lines with the plane
defined by the three points, to give two points R_1 and R_2. The five coplanar points A_1, A_2, A_3, R_1, R_2 define a pair of invariants, for example a pair of cross ratios of pencils of four lines in that plane.
Another way to compute a pair of independent invariants for this configuration
is to consider the three planes L 1 A i;i=1::3 and the plane A 1 A 2 A 3 and cut
them by L_2. This gives a cross ratio λ'_1; changing the roles of L_1 and L_2 gives another cross ratio λ'_2. The cross ratios λ_i, i = 1, 2, and λ'_i, i = 1, 2, can
be computed directly in the images in the same way as the cross ratios of the
configuration of six points (finding images of intersections of lines and planes).
1.5 Configurations of Four Lines
Consider four lines L_i, i = 1..4, in 3D projective space. This configuration has 4 × 4 = 16 parameters, so naively we might expect to have 16 − 15 = 1 independent invariant.
However, the configuration has a 1D isotropy subgroup, so there are actually
two independent invariants [6,5,2].
The existence of two independent cross ratios can also be shown geometri-
cally. Assume first that the lines are in general position, in the sense that they
are skew and none of them can be expressed as a linear combination of three
others. Consider the first three lines L i;i=1::3 . As no two of them are coplanar,
there is a one parameter family K of lines meeting all three of them. K sweeps
out a quadric surface in space, ruled by the members of K and also by a complementary
family of generators L, to which L 1 , L 2 and L 3 belong [11,6]. Members
of K are mutually skew, and similarly for L, but each member of K intersects each
member of L exactly once. Another property of generators is that all members
of each family can be expressed as a linear combination of any three of them.
By our independence assumptions, the fourth line L 4 does not belong to either
family (if L 4 belonged to L it would be a linear combination of the L i;i=1::3
and if L 4 was in K it would cut each of the lines L i;i=1::3 ). Hence, L 4 cuts the
surface in two real or imaginary points, A 4 and B 4 (these may be identical if
L 4 is tangent to the surface). For each point of the surface there is a unique line
of each family passing through it. Denote the lines of K passing through A_4 and B_4 by T_1 and T_2, respectively. Let these lines cut the L_i, i = 1..3, in A_i, i = 1..3, and B_i, i = 1..3, respectively. In this way, we obtain two cross ratios: λ_1, the cross ratio of A_1, A_2, A_3, A_4 on T_1, and λ_2, the cross ratio of B_1, B_2, B_3, B_4 on T_2.
Before continuing, consider the various degenerate cases.
- Provided that the first three lines are mutually skew, they still define the two
families K and L. The fourth line can then be degenerate in the following
ways:
• If L_4 intersects one (L_i) or two (L_i, L_j) of the three lines, we have the same solution as in the general case, with the intersection points playing the role of A_4 and B_4, respectively.
• If L_4 ∈ K, i.e., L_4 intersects all three lines L_i, i = 1..3, then the two transversals T_1 and T_2 are equal to L_4 and the cross ratios are no longer defined.
• If L_4 ∈ L (a linear combination of the lines L_i, i = 1..3), every line belonging to K cuts the four lines L_i, i = 1..4, and there is only one characteristic cross ratio.
- Given two pairs of coplanar lines, say (L_1, L_2) and (L_3, L_4), there are two transversals intersecting all four lines. One is the line joining the intersection of L_1 and L_2 to the intersection of L_3 and L_4; in this case the cross ratio equals 1. The other is the line of intersection of the planes defined by (L_1, L_2) and (L_3, L_4), defining a second cross ratio [5].
- If three of the lines are coplanar, say L i;i=1::3 , the plane \Pi containing them
intersects L_4 in a point A_4. If the lines L_i, i = 1..3, belong to a pencil with center B, the four lines L_i, i = 1..3, and BA_4 define one cross ratio. Otherwise, we
have no invariants.
- Finally, if all of the lines lie in the same plane Π, the possible cases are that
they belong to a pencil and define one cross ratio or they do not belong to
a pencil and there are no invariants.
As in the preceding cases, we can also compute the invariants using the algebraic approach, which yields two invariants I_1 and I_2 expressed in terms of the projections l_i and l'_i of the L_i, i = 1..4, in the two images. The invariants I_i, i = 1, 2, and the cross ratios λ_j, j = 1, 2, are related: given I_1 and I_2, the values λ_1 and λ_2 are the solutions of a quadratic equation with real coefficients. As I_1 and I_2 are real, λ_1 and λ_2 are either real or a complex conjugate pair.
Computation of λ_1 and λ_2 in 3D: First, we suppose that we know a projective representation of the lines in some projective basis. They can be represented by their Plücker coordinates L^(i). We look for lines T that intersect the L_i, i = 1..4. If T is represented by its Plücker coordinates p = (p_1, ..., p_6), it represents a transversal line if and only if p satisfies the Plücker quadratic constraint together with the four incidence conditions with the L^(i). This is a homogeneous system in the six unknowns p_i, i = 1..6, containing four linear equations and one quadratic one. It always has two solutions (p^k), k = 1, 2, defined up to a scale factor, which may be real or complex conjugates. These lines cut the L_i, i = 1..4, in the points A_i and B_i, which give the two cross ratios λ_1 and λ_2.
Computation of λ_1 and λ_2 from a Pair of Views: Define F to be the fundamental matrix and l_i (resp. l'_i) to be the projections of the lines L_i, i = 1..4, into the two images. Consider a line l in the first image, with intersections x_i = l × l_i with the lines l_i, i = 1..4. If the intersections x'_i of the corresponding epipolar lines Fx_i with the l'_i, i = 1..4, in the second image all lie on some line l', then l and l' are the images of a transversal line L. This condition can be written as a system of equations in the unknowns defining l.
This system of equations is difficult to deal with directly. To simplify the
computation we make a change of projective basis in each image so that the fundamental matrix and the two pencils of epipolar lines take a particularly simple form. Using this parameterization, and after simplifications [2], the cross ratios λ_1 and λ_2, which may be real or complex conjugates, are obtained as the significant 1 solutions of a reduced system of equations.
2 Projective and Permutation Invariants
The invariants of the previous sections depend on the ordering of the underlying
point and line primitives. However, if we want to compare two configurations,
we often do not know the correct relative ordering of the primitives. If we work
with order dependent invariants, we must compare all the possible permutations
of the elements. For example, if I k;k=1::3 are the three invariants of a set of
points A i;i=1::6 and we would like to know if another set B i;i=1::6 is projectively
equivalent, we must compute the triplet of invariants for each of the 6! = 720
1 The other two solutions are degenerate for our parameterization and can be discarded without loss of generality.
possible permutations of the B i . This is a very costly operation, which can be
avoided if we look for permutation-invariant invariants. 3
Definition 3. A function I is a projective and permutation invariant for a configuration formed by k elements if and only if its value is unchanged under every projective transformation T and every permutation σ ∈ S_k of the elements.
The projective invariants we computed were all expressed as cross ratios, which
are not permutation invariants. As the group of permutations S 4 has 24 elements,
there are potentially 24 possible cross ratios of four collinear points. However, not
all of these are distinct, because certain permutations (the identity and the three products of two disjoint transpositions) have the same effect on the cross ratio. We know that all permutations can be written as products of transpositions of adjoining elements. For S_4 there are three such transpositions, σ_1 = (1 2), σ_2 = (2 3), σ_3 = (3 4). The effect of σ_1 and σ_3 on a cross ratio λ is F_1(λ) = 1/λ, and that of σ_2 is F_2(λ) = 1 − λ. If we compute all the possible combinations of these functions, we obtain the following six values:
λ, 1/λ, 1 − λ, 1/(1 − λ), (λ − 1)/λ, λ/(λ − 1).
To obtain permutation invariant cross ratios of four elements, it is sufficient to
take an arbitrary symmetric function of the λ_i, i = 1..6. The simplest two symmetric functions, the sum and the product of the six values, are not interesting because they give constant values (3 and 1, respectively). Taking the second-order basic symmetric functions gives unbounded functions. Further invariants can be generated by taking combinations of the former ones, such as the functions proposed in [8], one bounded between 2 and 2.8 and another bounded between 2 and 14. These functions are characterized by the fact that they have the same value for each of the six arguments λ_i, i = 1..6.
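A small numerical sketch may make the six-value orbit and the idea of a symmetric, permutation-invariant combination concrete. It is purely illustrative: the symmetric function used below (a sum of squares) is an arbitrary choice of ours, not one of the bounded functions of [8].

```python
from itertools import permutations

def cross_ratio(a, b, c, d):
    """Cross ratio of four collinear points given by scalar coordinates."""
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def orbit(lam):
    """The six values a cross ratio can take under permutations of the points."""
    return [lam, 1 / lam, 1 - lam, 1 / (1 - lam),
            (lam - 1) / lam, lam / (lam - 1)]

def sym_invariant(lam):
    """A symmetric (hence permutation-invariant) combination of the six values.

    The sum and product of the orbit are constant (3 and 1), so a non-trivial
    choice such as the sum of squares is used here purely for illustration.
    """
    return sum(v * v for v in orbit(lam))

# Sanity check: the invariant is the same for every ordering of the points.
pts = [0.0, 1.0, 3.0, 7.0]
values = {round(sym_invariant(cross_ratio(*p)), 9) for p in permutations(pts)}
assert len(values) == 1
```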
Let us see what happens in the case of six points in projective space. We
saw that the invariants of the points A i;i=1::6 given by (4) correspond to cross
ratios of pencils of planes. Define I_{σ(1)σ(2)σ(3)σ(4)σ(5)σ(6)} to be the cross ratio of the pencil of planes with axis A_{σ(1)}A_{σ(2)} through the points A_{σ(3)}, ..., A_{σ(6)}. It is easy to see that I_1, I_2 and I_3 are themselves values of this form (the orderings 231456 and 132456 appear among them).
We are interested in the effect of permutations of the points on the invariant I_{123456}. The first remark is that if we interchange the first two elements the value is the same: I_{213456} = I_{123456}, which is obvious from a geometric viewpoint
as A 1 A 2 A k and A 2 A 1 A k represent the same plane.
If we fix the first two points and permute the last four, we permute the four
planes giving the cross ratio. So if we apply one of the above symmetric functions,
for example J , the results will be invariant. Using the following proposition
proved in [2] we can find all the possible values for J(I -(1)-(2)-(3)-(4)-(5)-(6) ).
3 We generalize here the work of Meer et al. [8] and Morin [9] on the invariants of five
coplanar points to 3D configurations of points and lines.
Proposition 4. An arbitrary permutation σ of S_6 can be written as a product of four permutations σ = σ_{1k} σ_{2n} σ_{1m} π, where σ_{1k} is either the identity or the transposition of the first two elements, σ_{1m} and σ_{2n} bring the elements m and n into the first two positions, and π is a permutation of the last four elements.
Denote the value of I_{σ(1)σ(2)σ(3)σ(4)σ(5)σ(6)} by σ(I_{123456}); we then have σ(I_{123456}) = σ_{1k}(σ_{2n}(σ_{1m}(π(I_{123456})))). Applying (for example) J, we obtain J(σ(I_{123456})) = J(σ_{1k}(σ_{2n}(σ_{1m}(I_{123456}))))
because π changes only the order of the four planes giving the cross ratio. But σ_{1k} has no effect on the value of the cross ratio: σ_{11} is the identity and σ_{12} interchanges the first two elements but leaves the planes invariant, so J(σ(I_{123456})) = J(σ_{2n}(σ_{1m}(I_{123456}))).
Consequently, we obtain only C_6^2 = 15 different possible values instead of 6! = 720. These will be denoted by J_mn = J(σ_{2n}(σ_{1m}(I_{123456}))). It is easy to see that J_mn depends only on the unordered pair {m, n}. The geometric meaning of this is that we fix the pair of points giving the axis of the pencil of planes.
The 15 values can be written explicitly as functions of I_1, I_2 and I_3.
Note that the values of I_1, I_2, I_3 depend on the order of the points, but any other ordering gives the same 15 values, possibly in a different order. For this reason, we
sort the 15 values after computing them to give a vector that characterizes the
configuration independently of the order of underlying points.
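The construction can also be checked numerically by brute force. The sketch below is illustrative only: it reuses the ad-hoc symmetric function from the previous sketch rather than the closed-form expressions in I_1, I_2, I_3, and it computes the 15 values J_mn for six 3D points directly from cross ratios of pencils of planes before sorting them.

```python
import numpy as np
from itertools import combinations

def plane(p, q, r):
    """Homogeneous coordinates of the plane through three 3D points."""
    M = np.stack([np.append(p, 1.0), np.append(q, 1.0), np.append(r, 1.0)])
    return np.linalg.svd(M)[2][-1]          # null vector of the 3x4 system

def pencil_cross_ratio(axis, others, pts):
    """Cross ratio of the four planes through `axis` and the four other points.

    It equals the cross ratio of the intersection points of the planes with an
    arbitrary transversal line, parameterized here as P + t*Q.
    """
    P = np.append(np.random.randn(3), 1.0)  # random transversal line P + t Q
    Q = np.append(np.random.randn(3), 1.0)
    ts = []
    for k in others:
        pi = plane(pts[axis[0]], pts[axis[1]], pts[k])
        ts.append(-pi @ P / (pi @ Q))       # plane . (P + t Q) = 0
    a, b, c, d = ts
    return ((a - c) * (b - d)) / ((a - d) * (b - c))

def J(lam):
    """Symmetric function of the six permutation-related cross-ratio values."""
    vals = [lam, 1/lam, 1-lam, 1/(1-lam), (lam-1)/lam, lam/(lam-1)]
    return sum(v * v for v in vals)

def signature(pts):
    """Sorted list of the 15 J values, one per unordered pair of axis points."""
    out = []
    for m, n in combinations(range(6), 2):
        rest = [k for k in range(6) if k not in (m, n)]
        out.append(J(pencil_cross_ratio((m, n), rest, pts)))
    return sorted(out)

pts = [np.random.randn(3) for _ in range(6)]
print(signature(pts))
```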
Now, consider the configuration of a line and four unordered points. This case is very simple because we have a single cross ratio, so it is enough to take the value of the function J(I), where I is the invariant given by (6). Similarly, to compute permutation invariants of four lines in space, we apply J to the pair of cross ratios λ_1 and λ_2 given in Section 1.5. It turns out that the order of the two sets of intersection points of the four lines with the two transversals changes in the same way under permutations of the lines. When λ_1 and λ_2 are complex conjugates, J(λ_1) and J(λ_2) are also complex conjugates, but if we want to work with real invariants it is sufficient to take a real symmetric combination of them, such as J(λ_1) + J(λ_2) and J(λ_1)J(λ_2).
The fourth basic configuration is the set of three points and two lines. We saw
that we can compute invariants for this configuration using the invariants of five
coplanar points. In this case it is sufficient to apply the results of [8], which show
that if I_1 and I_2 are a pair of invariants for five coplanar points, we can take the sorted list of the values of J over the corresponding cross ratios to obtain
a permutation invariant. But this means that we make mixed permutations of
points and lines, which is unnecessary because when we want to compare two
such configurations we have no trouble distinguishing lines from points. When
we have five points in the plane A 1 A 2 A 3 we will require that the center points
of the considered pencils of lines be chosen among A i;i=1::3 and not among the
intersection points R j;j=1;2 (see section 1.4). In this way we have the cross ratios
of the required form. If we interchange A_1 and A_2, we obtain I_2; interchanging A_2 and A_3 gives 1/I_1; and interchanging A_1 and A_3 we obtain I_1/I_2. The permutation of the two lines (i.e., R_1 and R_2) likewise yields a simple rational expression in I_1 and I_2.
Hence, applying J, we find that the sorted list of the resulting values is a projective and permutation invariant for this configuration.
2.1 Some Ideas Concerning More Complex Configurations
Consider a configuration of N > 6 points. 4 From Proposition 2, this configuration has 3N − 15 = 3(N − 5) independent invariants. Denote the N points by A_i, i = 1..N, and the 3(N − 5) invariants of this configuration by λ_i^k, where for each i the three invariants λ_i^k, k = 1..3, are the invariants of a six-point sub-configuration containing the point A_{5+i}. As the invariants λ_i^k depend on the order of the points, we try to generalize
the above approach to the N-point case. First note that the invariants are cross ratios of four planes of a pencil, I_{ijklmn}, with {i, j, k, l, m, n} ⊆ {1, ..., N}. Consequently, for a given set {i, j, k, l, m, n} there are 15 different values J(I_{ijklmn}). But there are C_N^6 subsets of six points, so we have 15 C_N^6 different values, which can be computed from the 3(N − 5) independent invariants λ_i^k.
We show how to obtain these values in the case of 7 points, but the approach is the same for N > 7. Denote the independent invariants of this configuration by λ_1^k and λ_2^k, k = 1..3. The 15 C_7^6 = 105 values can be obtained by computing, for each of the seven subsets of six points {i, j, k, l, m, n} ⊂ {1, ..., 7}, the three invariants of that six-point sub-configuration as functions of the λ_j^k, and then applying (10). Finally, we sort the resulting list of 105 elements.
Conclusion
We have presented a detailed study of the projective and permutation invariants
of configurations of points, lines and planes. These invariants can be used as a
basis for indexing, describing, modeling and recognizing polyhedral objects from
perspective images.
4 We will consider only the case of points. The other cases can be handled similarly,
but the approach is more complex and will be part of later work.
The invariants of complex configurations can be computed from those of
minimal configurations into which they can be decomposed. So it was sufficient
to treat only the case of minimal configurations. Also, in projective space there is
a duality between points and planes that preserves cross ratios, so configurations
of planes or planes and lines can be reduced to point and point-line ones.
For each configuration we gave several methods to compute the invariants.
There are basically two approaches - algebraic and geometric - and in each
case we showed the relations between the resulting invariants. We analyzed the
computation of these invariants in 3D space assuming that the points and lines
had already been reconstructed in an arbitrary projective basis, and we also gave
methods to compute them directly from correspondences in a pair of images. In
the second case the only information needed is the matched projections of the
points and lines and the fundamental matrix.
Finally, for each basic configuration we also gave permutation and projective
invariants, and suggested ways to treat permutation invariance for more
complicated configurations.
Acknowledgment
. We would like to thank Bill Triggs for his careful reading
of the draft of this manuscript.
--R
Multiple image invariants using the double algebra.
Modélisation projective des objets tridimensionnels en vision par ordinateur
Computing three-dimensional projective invariants from a pair of images using the Grassmann-Cayley algebra
From projective to euclidean reconstruction.
3D projective invariants from two images.
Invariants of lines in space.
Visually guided object grasping.
Correspondence of coplanar features through p 2
Quelques contributions des invariants projectifs à la vision par ordinateur. Thèse de doctorat
A comparison of projective reconstruction methods for pairs of views.
Algebraic Projective Geometry.
Géométries affine
Geometric invariants for verification in 3-d object tracking
Applications of non-metric vision to some visual guided tasks
A robust technique for matching two uncalibrated images through the recovery of the unknown epipolar geometry.
Metric calibration of a stereo rig.
--TR
--CTR
Françoise Dibos, Patrizio Frosini, Denis Pasquignon, The Use of Size Functions for Comparison of Shapes Through Differential Invariants, Journal of Mathematical Imaging and Vision, v.21 n.2, p.107-118, September 2004
Yihong Wu , Zhanyi Hu, Camera Calibration and Direct Reconstruction from Plane with Brackets, Journal of Mathematical Imaging and Vision, v.24 n.3, p.279-293, May 2006
Arnold W. M. Smeulders , Marcel Worring , Simone Santini , Amarnath Gupta , Ramesh Jain, Content-Based Image Retrieval at the End of the Early Years, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.12, p.1349-1380, December 2000 | indexation;uncalibrated stereo;grassmann-cayley algebra;projective reconstruction;projective and permutation invariants;cross ratio |
298503 | L-Printable Sets. | A language is L-printable if there is a logspace algorithm which, on input 1n, prints all members in the language of length n. Following the work of Allender and Rubinstein [SIAM J. Comput., 17 (1988), pp. 1193--1202] on P-printable sets, we present some simple properties of the L-printable sets. This definition of "L-printable" is robust and allows us to give alternate characterizations of the L-printable sets in terms of tally sets and Kolmogorov complexity. In addition, we show that a regular or context-free language is L-printable if and only if it is sparse, and we investigate the relationship between L-printable sets, L-rankable sets (i.e., sets A having a logspace algorithm that, on input x, outputs the number of elements of A that precede x in the standard lexicographic ordering of strings), and the sparse sets in L. We prove that under reasonable complexity-theoretic assumptions, these three classes of sets are all different. We also show that the class of sets of small generalized Kolmogorov space complexity is exactly the class of sets that are L-isomorphic to tally languages. | Introduction
. What is an easy set? Typically, complexity theorists view
easy sets as those with easy membership tests. An even stronger requirement might
be that there is an easy algorithm to print all the elements of a given length. These
"printable" sets are easy enough that we can efficiently retrieve all of the information
we might need about them.
Hartmanis and Yesha first defined P-printable sets in 1984 [HY84]. A set A is
P-printable if there is a polynomial-time algorithm that on input 1 n outputs all of
the elements of A of length n. Any P-printable set must lie in P and be sparse, i.e.,
the number of strings of each length is bounded by a fixed polynomial of that length.
Allender and Rubinstein [AR88] give an in-depth analysis of the complexity of the
P-printable sets.
Once P-printability has been defined, it is natural to consider the analogous notion
of logspace-printability. Since it is not known whether or not an obvious question
to ask is: do the L-printable sets behave differently than the P-printable sets? In
this paper, we are able to answer this question in the affirmative, at least under plausible
complexity theoretic assumptions. Jenner and Kirsig [JK89] define L-printability
as the logspace computable version of P-printability. Because L-printability implies
P-printability, every L-printable set must be sparse and lie in L. In this paper we give
the first in-depth analysis of the complexity of L-printable sets. (Jenner and Kirsig
focused only one chapter on printability, and most of their printability results concern
NL-printable sets.)
y Department of Computer Science, University of Chicago, Chicago, IL 60637. The work of this
author was supported in part by NSF grant CCR-9253582.
z Department of Computer Science, University of Kentucky, Lexington, KY 40506-0046. The
work of these authors was supported in part by NSF grant CCR-9315354.
x DIMACS Center, Rutgers University, Piscataway, NJ 08855. The work of this author was
supported by NSF cooperative agreement CCR-9119999 and a grant from the New Jersey Commission
on Science and Technology.
- The work of this author was supported in part by a University of Kentucky Presidential
Fellowship.
Whenever a new class of sets is analyzed, it is natural to wonder about the
structure of those sets. Hence, we examine the regular and context-free L-printable
sets. Using characterizations of the sparse regular and context-free languages, we show
in x 4 that every sparse regular or context-free language is L-printable. (Although
the regular sets are a special case of the context-free sets, we include the results for
the regular languages because our characterization of the sparse regular languages is
simple and intuitive.)
We might expect many of the properties of P-printable sets to have logspace
analogues, and, in fact, this is the case. In x 5 we show that L-printable sets (like
their polynomial-time counterparts) are closely related to tally sets in L, and to sets
in L with low generalized space-bounded Kolmogorov complexity.
A set is said to have small generalized Kolmogorov complexity if all of its strings
are highly compressible and easily restorable. Generalized time-bounded Kolmogorov
complexity and generalized space-bounded Kolmogorov complexity are introduced in
[Har83] and [Sip83]. Several researchers [Rub86, BB86, HH88] show that P-printable
sets are exactly the sets in P with small generalized time-bounded Kolmogorov com-
plexity. [AR88] show that a set has small generalized time-bounded Kolmogorov
complexity if and only if it is P-isomorphic to a tally set. Using similar techniques,
we show in x 5 that the L-printable sets are exactly the sets in L with small generalized
space-bounded Kolmogorov complexity. We also prove that a set has small
generalized space-bounded Kolmogorov complexity if and only if it is L-isomorphic to
a tally set.
In x 6, we note that sets that can be ranked in logspace (i.e., given a string x,
a logspace algorithm can determine the number of elements in the set - x) seem
different from the L-printable sets. For sparse sets, P-rankability is equivalent to P-
printability. We show a somewhat surprising result in x 6, namely that the sparse
L-rankable sets and the L-printable sets are the same if and only if there are no tally
sets in P \Gamma L if and only if
Are all sparse sets in L either L-printable or L-rankable? Allender and Rubinstein
[AR88] show that every sparse set in P is P-printable if and only if there are no
sparse sets in FewP \Gamma P. In x 6, we similarly show a stronger collapse: every sparse
set in L is L-printable if and only if every sparse set in L is L-rankable if and only if
there are no sparse sets in FewP \Gamma L if and only if
Unlike L-printable sets, L-rankable sets may have exponential density. Blum
(see [GS91]) shows that every set in P is P-rankable if and only if every #P function
is computable in polynomial time. In x 6, we also show that every set in L is L-rankable
if and only if every #P function is computable in logarithmic space.
2. Definitions. We assume a basic familiarity with Turing machines and Turing
machine complexity. For more information on complexity theory, we suggest
either [BDG88] or [Pap94]. We also assume a familiarity with regular languages and
expressions and context-free languages as found in [Mar91]. We denote the characteristic
function of A by -A . We use the standard lexicographic ordering on strings and
let |w| be the length of the string w. (Recall that w ≤_lex v iff |w| < |v|, or |w| = |v| and, at the position i of the leftmost bit where w and v differ, w_i = 0 and v_i = 1.) The alphabet is Σ = {0, 1}, and all strings are elements of Σ*. We denote the complement of A by Ā.
The class P is deterministic polynomial time, and L is deterministic logarithmic
space; remember that in calculating space complexity, the machine is assumed to have
separate tapes for input, computation, and output. The space restriction applies only
to the work tape. It is known that L ' P, but it is not known whether the two classes
are equal. The class E is deterministic time 2 O(n) , and LinearSPACE is deterministic
space O(n).
Definition 2.1. A set A is in the class PP if there is a polynomial time nondeterministic
Turing machine that, on input x, accepts with more than half its computations
A. A function f is in #P if there is a polynomial time nondeterministic
Turing machine M such that for all x, f(x) is the number of accepting computations
of M(x).
Allender[All86] defined the class FewP. FewE is defined analogously.
Definition 2.2. [All86] A set A is in the class FewP if there is a polynomial
time nondeterministic Turing machine M and a polynomial p such that on all inputs
accepts x on at most p(jxj) paths. A set A is in the class FewE if there is an
exponential time nondeterministic Turing machine M and a constant c such that on
all inputs x, M accepts x on at most 2 cn paths. (Note that this is small compared
to the double exponential number of paths of an exponential-time nondeterministic
Turing machine.)
Definition 2.3. A set S is sparse if there is some polynomial p(n) such that for
all n, the number of strings in S of length n is bounded by p(n) (i.e., jS =n j - p(n) ).
set T over alphabet \Sigma is a tally set if T ' foeg , for any character oe 2 \Sigma.
The work here describes certain enumeration properties of sparse sets in L. There
are two notions of enumeration that are considered: rankability and printability.
Definition 2.4. If C is a complexity class, then a set A is C-printable if and
only if there is a function computable in C that, on any input of length n, outputs all
the strings of length n in A.
Note that P-printable sets are necessarily in P, and are sparse, since all of the
strings of length n must be printed in time polynomial in n. Since every logspace
computable function is also computable in polynomial time, L-printable sets are also
P-printable, and thus are also sparse.
Definition 2.5. If C is a complexity class, then a set, A, is C-rankable if and
only if there is a function r A computable in C such that r A
(In other words, r A (x) gives the lexicographic rank of x in A.) The function r A is
called the ranking function for A.
Note that P-rankable sets are necessarily in P but are not necessarily sparse.
Furthermore, a set is P-rankable if and only if its complement is P-rankable. Finally,
note that any P-printable set is P-rankable.
Definition 2.6. If C is a complexity class, then two sets, A and B, are C-isomorphic if and only if there are total functions f and g computable in C that are both one-one and onto, such that g = f^{-1}, f is a reduction from A to B, and g is a reduction from B to A.
In order for two sets to be P-isomorphic, their density functions must be close to
each other: if one set is sparse and the other is not, then any one-one reduction from
the sparse set to the dense set must have super-polynomial growth rate. By the same
argument, if one has a super-polynomial gap, the other must have a similar gap.
A lexicographic (or order-preserving) isomorphism from A to B is, informally, a
bijection that maps the ith element of A to the ith element of B and maps the ith
element of A to the ith element of B. Note that in the definition of similar densities,
the isomorphisms need not be computable in any particular complexity class. This
merely provides the necessary condition on densities in order for the two sets to be
P-isomorphic or L-isomorphic.
Definition 2.7. Two sets, A and B, have similar densities if the lexicographic
isomorphisms from A to B and from B to A are polynomial size bounded.
The notion of printability, or of ranking on sparse sets, can be considered a
form of compression. Another approach to compression is found in the study of
Kolmogorov complexity; a string is said to have "low information content" if it it
has low Kolmogorov complexity. We are interested in the space-bounded Kolmogorov
complexity class defined by Hartmanis [Har83].
Definition 2.8. Let M v be a Turing machine, and let f and s be functions on
the natural numbers. Then we define
and M v uses s(n) space)g:
Following the notation of [AR88], we refer to y as the compressed string, f(n) as
the compression, and s(n) as the restoration space. Hartmanis [Har83] shows that
there exists a universal machine M u such that for all v, there exists a constant c such
that KS v [f(n); s(n)] ' KS u the subscript and let
3. Basic Results. We begin by formalizing some observations from the previous
section.
Observation 3.1. If A is L-printable, then A has polynomially bounded density,
i.e., A is sparse.
This follows immediately from the fact that logspace computable functions are P-time
computable (i.e., L-printability implies P-printability), and from the observations
on P-printable sets.
Proposition 3.2 ([JK89]). If A is L-printable, then A 2 L.
Proof. To decide x 2 A, simulate the L-printing function for A with input 1 jxj .
As each y 2 A is "printed," compare it, bit by bit, with x. If accept. Because
the comparisons can be done using O(1) space, and the L-printing function takes
O(log jxj) space, this is a logspace procedure.
Proposition 3.3. If A is L-rankable, then A 2 L.
Proof. Note that the function x \Gamma 1 (the lexicographic predecessor of x) can be
computed (though not written) in space logarithmic in jxj. Since logspace computable
functions are closed under composition, r A can be computed in logspace, as
can r A
Proposition 3.4. If A is L-printable, then A is L-rankable.
Proof. To compute the rank of x, we print the strings of A up to jxj and count the
ones that are lexicographically smaller than x. Since A is sparse, by Observation 3.1,
we can store this counter in logspace.
We can now prove the following, first shown by [JK89] with a different proof.
Proposition 3.5 ([JK89]). If A is L-printable, then A is L-printable in lexicographically
increasing order.
Proof. To prove this, we use a variation on selection sort. Suppose the logspace
machine M L-prints A. Then we can construct another machine, N , to L-print A in
lexicographically increasing order. Note that it is possible to store an instantaneous
description of a logspace machine, i.e., the position of the input head, the state, the
contents of the worktape, and the character just output, in O(log jxj) space.
The basic idea is that we store, during the computation, enough information
to produce three strings: the most recently printed string (in the lexicographically
ordered printing), the current candidate for the next string to be printed, and the
current contender. We can certainly store three IDs for M in logspace. Each ID
describes the state of M immediately prior to printing the desired string.
In addition to storing the IDs, we must simulate M on these three computations
in parallel, so that we can compare the resulting strings bit by bit. If the contender
string is greater than the last string output (so it has not already been output) and
less than the candidate, it becomes the new candidate. Otherwise, the final ID of
the computation becomes the new contender. These simulated computations do not
produce output for N ; when the next string is found for N to print, its initial ID is
available, and the simulation is repeated, with output.
Using the same technique as in the previous proof, one can easily show the following
Proposition 3.6. If A is L-printable, and A - =log B, then B is L-printable as
well.
4. L-Printable Sets. We begin this section with a very simple example of a
class of L-printable sets.
Proposition 4.1 ([JK89]). The tally sets in L are L-printable.
Proof. On input of length n, decide whether 1 n 2 A. If so, print it.
One may ask, are all of the L-printable sets as trivial as Proposition 4.1? We
demonstrate in the following sections that every regular language or context-free language
that is sparse is also L-printable (see Theorem 4.5 and Corollary 4.14). We also
give an L-printable set that is neither regular nor context-free (see Proposition 4.15).
4.1. Sparse Regular Languages. We show that the sparse regular languages
are L-printable. In order to do so, we give some preliminary results about regular
expressions.
Definition 4.2 ([BEGO71]). Let r be a regular expression. We say r is unambiguous
if every string has at most one derivation from r.
Theorem 4.3 ([BEGO71]). For every regular language L, there exists an unambiguous
regular expression r such that
Proof. (Sketch) Represent L as the union of disjoint languages whose DFA's have
a unique final state. Using the standard union construction of an NFA from a DFA,
we get an NFA with the property that each string has a unique accepting path. Now,
using state elimination to construct a regular expression from this NFA, the unique
path for each string becomes a unique derivation from the regular expression.
We should note that even though removal of ambiguity from a regular expression
is, in general, PSPACE-complete [SH85], this does not concern us. Theorem 4.3
guarantees the existence of an unambiguous regular expression corresponding to every
regular language, that is sufficient for our needs.
We now define a restricted form of regular expression, that will generate precisely
the sparse regular languages. (Note that a similar, although more involved, characterization
was given in [SSYZ92]. They give characterizations for a variety of densities,
whereas we are only concerned with sparse sets.)
Definition 4.4. We define a static regular expression (SRE) on an alphabet \Sigma
inductively, as follows:
1. The empty expression is an SRE, and defines ;, the empty set.
2. If x 2 \Sigma or string ), then x is an SRE.
3. If s and t are SREs, then st, the concatenation of s and t, is an SRE.
4. If s and t are SREs, then s + t, the union of s and t, is an SRE.
5. If s is an SRE, then s is an SRE iff:
a) s does not contain a union of two SREs; and,
b) s does not contain any use of the operator.
Note the restriction of the operator in the above definition. I.e., can only be
applied to a string. This is the only difference between SREs and standard regular
expressions.
We can alternately define an SRE as a regular expression that is the sum of terms,
each of that is a concatenation of letters and starred strings.
Theorem 4.5. Let R be an unambiguous regular expression. Then L(R) is sparse
iff R is static.
Proof. We first prove two lemmas about "forbidden" subexpressions.
Lemma 4.6. Let ff, fi; S be non-empty regular expressions such that
and S is unambiguous. Then there is a constant k ? 0 such that, for infinitely many
k strings of length n.
Proof. Let such that u 2 L(ff) and v 2 L(fi). Let
S is unambiguous, there must be at least two strings of length k in L(S), namely u jvj
and v juj . So, for any length n such that there are at least 2
strings of length n in L(S).
Lemma 4.7. Let ff, fi; S be non-empty regular expressions such that S is unam-
biguous, where S is either of the form (ff fi) or of the form (fffi ) . Then, there is
a constant k such that, for infinitely many n,
k strings of length n.
Proof. Let such that u 2 L(ff) and v 2 L(fi). Suppose
jvj. If S is unambiguous, there are at least two distinct strings of length
k in L(S), namely, u jvj v and v juj+1 . So, for any length n such that
there are at least 2
k strings of length n in L(S).
The proof is very similar if unambiguous.
It is clear that unambiguity is necessary for both lemmas. For example, the
is not static, but L((a that is sparse.
Note that if R is the empty expression, the theorem is true, since R is static, and
which is certainly sparse. So, for the rest of the proof, we will assume that
R is non-empty.
To show one direction of Theorem 4.5, suppose R is not static. Then it contains
a subexpression that is either of the form (fl or of the form (fl 0 ff In
the first case, by a small modification to the proof of lemma 4.6, L(R) is not sparse.
In the second case, by a similar modification to the proof of lemma 4.7, L(R) cannot
be sparse.
Now, suppose R is static. If contains only the string x. If
is either a string of characters or a single character, L(R) can have
at most one string of any length.
are SREs. Let p r (n) and p s (n) bound the
number of strings in L(r) and L(s), respectively. Then there are at most p r (n)+p s (n)
strings of length n.
Finally, suppose are SREs. Let p r (n) and p s (n) bound
the number of strings in L(r) and L(s), respectively. Then, the number of strings of
length n is:
The degree of q is bounded by 1
on the complexity of R, L(R) is sparse.
Note that the second half of the proof does not use unambiguity. Hence, any
static regular expression generates a sparse regular language.
Theorem 4.8. Let R be an SRE. Then L(R) is L-printable.
Proof. Basically, we divide R into terms that are either starred expressions or
non-starred expressions. For example, we would divide 0(1
into three parts: 0(1 internally L-print each
term independently, and check to see if the strings generated have the correct length.
In our example, to print strings of length 9, we might generate 0110, 11, and 0011,
respectively, and check that the combined string is in fact 9 characters long. (In this
case, the string is too long, and is not printed.)
Let k be the number of stars that appear in R. Partition R into at most 2k
subexpressions, k with stars, and the others containing no stars.
The machine to L-print L(R) has two types of counters. For each starred subex-
pression, the machine counts how many times that subexpression has been used. For
a string of length n, no starred subexpression can be used more than n times. Each
counter for a starred subexpression only needs to count up to n.
Each non-starred subexpression generates only a constant number of strings.
Thus, up to k +1 additional counters, each with a constant bound, are needed. (Note
that the production may intermix the two types of counters, for instance if (x
occurs.)
The machine uses two passes for each potential string. First, the machine generates
a current string, counting its length. If the string is the correct length, it
regenerates the string and prints it out. Otherwise, it increments the set of counters,
and continues. In this way, all strings of lengths - n are generated, and all strings of
length n are printed.
Lastly, we need to argue that this procedure can be done by a logspace machine.
Each of the at most 2k must count up to n (for n sufficiently large, say,
larger than jRj). Thus, the counting can be done in log n space. In addition, the
actual production of a string requires an additional counter, to store a loop variable.
The rest of the computation can be handled in O(1) space, using the states of the
machine. Thus, L(R) is L-printable.
Note that this L-printing algorithm may generate some strings in L(R) more than
once. To get a non-redundant L-printer, simply modify the program to output the
strings in lexicographic order, as in Proposition 3.5, or use an unambiguous SRE for
L(R).
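To make the counter-based enumeration concrete, the following is a purely illustrative sketch in ordinary (non-logspace) code. It mirrors the structure of the proof of Theorem 4.8; the representation of a static regular expression as a sum of terms built from literal strings and starred strings is our own.

```python
from itertools import product

def strings_of_length(terms, n):
    """Enumerate the length-n strings of a static regular expression.

    The expression is a sum of `terms`; each term is a list of atoms, where an
    atom is either ('lit', w) for a fixed string w or ('star', w) for w*.
    Each starred atom is tried with every repetition count that fits into n,
    mirroring the counters used in the proof.
    """
    out = set()
    for term in terms:
        base = sum(len(w) for kind, w in term if kind == 'lit')
        stars = [w for kind, w in term if kind == 'star']
        for counts in product(range(n + 1), repeat=len(stars)):
            if base + sum(c * len(w) for c, w in zip(counts, stars)) != n:
                continue
            s, it = '', iter(counts)
            for kind, w in term:
                s += w if kind == 'lit' else w * next(it)
            out.add(s)
    return sorted(out)

# Example: 0(10)*1 + (011)*, a static regular expression.
expr = [[('lit', '0'), ('star', '10'), ('lit', '1')], [('star', '011')]]
print(strings_of_length(expr, 6))   # ['010101', '011011']
```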
Theorem 4.8 does not characterize the L-printable sets, as we see below.
Proposition 4.9. There exists a set S such that S is L-printable and not regular
Proof. The language L-printable (for any n, we print out
only if n is even), but not regular.
4.2. Sparse Context-Free Languages. Using the theory of bounded context-free
languages we can also show that every sparse context-free language is L-printable.
Definition 4.10. A set A is bounded if there exist strings w such that
Note the similarity between bounded languages and languages generated by SRE's.
Note also that every bounded language is sparse.
Ibarra and Ravikumar [IR86] prove the following.
Theorem 4.11 ([IR86]). If A is a context-free language then A is sparse if and
only if A is bounded.
Ginsburg [Gin66, p. 158] gives the following characterization of bounded context-free
languages.
Theorem 4.12 ([Gin66]). The class of bounded context-free languages is the
smallest class consisting of the finite sets and fulfilling the following properties.
1. If A and B are bounded context-free languages then A [ B is also a bounded
context-free language.
2. If A and B are bounded context-free languages then
is also a bounded context-free language.
3. If A is a bounded context-free language and x and y are fixed strings then the
following set is also a bounded context-free language.
ay
Corollary 4.13. Every bounded context-free language is L-printable.
Proof. Every finite set is L-printable. The L-printable sets are closed under the
three properties in Theorem 4.12.
Corollary 4.14. Every sparse context-free language is L-printable.
This completely characterizes the L-printable context-free languages. However
the sparse context-free languages do not characterize the L-printable languages.
Proposition 4.15. There exists an L-printable set S such that S is not context-free
Proof. The language is L-printable, but is not context-free.
5. L-Isomorphisms. It is easy to show that two P-printable sets, or P-rankable
sets, of similar densities are P-isomorphic. Since the usual proof relies on binary
search, it does not immediately extend to L-rankable sets. However, we are able to
exploit the sparseness of L-printable sets to show the following.
Theorem 5.1. If A and B are L-printable and have similar densities, then A
and B are L-isomorphic (i.e., A - =log B).
Proof. For each x, define y x to be the image of x in the lexicographic isomorphism
from A to B. Since A and B are L-printable, they are both sparse. Let p(n) be a
strictly increasing polynomial that bounds the densities of both sets. If
x is "close" to y x , in the sense that there are at most p(jxj) strings between them in
the lexicographic ordering. (Recall Definition 2.7.) In fact, for all x, jy x
Let r A (x) be the rank of x in A. If
A, then the rank of x in A is
Furthermore, is the unique element of B for which
this holds. Note that both r A (x) and r B (y x ) can be written in space O(log jxj). Thus,
to compute y x , we need to compute do so by maintaining
a variable d, that is initialized to r A (x). Counter c is initialized to 0. The following
loop is iterated until a counter, c, reaches p(jxj
1. L-print (in lexicographic order) the elements of B of length c; for each string
that is lexicographically smaller than decrement d;
2. increment c.
Note that, if d is written on the work tape, each bit of x \Gamma d can be computed
in logspace as needed, and the output of the L-printing function can be compared to
in a bit-by-bit manner.
If x 2 A, since the L-printing function outputs strings in lexicographic order,
computing y x is easy: compute r A (x), then "L-print" B internally, actually outputting
the r A (x) th string.
Without loss of generality, we can assume that the simulated L-printer for B
prints B in lexicographic order. Thus, as soon as the r A 1st element of B is
printed internally, the simulation switches to output mode.
The following is an overview of the logspace algorithm computing the desired
isomorphism.
1. Compute A(x).
2. Compute r A (x), and write it on a work tape.
3. If x 2 A, find the r A (x) th element of B, and output it.
4. If
find the unique string y
and output y x .
Using this theorem, we can now characterize the L-printable sets in terms of
isomorphisms to tally sets, and in terms of sets of low Kolmogorov space complexity.
Theorem 5.2. The following are equivalent:
1. S is L-printable.
2. S is L-isomorphic to some tally set in L.
3. There exists a constant k such that S ' KS[k log n; k log n] and S 2 L.
Although it is not known whether or not every sparse L-rankable set is L-isomor-
phic to a tally set (see Theorem 6.1), we can prove the following lemma, that will be
of use in the proof of Theorem 5.2.
Lemma 5.3. Let A be sparse and L-rankable. Then there exists a tally set T 2 L
such that A and T have similar density.
Proof. Let A -n denote the strings of length at most n in A. Let p(n) be an
everywhere positive monotonic increasing polynomial such that jA -n j - p(n) for all
n, and such that greater than the number of strings of length n in
A. Let r(x) be the ranking function of A. We define the following tally set:
To show that T 2 L, notice that of the tally strings 1 i ,
the largest n such that p(n \Gamma that n can be written in binary
in space O(log m).) Then compute d This difference is bounded by
p(n), and thus can be written in logspace. Finally, compute d
compare to d 1 . Accept iff d 1 - d 2 .
Finally, we show that T and A have similar density. Let f : A ! T be the
lexicographic isomorphism between T and A. Note that f maps strings of length n to
strings of length at most p(n), so f is polynomially bounded. Note that p is always
positive, which implies that f is length-increasing. must also be polynomially
bounded. Thus, T and A have similar density.
The following proof of Theorem 5.2 is very similar to the proof of the analogous
theorem in [AR88].
Proof. [1 be L-printable. Then it is sparse and L-rankable. Let T be
the tally set guaranteed by Lemma 5.3. By Proposition 4.1, T is L-printable. Thus,
T and S are L-printable, and T and S have similar density. So by Theorem 5.1,
[2 be L-isomorphic to a tally set T , and let f be the L-isomorphism
from S to T . Let x 2 S be a string of length n. Let logspace
computable, there exists a constant c such that r - n c , i.e., jrj - c log n. In order to
recover x from r, we only have to compute f Computing 0 r given r requires
log n space for one counter. Further, there exists a constant l such that computing
requires at most lc log n space, since r - n c . So, the total space needed to
compute x given r is less than or equal to log n+ lc log n - k log n for some k. Hence,
log n]. If T 2 L, then S 2 L, since S - =log T .
[3 log n] for some k, and S 2 L. On input 0 n , we
simulate M u for each string of length k log n. For a given string x, log n, we
first simulate M u (x) and check whether it completes in space k log n. If it does, we
recompute M u (x), this time checking whether the output is in S. If it is, we recompute
M u (x), and print out the result. The entire computation only needs O(log n) space,
so S is L-printable.
It was shown in [AR88] that a set has small generalized Kolmogorov complexity
if and only it is P-isomorphic to a tally set. (Note: this was improvement of the result
in [BB86], which showed that a set has small generalized Kolmogorov complexity if
and only if it is "semi-isomorphic" to a tally set.) Using a similar argument and Theorem
5.2 we can show an analogous result for sets with small generalized Kolmogorov
space complexity. First, we prove the following result.
Proposition 5.4. For all M v and k, KS v [k log n; k log n] is L-printable.
Proof. To L-print for length n, simulate M v on each string of length less than or
equal to k log n, and output the results.
Corollary 5.5. There exists a k such that A ' KS[k log n; k log n] if and only
if A is L-isomorphic to a tally set.
Proof. Suppose A is L-isomorphic to a tally set. Then, by the argument given in
the proof of [2 ) 3] in Theorem 5.2, A ' KS[k log n; k log n].
suppose A ' KS[k log n; k log n]. By Proposition 5.4 and Theorem 5.2,
KS[k log n; k log n] is L-isomorphic to a tally set in L via some L-isomorphism f . It
is clear that A is L-isomorphic to f(A). Since f(A) is a subset of a tally set, f(A)
must also be a tally set.
6. Printability, Rankability and Decision. In this section we examine the
relationship among L-printable sets, L-rankable sets and L-decidable sets. We show
that any collapse of these classes, even for sparse sets, is equivalent to some unlikely
complexity class collapse.
Theorem 6.1. The following are equivalent:
1. Every sparse L-rankable set is L-printable.
2. There are no tally sets in
Proof. [2 , 3] This equivalence follows from techniques similar to those of
Suppose A is a sparse L-rankable set. Note that A 2 L.
Let
ith bit of the jth string in A is 1g;
where
Note that hi; ji can be computed in space linear in jij + jjj. Since A is sparse, i and
are bounded by a polynomial in the length of the jth string. Hence, hi; ji can be
computed using logarithmic space with respect to the length of the jth string.
Given hi; ji, we can determine i and j in polynomial time, and we can find the
jth string of A by using binary search and the ranking function of A. Hence, T 2 P.
So, by assumption, T 2 L.
Next we give a method for printing A in logspace. Given a length n, we compute
(and store) the ranks of 0 n and 1 n in A. Let r start and r end be the ranks of 0 n and
the string with rank r start has length less than n. First,
we check to see if 0 n 2 A, and if so, print it. Then, for each j, r start
we output the jth string by computing and printing T (1 hi;ji ) for each bit i. This
procedure prints the strings of A of length n.
Note that since A is sparse, we can store r start and r end in O(log n) space. Since
store and increment the current value of i in log n space.
[1 be a tally set. Since the monotone circuit value problem
is P-complete (see [GHR95]), there exists a logspace-computable function f and a
nondecreasing polynomial p such that f(n) produces a circuit Cn with the following
properties.
1. Cn is monotone (i.e., Cn uses only AND and OR gates).
2. Cn has p(n) gates.
3. The only inputs to Cn are 0 and 1.
4. Cn outputs 1 iff 1 n is in T .
We can assume that the reduction orders the gates of Cn so that the value of
gate depends only on the constants 0 and 1 and the values of gates g j for
([GHR95]). Let xn be the string of length p(n) such that the ith bit of xn is the value
of gate g i .
Ng. Then A contains exactly one string of length p(n) for all
n, and no strings of any other lengths.
6.1.1. The set A is L-rankable.
Proof. To prove this claim, let w be any string. In logspace, we can find the
greatest n such that p(n) - jwj. If p(n) 6= jwj then w 62 A, and the rank of w is n.
Suppose xn is the only string of length p(n) in A, the rank of w is
Consider the ith bit of w as a potential value for gate g i in Cn . Let j be the
smallest value such that w j is not the value of g j . In order to find the value of a gate
i , we first use f(n) (our original reduction) to determine the inputs to g i . By the
time we consider the i th bit of w, we know that w is a correct encoding of all of the
gates g k such that k ! i, so we can use those bits of w as the values for the gates.
Thus, we can determine the value of g i and compare it to the ith bit of w. If they
differ, we are done. If they are the same, we continue with the next gate. We can
count up to p(n) in logspace, so this whole process needs only O(log p(n)) space to
compute.
Once j is found, there are three cases to consider.
1. If j doesn't exist then
2. If the jth bit of w is 0 then w ! xn .
3. If the jth bit of w is 1 then w ? xn .
These follow since the ith bit of xn matches the ith bit of w for all
Thus A is L-rankable and, by assumption, L-printable.
So, to determine if 1 n is in T , L-print A for length p(n) to get xn . The bit of xn
that encodes the output gate of Cn is 1 iff 1 every step of this algorithm
is computable in logspace, T 2 L.
This completes the proof of Theorem 6.1.
Corollary 6.2. There exist two non-L-isomorphic L-rankable sets of the same
density, unless there are no tally sets in
Proof. Consider the sets T and A from the second part of the proof of Theorem 6.1.
The set has the same density as A. By Proposition 4.1, B is
L-printable. If A and B were L-isomorphic then by Proposition 3.6, A would also be
L-printable and T would be in L.
One may wonder whether every sparse set in L is L-printable or at least L-
rankable. We show that either case would lead to the unlikely collapse of FewP and
L. Recall that FewP consists of the languages in NP accepted by nondeterministic
polynomial-time Turing machines with at most a polynomial number of accepting
paths.
Fix a nondeterministic Turing machine M and an input x. Let p specify an
accepting path of M(x) represented as a list of configurations of each computation
step along that path. Note that in logarithmic space we can verify whether p is such
an accepting computation since if one configuration follows another only a constant
number of bits of the configuration change.
We can assume without loss of generality that all paths have the same length and
that no accepting path consists of all zero or all ones.
Define the set PM by
is an accepting path of M on xg:
From the above discussion we have the following proposition that we will use in the
proofs of Theorems 6.6 and 6.7.
Proposition 6.3. For any nondeterministic machine M , PM is in L.
Allender and Rubinstein [AR88] showed the following about P-printable sets.
Theorem 6.4 ([AR88]). Every sparse set in P is P-printable if and only if there
are no sparse sets in FewP \Gamma P.
Allender [All86] also relates this question to inverting functions.
Definition 6.5. A function f is strongly L-invertible on a set S if there exists
a logspace computable function g such that for every x 2 S, g(x) prints out all of the
strings y such that
We extend the techniques of Allender [All86] and Allender and Rubinstein [AR88]
to show the following.
Theorem 6.6. The following are equivalent.
1. There are no sparse sets in FewP \Gamma L.
2. Every sparse set in L is L-printable.
3. Every sparse set in L is L-rankable.
4. Every L-computable, polynomial-to-one, length-preserving function is strongly
L-invertible on f1g .
5.
Proof. [1 A be a sparse set in L. Then A is in P. By (1) we have that
there are no sparse sets in FewP \Gamma P. By Theorem 6.4, A is P-printable.
Consider the following set B.
ith bit of the jth element of A of length n is bg
Since A is P-printable then B is in P. By (1) (as B is sparse and in P ' FewP), we
have B is in L. Then A is L-printable by reading the bits off from B.
[2 ) 3] Follows immediately from Proposition 3.4.
[3 A be a sparse set in FewP accepted by a nondeterministic machine
M with computation paths of length q(n) for inputs of length n.
Consider the set PM defined as above. Note that PM is sparse since for any length
accepts a polynomial number of strings with at most a polynomial number
of accepting paths each. Also by Proposition 6.3 we have PM in L.
By (3) we have that PM is L-rankable. We can then determine in logarithmic
space whether M(x) accepts (and thus x is in A) by checking whether
r PM (x#0
[2 f be a L-computable, polynomial-to-one, length-preserving function.
Consider g. Since S is in L, S is L-printable.
A be a sparse set in L. Define x is in A and x otherwise.
If g is a strong L-inverse of f on 1 then g(1 n ) will print out the strings of length n of
A and 1 n . We can then print out the strings of length n in logspace by printing the
strings output by g(1 n ), except we print 1 n only if 1 n is in A.
[1 , 5] In [RRW94], Rao et al. show that there are no sparse sets in FewP \Gamma P if
and only if straightforward modification of their proofs is sufficient to
show that there are no sparse sets in FewP \Gamma L if and only if
Unlike L-printability, L-rankability does not imply sparseness. One may ask
whether every set computable in logarithmic space may be rankable. We show this
equivalent to the extremely unlikely collapse of PP and L.
Theorem 6.7. The following are equivalent.
1. Every #P function is computable in logarithmic space.
2.
3. Every set in L is L-rankable.
Our proof uses ideas from Blum (see [GS91]), who shows that every set in P is
P-rankable if and only if every #P function is computable in polynomial time. Note
that Hemachandra and Rudich [HR90] proved results similar to Blum's.
Proof. [1 A is in PP then there is a #P function f such that x is in A iff
the high-order bit of f(x) is one.
[2 implies that
. Thus we have and we can compute every bit of a #P function in
logarithmic space.
[1 A be in L. Consider the nondeterministic polynomial-time machine
M that on input x guesses a y - lex x and accepts if y is in A. The number of
accepting paths of M(x) is a #P function equal to r A (x).
[3 f be a #P function. Let M be a nondeterministic polynomial-time
machine such that f(x) is the number of accepting computations of M(x). Let q(n) be
the polynomial-sized bound on the length of the computation paths of M . Consider
PM as defined above. By Proposition 6.3 we have that PM is in L so by (3) PM is
L-rankable. We then can compute f(x) in logarithmic space by noticing
7. Conclusions. The class of L-printable sets has many properties analogous to
its polynomial-time counterpart. For example, even without the ability to do binary
searching, one can show two L-printable sets of the same density are isomorphic.
However, some properties do not appear to carry over: it is very unlikely that every
sparse L-rankable set is L-printable.
Despite the strict computational limits on L-printable, this class still has some
bite: every tally set in L, every sparse regular and context-free language and every
L-computable set of low space-bounded Kolmogorov complexity strings is L-printable.
Acknowledgments. The authors want to thank David Mix Barrington for a
counter-example to a conjecture about sparse regular sets, Alan Selman for suggesting
the tally set characterization of L-printable sets and Corollary 5.5, Chris Lusena for
proofreading, and Amy Levy, John Rogers and Duke Whang for helpful discussions.
The simple proof sketch of Theorem 4.3 was provided by an anonymous referee. The
last equivalence of Theorem 6.6 was suggested by another anonymous referee. The
authors would like to thank both referees for many helpful suggestions and comments.
--R
The complexity of sparse sets in P
Sets with small generalized Kolmogorov complexity
Structural Complexity I
Ambiguity in graphs and expressions
Tally languages and complexity classes
The Mathematical Theory of Context-Free Languages
Limits to Parallel Computation: P-Completeness Theory
Compression and ranking
Generalized Kolmogorov complexity and the structure of feasible com- putations
On sparse oracles separating feasible complexity classes
Computation times of NP sets of different densities
ambiguity and other decision problems for acceptors and transducers
Alternierung und Logarithmischer Platz
Introduction to Languages and the Theory of Computation
Computational Complexity
Upward separation for FewP and related classes
A note on sets with small generalized Kolmogorov complexity
A complexity theoretic approach to randomness
Characterizing regular languages with polynomial densities
--TR
--CTR
Allender, NL-printable sets and nondeterministic Kolmogorov complexity, Theoretical Computer Science, v.355 n.2, p.127-138, 11 April 2006 | context-free languages;sparse sets;kolmogorov complexity;ranking;l-isomorphisms;computational complexity;regular languages;logspace |
298506 | The Inverse Satisfiability Problem. | We study the complexity of telling whether a set of bit-vectors represents the set of all satisfying truth assignments of a Boolean expression of a certain type. We show that the problem is coNP-complete when the expression is required to be in conjunctive normal form with three literals per clause (3CNF). We also prove a dichotomy theorem analogous to the classical one by Schaefer, stating that, unless P=NP, the problem can be solved in polynomial time if and only if the clauses allowed are all Horn, or all anti-Horn, or all 2CNF, or all equivalent to equations modulo two. | Introduction
Logic deals with logical formulae, and more particularly with
the syntax and the semantics of such formulae, as well as with the interplay between
these two aspects [CK90]. In the domain of Boolean logic, for example, a Boolean
formula φ may come in a variety of syntactic classes (conjunctive normal form (CNF),
its subclasses 3CNF, 2CNF, Horn, etc.), and its semantics is captured by its models
or satisfying truth assignments, that is, the set of all truth assignments that
satisfy the formula (see Figure 1 for an example).
Going back and forth between these two representations of a formula is therefore
of interest. One direction has been studied extensively from the standpoint of computational
complexity: going from φ to its set of models. In particular, telling whether this set
is nonempty is the famous satisfiability problem (SAT), which is known to be NP-complete in its
generality and in its special case 3SAT, among others, and polynomial-time solvable in
its special cases Horn, 2SAT, and exclusive-or [Co71, Sc78, Pa94]. All in all, this
direction is a much-studied computational problem. In this paper we study, and in a
certain sense completely settle, the complexity of the inverse problem, that is, going
from the set of models back to φ. That is, for all the syntactic classes mentioned above, we identify
the complexity of telling, given a set M of models, whether there is a formula φ
in the class (3SAT, Horn, etc.) whose set of models is exactly M. We call this problem inverse
satisfiability.
satisfiability.
Besides its fundamental nature, there are many more factors that make inverse
satisfiability a most interesting problem. A major motivation comes from AI (in fact,
what we call here the inverse satisfiability problem is implicit in much of the recent AI
literature [Ca93, DP92, KKS95, KKS93, KPS93]). A set of models such as those in
Figure
1(b) can be seen as a state of knowledge. That is, it may mean that at present,
for all we know, the state of our three-variable world can be in any one of the three
states indicated. In this context, formula OE is some kind of knowledge representation.
In AI there are many sophisticated competing methods for knowledge representation
Received by the editors April 24, 1995; accepted for publication (in revised form) November
20, 1996; published electronically June 15, 1998. This work was partially supported by the Esprit
Project ALCOM II and the Greek Ministry of Research (\PiENE\Delta program 91E\Delta648).
http://www.siam.org/journals/sicomp/28-1/28511.html
y Department of Mathematics, University of Patras, Patras, Greece ([email protected]).
z Department of Computer Science, Athens University of Economics and Business, Athens, Greece
([email protected]).
Fig. 1. A Boolean formula in 3CNF (a), and the corresponding set of models (b).
(Boolean logic is perhaps the most primitive; see [GN87, Le86, Mc80, Mo84, Re80,
SK90]), and it is important to understand the expressibility of each. This is a form
of the inverse satisfiability problem.
The inverse satisfiability problem was also proposed in [DP92] as a form of discovering
structure in data. For example, establishing that a complex binary relation
is the set of models of a simple formula may indeed uncover the true structure and
nature of the heretofore meaningless table. [DP92] only address this problem in certain
fairly straightforward cases. The problem of learning a formula [AFP92] can be
seen as a generalization of the inverse satisfiability problem.
A recent trend in AI is to approximate complex formulae by simple ones, such as
Horn formulae [SK91, KPS93, GPS94]. Quantifying the quality and computational
feasibility of such approximations also involves understanding the inverse satisfiability
problem.
The basic computational problem we study is this: given a set of models M, is
there a CNF formula φ with at most three literals per clause such that M is precisely the set of models of φ?
We call this problem INVERSE 3SAT. Our first result is that INVERSE 3SAT is
coNP-complete (Theorem 1).
Note. INVERSE 3SAT, as well as all other problems we consider in this paper,
can be solved in polynomial time if the given m × n table M has m = 2^{Ω(n)}, that is,
if there are exponentially many models in M. The interesting cases of the problem
are therefore those with m = 2^{o(n)}.
There are three well-known tractable cases of SAT: 2SAT (all clauses have two
literals), HORNSAT (all clauses are Horn, with at most one positive literal each,
and its symmetric case of anti-Horn formulae, in which all clauses have at most one
negative literal), and XORSAT (the clauses are equations modulo two). Schaefer's
elegant dichotomy theorem [Sc78] states that, unless P=NP, in a certain sense these
are precisely the only tractable cases of SAT. Interestingly, the inverse problem for
these three cases happens to also be tractable! That is, we can tell in polynomial
time if a set of models is the set of models of a Horn (or anti-Horn) formula, of a
2CNF formula, or of an exclusive-or formula (interestingly, the latter two results were
in fact pointed out by Schaefer himself [Sc78], while the first, left open in [Sc78], is
from [DP92, KPS93]). The question comes to mind: are there other tractable cases of
the inverse problem? Our Theorem 2 answers this in the negative; rather surprisingly,
a strong dichotomy theorem similar to Schaefer's holds for the inverse satisfiability
problem as well, in that the problem is coNP-complete for all syntactic classes of CNF
formulae except for the cases of Horn (and anti-Horn), 2CNF, and exclusive-or. The
proof of our dichotomy theorem draws from both that of Theorem 1 and Schaefer's
proof, and in fact strengthens Schaefer's main expressibility result (Theorem 3.0 in
[Sc78]).
2. Definitions. Most of the nonstandard terminology used in this paper comes
from [Sc78].
Let x_1, ..., x_n be a set of Boolean variables. A literal is a variable or its
negation. A model is a vector in {0,1}^n, intuitively a truth assignment to the Boolean
variables. We denote by ∨ and ∧ the logical or and and, respectively. We also extend
this notation to bitwise operations between models. If t is a model, we denote by t_i
the constant (i.e., 0 or 1) in the ith position of t.
A k-place logical relation is a subset of f0; 1g k (k integer). We use the notation [OE],
where OE is a Boolean formula, to denote the relation defined by OE when the variables
are taken in lexicographic order. Let R be a logical relation. Call R Horn if it is
logically equivalent to a conjunction of clauses, each with at most one positive literal.
We call it anti-Horn if it is equivalent to a conjunction of clauses with at most one
negative literal. We call it 2CNF if it is equivalent to a 2CNF expression. Finally, we
call it affine if it is the solution of a system of equations in the two-element field.
Let S = {R_1, ..., R_m} be a set of Boolean relations. An S-clause (of arity k)
is an expression of the form R(a_1, ..., a_k), where R is a k-ary relation in S and the
a_i's are either Boolean literals or constants (0 or 1). Given a truth assignment, we
consider an S-clause to be true if the combination of the constants, if any, and the
values assigned to the variables forms a tuple in R. Define an S-formula to be any
conjunction of S-clauses defined by the relations in S.
The generalized satisfiability problem is the problem of deciding whether a given
S-formula is satisfiable. Schaefer's dichotomy theorem [Sc78] states that the satisfiability
of an S-formula can be decided in polynomial time in each of the following
cases: (a) all relations in S are Horn, (b) all relations in S are anti-Horn, (c) all
relations in S are 2CNF, (d) all relations in S are affine. In all other cases the
problem is NP-complete. That is, Schaefer's result totally characterizes the complexity
of the CNF satisfiability problem where in addition, the clauses are allowed to
be arbitrary relations of bounded arity. It is interesting to note that several restricted
forms of SAT such as ONE-IN-THREE 3SAT, NOT-ALL-EQUAL 3SAT
etc., all follow as special cases of generalized satisfiability (see [GJ79, Pa94]). To
make this point more clear, notice that the problem ONE-IN-THREE 3SAT can be
considered as a set of four 3-ary relations {R_1, R_2, R_3, R_4}. The first relation is
{(1,0,0), (0,1,0), (0,0,1)} and corresponds to the S-clause R_1(x, y, z) with no negated literal; the second
relation is {(0,0,0), (1,1,0), (1,0,1)} and corresponds to the S-clauses with one
negated literal, e.g., R_2(x, y, z) for a clause on ¬x, y, z, and so on.
For any Boolean formula φ, we refer to its set of models, that is, the set of all
truth assignments satisfying it. We say that a set of models M is a 3CNF set (kCNF in general) if there is a formula φ in 3CNF
(respectively, kCNF) whose set of models is exactly M. Notice that for any model set M we can
construct a kCNF formula that has M as its model set, but in general this may
require extra existentially quantified variables.
Based on the above we define the INVERSE SAT problem for a set of relations
S as follows.
Given a set M ' f0; 1g n , is there a conjunction of S-clauses over n variables that
has M as its set of models?
Our main result states that if the relations of S fall into one of the four cases above,
the INVERSE SAT problem is also polynomial; otherwise it is coNP-complete.
Notice that we have excluded S from being part of the instance since we want to
emphasize that INVERSE SAT is actually a collection of infinitely many subproblems.
This means that all relations of S are of constant arity. Otherwise, relations of non-
constant arity could have exponentially many tuples and the problem becomes trivially
intractable.
In the next section we prove that the INVERSE SAT problem is coNP-complete
for 3CNF formulas. This proof includes the main construction that will be used in
the proof of the main theorem in the last section. This last proof makes use of an
expressibility result which is interesting on its own and partially relies on Schaefer's
main theorem but with several interesting extensions.
3. coNP-completeness of inverse 3SAT. We begin this section with a technical
definition that will be used throughout the paper.
Definition. Let n be a positive integer and let M ⊆ {0,1}^n be a set of Boolean
vectors. For k ≥ 1, we say that a Boolean vector m ∈ {0,1}^n is k-compatible with M
if for any sequence of k positions 1 ≤ i_1 < ··· < i_k ≤ n there is a vector in M
that agrees with m in these k positions.
The above definition implies that a vector m 2 f0; 1g n is not k-compatible with a
set of Boolean vectors M if there exists a sequence of k positions in m that does not
agree with any vector of M . The following is a useful characterization of kCNF sets.
Lemma 1. Let M ⊆ {0,1}^n be a set of models. Then the following are equivalent.
(a) M is a kCNF set.
(b) Every vector not in M is not k-compatible with M.
Proof. Let φ_M be the conjunction of all possible kCNF clauses defined on n variables
and satisfied by all models in M. Notice that φ_M is the most restricted kCNF
formula (in terms of its model set) which is satisfied by all models in M. Hence if
(a) holds, then M is exactly the set of models of φ_M; any vector m not in M therefore does not satisfy at
least one clause of φ_M and consequently disagrees with all models in M in the same k
positions corresponding to the variables of that clause, that is, m is not k-compatible
with M.
Conversely, assume that any model not in M is not k-compatible with M. Then
m ∉ M means that m differs from all members of M in some k
positions, so the k-clause indicating the complement of m in those k positions is in
φ_M and m does not satisfy φ_M. So M is exactly the set of models of φ_M, and M is a kCNF set.
The INVERSE 3SAT problem is this: given a set of models M , is it a 3CNF set?
We now state our first complexity result.
Theorem 1. INVERSE 3SAT is coNP-complete.
Proof. Lemma 1 establishes that the problem is in coNP: given a set M of models,
in order to prove that it is not a 3CNF set, it suffices to produce a model outside M
that is 3-compatible with M (obviously, 3-compatibility can be checked in polynomial
time). Alternatively, given M, we immediately have a candidate 3CNF formula φ_M,
the conjunction of all 3CNF clauses that are satisfied by all models in M. Thus M is
not a 3CNF set iff there is a model not in M that satisfies φ_M.
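The characterization above can be turned directly into a brute-force checker, which is convenient for experimenting with small instances; the Python sketch below is exponential in n and is meant only to make Lemma 1 and the coNP argument concrete, not as part of any reduction.

from itertools import combinations, product

def is_3cnf_set(M, n):
    # M: a collection of 0/1 tuples of length n (n >= 3).
    # By Lemma 1, M is a 3CNF set iff no vector outside M is 3-compatible
    # with M, i.e., iff no vector outside M satisfies the candidate phi_M.
    M = set(M)

    def agrees_somewhere(m, positions):
        return any(all(t[p] == m[p] for p in positions) for t in M)

    def is_3_compatible(m):
        return all(agrees_somewhere(m, pos) for pos in combinations(range(n), 3))

    return not any(is_3_compatible(m)
                   for m in product((0, 1), repeat=n) if m not in M)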
To prove coNP-completeness, we shall reduce the following well-known coNP-complete
problem to INVERSE 3SAT: given a 3CNF formula, is it unsatisfiable?
Given a 3CNF formula ψ with n ≥ 4 variables and c clauses, we shall construct a set
of models M such that M is a 3CNF set iff ψ is unsatisfiable.
The set M will contain one model for each set W of three variables
and each truth assignment T to these three variables that does not contradict a clause
of ψ (since we may assume that ψ consists of clauses that have exactly three literals
each). Let W be a set of three variables chosen among the variables x_1, ..., x_n of
formula ψ, and let T : W → {0,1} be a truth assignment to the variables of W, such
that ψ does not contain a clause not satisfied by T. Consider some total order among
the pairs (W, T), say the lexicographic one. The set M will contain a model m_{W,T}
for each such pair (W, T) and no other model.
Every Boolean vector m_{W,T} is a concatenation
m_{W,T} = π^T_W(x_1) π^T_W(x_2) ··· π^T_W(x_n)
of the encodings π^T_W(x_i) of the n variables x_1, ..., x_n occurring in the formula ψ. The encoding π^T_W(x)
of a variable x is a Boolean vector of length k + 2, where k denotes the number of pairs (W, T), and is defined as follows:
π^T_W(x) = 01 0...0 (k zeros) if x ∈ W and T(x) = 1;
π^T_W(x) = 10 0...0 (k zeros) if x ∈ W and T(x) = 0;
π^T_W(x) = 00 1...1 0...0 (i ones followed by k − i zeros) if x ∉ W,
where (W, T) is the ith pair in the total order mentioned above. Notice that if x ∈ W,
the value of x in T is determined by the first two positions of π^T_W(x): the code 01 stands
for the value 1, and the code 10 stands for x being 0. In these two cases we call the
string π^T_W(x) a value pattern. When x ∉ W, the code 00 in the first two positions
denotes the absence of x from W, while the rest of the string uniquely determines
the pair (W, T). In this case we call the string π^T_W(x) a padding pattern. Notice that
by our construction, in a vector m_{W,T} there are exactly n − 3
occurrences of the unique padding pattern for (W, T), while the remaining three are
value patterns. Hence, the length of each Boolean vector m_{W,T} is n(k + 2), and
there is no exponential blow-up in the construction of the set M.
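For concreteness, the construction of M can be sketched in Python as follows. The clause format (3-tuples of signed integers, DIMACS style) and the exact layout of the padding pattern follow the reconstruction given above, so they should be read as an illustrative assumption rather than as a literal transcription of the original definition.

from itertools import combinations, product

def build_model_set(n, clauses):
    # clauses: list of 3-tuples of nonzero ints over variables 1..n.
    def contradicts(T):
        # T falsifies some clause all of whose variables lie in its domain W.
        return any(all(abs(l) in T and T[abs(l)] == (0 if l > 0 else 1) for l in c)
                   for c in clauses)

    pairs = [(W, dict(zip(W, bits)))
             for W in combinations(range(1, n + 1), 3)
             for bits in product((0, 1), repeat=3)
             if not contradicts(dict(zip(W, bits)))]

    k = len(pairs)
    models = []
    for idx, (W, T) in enumerate(pairs):   # (W, T) is the (idx + 1)-th pair
        vec = []
        for x in range(1, n + 1):
            if x in T:    # value pattern: 01 (x true) or 10 (x false), then k zeros
                vec += ([0, 1] if T[x] == 1 else [1, 0]) + [0] * k
            else:         # padding pattern: 00, then idx + 1 ones followed by zeros
                vec += [0, 0] + [1] * (idx + 1) + [0] * (k - idx - 1)
        models.append(tuple(vec))          # length n * (k + 2)
    return models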
The proof of Theorem 1 now rests on the next claim.
Claim. There is a model not in M that is 3-compatible with M if and only if /
is satisfiable.
Proof of the claim. For the moment, consider a Boolean vector m = m_1 m_2 ··· m_n,
where the length of each substring m_i is k + 2. It is obvious that if the model
m is 3-compatible with M , then it is 3-compatible in the positions restricted to one
substring m_i. That is, if we take three arbitrary positions of m_i,
there is a vector m_{W,T} in M that agrees with m_i in these three positions. The 3-compatibility
of m_i with M also implies something stronger: that there is a vector m_{W,T} in M
which contains a substring π^T_W(x_i) identical to m_i.
To see this, first
assume that m_i does not have the value 1 in any position j for 3 ≤ j ≤ k + 2. Then
3-compatibility forces m_i to have the values 0 and 1 or 1 and 0 in the first and second
positions; i.e., m_i is a value pattern. Now, if m_i has the values 0 and 1 in positions
j + 1 and j for some 3 ≤ j ≤ k + 2, then the values in any triple of positions that includes
these two positions can only agree with the values in the same positions of a specific
model of M, namely, the one having the padding pattern with 0 in position j + 1
and 1 in position j. Therefore, m_i is identical to this padding pattern. In this case,
however, an analogous observation shows that the whole 3-compatible model m is
identical to the model of M that has this pattern. So if m is 3-compatible with M,
either it is in M or it consists of value patterns only.
Assume now that there exists a model m ∉ M that is 3-compatible with M. As
already proved, this model consists only of value patterns m_i. Model m then
encodes a satisfying truth assignment to the variables of ψ. For suppose it did conflict
with a clause c of ψ over variables x_{i_1}, x_{i_2}, x_{i_3}. Consider the three value patterns
m_{i_1}, m_{i_2}, m_{i_3} in the positions of the variables of c. Since m is 3-compatible with M
and each value pattern contains only one 1, we can conclude that there exists a model
m_{W,T} in M, with W = {x_{i_1}, x_{i_2}, x_{i_3}}, which encodes a truth assignment T to this set of
variables such that π^T_W(x_{i_j}) = m_{i_j} for j = 1, 2, 3.
But since by construction T does not contradict any clause of ψ, m could not have
conflicted with a clause of ψ. Therefore, the Boolean vector m = m_1 ··· m_n is an
encoding of a satisfying assignment to the variables of formula ψ: string m_i is an
encoding of the truth value assigned to the variable x_i for each i = 1, ..., n, and
formula ψ is satisfiable since every clause of ψ is satisfied by the truth assignment
described by vector m.
Conversely, assume that ψ is satisfiable; i.e., there exists a satisfying truth assignment
s for the variables x_1, ..., x_n. Construct the model m = m_1 m_2 ··· m_n as a
concatenation of value patterns, where every string m_i is defined as follows:
m_i = 01 0...0 (k zeros) if s(x_i) = 1, and m_i = 10 0...0 (k zeros) if s(x_i) = 0.
Obviously, model m is not included in the set M, since every model in M contains a
padding pattern. Suppose that m is not 3-compatible with M. In this case m contains
three positions that do not agree with any model in M. Since m is a concatenation
of value patterns, it must contain three substrings m_{i_1}, m_{i_2}, m_{i_3} encoding a truth
assignment T for the set of variables W = {x_{i_1}, x_{i_2}, x_{i_3}} such that the pair (W, T) is not
encoded in any model of M. All sets of three variables are, however, examined during
the construction of M, and the only truth assignments that are not encoded are those
conflicting with a clause of ψ. Since T does not conflict with any clause (because it
is a restriction of s to three variables), we conclude that the pair (W, T) is encoded
in some model of M. Hence, m is 3-compatible with M. So, if ψ is satisfiable, there
exists a model 3-compatible with M, specifically the model encoding a satisfying truth
assignment.
4. The dichotomy theorem. Our main result is the following generalization
of Theorem 1.
Theorem 2. The INVERSE SAT problem for S is in PTIME in each of the
following cases.
(a) All relations in S are Horn.
(b) All relations in S are anti-Horn.
(c) All relations in S are 2CNF.
(d) All relations in S are affine.
In all other cases, the INVERSE SAT problem for S is coNP-complete.
[Sc78] proves a surprisingly similar dichotomy theorem for SAT: SAT is in PTIME
for all of these four classes, and NP-complete otherwise. Our proof is based on an
interesting extension of Schaefer's main result, explained below.
Definition. Let S be a set of Boolean relations and let R be another Boolean
relation, of arity r. We say that S faithfully represents R if there are Boolean
functions f_1, ..., f_s such that there is a conjunction of S-clauses over the variables
x_1, ..., x_r, x_{r+1}, ..., x_{r+s} which is logically equivalent to the formula
R(x_1, ..., x_r) ∧ (x_{r+1} ≡ f_1(x_1, ..., x_r)) ∧ ··· ∧ (x_{r+s} ≡ f_s(x_1, ..., x_r))
for some s ≥ 0. That is, S-clauses can express R with the help of
uniquely defined auxiliary variables.
This is a substantial restriction of Schaefer's notion of ``represents,'' which allows
arbitrary existentially quantified conjunctions of S-clauses (our definition only allows
quantifiers which are logically equivalent to 9!x). Hence our main technical result
below extends the main result of [Sc78, Theorem 3.0]. Independently, Creignou and
Hermann [CH96] have defined the concept "quasi-equivalent," which is the same as
the concept of "faithful representation" defined in this paper.
Theorem 3. If S does not satisfy any of the four conditions of Theorem 2, then
S faithfully represents all Boolean relations.
Proof. Assuming that none of the four conditions are satisfied by S, the proof
proceeds by finding more and more elaborate Boolean relations that are faithfully
represented by S. Notice that, since the notion of faithful representation was defined
as equivalence of two S-formulas, we shall restrict the proof to the construction of
appropriate S-clauses-faithful representation of the corresponding relations will then
follow immediately. In this process the allowed operations must preserve the uniqueness
of the values of the auxiliary variables and produce a formula which is also in
conjuctive form. Therefore, if C and C 0 are S-formulas, the allowed operations are:
(a) C - C 0 , i.e., conjuction of two S-formulas, (b) C[a=x], i.e., substitution of a variable
symbol by another symbol, (c) C[0=x] and C[1=x], i.e., substitution of a variable
by a constant (this is actually a selection of the tuples that agree in the specified
constant), and (d) 9!xC(x), i.e., existential quantification, where the bound variables
are uniquely defined. Some of the steps are provided by Schaefer's proof, and some
are new.
Step 1. Expressing [x ≡ ¬y]. This was shown in [Sc78, Lemma 3.2 and Corollary
3.2.1]. The following exposition is somewhat simpler and is based on the fact that a
set M ⊆ {0,1}^n is the model set of a Horn formula iff it is closed under bitwise ∧; see
the Appendix and [KPS93].
Let R be any non-Horn relation of S (say of arity k). The closure property
mentioned above implies that there exist models t and t' in R such that t ∧ t' ∉ R.
Based on R we may define the clause R'(x, y) = R(a_1, ..., a_k) as follows: set a_i = 0 (resp., a_i = 1) in
all positions i where both t_i and t'_i are 0 (resp., 1), set a_i = x in all positions where
t_i = 1 and t'_i = 0, and a_i = y in all positions where t_i = 0 and t'_i = 1. It is easy to
see that both x and y actually appear in R'. (If not, then one of t and t' coincides
with their conjunction.) Now 01 and 10 are models of R', but 00 is not. Hence R' is
either [x ∨ y] or [x ≡ ¬y]. If, in addition, S contains a relation which is not anti-Horn,
then a symmetric argument rules out tuple 11, resulting in a clause R'' which is either
[¬x ∨ ¬y] or [x ≡ ¬y]; in both cases R' ∧ R'' is [x ≡ ¬y]. Notice that since this is the case we
shall henceforth feel free to use negative literals in our expressions.
Step 2. Expressing [x ∨ y]. Schaefer shows in Lemma 3.3 that there is an S-clause
involving variables x, y, z whose set of models contains 000, 101, 011, but not 110. The
proof is as follows: it is known (see the Appendix) that an S-clause is affine if and
only if for any three models t_0, t_1, t_2 the model t_0 ⊕ t_1 ⊕ t_2 is also a model.
Consider, therefore, an S-clause that is not affine and assume that [x ≡ ¬y] can be
represented. By the observation in Step 1 we may negate the variables of the clause in
the positions where t_0 is 1. Now the new S-clause, call it S', is satisfied by the all-zero
truth assignment and moreover by the assignments t^1_0 = t_1 ⊕ t_0 and t^2_0 = t_2 ⊕ t_0, but
not by t^1_0 ⊕ t^2_0.
Construct a new clause R(a_1, ..., a_k), where k is the arity of S', as follows: set a_i = 0 in all
positions i where both t^1_0 and t^2_0
are 0, a_i = z in all positions where
both are 1, a_i = y in all positions where t^1_0 is 1 and t^2_0
is 0, and finally a_i = x in all positions where t^1_0 is 0 and t^2_0 is
1. The S-clause R, defined on x, y, z, has models 000, 011, 101 (corresponding to the
all-zero assignment, t^1_0 and t^2_0 of S', respectively), but not 110 (which corresponds
to t^1_0 ⊕ t^2_0).
We will show that R faithfully represents one of the four versions of or:
Observe that at least two of x, y, z actually occur in
R. If exactly two variables are present in R, then R represents a version of or as
follows: if x and y are present, then R(x, y); if x and z are present, then
R(x, z); if y and z are present, then R(y, z). If all three variables
are present, depending on which of the remaining four possible models are also in the
model set of the S-clause, we have sixteen possible relations. Of these, the strongest,
with models identical to the set {000, 011, 101}, can be used to define X(x, y, z)
(which is true when exactly one of x, y, z is true) as follows: X(x, y, z) = R(x, y, ¬z),
and in this case the current step is unnecessary. In each of the other fifteen cases,
we show by exhaustive analysis that there is an R-clause, with one constant, which
represents a version of or. If
then R(0;
then R(x; 0;
Since we can also faithfully
express [w ≡ ¬x], by Step 1, we have all four versions of or.
Step 3. Expressing X(x, y, z). X is a formula which is satisfied if exactly one
of the three variables has the value 1. It is known (see the Appendix) that an S-clause
is 2CNF iff for any set of three satisfying assignments t_0, t_1, t_2, the coordinatewise
majority of t_0, t_1, t_2 is also a satisfying assignment.
We use this characterization to prove that if a relation set S contains a relation
which is not 2CNF and also contains relations which are not Horn, anti-Horn, and
affine, then X(x; y; z) can be faithfully represented.
Consider an S-clause which is not 2CNF. We may therefore find three satisfying
assignments such that the expression (t
a satisfying assignment. As in the previous step we may negate the variables in the
positions where t 0 has the value 1, resulting in a new clause S 0 , which is satisfied by the
all-zero assignment, by t 1
not by t 1
which is equal to t 1
0 . Set 0 to all positions where both t 1
are 0, x to all
positions where both t 1
are 1, y where t 1
0 is 1, and finally z where
0 is 0. Observe that all three variables actually occur in the constructed
clause R: if x is not present then t 1
0 is identical to the all-zero assignment, a
contradiction; if either y or z is not present then t 1
0 is identical to t 1
again a contradiction. The clause constructed includes models 000,
110, and 101, but not 100. Now the S-clause R(x;
exactly the models 100, 010, and 001; i.e., it is X(x;
Step 4. Expressing [x ≡ (y ∨ z)]. Notice that the expression X(x, s, y) ∧ X(x, t, z) ∧
X(s, t, u) is equivalent to [¬x ≡ (y ∨ z)] with s, t, u determined by y and z; combined with Step 1 this gives [x ≡ (y ∨ z)].
Thus we prove that we can faithfully represent a relation in which a variable is logically
equivalent to the or of two other variables. Notice that the auxiliary variables s, t, u
are uniquely defined by the values of y and z.
Step 5. Using repeatedly [x ≡ (y ∨ z)] and [x ≡ ¬y] we can faithfully represent any
clause, and by taking conjunctions of arbitrary clauses we can faithfully represent any
Boolean relation, completing the proof of Theorem 3.
Proof of Theorem 2. Let S be a set of relations satisfying one of conditions (a)-
(d), and let r be the maximum arity of any relation in S; we can solve the inverse
satisfiability problem for S as follows.
Given a set of models M, we first identify in time O(n^r |M|) all S-clauses that
are satisfied by all models in M; call the conjunction of these S-clauses φ. Clearly,
if there is a conjunction of S-clauses that has M as its set of models, then by the
arguments used in Lemma 1, it is precisely φ. To tell whether the set of models of φ
is indeed M, we show how to generate the set of models of φ with polynomial delay
between consecutive outputs [JPY88]. Provided that such generation is possible, we
can decide whether the set of models of φ equals M by checking if the generated models belong to M. If
a model not in M is generated, then we reply "no"; otherwise, if the set of models
generated is exactly M, we reply "yes." Observe that the answer will be obtained
after at most |M| + 1 models have been generated, i.e., in overall polynomial time.
Our generation algorithm is based on a more general observation that also explains
the analogy of our dichotomy theorem to the one of Schaefer's. Call a syntactic form
of a Boolean formula hereditary if the substitution of a variable by a constant results
in a new formula of the same syntactic form. Observe that the four cases for which
we claim that the inverse satisfiability problem is polynomial are indeed hereditary
and coincide with the polynomial cases of satisfiability [Sc78].
Theorem 4. If the following two conditions hold for a class of Boolean formulas:
(a) the syntactic form of the class is hereditary, and
(b) the satisfiability problem for the class is in PTIME,
then the models of any formula in the class can be generated with polynomial delay
between consecutive outputs.
Proof. Here is an informal description of the generation algorithm: at each step
we substitute a variable by a constant, first by the value 1 and then by 0. Since
(a) holds, the substitution results in a new formula of the same syntactic form. We
then ask a polynomial-time oracle whether the produced formula is satisfiable. Since
(b) holds, such an oracle exists. If the produced formula is satisfiable, we proceed
recursively and substitute the next variable until all variables have been assigned a
value, in which case we return the model. When at a certain step we are through
with the value 1 for a variable (either by discovering a model or by rejecting the value
because the produced formula is unsatisfiable), we try the value 0, and when finished,
we backtrack to the previous step. It is easy to see that after at most 2n queries to
the oracle (where n is the number of variables) we either generate a new model or we
know that all models of the formula have been generated.
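A minimal Python sketch of this generation algorithm is given below. The oracle extendible is an assumed black box that decides whether the formula has a model extending a given partial assignment; for hereditary classes such as Horn, anti-Horn, 2CNF, and affine formulas this test runs in polynomial time, which is exactly what the theorem requires. Deciding INVERSE SAT(S) then amounts to generating at most |M| + 1 models of the candidate formula and comparing them with M.

def generate_models(n, extendible):
    # Enumerate all models over variables 1..n; between consecutive outputs
    # the procedure makes O(n) calls to the satisfiability oracle.
    def rec(partial, var):
        if var > n:
            yield dict(partial)
            return
        for value in (1, 0):
            partial[var] = value
            if extendible(partial):        # polynomial-time for the four classes
                yield from rec(partial, var + 1)
            del partial[var]
    if extendible({}):
        yield from rec({}, 1)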
Now, to show coNP-completeness of all other cases, let S be a set of Boolean
relations not satisfying conditions (a)-(d). It is clear that the INVERSE SAT problem
for S is in coNP: let r ≥ 3 be the largest arity of any relation in S. Given a set
of models M, we construct all S-clauses satisfied by all models in M; this takes
O(n^r |M|) time. M is the set of models of a conjunction of S-clauses if and only if all
models not in M fail to satisfy at least one of these S-clauses.
To show completeness, we shall reduce UNSATISFIABILITY, the problem of
telling whether a 3CNF expression ψ is unsatisfiable, to INVERSE SAT(S). We
suppose that ψ is a 3CNF expression on n > 3r variables. The set M contains a model
for each 3r-tuple of variables and each assignment of values to these variables that does not contradict any
clause of ψ. Let k be the cardinality of M, a quantity bounded by a function of r
and of the number of variables and clauses of ψ. Notice that since r is constant, the
number of models is not exponential. Our construction is a generalization of that of
Theorem 1: we consider some total order among the pairs (W, T), where W is a set
of 3r variables and T a truth assignment to those variables that does not contradict
any clause of ψ. Every Boolean vector m_{W,T} in M is a concatenation of two strings:
m_{W,T} = β^T_W ε^T_W.
String β^T_W is a concatenation of the encodings π^T_W(x_1), ..., π^T_W(x_n) of the variables occurring
in the formula ψ: β^T_W = π^T_W(x_1) ··· π^T_W(x_n).
The encoding π^T_W(x) of a variable x
is a Boolean vector of length k + 2 and is defined as in the proof of Theorem 1. Notice
that in this construction the unique padding pattern for (W, T) occurs n − 3r times
in the string β^T_W. Call N the length of a string β^T_W.
The string ε^T_W is constructed as follows: we consider all 3CNF clauses on N variables
satisfied by the set of strings β^T_W for all sets of 3r variables W and assignments
T to those variables. Call φ the conjunction of all these clauses. We express φ faithfully
by S-clauses. This will involve auxiliary variables x_{N+1}, ..., x_{N+s}. From the definition
of faithful representation we see that x_{N+ℓ} ≡ f_ℓ(x_1, ..., x_N) for ℓ = 1, ..., s. Notice,
however, that each of the auxiliary variables depends on at most
three of the N variables appearing in the 3CNF clauses. This follows from the fact
that we are representing 3CNF clauses, and consequently we can express each 3CNF
clause separately by S-clauses and then take the conjunction of the representations.
Thus, the overall dependency of an auxiliary variable x_{N+ℓ} on x_1, ..., x_N is given by
a Boolean function f_ℓ of at most three of them. Let b_1 ··· b_N be a string β^T_W.
The values in the s positions of the corresponding string ε^T_W are then the
values of the auxiliary variables: ε^T_W = f_1(b_1, ..., b_N) ··· f_s(b_1, ..., b_N). (Note that these
values are stated explicitly, i.e., not encoded as value patterns.) This is where the
concept of faithful representation is necessary: for each string β^T_W there is a unique
string ε^T_W. With ordinary representation the multiple ways to extend a string β^T_W via
the auxiliary variables would result in an exponential increase of our model set.
Let M ' f0; 1g N+s be the constructed set of models. We claim that M is the
set of models of a conjunction of S-clauses iff the original 3CNF expression / is
unsatisfiable.
If / is satisfiable, then M is not the set of models of any rCNF expression.
Consider the model corresponding to the satisfying truth assignment. This model
is a concatenation of two parts: the first has N positions and consists of the value
patterns encoding the values of all variables in the satisfying truth assignment, exactly
as in the proof of Theorem 1, and the second consists of the corresponding values of
the s auxiliary variables. This model is r-compatible with M : any r-tuple restricted
to the first N positions certainly matches a corresponding tuple in some model, by
the construction of M . In fact, when the tuple is restricted to the first part, any
3r-tuple can be matched. This is precisely why an r-tuple that is not restricted to
the first N positions is also r-compatible: by the dependency of each auxiliary value
to at most 3 of the first N , a compatibility of an i-tuple (i - r) in the second part
holds if a 3i-compatibility in the first part holds. Alternatively, instead of looking
at a position in the second part, we can look at the three corresponding positions
of the first part. Therefore, the whole model corresponding to the satisfying truth
assignment is r-compatible with M . It follows by Lemma 1 that M is not rCNF, and
as a result, M is not the set of models of any conjunction of S-clauses (recall that the
maximum arity in S is r).
Suppose then that / is unsatisfiable. Let M 0 be M restricted to the first N
positions. Then M 0 is exactly the set of models of OE (the conjuction of all 3CNF
clauses on N variables which don't disagree with M 0 ) by the reasoning in Theorem 1:
no model is 3-compatible with M 0 except those in M 0 . Since M 0 is the set of models
of OE, it follows that M is the set of models of the corresponding conjunction of S-expressions
that faithfully represents OE.
Appendix. This appendix contains the proof of the closure properties of Horn,
anti-Horn, 2CNF, and affine sets of models, which are used in the proof of Theorem 3.
In what follows, M ⊆ {0,1}^n denotes a set of models.
Horn Sets. M is Horn iff for any two models t, t' ∈ M the model (t ∧ t') is also
in M.
The proof is based on the following proposition from [KPS93]. If t and t' are
bit-vectors we use the notation t ≤ t' to denote that t_i ≤ t'_i for all i.
Proposition. The following are equivalent.
(a) There is a Horn formula whose model set is M.
(b) For each t ∈ {0,1}^n,
either there is no t'' ∈ M with t ≤ t'', or there is a unique minimal such t''.
(c) If t, t'' ∈ M, then t ∧ t'' ∈ M.
Proof. That (a) implies (c) is easy. To establish (b) from (c), take t' to be the ∧
of all t'' ∈ M such that t ≤ t''. Finally, if we have property (b), we can construct the
following set of Horn clauses: for each t ∈ {0,1}^n for which an extension t'' ∈ M with t ≤ t'' exists, let t'
be the minimal such model guaranteed by (b), and
create a Horn clause ((∧_{i : t_i = 1} x_i) → x_j) for each j with t'_j = 1; for each t with no such extension,
create the Horn clause ∨_{i : t_i = 1} ¬x_i. It is
easy to see that the set of all these Horn clauses comprises the desired φ.
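This closure property gives an immediate polynomial-time test; a minimal Python sketch (quadratic in |M|):

def is_horn_set(M):
    # M is the model set of a Horn formula iff it is closed under bitwise AND.
    M = set(M)
    return all(tuple(a & b for a, b in zip(t, u)) in M for t in M for u in M)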
Anti-Horn Sets. This case is symmetric to the above. Just replace 1 with 0
and ∧ with ∨.
2CNF Sets. M is 2CNF iff for any three models t_1, t_2, t_3 ∈ M the model (t_1 ∧ t_2) ∨ (t_2 ∧ t_3) ∨ (t_1 ∧ t_3) is also in M.
Proof. This was shown in [Sc78, Lemma 3.1B]. We give a different proof, which
is simpler and is based on Lemma 1 for k = 2. First notice that the model
(t_1 ∧ t_2) ∨ (t_2 ∧ t_3) ∨ (t_1 ∧ t_3) has the following property: its value in each position
is equal to the majority among the three values of the
models t_1, t_2, t_3 in that position (e.g., if the values of models t_1, t_2, t_3 in position i
are (1, 1, 0), respectively, its value in position i is 1). Call the outcome of this
operation the majority model of t_1, t_2, t_3.
Only if: Suppose M is 2CNF. By Lemma 1 any 2-compatible model with M is
in M. It is easy to see that the majority model of any three models is 2-compatible
with these three models.
If: Suppose that the majority model of any set of three models t_1, t_2, t_3 ∈ M is
also in M. We shall prove that any 2-compatible model with M is in M. We prove
this inductively, by showing that any 2-compatible model is in fact n-compatible.
Consider a model m k-compatible with M and a (k + 1)-tuple of positions in this
model. Each of the k + 1 distinct k-tuples of this (k + 1)-tuple agrees with some model in M.
Take three of those not necessarily distinct models, one for each of three different k-tuples. (If fewer than
three distinct models suffice, then one of them agrees with m on the whole (k + 1)-tuple.) Notice that any one of those differs in at most one position of
the (k + 1)-tuple from m. Therefore, the (k + 1)-tuple of m agrees with the majority
model of those three models. Hence, m is (k + 1)-compatible with M. Therefore, any
2-compatible model with M is in M and, by Lemma 1, M is a 2CNF set.
Affine Sets. M is affine iff for any three models t_1, t_2, t_3 ∈ M the model t_1 ⊕ t_2 ⊕ t_3
is also in M.
Proof. This fact follows from linear algebra and especially the theory of diophantine
linear equations. It states the intuitive observation (and its converse) that every
convex polytope is the convex hull of its vertices. For more on that see the book of
Schrijver [Sc86].
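The 2CNF and affine characterizations can be tested in the same way; the following sketch checks closure under coordinatewise majority and under XOR of triples (both tests are cubic in |M|):

def is_2cnf_set(M):
    # Closure under the coordinatewise majority of every triple of models.
    M = set(M)
    def maj(t1, t2, t3):
        return tuple((a & b) | (b & c) | (a & c) for a, b, c in zip(t1, t2, t3))
    return all(maj(t1, t2, t3) in M for t1 in M for t2 in M for t3 in M)

def is_affine_set(M):
    # Closure under the coordinatewise XOR of every triple of models.
    M = set(M)
    return all(tuple(a ^ b ^ c for a, b, c in zip(t1, t2, t3)) in M
               for t1 in M for t2 in M for t3 in M)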
Acknowledgments. We are grateful to Christos Papadimitriou for helpful discussions
and suggestions. We are also indebted to the anonymous referees for their
detailed comments and suggestions that decisively helped us improve the presentation
by making it more complete and precise.
--R
Learning conjunctions of Horn clauses
Semantical and computational considerations in Horn approximations
Complexity of generalized satisfiability counting problems
Model Theory
The complexity of theorem-proving procedures
Structure identification in relational data
Computers and Intractability
Logical Foundations of Artificial Intelligence
incremental recompilation of knowledge
On generating all maximal independent sets
Horn approximations of empirical data
Reasoning with characteristic models
Making believers out of computers
Computational Complexity
A Logic for default reasoning
The complexity of satisfiability problems
Theory of Linear and Integer Programming
Model preference default theories
Knowledge compilation using Horn approximation
--TR
--CTR
Jean-Jacques Hbrard , Bruno Zanuttini, An efficient algorithm for Horn description, Information Processing Letters, v.88 n.4, p.177-182, November
Lane A. Hemaspaandra, SIGACT news complexity theory column 34, ACM SIGACT News, v.32 n.4, December 2001
Lefteris M. Kirousis , Phokion G. Kolaitis, The complexity of minimal satisfiability problems, Information and Computation, v.187 n.1, p.20-39, November 25,
Lane A. Hemaspaandra, SIGACT news complexity theory column 43, ACM SIGACT News, v.35 n.1, March 2004 | coNP-completeness;boolean satisfiability;model;computational complexity;polynomial-time algorithms |
298507 | On Syntactic versus Computational Views of Approximability. | We attempt to reconcile the two distinct views of approximation classes: syntactic and computational. Syntactic classes such as MAX SNP permit structural results and have natural complete problems, while computational classes such as APX allow us to work with classes of problems whose approximability is well understood. Our results provide a syntactic characterization of computational classes and give a computational framework for syntactic classes.We compare the syntactically defined class MAX SNP with the computationally defined class APX and show that every problem in APX can be "placed" (i.e., has approximation-preserving reduction to a problem) in MAX SNP. Our methods introduce a simple, yet general, technique for creating approximation-preserving reductions which shows that any "well"-approximable problem can be reduced in an approximation-preserving manner to a problem which is hard to approximate to corresponding factors. The reduction then follows easily from the recent nonapproximability results for MAX SNP-hard problems. We demonstrate the generality of this technique by applying it to other classes such as MAX SNP-RMAX(2) and MIN F$^{+}\Pi_2(1)$ which have the clique problem and the set cover problem, respectively, as complete problems.The syntactic nature of MAX SNP was used by Papadimitriou and Yannakakis [J. Comput. System Sci., 43 (1991), pp. 425--440] to provide approximation algorithms for every problem in the class. We provide an alternate approach to demonstrating this result using the syntactic nature of MAX SNP. We develop a general paradigm, nonoblivious local search, useful for developing simple yet efficient approximation algorithms. We show that such algorithms can find good approximations for all MAX SNP problems, yielding approximation ratios comparable to the best known for a variety of specific MAX SNP-hard problems. Nonoblivious local search provably outperforms standard local search in both the degree of approximation achieved and the efficiency of the resulting algorithms. | Introduction
The approximability of NP optimization (NPO) problems has been investigated in the past via the definition
of two different types of problem classes: syntactically-defined classes such as MAX SNP, and
computationally-defined classes such as APX (the class of optimization problems to which a constant factor
approximation can be found in polynomial time). The former is useful for obtaining structural results
and has natural complete problems, while the latter allows us to work with classes of problems whose
approximability is completely determined. We attempt to develop linkages between these two views of
approximation problems and thereby obtain new insights about both types of classes. We show that a
natural generalization of MAX SNP renders it identical to the class APX. This is an unexpected validation
of Papadimitriou and Yannakakis's definition of MAX SNP as an attempt at providing a structural basis
to the study of approximability. As a side-effect, we resolve the open problem of identifying complete
problems for MAX NP. Our techniques extend to a generic theorem that can be used to create an approximation
hierarchy. We also develop a generic algorithmic paradigm which is guaranteed to provide good
approximations for MAX SNP problems, and may also have other applications.
1.1 Background and Motivation
A wide variety of classes are defined based directly on the polynomial-time approximability of the problems
contained within, e.g., APX (constant-factor approximable problems), PTAS (problems with polynomial-time
approximation schemes), and FPTAS (problems with fully-polynomial-time approximation schemes).
The advantage of working with classes defined using approximability as the criterion is that it allows us to
work with problems whose approximability is well-understood. Crescenzi and Panconesi [9] have recently
also been able to exhibit complete problems for such classes, particularly APX. Unfortunately such complete
problems seem to be rare and artificial, and do not seem to provide insight into the more natural problems
in the class. Research in this direction has to find approximation-preserving reductions from the known
complete but artificial problems in such classes to the natural problems therein, with a view to understanding
the approximability of the latter.
The second family of classes of NPO problems that have been studied are those defined via syntactic
considerations, based on a syntactic characterization of NP due to Fagin [10]. Research in this direction,
initiated by Papadimitriou and Yannakakis [22], and followed by Panconesi and Ranjan [21] and Kolaitis
and Thakur [19], has led to the identification of approximation classes such as MAX SNP, RMAX(2), and
MIN F+Π2(1). The syntactic prescription in the definition of these classes has proved very useful in the
establishment of complete problems. Moreover, the recent results of Arora, Lund, Motwani, Sudan, and
Szegedy [3] have established the hardness of approximating complete problems for MAX SNP to within
(specific) constant factors unless It is natural to wonder why the hardest problems in this syntactic
sub-class of APX should bear any relation to all of NP.
Though the computational view allows us to precisely classify the problems based on their approxima-
bility, it does not yield structural insights into natural questions such as: Why certain problems are easier
to approximate than some others? What is the canonical structure of the hardest representative problems
of a given approximation class? and so on. Furthermore, intuitively speaking, this view is too abstract to allow the
identification of, and reductions to establish, natural complete problems for a class. The syntactic
view, on the other hand, is essentially a structural view. The syntactic prescription gives a natural way of
identifying canonical hard problems in the class and performing approximation-preserving reductions to
establish complete problems.
Attempts at trying to find a class with both the above mentioned properties, i.e., natural complete
problems and capturing all problems of a specified approximability, have not been very successful. Typically
the focus has been to relax the syntactic criteria to allow for a wider class of problems to be included in
the class. However in all such cases it seems inevitable that these classes cannot be expressive enough to
encompass all problems with a given approximability. This is because each of these syntactically defined
approximation classes is strictly contained in the class NPO; the strict containment can be shown by syntactic
considerations alone. As a result if we could show that any of these classes contains all of P, then we would
have separated P from NP. We would expect that every class of this nature would be missing some problems
from P, and this has indeed been the case with all current definitions.
We explore a different direction by studying the structure of the syntactically defined classes when we
look at their closure under approximation-preserving reductions. The advantage of this is that the closure
maintains the complete problems of the set, while managing to include all of P into the closure (for problems
in P, the reduction is to simply use a polynomial time algorithm to compute an exact solution). It now
becomes interesting, for example, to compare the closure 1 of MAX SNP (denoted MAX SNP) with APX.
A positive resolution, i.e., MAX SNP = APX, would immediately imply the non-existence of a PTAS for
MAX SNP-hard problems, since it is known that PTAS is a strict subset of APX if P ≠ NP. On the other
hand, an unconditional negative result would be difficult to obtain, since it would imply P ≠ NP.
Here we resolve this question in the affirmative. The exact nature of the result obtained depends upon the
precise notion of an approximation preserving reduction used to define the closure of the class MAX SNP.
The strictest notion of such reductions available in the literature are the L-reductions due to Papadimitriou
and Yannakakis [22]. We introduce a new notion of reductions, called E-reductions, which are a slight
extension of L-reductions. Using such reductions to define the class MAX SNP we show that this equals
APX-PB, the class of all polynomially bounded NP optimization problems which are approximable to
within constant factors. By using slightly looser definitions of approximation preserving reductions (and
in particular the PTAS-reductions of Crescenzi et al [8]) this can be extended to include all of APX into
MAX SNP. We then build upon this result to identify an interesting hierarchy of such approximability
classes. An interesting side-effect of our results is the positive answer to the question of Papadimitriou and
Yannakakis [22] about whether MAX NP has any complete problems.
The syntactic view seems useful not only in obtaining structural complexity results but also in developing
paradigms for designing efficient approximation algorithms. Exploiting the syntactic nature of MAX SNP,
we develop a general paradigm for designing good approximation algorithms for problems in that class
and thereby provide a more computational view of it. We refer to this paradigm as non-oblivious local
search, and it is a modification of the standard local search technique [24]. We show that every MAX SNP
problem can be approximated to within constant factors by such algorithms. It turns out that the performance
of non-oblivious local search is comparable to that of the best-known approximation algorithms for
several interesting and representative problems in MAX SNP. An intriguing possibility is that this is not a
coincidence, but rather a hint at the universality of the paradigm or some variant thereof.
Our results are related to some extent to those of Ausiello and Protasi [4]. They define a class GLO
(for Guaranteed Local Optima) of NPO problems which have the property that for all locally optimum
solutions, the ratio between the value of the global and the local optimum is bounded by a constant. It
follows that GLO is a subset of APX, and it was shown that it is in fact a strict subset. We show that
a MAX SNP problem is not contained in GLO, thereby establishing that MAX SNP is not contained in
GLO. This contrasts with our notion of non-oblivious local search which is guaranteed to provide constant
factor approximations for all problems in MAX SNP. In fact, our results indicate that non-oblivious local
search is significantly more powerful than standard local search in that it delivers strictly better constant
ratios, and also will provide constant factor approximations to problems not in GLO. Independently of our
work, Alimonti [1] has used a similar local search technique for the approximation of a specific problem
not contained in GLO or MAX SNP.
1 Papadimitriou and Yannakakis [22] hinted at the definition of MAX SNP by stating that: minimization problems will be
"placed" in the classes through L-reductions to maximization problems.
1.2 Summary of Results
In Section 2, we present the definitions required to state our results, and in particular the definitions of an E-
reduction, APX, APX-PB, MAX SNP and MAX SNP. In Section 3, we show that MAX
A generic theorem which allows to equate the closure of syntactic classes to appropriate computational
classes is outlined in Section 4; we also develop an approximation hierarchy based on this result.
The notion of non-oblivious local search and NON-OBLIVIOUS GLO is developed in Section 5. In
Section 6, we illustrate the power of non-obliviousness by first showing that oblivious local search can
achieve at most the performance ratio 3/2 for MAX 2-SAT, even if it is allowed to search exponentially
large neighborhoods; in contrast, a very simple non-oblivious local search algorithm achieves a performance
ratio of 4/3. We then establish that this paradigm yields a 2^k/(2^k − 1) approximation to MAX k-SAT. In
Section 7, we provide an alternate characterization of MAX SNP via a class of problems called MAX k-CSP.
It is shown that a simple non-oblivious algorithm achieves the best-known approximation for this problem,
thereby providing a uniform approximation for all of MAX SNP. In Section 8, we further illustrate the power
of this class of algorithms by showing that it can achieve the best-known ratio for a specific MAX SNP
problem and for VERTEX COVER (which is not contained in GLO). This implies that MAX SNP is not
contained in GLO, and that GLO is strict subset of NON-OBLIVIOUS GLO. In Section 9, we apply it to
approximating the traveling salesman problem. Finally, in Section 10, we apply this technique to improving
a long-standing approximation bound for maximum independent sets in bounded-degree graphs.
A preliminary version of this paper appeared in [18].
2 Preliminaries and Definitions
Given an NPO problem P and an instance I of P, we use jIj to denote the length of I and OPT (I) to
denote the optimum value for this instance. For any solution S to I, the value of the solution, denoted
by V (I; S), is assumed to be a polynomial time computable function which takes positive integer values
(see [7] for a precise definition of NPO).
Definition 1 (Error) A solution S to an instance I of an NPO problem P has error
E(I, S) = max{V(I, S)/OPT(I), OPT(I)/V(I, S)} − 1.
Notice that the above definition of error applies uniformly to the minimization and maximization
problems at all levels of approximability.
Definition 2 (Performance Ratio) An approximation algorithm A for an optimization problem P has
performance ratio R(n) if, given an instance I of P with |I| = n, it computes a solution S such that
E(I, S) ≤ R(n) − 1.
A solution of value within a multiplicative factor r of the optimal value is referred to as an r-approximation.
The performance ratio for A is R if it always computes a solution with error at most R − 1.
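A small Python helper makes the uniform treatment of minimization and maximization concrete; it assumes the max{V/OPT, OPT/V} − 1 form of the error stated above.

def error(value, opt):
    # E(I, S): zero exactly at the optimum; same form for min and max problems.
    return max(value / opt, opt / value) - 1.0

def performance_ratio(value, opt):
    # An r-approximation is a solution with error at most r - 1.
    return 1.0 + error(value, opt)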
2.1 E-reductions
We now describe the precise approximation preserving reduction we will use in this paper. Various other
notions of approximation preserving reductions exist in the literature (cf. [2, 16]) but the reduction which
we use, referred to as the E-reduction (for error-preserving reduction), seems to be the strictest. As we will
see, the E-reduction is essentially the same as the L-reduction of Papadimitriou and Yannakakis [22] and
differs from it in only one relatively minor aspect.
Definition 3 (E-reduction) A problem P E-reduces to a problem P' (denoted P ≤_E P') if there exist
polynomial time computable functions f, g and a constant β such that
f maps an instance I of P to an instance I' of P' such that OPT(I) and OPT(I') are related by a
polynomial factor, and
g maps solutions S' of I' to solutions S of I such that
E(I, g(S')) ≤ β · E(I', S').
Remark 1 An E-reduction is essentially the strictest possible notion of reduction. It requires that the error
for P be linearly related to the error for P 0 . Most other notions of reductions in the literature, for example
the F -reductions and P -reductions of Crescenzi and Panconesi [9], do not enforce this condition. One
important consequence of this constraint is that E-reductions are sensitive, i.e., when I 2 P is mapped
to I under an E-reduction then a good solution to I 0 should provide structural information about
a good solution to I. Thus, reductions from real optimization problems to decision problems artificially
encoded as optimization problems are implausible.
Remark 2 Having P ≤_E P' implies that P is as well approximable as P'; in fact, an E-reduction is an
FPTAS-preserving reduction. An important benefit is that this reduction can applied uniformly at all levels
of approximability. This is not the case with the other existing definitions of FPTAS-preserving reduction in
the literature. For example, the FPTAS-preserving reduction of Crescenzi and Panconesi [9]
is much more unrestricted in scope and does not share this important property of the E-reduction. Note
that Crescenzi and Panconesi [9] showed that there exists a problem P' ∈ PTAS such that every problem
P ∈ APX F-reduces to P'. Thus, there is the undesirable situation that a problem P with no PTAS has an
FPTAS-preserving reduction to a problem P' with a PTAS.
Remark 3 The L-reduction of Papadimitriou and Yannakakis [22] enforces the condition that the optima of
an instance I of P be linearly related to the optima of the instance I 0 of P 0 to which it is mapped. This appears
to be an unnatural restriction considering that the reduction itself is allowed to be an arbitrary polynomial
time computation. This is the only real difference between their L-reduction and our E-reduction, and an
E-reduction in which the linearity relation of the optimas is satisfied is an L-reduction. Intuitively, however,
in the study of approximability the desirable attribute is simply that the errors in the corresponding solutions
are closely (linearly) related. The somewhat artificial requirement of a linear relation between the optimum
values precludes reductions between problems which are related to each other by some scaling factor. For
instance, it seems desirable that two problems whose objective functions are simply related by any fixed
polynomial factor should be inter-reducible under any reasonable definition of an approximation-preserving
reduction. Our relaxation of the L-reduction constraint is motivated precisely by this consideration.
Let C be any class of NPO problems. Using the notion of an E-reduction, we define hardness and
completeness of problems with respect to C, as well as its closure and its polynomially-bounded sub-class.
Definition 4 (Hard and Complete Problems) A problem P' is said to be C-hard if for all problems
P \in C we have P \le_E P'. A C-hard problem P' is said to be C-complete if, in addition, P' \in C.
Definition 5 (Closure) The closure of C, denoted by \overline{C}, is the set of all NPO problems P such that P \le_E P'
for some P' \in C.
Remark 4 The closure operation maintains the set of complete problems for a class.
Definition 6 (Polynomially Bounded Subset) The polynomially bounded subset of C, denoted C-PB, is
the set of all problems P \in C for which there exists a polynomial p such that OPT(I) \le p(|I|) for all instances I of P.
2.2 Computational and Syntactic Classes
We first define the basic computational class APX.
Definition 7 (APX) An NPO problem P is in the class APX if there exists a polynomial time computable
function A mapping instances of P to solutions, and a constant c \ge 1, such that for all instances I of P,
max{ OPT(I)/V(I, A(I)), V(I, A(I))/OPT(I) } \le c.
The class APX-PB consists of all polynomially bounded NPO problems which can be approximated
within constant factors in polynomial time.
If we let F-APX denote the class of NPO problems that are approximable to within a factor F , then
we obtain a hierarchy of approximation classes. For instance, poly-APX and log-APX are the classes of
NPO problems which have polynomial time algorithms with performance ratio bounded polynomially and
logarithmically, respectively, in the input length. A more precise form of these definitions is provided in
Section 4.
Let us briefly review the definition of some syntactic classes.
Definition 8 (MAX SNP and MAX NP [22]) MAX SNP is the class of NPO problems expressible as finding
the structure S which maximizes the objective function
V(I, S) = |{ ~x : F(I, S, ~x) }|,
where I denotes the input (consisting of a finite universe U and a finite set of bounded arity
predicates P), S is a finite structure, and F is a quantifier-free first-order formula. The class MAX NP is
defined analogously, except the objective function is
V(I, S) = |{ ~x : \exists ~y F(I, S, ~x, ~y) }|.
A natural extension is to associate a weight with every tuple in the range of the universal quantifier; the
modified objective is to find an S which maximizes V(I, S) = \sum_{~x} w(~x) F(I, S, ~x), where w(~x) denotes
the weight associated with the tuple ~x.
Example 1 (MAX k-SAT) The MAX k-SAT problem is: given a collection of m clauses on n boolean
variables where each (possibly weighted) clause is a disjunction of precisely k literals, find a truth assignment
satisfying a maximum weight collection of clauses. For any fixed integer k, MAX k-SAT belongs to the
class MAX SNP. The results of Papadimitriou and Yannakakis [22] can be adapted to show that for k - 2,
MAX k-SAT is complete under E-reductions for the class MAX SNP.
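To make the objective function concrete, here is a small illustrative sketch (ours, not part of the original text; the data layout and names are assumptions made for the example) of a weighted MAX k-SAT instance and of the quantity being maximized.

```python
# Hypothetical illustration of the weighted MAX k-SAT objective.
# A literal is an integer: +i means variable i appears positively, -i negated.
def satisfied_weight(clauses, weights, assignment):
    """Total weight of clauses satisfied by `assignment` (a dict: var -> bool)."""
    total = 0.0
    for clause, w in zip(clauses, weights):
        if any(assignment[abs(lit)] == (lit > 0) for lit in clause):
            total += w
    return total

# Example: two weighted 2-SAT clauses (x1 or x2) and (not x1 or x2).
clauses = [(1, 2), (-1, 2)]
weights = [1.0, 2.0]
print(satisfied_weight(clauses, weights, {1: True, 2: False}))  # -> 1.0
```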
Definition 9 (RMAX(k) [21]) RMAX(k) is the class of NPO problems expressible as finding a structure
S which maximizes the objective function
V(I, S) = |{ ~x : S(~x) }| if (I, S) satisfies \forall ~y F(I, S, ~y), and V(I, S) = 0 otherwise,
where S is a single predicate and F(I, S, ~y) is a quantifier-free CNF formula in which S occurs at most k
times in each clause and all its occurrences are negative.
The results of Panconesi and Ranjan [21] can be adapted to show that MAX CLIQUE is complete under
E-reductions for the class RMAX(2).
Definition 10 (MIN F^+\Pi(k) [19]) MIN F^+\Pi(k) is the class of NPO problems expressible as finding a
structure S which minimizes the objective function
V(I, S) = |{ ~x : S(~x) }| over structures S satisfying \forall ~y F(I, S, ~y),
where S is a single predicate, F(I, S, ~y) is a quantifier-free CNF formula in which S occurs at most k times
in each clause and all its occurrences are positive.
The results of Kolaitis and Thakur [19] can be adapted to show that SET COVER is complete under
E-reductions for this class.
3 MAX SNP Closure and APX-PB
In this section, we will establish the following theorem and examine its implications. The proof is based on
the results of Arora et al [3] on efficient proof verifications.
Theorem 1 \overline{MAX SNP} = APX-PB.
Remark 5 The seeming weakness that the closure of MAX SNP only captures polynomially bounded APX problems
can be removed by using looser forms of approximation-preserving reduction in defining the closure. In
particular, Crescenzi and Trevisan [8] define the notion of a PTAS-preserving reduction under which the
closure of APX-PB is all of APX. Using their result in conjunction with the above theorem, it is easily seen that the
closure of MAX SNP under this weaker reduction is exactly APX.
This weaker reduction is necessary to allow for reductions from fine-grained optimization problems to
coarser (polynomially-bounded) optimization problems (cf. [8]).
The following is a surprising consequence of Theorem 1.
Theorem 2 \overline{MAX SNP} = \overline{MAX NP}.
Papadimitriou and Yannakakis [22] (implicitly) introduced both these closure classes but did not conjecture
them to be the same. It would be interesting to see if this equality can be shown independent of the
result of Arora et al [3]. We also obtain the following resolution to the problem posed by Papadimitriou and
Yannakakis [22] of finding complete problems for MAX NP.
Theorem 3 MAX SAT is complete for MAX NP.
The following sub-sections establish that \overline{MAX SNP} \supseteq APX-PB. The idea is to first E-reduce any minimization
problem in APX-PB to a maximization problem therein, and then E-reduce any maximization
problem in APX-PB to a specific complete problem for MAX SNP, viz., MAX 3-SAT.
Since an E-reduction forces the optima of the two problems involved to be related by polynomial
factors, it is easy to see that \overline{MAX SNP} \subseteq APX-PB. Combining, we establish Theorem 1.
3.1 Reducing Minimization to Maximization
Observe that the fact that P belongs to APX implies the existence of an approximation algorithm A and a
constant c such that OPT(I) \le V(I, A(I)) \le c \cdot OPT(I) for every instance I (P is a minimization problem).
Henceforth, we will use a(I) to denote V(I, A(I)). We first reduce any minimization problem P \in APX-PB
to a maximization problem P' \in APX-PB, where the latter is obtained by merely modifying the objective
function for P, as follows: let P' have the objective function V'(I, S) = (c + 1) a(I) - V(I, S) for all
instances I and solutions S for P. It can be verified that the optimum value for any instance I of P' always
lies between a(I) and (c + 1) a(I). Thus, A is a constant-factor approximation algorithm for P'. If S is a \delta-error
solution to the optimum of P', i.e., V'(I, S) \ge OPT'(I)/(1 + \delta),
where OPT'(I) is the optimal value of V' for I, then, using a(I) \le c \cdot OPT(I), we obtain that
V(I, S) \le OPT(I) + \delta ((c + 1) a(I) - OPT(I)) \le (1 + c(c + 1)\delta) OPT(I).
Thus a solution S to P' with error \delta is a solution to P with error at most c(c + 1)\delta, implying an E-reduction
with \beta = c(c + 1).
3.2 NP Languages and MAX 3-SAT
The following theorem, adapted from a result of Arora, Lund, Motwani, Sudan, and Szegedy [3], is critical
to our E-reduction of maximization problems to MAX 3-SAT.
Theorem 4 Given a language L \in NP and an instance x \in \Sigma^n, one can compute in polynomial time an
instance F_x of MAX 3-SAT, with the following properties.
1. The formula F_x has m clauses, where m depends only on n.
2. There exists a constant \epsilon > 0 such that (1 - \epsilon)m clauses of F_x are satisfied by some truth assignment.
3. If x \in L, then F_x is (completely) satisfiable.
4. If x \notin L, then no truth assignment satisfies more than (1 - \epsilon)m clauses of F_x.
5. Given a truth assignment which satisfies more than (1 - \epsilon)m clauses of F_x, a truth assignment which
satisfies F_x completely can be constructed in polynomial time.
Some of the properties above may not be immediately obvious from the construction given by Arora,
Lund, Motwani, Sudan, and Szegedy [3]. It is easy to verify that they provide a reduction with properties
(1), (3) and (4). Property (5) is obtained from the fact that all assignments which satisfy most clauses are
actually close (in terms of Hamming distance) to valid codewords from a linear code, and the uniquely
error-corrected codeword obtained from this "corrupted code-word" will satisfy all the clauses of F x .
Property (2) requires a bit more care and we provide a brief sketch of how it may be ensured. The idea
is to revert back to the PCP model and redefine the proof verification game. Suppose that the original game
had the properties that for x 2 L there exists a proof such that the verifier accepts with probability 1, and
otherwise, for x 62 L, the verifier accepts with probability at most 1=2. We now augment this game by
adding to the proof a 0th bit which the prover uses as follows: if the bit is set to 1, then the prover "chooses"
to play the old game, else he is effectively "giving up" on the game. The verifier in turn first looks at the 0th
bit of the proof. If this is set, then she performs the usual verification, else she tosses an unbiased coin and
accepts if and only if it turns up heads. It is clear that for x 2 L there exists a proof on which the verifier
always accepts. Also, for x 62 L no proof can cause the verifier to accept with probability greater than 1=2.
Finally, by setting the 0th bit to 0, the prover can create a proof which the verifier accepts with probability
exactly 1=2. This proof system can now be transformed into a 3-CNF formula of the desired form.
3.3 Reducing Maximization to MAX 3-SAT
We have already established that, without loss of generality, we only need to worry about maximization
problems in APX-PB. Consider such a problem P, and let A be a polynomial-time algorithm which
delivers a c-approximation for P, where c is some constant. Given any instance I of P, let p = c \cdot V(I, A(I)) be
the bound on the optimum value for I obtained by running A on input I. Note that this may be a stronger
bound than the a priori polynomial bound on the optimum value for any instance of length |I|. An important
consequence is that p \le c \cdot OPT(I).
We generate a sequence of NP decision problems L_1, ..., L_p, where L_i = { I : OPT(I) \ge i }. Given an
instance I, we create p formulas F_i, for 1 \le i \le p, using the reduction from Theorem 4, where the ith formula
is obtained from the NP language L_i.
Consider now the formula F = F_1 \wedge F_2 \wedge \cdots \wedge F_p (over disjoint sets of variables),
which has the following features.
- The number of satisfiable clauses of F is exactly MAX = (1 - \epsilon) m p + \epsilon m \cdot OPT(I),
where \epsilon and m are as guaranteed by Theorem 4.
- Given an assignment which satisfies (1 - \epsilon) m p + \epsilon m j clauses of F, we can construct in polynomial
time a solution to I of value at least j. To see this, observe the following: any assignment which satisfies
so many clauses must satisfy more than (1 - \epsilon) m clauses in at least j of the formulas F_i. Let i be
the largest index for which this happens; clearly, i \ge j. Furthermore, by property (5) of Theorem 4,
we can now construct a truth assignment which satisfies F_i completely. This truth assignment can be
used to obtain a solution S such that V(I, S) \ge i \ge j.
In order to complete the proof it remains to be shown that given any truth assignment with error \delta, i.e.,
one which satisfies MAX/(1 + \delta) clauses of F, we can find a solution S for I with error E(I, S) \le \beta\delta for
some constant \beta. We show that this is possible for a constant \beta depending only on c and \epsilon. The main idea behind finding such a
solution is to use the second property above to find a "good" solution to I using a "good" truth assignment
for F.
Suppose we are given a solution which satisfies MAX =(1 clauses. Since MAX =(1
we can use the second feature from above to
construct a solution S 1 such that
fflm
c
readily seen that
Assuming we obtain that
On the other hand, if then the error in a solution S 2 obtained by running the c-approximation
algorithm for P is given by
Therefore, choosing the solution with the larger value among S_1 and S_2, we immediately obtain a solution
whose error is at most \beta\delta. Thus, this reduction is indeed an E-reduction.
4 Generic Reductions and an Approximation Hierarchy
In this section we describe a generic technique for turning a hardness result into an approximation preserving
reduction.
We start by listing the kind of constraints imposed on the hardness reduction, the approximation class
and the optimization problem. We will observe at the end that these restrictions are obeyed by all known
hardness results and the corresponding approximation classes.
Definition 11 (Additive Problems) An NPO problem P is said to be additive if there exists a polynomial-time
computable operator + which maps a pair of instances I_1 and I_2 to an instance I_1 + I_2 such that
OPT(I_1 + I_2) = OPT(I_1) + OPT(I_2).
Definition 12 (Downward Closed Family) A family of functions is said to be
downward closed if for all g 2 F and for all constants c, g 0 (n) 2 O(g(n c )) implies that g 0 2 F . A function
g is said to be hard for the family F if for all g 0 2 F , there exists a constant c such that g 0 (n) 2 O(g(n c ));
the function g is said to be complete for F if g is hard for F and g 2 F .
Definition 13 (F-APX) For a downward closed family F, the class F-APX consists of all NPO problems
approximable in polynomial time to within a ratio of g(|I|) for some function g \in F.
Definition 14 (Canonical Hardness) An NP maximization problem P is said to be canonically hard for the
class F-APX if there exists a transformation T, constants n_0 and c, and a gap function G which is hard for
the family F, such that given an instance x of 3-SAT on n \ge n_0 variables and N \ge n^c, I = T(x, N) is an
instance of P with the following properties.
- If x \in 3-SAT, then OPT(I) = N.
- If x \notin 3-SAT, then OPT(I) = N/G(N).
- Given a solution S to I with V(I, S) > N/G(N), a truth assignment satisfying x can be found in
polynomial time.
Canonical hardness for NP minimization problems is analogously defined: OPT(I) = N when the
formula is satisfiable and OPT(I) = N G(N) otherwise. Given any solution with value less than N G(N),
one can construct a satisfying assignment in polynomial time.
4.1 The Reduction
Theorem 5 If F is a downward closed family of functions, and an additive NPO problem W is canonically
hard for the class F-APX, then every problem in F-APX E-reduces to W.
Proof: Let P be a problem in F-APX, approximable to within c(:), and let W be a problem shown to
be hard to within a factor G(:) where G is complete for F . We start with the special case where both P
and W are maximization problems. We describe the functions f , g and the constant fi as required for an
E-reduction.
Let I be an instance of P of size n; pick N so that c(n) is O(G(N)). To describe our reduction, we
need to specify the functions f and g. The function f is defined as follows. For each i, let L_i
denote the NP language { I : OPT(I) \ge i }. Now for each i, we create an
instance \phi_i \in W of size N such that if I \in L_i then OPT(\phi_i) is N, and it is N/G(N) otherwise. We define
f(I) to be the instance obtained by combining the \phi_i using the additivity operator for W.
We now construct the function g. Given an instance I of P and a solution s' to f(I), we compute a
solution s to I in the following manner. We first use A to find a solution s_1. We also compute a second
solution s_2 to I as follows. Let j be the largest index such that the solution s' projects down to a solution
of the instance \phi_j of value greater than N/G(N).
By canonical hardness, this in turn implies that we can find a solution s_2 to I witnessing OPT(I) \ge j.
Our solution s is the one among s_1 and s_2 that yields the larger objective function value.
We now show that the reduction holds for
c(n)
Consider the following two cases:
Case 1 [j - m]: In this case, V (I; m. Thus s is an (ff \Gamma 1) approximate solution to I. We now
argue that s 0 is at best a (ff \Gamma 1)=fi approximate solution to OE. We start with the following upper bound on
c(n)
G(N) \GammaG(N)
Thus the approximation factor achieved by s 0 is given by
Nm
So in this case s 1 (and hence s) approximates I to within a factor of fi ffl, if s 0 approximates OE to within a
factor of ffl.
Case 2 [j - m]: Let flm. Note that fl ? 1 and that s is an (ff \Gamma fl)=fl approximate solution to I. We
bound the value of the solution s 0 to OE as
c(n)
and its quality as
Thus in this case also we find that s (by virtue of s 2 ) is a solution of quality fi ffl if s 0 is a solution of quality s.
We now consider the more general cases where P and W are not both maximization problems. For the
case where both are minimization problems, the above transformation works with one minor change. When
creating OE i , the NP language consists of instances (I; i) such that there exists s with
For the case where P is a minimization problem and W is a maximization problem, we first E-reduce P
to a maximization problem P 0 and then proceed as before. The reduction proceeds as follows. The objective
function of P 0 is defined as V 0 (I; To begin with, it is easy to verify that P 2 F-APX
implies
Let s be a fi approximate solution to instance I of P. We will show that s is at best a fi=2 approximate
solution to instance I of P 0 . Assume, without loss of generality, that fi 6= 0. Then
Multiplying by 2m 2 =(OPT (I)V (I; s)), we get
2:
This implies that
Upon rearranging,
Thus the reduction from P to P 0 is an E-reduction.
Finally, the last remaining case, i.e., P being a maximization problem and W being a minimization
problem, is dealt with similarly: we transform P into a minimization problem P 0 .
Remark 6 This theorem appears to merge two different notions of the relative ease of approximation of
optimization problems. One such notion would consider a problem P_1 easier than P_2 if there exists an
approximation preserving reduction from P_1 to P_2. A different notion would regard P_1 to be easier than
P_2 if P_1 seems to have a better factor of approximation than P_2. The above statement essentially
states that these two comparisons are indeed the same. For instance, the MAX CLIQUE problem and the
CHROMATIC NUMBER problem, which are both in poly-APX, are inter-reducible to each other. The
above observation motivates the search for other interesting function classes F, for which the class F-APX
may contain interesting optimization problems.
4.2 Applications
The following is a consequence of Theorem 5.
Theorem 6
a) MAX 3-SAT is complete for APX-PB, and MAX CLIQUE is complete for poly-APX.
b) If SET COVER is canonically hard to approximate to within a factor of \Omega(log n), then SET COVER is
complete for log-APX.
We briefly sketch the proof of this theorem. The hardness reductions for MAX SAT and MAX CLIQUE
are canonical [3, 11]. The classes APX-PB, poly-APX, and log-APX are expressible as classes F-APX for
downward closed function families. The problems MAX SAT, MAX CLIQUE and SET COVER are
additive. Thus, we can now apply Theorem 5.
Remark 7 We would like to point out that almost all known instances of hardness results seem to be shown
for problems which are additive. In particular, this is true for all MAX SNP problems, MAX CLIQUE,
CHROMATIC NUMBER, and SET COVER. One case where a hardness result does not seem to directly
apply to an additive problem is that of LONGEST PATH [17]. However in this case, the closely related
LONGEST S-T PATH problem is easily seen to be additive and the hardness result essentially stems from
this problem. Lastly, the most interesting optimization problems which do not seem to be additive are
problems related to GRAPH BISECTION or PARTITION, and these also happen to be notable instances
where no hardness of approximation results have been achieved!
5 Local Search and MAX SNP
In this section we present a formal definition of the paradigm of non-oblivious local search, and describe
how it applies to a generic MAX SNP problem. Given a MAX SNP problem P, recall that the goal is to
find a structure S which maximizes the objective function V(I, S) = \sum_{~x} F(I, S, ~x). In the subsequent
discussion, we view S as a k-dimensional boolean vector.
5.1 Classical Local Search
We start by reviewing the standard mechanism for constructing a local search algorithm. A \delta-local algorithm
A for P is based on a distance function D(S_1, S_2), which is the Hamming distance between two k-dimensional
vectors. The \delta-neighborhood of a structure S is given by N(S, \delta) = { S' : D(S, S') \le \delta }, where
the structures range over the universe U. A structure S is called \delta-optimal if V(I, S) \ge V(I, S') for all S' \in N(S, \delta). A \delta-local
algorithm computes a \delta-optimum by performing a series of greedy improvements to an initial structure S_0,
where each iteration moves from the current structure S_i to some S_{i+1} \in N(S_i, \delta) with a larger objective value.
For constant ffi , a ffi-local search algorithm for a polynomially-bounded NPO problem runs in polynomial
time because:
ffl each local change is polynomially computable, and
ffl the number of iterations is polynomially bounded since the value of the objective function improves
monotonically by an integral amount with each iteration, and the optimum is polynomially-bounded.
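The scheme just described can be summarized in a few lines of code. The sketch below (ours, for illustration only; names are made up) implements the oblivious case with \delta = 1: structures are boolean vectors, the neighborhood consists of single bit flips, and the search stops at a 1-optimal solution.

```python
# A minimal sketch of oblivious 1-local search over boolean vectors.
# `objective` plays the role of the problem's own objective V(I, S).
def oblivious_local_search(objective, s):
    improved = True
    while improved:
        improved = False
        best = objective(s)
        for i in range(len(s)):          # 1-neighborhood: flip one coordinate
            s[i] = not s[i]
            if objective(s) > best:      # greedy improvement step
                improved = True
                break                    # accept the move and rescan
            s[i] = not s[i]              # undo the flip
    return s                             # s is now 1-optimal w.r.t. `objective`
```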
5.2 Non-Oblivious Local Search
A non-oblivious local search algorithm is based on a 3-tuple (S_0, F, D), where S_0 is the initial solution
structure, which must be independent of the input, F(I, S) is a real-valued function referred to as the weight
function, and D is a real-valued distance function which returns the distance between two structures in some
appropriately chosen metric. The distance function D is computable in time polynomial in |U|. Thus, as
before, for constant \delta, a non-oblivious \delta-local algorithm terminates in time polynomial in the input size.
The classical local search paradigm, which we call oblivious local search, makes the natural choice for
the function F(I; S), and the distance function D, i.e., it chooses them to be V (I; S) and the Hamming
distance. However, as we show later, this choice does not always yield a good approximation ratio. We
now formalize our notion of this more general type of local search.
Definition 15 (Non-Oblivious Local Search Algorithm) A non-oblivious local search algorithm is a
local search algorithm whose weight function is defined to be
F(I, S) = \sum_{~x} \sum_{i=1}^{r} p_i F_i(I, S, ~x),
where r is a constant, the F_i's are quantifier-free first-order formulas, and the profits p_i are real constants. The
distance function D is an arbitrary polynomial-time computable function.
A non-oblivious local search can be implemented in polynomial time in much the same way as oblivious
local search. Note that the we are only considering polynomially-bounded weight functions and the profits
are fixed independent of the input size. In general, the non-oblivious weight functions do not direct the
search in the direction of the actual objective function. In fact, as we will see, this is exactly the reason why
they are more powerful and allow for better approximations.
Definition 16 (Non-Oblivious GLO) The class of problems NON-OBLIVIOUS GLO consists of all problems
which can be approximated within constant factors by a non-oblivious ffi -local search algorithm for some
constant ffi .
Remark 8 We make some observations about the above definition. It would be perfectly reasonable to
allow weight functions which are non-linear, but we stay with the above definition for the purposes of this
paper. Allowing only a constant number of predicates in the weight functions enables us to prevent the
encoding of arbitrarily complicated approximation algorithms. The structure S is a k-dimensional vector,
and so a convenient metric for the distance function D is the Hamming distance. This should be assumed
to be the underlying metric unless otherwise specified. However, we have found that it is sometimes useful
to modify this, for example by modifying the Hamming distance so that the complement of a vector is
considered to be at distance 1 from it. Finally, it is sometimes convenient to assume that the local search
makes the best possible move in the bounded neighborhood, rather than an arbitrary move which improves
the weight function. We believe that this does not increase the power of non-oblivious local search.
6 The Power of Non-Oblivious Local Search
We will show that there exists a choice of a non-oblivious weight function for MAX k-SAT such that any
assignment which is 1-optimal with respect to this weight function yields a performance ratio of 2^k / (2^k - 1)
with respect to the optimal. But first, we obtain tight bounds on the performance of oblivious local search
for MAX 2-SAT, establishing that its performance is significantly weaker than the best-known result even
when allowed to search exponentially large neighborhoods. We use the following notation: for any fixed
truth assignment \vec{Z}, S_i is the set of clauses in which exactly i literals are true; and, for a set of clauses S,
W(S) denotes the total weight of the clauses in S.
6.1 Oblivious Local Search for MAX 2-SAT
We show a strong separation in the performance of oblivious and non-oblivious local search for MAX 2-SAT.
Suppose we use a \delta-local strategy with the weight function F being the total weight of the clauses satisfied
by the assignment, i.e., F(I, \vec{Z}) = W(S_1) + W(S_2). The following theorem shows that for any \delta = o(n), an
oblivious \delta-local strategy cannot deliver a performance ratio better than 3/2. This is rather surprising given
that we are willing to allow near-exponential time for the oblivious algorithm.
Theorem 7 The asymptotic performance ratio of an oblivious \delta-local search algorithm for MAX 2-SAT
is 3/2 for any positive \delta = o(n). This ratio is still at least 5/4 when \delta may take any value less than
n/2.
Proof: We show the existence of an input instance for MAX 2-SAT which may elicit a relatively poor
performance ratio for any ffi -local algorithm provided In our construction of such an input instance,
we assume that n - 2ffi + 1. The input instance comprises of a disjoint union of four sets of clauses, say
defined as below:
1-i!j-n
1-i!j-n
2ffi+2-i-n
i!j-n
Without loss of generality, assume
that the current input assignment is \vec{Z} = (1, 1, ..., 1). This satisfies all clauses in G_1 and G_2, but none of
the clauses in G_3 and G_4 are satisfied. If we flip the assignment of values to any k \le \delta variables, it would
unsatisfy precisely k(n - k) clauses in G_1; this is the number of clauses in G_1 in which a flipped
variable occurs with an unflipped variable.
On the other hand, flipping the assigned values of any k \le \delta variables can satisfy at most k(n - k)
clauses in G_3, as we next show.
Let P(n, \delta) denote the set of clauses on n variables defined by the construction above.
We claim the following.
Lemma 1 Any assignment of values to the n variables such that at most k \le \delta variables have been assigned
the value false can satisfy at most k(n - k) clauses in P(n, \delta).
Proof: We prove by simultaneous induction on n and ffi that the statement is true for any instance
P(n; ffi) where n and ffi are non-negative integers such that n. The base case includes
and is trivially verified to be true for the only allowable value of ffi , namely We now assume
that the statement is true for any instance P(n Consider now the
instance P(n; ffi ). The statement is trivially true for
g be any choice of k - ffi variables such that q. Again the assertion is
trivially true if We assume that k - 2 from now on. If we delete all clauses containing the
variables z 1 and z 2 from P(n; ffi ), we get the instance P(n \Gamma 2; 1). We now consider three cases.
Case 1 In this case, we are reduced to the problem of finding an upper bound on the maximum
number of clauses satisfied by setting any k variables to false in P(n \Gamma 2; use
the inductive hypothesis to conclude that no more than (n clauses will be satisfied. Thus the
assertion holds in this case. However, we may not directly use the inductive hypothesis if But in this
case we observe that since by the inductive hypothesis, setting any k \Gamma 1 variables in P(n \Gamma 2;
false, satisfies at most (n clauses, assigning the value false to any set of k variables,
can satisfy at most
clauses. Hence the assertion holds in this case also.
Case In this case, z j 1
satisfies one clause and the remaining variables satisfy at most
clauses by the inductive hypothesis on Adding up the two terms,
we see that the assertion holds.
Case 3 We analyze this case based on whether
precisely clauses and the remaining variables, satisfy at most (n
clauses using the inductive hypothesis. Thus the assertion still holds. Otherwise, z 1 satisfies precisely
clauses and the remaining no more than (n clauses using the
inductive hypothesis. Summing up the two terms, we get (n \Gamma k)k as the upper bound on the total number
of clauses satisfied. Thus the assertion holds in this case also.
To see that this bound is tight, simply consider the situation when the k variables set to false are
ffi. The total number of clauses satisfied is given by
Assuming that each clause has the same weight, Lemma 1 allows us to conclude that a \delta-local algorithm
cannot increase the total weight of satisfied clauses from this starting assignment. An optimal assignment, on
the other hand, can satisfy all the clauses by choosing the vector \vec{Z} = (0, 0, ..., 0). Thus the performance
ratio of a \delta-local algorithm, say R_\delta, is bounded from below by a quantity which,
for any \delta = o(n), asymptotically converges to 3/2. We next show that this bound is tight,
since a 1-local algorithm achieves it. However, before we do so, we make another intriguing observation,
namely, that for any \delta < n/2, the ratio R_\delta is still at least 5/4.
To see that a 1-local algorithm ensures a performance ratio of 3/2, consider any 1-optimal assignment
\vec{Z}, and let \alpha_i denote the set of clauses containing the variable z_i such that no literal in any clause of \alpha_i is
satisfied by \vec{Z}. Similarly, let \beta_i denote the set of clauses containing the variable z_i such that precisely one
literal is satisfied in any clause in \beta_i and, furthermore, it is precisely the literal containing the variable z_i. If
we complement the value assigned to the variable z_i, it is exactly the set of clauses in \alpha_i which becomes
satisfied and the set of clauses in \beta_i which is no longer satisfied. Since \vec{Z} is 1-optimal, it must be the
case that W(\alpha_i) \le W(\beta_i). Summing this inequality over all the variables gives 2 W(S_0) \le W(S_1),
because each clause in S_0 gets counted twice while each clause in S_1 gets counted exactly once. Thus the
weight of the clauses not satisfied by a 1-optimal assignment is bounded as
W(S_0) \le (1/3) (W(S_0) + W(S_1) + W(S_2)).
Hence the performance ratio achieved by a 1-local algorithm is bounded from above by 3/2. Combining
this with the bound derived earlier, we conclude that R_\delta = 3/2 for \delta = o(n). We may summarize our results as
follows.
Lemma 2 The performance ratio R_\delta of any \delta-local algorithm for MAX 2-SAT using the oblivious weight function
is 3/2 for any positive integer \delta = o(n). Furthermore, this ratio is still at least
5/4 when \delta may take any value less than n/2.
6.2 Non-Oblivious Local Search for MAX 2-SAT
We now illustrate the power of non-oblivious local search by showing that it achieves a performance ratio
of 4=3 for MAX 2-SAT, using 1-local search with a simple non-oblivious weight function.
Theorem 8 Non-oblivious 1-local search achieves a performance ratio of 4=3 for MAX 2-SAT.
Proof: We use the non-oblivious weight function F(I, \vec{Z}) = (3/2) W(S_1) + 2 W(S_2).
Consider any assignment \vec{Z} which is 1-optimal with respect to this weight function. Without loss of
generality, we assume that the variables have been renamed such that each unnegated literal gets assigned
the value true. Let P_{i,j} and N_{i,j} respectively denote the total weight of clauses in S_i containing the literals
z_j and \bar{z}_j. Since \vec{Z} is a 1-optimal assignment, each variable z_j must satisfy the following
inequality (the change in F caused by flipping z_j must be nonpositive):
-(1/2) P_{2,j} - (3/2) P_{1,j} + (1/2) N_{1,j} + (3/2) N_{0,j} \le 0.
Summing this inequality over all the variables, and using
\sum_j P_{2,j} = 2 W(S_2), \sum_j P_{1,j} = \sum_j N_{1,j} = W(S_1), and \sum_j N_{0,j} = 2 W(S_0),
we obtain the inequality 3 W(S_0) \le W(S_1) + W(S_2).
This immediately implies that the total weight of the unsatisfied clauses at this local optimum is no more
than 1/4 times the total weight of all the clauses. Thus, this algorithm ensures a performance ratio of 4/3.
Remark 9 The same result can be achieved by using the oblivious weight function, and instead modifying
the distance function so that it corresponds to distances in a hypercube augmented by edges between nodes
whose addresses are complement of each other.
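As an illustration of the proof, the following sketch (ours; it assumes the weight function reconstructed above, F = (3/2)W(S_1) + 2W(S_2), and an ad hoc clause encoding) runs 1-local search guided by the non-oblivious weight, while the final quality is still measured by the ordinary satisfied weight.

```python
# Sketch of non-oblivious 1-local search for weighted MAX 2-SAT.
# A clause is a pair of literals (+i / -i); `weights` are the clause weights.
def true_literals(clause, a):
    return sum(1 for lit in clause if a[abs(lit)] == (lit > 0))

def nonoblivious_weight(clauses, weights, a):
    # F = 1.5 * W(S1) + 2.0 * W(S2), where S_i = clauses with i true literals.
    f = 0.0
    for c, w in zip(clauses, weights):
        t = true_literals(c, a)
        f += 1.5 * w if t == 1 else (2.0 * w if t == 2 else 0.0)
    return f

def max2sat_local_search(clauses, weights, nvars):
    a = {v: True for v in range(1, nvars + 1)}     # arbitrary starting assignment
    improved = True
    while improved:
        improved = False
        base = nonoblivious_weight(clauses, weights, a)
        for v in range(1, nvars + 1):
            a[v] = not a[v]                        # try flipping one variable
            if nonoblivious_weight(clauses, weights, a) > base:
                improved = True
                break
            a[v] = not a[v]                        # undo the flip
    return a   # 1-optimal for the non-oblivious weight
```

By Theorem 8, the assignment returned by such a search leaves at most one quarter of the total clause weight unsatisfied.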
6.3 Generalization to MAX k-SAT
We can also design a non-oblivious weight function for MAX k-SAT such that a 1-local strategy ensures a
performance ratio of 2^k / (2^k - 1). The weight function F will be of the form F = \sum_{i=1}^{k} c_i W(S_i), where
the coefficients c_i will be specified later.
Theorem 9 Non-oblivious 1-local search achieves a performance ratio of 2^k / (2^k - 1) for MAX k-SAT.
Proof: Again, without loss of generality, we will assume that the variables have been renamed so that
each unnegated literal is assigned true under the current truth assignment. Thus the set S i is the set of
clauses with i unnegated literals.
Let \Delta_j F denote the change in the current weight when we flip the value of z_j, that
is, set it to false. It is easy to express \Delta_j F in terms of the quantities P_{i,j}, N_{i,j} and the
differences D_i = c_{i+1} - c_i; call the resulting expression equation (1).
Thus, when the algorithm terminates, we know that \Delta_j F \le 0 for every j. Summing over all values of
j, and using the facts \sum_j P_{i,j} = i \cdot W(S_i) and \sum_j N_{i,j} = (k - i) \cdot W(S_i), we
get the following inequality (2).
We now determine the values of the D_i's such that the coefficient of each term on the left hand side is unity.
It can be verified that a suitable choice of the D_i's
achieves this goal. The resulting coefficient of W(S_0) on the right hand side of equation (2) then implies that
the weight of the clauses not satisfied is bounded by 1/2^k times the total weight of all the clauses. It is
worthwhile to note that this holds regardless of the value chosen for the coefficient c_0.
7 Local Search for CSP and MAX SNP
We now introduce a class of constraint satisfaction problems such that the problems in MAX SNP are exactly
equivalent to the problems in this class. Furthermore, every problem in this class can be approximated to
within a constant factor by a non-oblivious local search algorithm.
7.1 Constraint Satisfaction Problems
The connection between the syntactic description of optimization problems and their approximability through
non-oblivious local search is made via a problem called MAX k-CSP which captures all the problems in
MAX SNP as a special case.
Definition 17 (k-ary Constraint) Let Z = {z_1, ..., z_n} be a set of boolean variables. A k-ary constraint
on Z is a pair (Z', f), where Z' is a size-k subset of Z and f is a k-ary boolean
predicate.
Definition 18 (MAX k-CSP) Given a collection C of weighted k-ary constraints over the variables
Z = {z_1, ..., z_n}, the MAX k-CSP problem is to find a truth assignment satisfying a maximum weight
sub-collection of the constraints.
The following theorem shows that MAX k-CSP problem is a "universal" MAX SNP problem, in that it
contains as special cases all problems in MAX SNP.
Theorem 10
a) For fixed k, MAX k-CSP \in MAX SNP.
b) Every problem in MAX SNP can be expressed as a MAX k-CSP problem for some fixed k. Moreover, the k-CSP instance
corresponding to any instance of such a problem can be computed in polynomial time.
7.2 Non-Oblivious Local Search for MAX k-CSP
A suitable generalization of the non-oblivious local search algorithm for MAX k-SAT yields the following
result.
Theorem 11 A non-oblivious 1-local search algorithm has performance ratio 2 k for MAX k-CSP.
Proof: We use an approach similar to the one used in the previous section to design a non-oblivious
weight function F for the weighted version of the MAX k-CSP problem such that a 1-local algorithm yields
performance ratio 2^k for this problem.
We consider only the constraints with at least one satisfying assignment. Each such constraint can be
replaced by a monomial which is the conjunction of some k literals such that when the monomial evaluates to
true the corresponding literal assignment represents a satisfying assignment for the constraint. Furthermore,
each such monomial has precisely one satisfying assignment. We assign to each monomial the weight of
the constraint it represents. Thus any assignment of variables which satisfies monomials of total weight W 0 ,
also satisfies constraints in the original problem of total weight W 0 .
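A possible rendering of this preprocessing step (ours; the representation of constraints as (variables, predicate, weight) triples is an assumption made for the example) picks, for each satisfiable constraint, one satisfying assignment and records it as a weighted monomial.

```python
from itertools import product

# Sketch: replace each satisfiable k-ary constraint by one weighted monomial.
# A constraint is (vars_, pred, weight): vars_ is a tuple of variable names and
# pred a boolean function of k boolean arguments.
def constraints_to_monomials(constraints):
    monomials = []                      # each monomial: (dict var -> required value, weight)
    for vars_, pred, weight in constraints:
        k = len(vars_)
        for values in product([False, True], repeat=k):
            if pred(*values):           # first satisfying assignment found
                monomials.append((dict(zip(vars_, values)), weight))
                break                   # unsatisfiable constraints are simply dropped
    return monomials

def monomial_satisfied(monomial, assignment):
    required, _ = monomial
    return all(assignment[v] == val for v, val in required.items())
```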
Let S_i denote the monomials with i true literals, and assume that the weight function F is of the form
F = \sum_{i=1}^{k} c_i W(S_i). As before, assuming that the variables have been renamed so that the current assignment gives
value true to each variable, we know that for any variable z_j, the change \Delta_j F
is given by equation (1). As before, using
the fact that for any 1-optimal assignment \Delta_j F \le 0, and
summing over all values of j, we
can write the following inequality.
We now determine the values of D i 's such that the coefficient of each term on the left hand side is unity. It
can be verified that a suitable choice of the D_i's
achieves this goal. The resulting coefficient of W(S_k) on the right hand side of equation (1) then implies that
the total weight of monomials satisfied is at least 1/2^k times the total weight of all the constraints with at least one
satisfying assignment.
We conclude the following theorem.
Theorem 12 Every optimization problem P 2 MAX SNP can be approximated to within some constant
factor by a (uniform) non-oblivious 1-local search algorithm, i.e., MAX SNP \subseteq NON-OBLIVIOUS GLO.
For a problem expressible as a k-CSP, the performance ratio is at most 2^k.
8 Non-Oblivious versus Oblivious GLO
In this section, we show that there exist problems for which no constant factor approximation can be obtained
by any ffi -local search algorithm with oblivious weight function, even when we allow ffi to grow with the
input size. However, a simple 1-local search algorithm using an appropriate non-oblivious weight function
can ensure a constant performance ratio.
8.1 MAX 2-CSP
The first problem is an instance of MAX 2-CSP where we are given a collection of monomials such that
each monomial is an "and" of precisely two literals. The objective is to find an assignment to maximize the
number of monomials satisfied.
We show that for every \delta = o(n) there exists an instance of this problem one of whose \delta-local
optima has value that is a vanishingly small fraction of the global optimum.
The input instance consists of a disjoint union of two sets of monomials, say G 1 and G 2 , defined as
below:
1-i!j-n
i!j-n
Consider the truth assignment \vec{Z} = (1, 1, ..., 1). It satisfies
all monomials in G_2 but none of the monomials in G_1. We claim that this assignment is \delta-optimal with
respect to the oblivious weight function. To see this, observe that complementing the value of any p \le \delta
variables will unsatisfy at least \delta p / 2 monomials in G_2. On the other hand, this will satisfy
precisely p(p - 1)/2 monomials in G_1. For any p \le \delta we have \delta p / 2 \ge p(p - 1)/2,
so \vec{Z} is a \delta-local optimum.
The optimal assignment, on the other hand, namely \vec{Z} = (0, 0, ..., 0), satisfies
all monomials in G_1.
Thus, for \delta < n/2, the performance ratio achieved by a \delta-local algorithm on such instances can be as bad as |G_1| / |G_2|,
which asymptotically diverges to infinity for any \delta = o(n). We have already seen in Section 7 that a 1-
local non-oblivious algorithm ensures a performance ratio of 4 for this problem. Since this problem is in
MAX SNP, we obtain the following theorem.
Theorem 13 There exist problems in MAX SNP such that no oblivious \delta-local search algorithm with constant \delta can
approximate them to within a constant performance ratio, i.e.,
MAX SNP \not\subseteq GLO.
8.2 Vertex Cover
Ausiello and Protasi [4] have shown that VERTEX COVER does not belong to the class GLO and, hence,
there does not exist any constant ffi such that an oblivious ffi-local search algorithm can compute a constant
factor approximation. In fact, their example can be used to show that for any \delta = o(n) the performance
ratio ensured by \delta-local search asymptotically diverges to infinity. However, we show that there exists a
rather simple non-oblivious weight function which ensures a factor 2 approximation via a 1-local search. In
fact, the algorithm simply enforces the behavior of the standard approximation algorithm which iteratively
builds a vertex cover by simply including both end-points of any currently uncovered edge.
We assume that the input graph G is given as a structure (V; fEg) where V is the set of vertices and
encodes the edges of the graph. Our solution is represented by a 2-ary predicate M which
is iteratively constructed so as to represent a maximal matching. Clearly, the end-points of any maximal
matching constitute a valid vertex cover and such a vertex cover can be at most twice as large as any other
vertex cover in the graph. Thus M is an encoding of the vertex cover computed by the algorithm.
The algorithm starts with M initialized to the empty relation and at each iteration, at most one new pair
is included in it. The non-oblivious weight function used is as below:
where
Let M encode a valid matching in the graph G. 2 We make the following observations.
- Any relation M' obtained from M by either deleting an edge from it, or including an edge which
is incident on an edge of M, has the property that F(I, M') \le F(I, M). Thus in a 1-local search
from M, we will never move to a relation M' which does not encode a valid matching of G.
- On the other hand, if a relation M' corresponds to the encoding of a matching in G which is larger
than the matching encoded by M, then F(I, M') > F(I, M). Thus, if M does not encode a maximal
matching in G, there always exists a relation in its 1-neighborhood of larger weight than itself.
These two observations, combined with the fact that we start with a valid initial matching (the empty
matching), immediately allow us to conclude that any 1-optimal relation M always encodes a maximal
matching in G. We have established the following.
Theorem 14 A 1-local search algorithm using the above non-oblivious weight function achieves a performance
ratio of 2 for the VERTEX COVER problem.
Theorem 15 GLO is a strict subset of NON-OBLIVIOUS GLO.
2 It is implicit in our formulation that M will correspond to a lower triangular matrix representation of the matching edges.
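The behavior enforced by the non-oblivious weight function is exactly the classical maximal-matching heuristic; a direct rendering (ours, without the relational encoding used above) is the following.

```python
# Sketch: 2-approximate vertex cover from a greedily grown maximal matching.
def vertex_cover_via_matching(edges):
    matched = set()      # vertices already covered by the matching
    cover = set()
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.update((u, v))     # add edge (u, v) to the matching
            cover.update((u, v))       # both endpoints go into the cover
    return cover                       # at most twice the size of any vertex cover
```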
9 The Traveling Salesman Problem
The TSP(1,2) problem is the traveling salesman problem restricted to complete graphs where all edge
weights are either 1 or 2; clearly, this satisfies the triangle inequality. Papadimitriou and Yannakakis [23]
showed that this problem is hard for MAX SNP. The natural weight function for TSP(1,2), that is, the weight
of the tour, can be used to show that a 4-local algorithm yields a 3=2 performance ratio. The algorithm starts
with an arbitrary tour and in each iteration, it checks if there exist two disjoint edges (a; b) and (c; d) on the
tour such that deleting them and replacing them with the edges (a; c) and (b; d) yields a tour of lesser cost.
Theorem 4-local search algorithm using the oblivious weight function achieves a 3=2 performance
ratio for TSP(1,2).
Proof: Let C be a 4-optimal solution, and let \pi be a permutation such that the vertices in C occur in
the order v_{\pi(1)}, v_{\pi(2)}, ..., v_{\pi(n)}. Consider any optimal solution O. With each unit cost edge e in O, we associate
a unit cost edge e' in C as follows. Let e = (v_{\pi(i)}, v_{\pi(j)}) and
consider the edges e_1 = (v_{\pi(i)}, v_{\pi(i+1)}) and e_2 = (v_{\pi(j)}, v_{\pi(j+1)}) on C. We claim either e_1 or e_2 must be of unit
cost. Suppose not; then the tour C' which is obtained by simply deleting both e_1 and e_2 and inserting the
edges (v_{\pi(i)}, v_{\pi(j)}) and (v_{\pi(i+1)}, v_{\pi(j+1)})
has cost at least one less than C. But C is 4-optimal, and thus this is a
contradiction.
Let U_O denote the set of unit cost edges in O and let U_C be the set of unit cost edges in C which form
the image of U_O under the above mapping. Since an edge e' = (v_{\pi(i)}, v_{\pi(i+1)})
in U_C can only be the image of
unit cost edges incident on v_{\pi(i)}
in O, and since O is a tour, there are at most two edges in U_O which map to
e'. Thus |U_C| \ge |U_O| / 2, and hence
cost(C) = 2n - |U_C| \le 2n - |U_O|/2 \le (3/2)(2n - |U_O|) = (3/2) cost(O).
The above bound can be shown to be tight.
Theorem 17 There exists a TSP(1,2) instance such that the optimal solution has cost n + O(1) and there
exists a 4-optimal solution for it with cost 3n/2 + O(1).
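For concreteness, a plain 2-opt edge-exchange local search of the kind analyzed in Theorem 16 can be sketched as follows (our code, not from the paper; `dist` is assumed to be a symmetric matrix whose entries are 1 or 2).

```python
# Sketch: 2-opt local search for TSP(1,2). `tour` is a list of vertices.
def tour_cost(tour, dist):
    n = len(tour)
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

def two_opt(tour, dist):
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue           # these two tour edges share a vertex
                a, b = tour[i], tour[i + 1]
                c, d = tour[j], tour[(j + 1) % n]
                # replace edges (a,b) and (c,d) by (a,c) and (b,d)
                if dist[a][c] + dist[b][d] < dist[a][b] + dist[c][d]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour
```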
10 Maximum Independent Sets in Bounded Degree Graphs
The input instance to the maximum independent set problem in bounded degree graphs, denoted MIS-B, is
a graph G such that the degree of any vertex in G is bounded by a constant D. We present an algorithm with
performance ratio (\sqrt{8D^2 + 4D + 1} - 2D + 1)/2, which is essentially D/2.414, for this problem when D \ge 10.
Our algorithm uses two local search algorithms such that the larger of the two independent sets computed
by these algorithms, gives us the above claimed performance ratio. We refer to these two algorithms as A 1
and A 2 .
In our framework, the algorithm A_1 can be characterized as a 3-local algorithm with the weight function
simply being |I|. Thus if we start with I initialized to the empty set, it is easy to see that at
each iteration, I will correspond to an independent set in G. A convenient way of looking at this algorithm
is as follows. We define an (i, j)-swap to be the process of deleting i vertices from S and including j
vertices from the set V - S in the set S. In each iteration, the algorithm A_1 performs either a (0, 1)-swap
or a (1, 2)-swap. A (0, j)-swap,
however, can be interpreted as j applications of (0, 1)-
swaps. Thus the algorithm may be viewed as executing a (0, 1)-swap or a (1, 2)-swap at each iteration.
The algorithm terminates when neither of these two operations is applicable.
Let I denote the 3-optimal independent set produced by the algorithm A_1. Furthermore, let O be any
optimal independent set and let X = I \cap O. We make the following useful observations.
- Since no (0, 1)-swap can be performed for any vertex outside I, each vertex in V - I must
have at least one incoming edge to I.
- Similarly, since no (1, 2)-swaps can be performed, at most |I - X| vertices in O - I
can have precisely one edge coming into I. Thus at least |O - X| - |I - X| vertices in O - X
must have at least two edges entering the set I.
A rather straightforward consequence of these two observations is the following lemma.
Lemma 3 The algorithm A 1 has performance ratio (D + 1)=2 for MIS-B.
Proof: The above two observations imply that the minimum number of edges entering I from the
vertices in O - X is 2(|O - X| - |I - X|) + |I - X| = 2|O - X| - |I - X|. On the other hand, the maximum number of edges coming out
of the vertices in I to the vertices in O - X is bounded by |I - X| D. Thus we must have
2|O - X| - |I - X| \le |I - X| D. Rearranging, we get 2|O| \le (D + 1)|I|,
which yields the desired result.
This nearly matches the approximation ratio of D/2 due to Hochbaum [15]. It should be noted that the
above result holds for a broader class of graphs, viz., k-claw free graphs. A graph is called k-claw free if
there does not exist an independent set of size k or larger such that all the vertices in the independent set are
adjacent to the same vertex. Lemma 3 applies to (D + 1)-claw free graphs.
Our next objective is to further improve this ratio by using the algorithm A 1 in combination with the
algorithm A 2 . The following lemma uses a slightly different counting argument to give an alternative bound
on the approximation ratio of the algorithm A 1 when there is a constraint on the size of the optimal solution.
Lemma 4 For any real number c < D, the algorithm A_1 has performance ratio (D - c)/2 for MIS-B when
the optimal value itself is no more than ((D - c)|V|) / (D + c + 4).
Proof: As noted earlier, each vertex in V - I must have at least one edge coming into the set I, and at
least |O| - |I| vertices in O must have at least two edges coming into I. Therefore, the following inequality
must be satisfied:
(|V| - |I|) + (|O| - |I|) \le D |I|.
Thus |V| + |O| \le (D + 2)|I|. Finally, observe that when |O| \le ((D - c)|V|)/(D + c + 4), this gives |O| / |I| \le (D - c)/2.
The above lemma shows that the algorithm A 1 yields a better approximation ratio when the size of the
optimal independent set is relatively small.
The algorithm A_2 is simply the classical greedy algorithm. This algorithm can be conveniently included
in our framework if we use directed local search. If we let N(I) denote the set of neighbors of the vertices
in I, then the weight function is simply |I|(D + 1) - |N(I)|. It is not
difficult to see that, starting with an empty independent set, a 1-local algorithm with directed search on the above
weight function simply simulates a greedy algorithm. The greedy algorithm exploits the situation when
the optimal independent set is relatively large in size. It does so by using the fact that the existence of a
large independent set in G ensures a large subset of vertices in G with relatively small average degree. The
following two lemmas characterize the performance of the greedy algorithm.
Lemma 5 Suppose there exists an independent set X \subseteq V such that the average degree of the vertices in X
is bounded by \alpha. Then for any \alpha \ge 1, the greedy algorithm produces an independent set of size at least |X|/(1 + \alpha).
Proof: The greedy algorithm iteratively chooses a vertex of smallest degree in the remaining graph
and then deletes this vertex and all its neighbors from the graph. We examine the behavior of the greedy
by considering two types of iterations. First consider the iterations in which it picks a vertex outside X.
Suppose in the ith such iteration it picks a vertex having exactly k_i neighbors in the set X in the
remaining graph. Since each one of these k_i vertices must also have at least k_i edges incident on it,
we lose at least k_i^2 edges incident on X. Suppose only p such iterations occur, and let s = \sum_{i=1}^{p} k_i;
observe that \sum_{i=1}^{p} k_i^2 \le \alpha |X|.
Secondly, we consider the iterations in which the greedy algorithm selects a vertex in X.
Then we do not lose any other vertices in X, because X is an independent set. Thus the total size of the
independent set constructed by the greedy algorithm is at least p + |X| - s.
By the Cauchy-Schwarz inequality, \sum_{i=1}^{p} k_i^2 \ge s^2 / p, so that p \ge s^2 / (\alpha |X|).
Therefore the greedy solution has size at least s^2 / (\alpha |X|) + |X| - s.
Minimizing this expression over 0 \le s \le |X| shows that it is always at least |X|/(1 + \alpha),
and the result follows.
Lemma 6 For any non-negative real number c \le 3D - 1 - \sqrt{8D^2 + 4D + 1}, the algorithm A_2
has performance ratio (D - c)/2 for MIS-B when the optimal value itself is at least ((D - c)|V|)/(D + c + 4).
Proof: Observe that the average degree of the vertices in O is bounded by \alpha = |V - O| D / |O|, and thus, using
the fact that |O| \ge (D - c)|V| / (D + c + 4), we know that the algorithm A_2 computes an independent set of
size at least |O| / (1 + \alpha). Hence it is sufficient to
determine the range of values c can take such that the following inequality is satisfied: 1 + \alpha \le (D - c)/2.
Substituting the bound on the value of \alpha and rearranging the terms yields a quadratic condition on c.
Since c must be strictly bounded by D, this condition is satisfied for any choice of c in the stated range.
Combining the results of Lemmas 4 and 6 and choosing the largest allowable value for c, we get the
following result.
Theorem 18 The approximation algorithm which simply outputs the larger of the two independent sets
computed by the algorithms A_1 and A_2 has performance ratio (\sqrt{8D^2 + 4D + 1} - 2D + 1)/2 for MIS-B.
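A compact sketch of the two ingredients (ours; the (1,2)-swap part of A_1 is omitted and replaced by simple vertex additions, so this is only a simplified stand-in for the analyzed algorithm):

```python
# Sketch: greedy algorithm A2 and a simple local-improvement pass, combined by
# returning the larger independent set. `adj` maps each vertex to its neighbor set.
def greedy_mis(adj):
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    indep = set()
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))   # minimum-degree vertex
        indep.add(v)
        removed = {v} | remaining[v]
        for u in removed:
            remaining.pop(u, None)
        for nbrs in remaining.values():
            nbrs -= removed
    return indep

def add_free_vertices(adj, indep):
    # (0,1)-swaps only: add any vertex with no neighbor in the current set.
    changed = True
    while changed:
        changed = False
        for v in adj:
            if v not in indep and not (adj[v] & indep):
                indep.add(v)
                changed = True
    return indep

def mis_b(adj):
    a1 = add_free_vertices(adj, set())      # simplified stand-in for algorithm A1
    a2 = greedy_mis(adj)
    return a1 if len(a1) > len(a2) else a2
```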
The performance ratio claimed above is essentially D/2.414. This improves upon the long-standing
approximation ratio of D/2 due to Hochbaum [15] when D \ge 10. However, very recently, there has been
a flurry of new results for this problem. Berman and Furer [6] have given an algorithm with performance
ratio (D + 3)/5 + \epsilon when D is even and (D + 3.25)/5 + \epsilon when D is odd, for any fixed constant \epsilon > 0.
Halldorsson and Radhakrishnan [14] have shown that the algorithm A_1, when run on k-clique free graphs, yields
an independent set of size at least 2n/(D + k). They combine this algorithm with a clique-removal based
scheme to achieve a performance ratio of D/6 (1 + o(1)).
Acknowledgements
Many thanks to Phokion Kolaitis for his helpful comments and suggestions. Thanks also to Giorgio Ausiello
and Pierluigi Crescenzi for guiding us through the intricacies of approximation preserving reductions and
the available literature on it.
References
New Local Search Approximation Techniques for Maximum Generalized Satisfiability Problems.
Approximate Solution of NP Optimization Problems.
Proof Verification and Hardness of Approximation Problems.
Optimization Problems and Local Optima.
Efficient probabilistically checkable proofs.
Approximating Maximum Independent Set in Bounded Degree Graphs.
Introduction to the Theory of Complexity.
Completeness in approximation classes.
Generalized First-Order Spectra and Polynomial-time Recognizable Sets
Computers and Intractability: A Guide to the Theory of NP- Completeness
Improved Approximations of Independent Sets in Bounded-Degree Graphs
Efficient bounds for the stable set
On the Approximability of NP-complete Optimization Problems
On approximating the longest path in a graph.
On Syntactic versus Computational Views of Approx- imability
Approximation Properties of NP Minimization Classes.
On the hardness of approximating minimization problems.
Quantifiers and Approximation.
The traveling salesman problem with distances one and two.
The analysis of local search problems and their heuristics.
Keywords: approximation algorithms; computational classes; computational complexity; local search; complete problems; polynomial reductions
A Spectral Algorithm for Seriation and the Consecutive Ones Problem
In applications ranging from DNA sequencing through archeological dating to sparse matrix reordering, a recurrent problem is the sequencing of elements in such a way that highly correlated pairs of elements are near each other. That is, given a correlation function f reflecting the desire for each pair of elements to be near each other, find all permutations $\pi$ with the property that if $\pi(i)<\pi(j)<\pi(k)$ then $f(i,j) \ge f(i,k)$ and $f(j,k) \ge f(i,k)$. This seriation problem is a generalization of the well-studied consecutive ones problem. We present a spectral algorithm for this problem that has a number of interesting features. Whereas most previous applications of spectral techniques provide only bounds or heuristics, our result is an algorithm that correctly solves a nontrivial combinatorial problem. In addition, spectral methods are being successfully applied as heuristics to a variety of sequencing problems, and our result helps explain and justify these applications.
1. Introduction. Many applied computational problems involve ordering a set
so that closely coupled elements are placed near each other. This is the underlying
problem in such diverse applications as genomic sequencing, sparse matrix envelope
reduction and graph linear arrangement as well as less familiar settings such as archaeological
dating. In this paper we present a spectral algorithm for this class of
problems. Unlike traditional combinatorial methods, our approach uses an eigenvector
of a matrix to order the elements. Our main result is that this approach correctly
solves an important ordering problem we call the seriation problem which includes the
well known consecutive ones problem [5] as a special case.
More formally, we are given a set of n elements to sequence; that is, we wish to
bijectively map the elements to the integers 1, ..., n. We also have a symmetric, real-
valued correlation function f (sometimes called a similarity function) that reflects the
desire for pairs of elements to be near each other in the sequence. We now wish to
find all ways to sequence the elements so that the correlations are consistent; that is,
if \pi is our permutation of elements and \pi(i) < \pi(j) < \pi(k), then f(i,j) \ge f(i,k) and
f(j,k) \ge f(i,k). Although there may be an exponential number of such orderings,
they can all be described in a compact data structure known as a PQ-tree [5] which
we review in the next section. Not all correlation functions allow for a consistent
sequencing. If a consistent ordering is possible we will say the problem is well posed.
Determining an ordering from a correlation function is what we will call the seriation
problem, reflecting its origins in archaeology [29, 33].
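As a small illustration of this definition (our sketch, not part of the paper), the following brute-force check verifies whether a permutation is consistent with a given correlation function.

```python
# Sketch: check the seriation consistency condition for a permutation `pi`.
# `f` is a symmetric correlation function on element indices 0..n-1, and
# pi[e] gives the position assigned to element e.
def is_consistent(f, pi, n):
    order = sorted(range(n), key=lambda e: pi[e])   # elements in sequence order
    for a in range(n):
        for b in range(a + 1, n):
            for c in range(b + 1, n):
                i, j, k = order[a], order[b], order[c]
                if f(i, j) < f(i, k) or f(j, k) < f(i, k):
                    return False
    return True
```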
This work was supported by the Mathematical, Information and Computational Sciences Division
of the U.S. DOE, Office of Energy Research, and work at Sandia National Laboratories, operated for
the U.S. DOE under contract No. DE-AC04-94AL85000. Sandia is multiprogram laboratory operated
by Sandia Corporation, a Lockheed Martin Company, for the U.S. DOE.
y Dept. Mathematics, University of Michigan, 2072 East Hall, Ann Arbor, MI 48109.
[email protected].
z Scientific Computing & Computational Mathematics, Gates Bldg. 2B, Stanford Univ., Stanford,
94305-9025. [email protected].
x Applied & Numerical Mathematics Dept., Sandia National Labs, Albuquerque, NM 87185-1110.
[email protected].
The consecutive ones problem (C1P) is a closely related ordering problem. A
(0,1)-matrix C has the consecutive ones property if there exists a permutation matrix
\Pi such that for each column in \Pi C, all the ones form a consecutive sequence. If a
matrix has the consecutive ones property, then the consecutive ones problem is to find
all such permutations. As shown by Kendall [19] and reviewed in Section 6, C1P is a special
case of the seriation problem.
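A direct check of the consecutive ones property under a given row permutation can be written as follows (our sketch; C is assumed to be a 0/1 matrix stored as a list of rows).

```python
# Sketch: does row permutation `perm` make every column of C have consecutive ones?
# C is a list of rows (lists of 0/1); perm[i] is the index of the row placed at position i.
def has_consecutive_ones(C, perm):
    permuted = [C[r] for r in perm]
    ncols = len(C[0])
    for j in range(ncols):
        col = [row[j] for row in permuted]
        ones = [i for i, x in enumerate(col) if x == 1]
        if ones and ones[-1] - ones[0] + 1 != len(ones):   # ones must form one block
            return False
    return True
```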
Our algorithm orders elements using their value in an eigenvector of a Laplacian
matrix, which we formally define in Section 2. Eigenvectors related to graphs have been studied
since the 1950's (see, for example, the survey books by Cvetkovi'c et al. [8, 7]). Most
of the early work involved eigenvectors of adjacency matrices. Laplacian eigenvectors
were first studied by Fiedler [10, 11] and independently by Donath and Hoffman [9].
More recently, there have been a number of attempts to apply spectral graph theory
to problems in combinatorial optimization. For example, spectral algorithms have
been developed for graph coloring [3], graph partitioning [9, 28] and envelope reduction
[4], and more examples can be found in the survey papers of Mohar [23, 24].
However, in most previous applications, these techniques have been used to provide
bounds, heuristics, or in a few cases, approximation algorithms [2, 6, 14] for NP-hard
problems. There are only a small number of previous results in which eigenvector
techniques have been used to exactly solve combinatorial problems including finding
the number of connected components of a graph [10], coloring k-partite graphs [3],
and finding stable sets (independent sets) in perfect graphs [16]. This paper describes
another such application.
Spectral methods are closely related to the more general method of semidefinite
programming, which has been applied successfully to many combinatorial problems
(e.g. MAX-CUT and MAX-2SAT[14] and graph coloring[18]). See Alizadeh[1] for a
survey of semidefinite programming with applications to combinatorial optimization.
Our result is important for several reasons. First, it provides new insight into the
well-studied consecutive ones problem. Second, some important practical problems
like envelope reduction for matrices and genomic reconstruction can be thought of
as variations on seriation. For example, if biological experiments were error-free, the
genomic reconstruction problem would be precisely C1P. Unfortunately, real experimental
data always contain errors, and attempts to generalize the consecutive ones
concept to data with errors seem to invariably lead to NP-complete problems [31, 15].
A spectral heuristic based upon our approach has recently been applied to such problems
and found to be highly successful in practice [15]. Our result helps explain this
empirical success by revealing that in the error-free case the technique will correctly
solve the problem. This places the spectral method on a stronger theoretical footing
as a cross between a heuristic and an exact algorithm. Similar comments apply to
envelope reduction. Matrices with dense envelopes are closely related to matrices with
the consecutive ones property. Recent work has shown spectral techniques to be better
in practice than any existing combinatorial approaches at reducing envelopes [4]. Our
result sheds some light on this success.
Another way to interpret our result is that we provide an algorithm for C1P that
generalizes to become an attractive heuristic in the presence of errors. Designed as decision
algorithms for the consecutive ones property, existing combinatorial approaches
for C1P break down if there are errors and fail to provide useful approximate orderings.
However, our goal here is not to analyze the approach as an approximation algorithm,
but rather to prove that it correctly solves error-free problem instances.
This paper is organized in the following way. In the next section we introduce the
mathematical notation and the results from matrix theory that we will need later. We
also describe a spectral heuristic for ordering problems which motivates the remainder
of the paper. The theorem that underpins our algorithm is proved in x3, the proof of
which requires the use of a classic theorem from matrix analysis. Several additional
results in x4 lead us to an algorithm and its analysis in x5. We review the connection
to C1P in x6.
2. Mathematical background.
2.1. Notation and Definitions. Matrix concepts are useful because the correlation
function defined above can be considered as a real, symmetric matrix. A
permutation of the elements corresponds to a symmetric permutation of this matrix,
a permutation of the matrix elements formed by permuting the rows and the columns
in the same fashion. The question of whether or not the ordering problem is well
posed can also be asked as a property of this matrix. Specifically, suppose the matrix
has been permuted to reflect a consistent solution to the ordering problem. The
off-diagonal matrix entries must now be non-increasing as we move away from the
diagonal. More formally, we will say a matrix A is an R-matrix¹ if and only if A is
symmetric and
$a_{i,j} \le a_{i,k}$ for $j < k < i$,
$a_{i,j} \ge a_{i,k}$ for $i < j < k$.
The diagonal entries of an R-matrix are unspecified. If A can be symmetrically permuted
to become an R-matrix, then we say that A is pre-R. Note that pre-R matrices
correspond precisely to well-posed ordering problems. Also, the R-matrix property is
preserved if we add a constant to all off-diagonal entries, so we can assume without
loss of generality that all off-diagonal values are non-negative.
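To make the R-matrix condition concrete, the following small Python check (our illustration, not from the paper) tests the two monotonicity conditions above row by row, ignoring the diagonal; the 0-based index convention and the test matrix are our own.

```python
import numpy as np

def is_r_matrix(A, tol=1e-9):
    """Check the R-matrix property: A is symmetric and, within every row, the
    off-diagonal entries are non-increasing as one moves away from the diagonal.
    Diagonal entries are ignored."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    if not np.allclose(A, A.T, atol=tol):
        return False
    for i in range(n):
        right = A[i, i + 1:]                 # entries to the right of the diagonal
        left = A[i, :i]                      # entries to the left of the diagonal
        if np.any(np.diff(right) > tol):     # must be non-increasing away from the diagonal
            return False
        if np.any(np.diff(left) < -tol):     # must be non-decreasing toward the diagonal
            return False
    return True

# A small Robinson-style example: correlations decay with distance from the diagonal.
A = np.array([[9, 3, 2, 1],
              [3, 9, 3, 2],
              [2, 3, 9, 3],
              [1, 2, 3, 9]], dtype=float)
p = np.random.default_rng(0).permutation(4)
print(is_r_matrix(A), is_r_matrix(A[np.ix_(p, p)]))   # True, and usually False after shuffling
```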
When $\pi$ is a permutation of the natural numbers $\{1, \ldots, n\}$ and x is a column
vector, i.e. $x \in \mathbb{R}^n$, we will denote by $x^\pi$ the permutation of x by $\pi$, i.e. $x^\pi_{\pi(i)} = x_i$.
Similarly, $A^\pi$ is the symmetric permutation of A by $\pi$, i.e. $a^\pi_{\pi(i),\pi(j)} = a_{i,j}$.
We denote by e the vector whose entries are all 1, by e i the vector consisting of zeros
except for a 1 in position i, and by I the identity matrix. A symmetric matrix A is
reducible if there exists a permutation $\pi$ such that $A^\pi = \begin{pmatrix} B & 0 \\ 0 & C \end{pmatrix}$,
where B and C are non-empty square matrices. If no such permutation exists then
A is irreducible. If B and C are themselves irreducible, then we refer to them as the
irreducible blocks of A.
We say that $\lambda$ is an eigenvalue of A if $Ax = \lambda x$ for some nonzero vector x; the
corresponding vector x is an eigenvector. An n \Theta n real, symmetric matrix has n
eigenvectors that can be constructed to be pairwise orthogonal, and its eigenvalues
are all real. We will assume that the eigenvalues are sorted by increasing value, and
refer to them as $\lambda_i$, $i = 1, \ldots, n$. The (algebraic) multiplicity of an eigenvalue $\lambda$ is
This class of matrices is named after W. S. Robinson who first defined this property in his work
on seriation methods in archaeology [29].
defined as the number of times $\lambda$ occurs as a root of the characteristic polynomial
$\det(A - \lambda I)$. An eigenvalue that occurs only once is called simple; the eigenvector
of a simple eigenvalue is unique (up to normalization). We write $A \ge 0$ and say A is
non-negative if all its elements $a_{i,j}$ are non-negative. A real vector x is monotone if
$x_1 \le x_2 \le \cdots \le x_n$ or $x_1 \ge x_2 \ge \cdots \ge x_n$.
We define the Laplacian of a symmetric matrix A to be $L_A = D - A$, where D is
a diagonal matrix with $d_{i,i} = \sum_j a_{i,j}$. The smallest eigenvalue of $L_A$ having an eigenvector
orthogonal to e (the vector of all ones) is called the Fiedler value and a corresponding
eigenvector is called a Fiedler vector². Alternatively, the Fiedler value is given by
$$\min \{\, x^T L_A x \;:\; x^T e = 0,\ x^T x = 1 \,\},$$
and a Fiedler vector is any vector x that achieves this minimum while satisfying these
constraints. When $A \ge 0$ and irreducible, it is not hard to show that the Fiedler
value is the smallest non-zero eigenvalue of $L_A$ and a Fiedler vector is any corresponding
eigenvector. We will be notationally cavalier and refer to the Fiedler value and vector
of A when we really mean those of $L_A$.
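As a concrete illustration of these definitions (ours, not part of the paper), the sketch below forms the Laplacian and extracts the Fiedler value and vector with a dense eigensolver; it assumes A is symmetric, non-negative and irreducible, so that the Fiedler pair is simply the second-smallest eigenpair of $L_A$.

```python
import numpy as np

def laplacian(A):
    A = np.asarray(A, dtype=float)
    return np.diag(A.sum(axis=1)) - A      # L_A = D - A, so L_A @ e = 0

def fiedler(A):
    """Return (Fiedler value, Fiedler vector) of the Laplacian of A, assuming A is
    symmetric, non-negative and irreducible."""
    w, V = np.linalg.eigh(laplacian(A))    # eigenvalues in ascending order
    return w[1], V[:, 1]                   # skip the trivial eigenpair (0, e)

A = np.array([[0, 3, 2, 1],
              [3, 0, 3, 2],
              [2, 3, 0, 3],
              [1, 2, 3, 0]], dtype=float)
lam2, x = fiedler(A)
print(lam2)
print(x)        # monotone up to sign for this R-matrix, as Theorem 3.2 below asserts
```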
2.2. PQ-trees. A PQ-tree is a data structure introduced by Booth and Lueker
to efficiently encode a set of related permutations [5]. A PQ-tree over a set
$U = \{u_1, \ldots, u_n\}$ is a rooted, ordered tree whose leaves are elements of U and whose
internal nodes are distinguished as either P-nodes or Q-nodes. A PQ-tree is proper
when the following three conditions hold:
1. Every element u appears precisely once as a leaf.
2. Every P-node has at least two children.
3. Every Q-node has at least three children.
Two PQ-trees are said to be equivalent if one can be transformed into the other by
applying a sequence of the following two equivalence transformations:
1. Arbitrarily permute the children of a P-node.
2. Reverse the children of a Q-node.
Conveniently, the equivalence class represented by a PQ-tree corresponds precisely to
the set of permutations consistent with an instance of a seriation problem. In x5 we
describe an algorithm which uses Laplacian eigenvectors to construct a PQ-tree for an
instance of the seriation problem.
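The following small sketch (ours) represents a PQ-tree as nested tuples and enumerates the leaf orders of all equivalent trees, directly mirroring the two equivalence transformations above; the tuple encoding is an assumption made for illustration.

```python
from itertools import permutations

# A PQ-tree is either ('leaf', element), ('P', [children]) or ('Q', [children]).

def frontiers(tree):
    """Yield the leaf sequences of all trees equivalent to `tree`."""
    kind = tree[0]
    if kind == 'leaf':
        yield (tree[1],)
        return
    _, children = tree
    child_opts = [list(frontiers(c)) for c in children]

    def combine(order):
        # concatenate one chosen frontier per child, in the given child order
        seqs = [()]
        for idx in order:
            seqs = [s + f for s in seqs for f in child_opts[idx]]
        return seqs

    if kind == 'P':                      # children of a P-node may be permuted arbitrarily
        orders = permutations(range(len(children)))
    else:                                # children of a Q-node: forward or reversed only
        orders = [tuple(range(len(children))), tuple(reversed(range(len(children))))]
    seen = set()
    for order in orders:
        for s in combine(order):
            if s not in seen:
                seen.add(s)
                yield s

tree = ('Q', [('leaf', 'a'),
              ('P', [('leaf', 'b'), ('leaf', 'c')]),
              ('leaf', 'd')])
print(sorted(frontiers(tree)))   # abcd, acbd, dbca, dcba
```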
2.3. Motivation for Spectral Methods. With the above definitions we can
describe a simple heuristic for the seriation problem that will motivate the remainder
of the paper. This heuristic is at the heart of the more complex algorithms we will
devise, and underlies many previous applications of spectral algorithms [17]. We begin
by constructing a simple penalty function g whose value will be small when closely
correlated elements are close to each other. We define
$$g(\pi) = \sum_{i<j} f(i,j)\,\bigl(\pi(i) - \pi(j)\bigr)^2.$$
Unfortunately, minimizing g is NP-hard due to the discrete nature of the permutation $\pi$ [13].
Instead we approximate it by a function h of continuous variables $x_i$
that we can minimize and that maintains much of the structure of g. We define
$$h(x) = \sum_{i<j} f(i,j)\,(x_i - x_j)^2.$$
Note that h does not have a unique minimizer, since
its value does not change if we add a constant to each x component. To avoid this
ambiguity, we need to add a constraint like $\sum_i x_i = 0$. We still have a trivial solution
when all the $x_i$'s are zero, so we need a second constraint like $\sum_i x_i^2 = 1$. The resulting
minimization problem is now well defined.
Minimize $h(x) = \sum_{i<j} f(i,j)\,(x_i - x_j)^2$   (1)
subject to: $\sum_i x_i = 0$ and $\sum_i x_i^2 = 1$.
2 This is in recognition of the work of Miroslav Fiedler [10, 11].
The solution to this continuous problem can be used as a heuristic for sequencing.
Merely construct the solution vector x, sort the elements x i and sequence based upon
their sorted order. One reason this heuristic is attractive is that the minimization
problem has an elegant solution. We can rewrite h(x) as $x^T L_F x$, where $F = (f(i,j))$ is
the correlation matrix. The constraints require that x be a unit vector orthogonal to e,
and since $L_F$ is symmetric and e is one of its eigenvectors, all of its other eigenvectors
can be chosen to satisfy the constraints. Consequently,
a solution to the constrained minimization problem is just a Fiedler vector.
Even if the problem is not well posed, sorting the entries of the Fiedler vector
generates an ordering that tries to keep highly correlated elements near each other. As
mentioned above, this technique is being used for a variety of sequencing problems [4,
15, 17]. The algorithm we describe in the remainder of the paper is based upon this
idea. However, when we encounter ties in entries of the Fiedler vector, we need to
recurse on the subproblem encompassing the tied values. In this way, we are able to
find all permutations which make a pre-R matrix into an R matrix.
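A minimal sketch of this heuristic (ours): form the Laplacian of the correlation matrix, take an eigenvector for its second-smallest eigenvalue, and sort. The banded test matrix is made up for illustration.

```python
import numpy as np

def spectral_order(F):
    """Sort elements by their entries in a Fiedler vector of L_F.
    F is a symmetric, non-negative correlation matrix."""
    L = np.diag(F.sum(axis=1)) - F
    _, V = np.linalg.eigh(L)             # eigenvalues ascending
    return np.argsort(V[:, 1])           # indices ordered by the Fiedler vector

# A pre-R example: an R-matrix whose rows/columns have been shuffled.
n = 8
R = np.array([[max(0.0, 5 - abs(i - j)) for j in range(n)] for i in range(n)])
rng = np.random.default_rng(1)
p = rng.permutation(n)
F = R[np.ix_(p, p)]
order = spectral_order(F)
print(p[order])      # prints 0..7 or its reversal (no ties in this example)
```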
3. The key theorem. Our main result is that a modification of the simple
heuristic presented in x2.3 is actually an algorithm for well-posed instances of the
seriation problem. Completely proving this will require us to deal with the special
cases of multiple Fiedler vectors and ties within the Fiedler vector. The cornerstone
of our analysis is a classical result in matrix theory due to Perron and Frobenius [27].
The particular formulation below can be found on page 46 of [30].
Theorem 3.1 (Perron-Frobenius). Let M be a real, non-negative matrix. If
we define
1. ae(M) is an eigenvalue of M , and
2. there is a vector x - 0 such that
We are now ready to state and prove our main theorem.
Theorem 3.2. If A is an R-matrix then it has a monotone Fiedler vector.
Proof. Our proof uses the Perron-Frobenius Theorem 3.1. The non-negative vector
in that theorem will consist of differences between neighboring entries in the Fiedler
vector of the Laplacian of A.
First define the matrix $S \in \mathbb{R}^{(n-1)\times n}$ by $S_{i,i} = -1$, $S_{i,i+1} = 1$, and $S_{i,j} = 0$ otherwise.
Note that for any vector x, $Sx = (x_2 - x_1,\, x_3 - x_2,\, \ldots,\, x_n - x_{n-1})^T$, the vector of
differences between neighboring entries of x.
It is easy to verify that
1 . We define
g. We now show that Sx is an eigenvector of MA if
and only if x is an eigenvector of LA and x 6= ffe.
The transformation from the second to the third line follows from $L_A e = 0$. Equality
holds between all the above equations, so $\lambda$ is an eigenvalue for both LA and
MA for eigenvectors of LA other than e. Hence the eigenvalues of MA are the same as
the eigenvalues of LA with the zero eigenvalue removed, and the eigenvectors of MA
are differences between neighboring entries of the corresponding eigenvectors of LA .
It is easily seen that (SLA )
(a i;k \gamma a i+1;k
Since, by assumption, A is an R-matrix, a i;k - a i+1;k for
j. For i ? j we can use the fact that
(l i;k \gamma l i+1;k
Again, from the R-matrix property we conclude that these terms are non-positive. Consequently,
all the off-diagonal elements in MA are non-positive.
Now let $\beta$ be a value greater than $\mu_{n-1}$, where $\mu_1 \le \cdots \le \mu_{n-1}$ are the eigenvalues of
$M_A$. Then $\tilde{M}_A = \beta I - M_A$ is
non-negative with eigenvalues $\beta - \mu_i$, and $\tilde{M}_A$ and
MA share the same set of eigenvectors. By Theorem 3.1, there exists a non-negative
eigenvector y of $\tilde{M}_A$ corresponding to the largest eigenvalue of $\tilde{M}_A$. But y is also an
eigenvector of MA corresponding to MA 's smallest eigenvalue. And this is just Sx,
where x is a Fiedler vector of LA. Since Sx is non-negative, the corresponding
Fiedler vector of LA is non-decreasing and the theorem follows. (Note that since the
sign of an eigenvector is unspecified, the Fiedler vector could also be non-increasing.)
Theorem 3.3. Let A be a pre-R matrix with a simple Fiedler value and a Fiedler
vector with no repeated values. Let $\pi_1$ (respectively $\pi_2$) be the permutation induced by
sorting the values in the Fiedler vector in increasing (decreasing) order. Then $A^{\pi_1}$
and $A^{\pi_2}$ are R-matrices, and no other permutations of A produce R-matrices.
Proof. First note that since the Fiedler value is simple, the Fiedler vector is unique
up to a multiplicative constant. Next observe that if x is the Fiedler vector of A, then
x - is the Fiedler vector of A - . So applying a permutation to A merely changes the
order of the entries in the Fiedler vector. Now let - be a permutation such that A -
is an R-matrix. By Theorem 3.2 x - is monotone since x is the only Fiedler vector.
Since x has no repeated values, - must be either - 1 or - 2 .
Theorem 3.3 provides the essence of our algorithm for the seriation problem, but
it is too restrictive as the Fiedler value must be simple and contain no repeated values.
We will show how to remove these limitations in the next section.
4. Removing the restrictions. Several observations about the seriation problem
will simplify our analysis. First note that if we add a constant to all the correlation
values the set of solutions is unchanged. Consequently, we can assume without loss
of generality that the smallest value of the correlation function is zero. Note that
subtracting the smallest value from all correlation values does not change whether or
not the matrix is pre-R. In our algebraic formulation this translates into the following.
Lemma 4.1. Let A be a symmetric matrix and let $\bar{A} = A - \alpha\, ee^T$ for any real
$\alpha$. A vector x is a Fiedler vector of A iff x is a Fiedler vector of $\bar{A}$. So without loss
of generality we can assume that the smallest off-diagonal entry of A is zero.
Proof. By the definition of a Laplacian it follows that $L_{\bar{A}} = L_A - \alpha(nI - ee^T)$,
where n is the dimension of A. Then $L_{\bar{A}}\, e = 0$,
but for any other eigenvector x of LA (which is orthogonal to e), $L_{\bar{A}}\, x = L_A x - \alpha n\, x$.
That is, the eigenvalues are simply shifted down by $\alpha n$ while
the eigenvectors are preserved.
This will justify the first step of our algorithm, which subtracts the value of the
smallest correlation from every correlation. Accordingly, we now make the assumption
that our pre-R matrix has smallest off diagonal entry of zero. Next observe that if A is
reducible then the seriation problem can be decoupled. The irreducible blocks of the
matrix correspond to connected components in the graph of the nonzero values of the
correlation function. We can solve the subproblems induced by each of these connected
components, and link the pieces together in an arbitrary order. More formally, we have
the following lemma.
Lemma 4.2. Let $A_i$, $i = 1, \ldots, p$, be the irreducible blocks of a pre-R matrix A, and
let $\pi_i$ be a permutation of block $A_i$ such that the submatrix $A_i^{\pi_i}$ is an R-matrix. Then
any permutation formed by concatenating the $\pi_i$'s will make A become an R-matrix.
In terms of a PQ-tree, the $\pi_i$ permutations are children of a single P-node.
Proof. By Lemma 4.1, we can assume all entries in the irreducible blocks are non-
negative. Consequently, the correlation between elements within a block will always
be at least as strong as the correlation between elements in different blocks. Also, by
the definition of irreducibility, each element within a block must have some positive
correlation with another element in that block. Hence, any ordering that makes A i an
R-matrix must not interleave elements between different irreducible blocks. As long as
the blocks themselves are ordered to be R-matrices, any ordering of blocks will make
A an R-matrix since correlations across blocks are all identical.
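A small sketch (ours) of this decoupling step: the irreducible blocks of a symmetric non-negative matrix are the connected components of the graph with an edge wherever $a_{i,j} > 0$.

```python
import numpy as np
from collections import deque

def irreducible_blocks(A, tol=1e-12):
    """Index sets of the irreducible blocks of a symmetric matrix A, i.e. the
    connected components of the graph {i ~ j : a_ij > 0}."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    seen, blocks = [False] * n, []
    for start in range(n):
        if seen[start]:
            continue
        comp, queue = [], deque([start])
        seen[start] = True
        while queue:
            i = queue.popleft()
            comp.append(i)
            for j in range(n):
                if not seen[j] and j != i and A[i, j] > tol:
                    seen[j] = True
                    queue.append(j)
        blocks.append(sorted(comp))
    return blocks

A = np.array([[0, 2, 0, 0],
              [2, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(irreducible_blocks(A))   # [[0, 1], [2, 3]]
```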
With these preliminaries, we will now assume that the smallest off-diagonal value
is zero and that the matrix is irreducible. As the following three lemmas and theorem
show, this is sufficient to ensure that the Fiedler vector is unique up to a multiplicative
constant.
Lemma 4.3. Let A be an n \Theta n R-matrix with a monotone Fiedler vector x. If $J = [r, s]$
is a maximal interval such that $x_i = x_j$ for all $i, j \in J$, then $a_{i,k} = a_{j,k}$ for all $i, j \in J$ and all $k \notin J$.
Proof. We can without loss of generality assume x is non-decreasing since \Gammax is
also a Fiedler vector. We will show that $a_{r,k} = a_{s,k}$ for all $k \notin J$; since A is an
R-matrix, all elements between $a_{r,k}$ and $a_{s,k}$ must then also be equal. Consider rows r
and s in the equation $L_A x = \lambda_2 x$. Subtracting row r from row s and using $x_r = x_s$ gives
$$\sum_k (l_{s,k} - l_{r,k})\, x_k = \lambda_2 (x_s - x_r) = 0.$$
Since LA is a Laplacian, we know that $\sum_k (l_{s,k} - l_{r,k}) = 0$, and hence
$$\sum_k (l_{s,k} - l_{r,k})(x_r - x_k) = x_r \sum_k (l_{s,k} - l_{r,k}) - \sum_k (l_{s,k} - l_{r,k})\, x_k = 0.$$
For $k \notin J$ we have $l_{s,k} - l_{r,k} = a_{r,k} - a_{s,k}$, and the R-matrix property together with
the fact that x is non-decreasing implies that every term $(l_{s,k} - l_{r,k})(x_r - x_k)$ is
non-negative; the terms with $k \in J$ vanish since $x_k = x_r$. Because all terms in the sum
are non-negative and the sum is zero, all terms must be exactly zero. By assumption, $x_k \ne x_r$ for $k \notin J$,
and consequently $l_{s,k} = l_{r,k}$, i.e., $a_{r,k} = a_{s,k}$, for all $k \notin J$ and the result follows.
The following lemma is essentially a converse of this. Its proof requires detailed
algebra, but it is not fundamental to what follows. Consequently, the proof is relegated
to the end of this section.
Lemma 4.4. Let A be an irreducible n \Theta n R-matrix with $a_{1,n} = 0$. If $J = [r, s] \subseteq$
[1, n] is an interval such that $a_{r,k} = a_{s,k}$ for all $k \notin J$, then $x_i = x_j$ for all $i, j \in J$ and
any Fiedler vector x.
Lemma 4.5. Let A be an irreducible R-matrix with $a_{1,n} = 0$, and let x be a
Fiedler vector of A. If $J = [r, s]$ is an interval such that $x_i = x_j$ for all $i, j \in J$, then
for any Fiedler vector y, $y_i = y_j$ for all $i, j \in J$.
Proof. First apply Lemma 4.3 to conclude that $a_{i,k} = a_{j,k}$ for any $k \notin J$ and all $i, j \in J$.
Now use this in conjunction with
Lemma 4.4 to obtain the result.
Theorem 4.6. If A is an irreducible R-matrix with $a_{1,n} = 0$, then the Fiedler
value $\lambda_2$ is a simple eigenvalue.
Proof. We will assume that - 2 is a repeated eigenvalue and produce a contradic-
tion. Let x and y be two linearly independent Fiedler vectors with x non-decreasing.
Define $z(\theta) = \cos(\theta)\,x + \sin(\theta)\,y$, with $0 \le \theta \le \pi$. Let $\theta^*$ be the smallest value of $\theta$
that makes $z_i(\theta) = z_j(\theta)$ for some pair of indices i, j with $x_i \ne x_j$. Such a $\theta^*$ must exist since x and
y are linearly independent.
By Lemma 4.5 the indices of any repeated values in x are indices of repeated
values in y and $z(\theta)$. Coupled with the monotonicity of x, this implies that $z(\theta^*)$ is
monotone. By Lemma 4.5 the indices of any repeated values in $z(\theta^*)$ must be repeated
in x, which gives the desired contradiction.
All that remains is to handle the situation where the Fiedler vector has repeated
values. As the following theorem shows, repeated values decouple the problem into
pieces that can be solved recursively.
Theorem 4.7. Let A be a pre-R matrix with a simple Fiedler value and Fiedler
vector x. Suppose there is some repeated value $\beta$ in x and define I, J and K to be
the sets of indices for which
1. $x_i < \beta$ for $i \in I$,
2. $x_j = \beta$ for $j \in J$,
3. $x_k > \beta$ for $k \in K$.
Then $\pi$ is an R-matrix ordering for A iff $\pi$ or its reversal can be expressed as $(\pi_I, \pi_J, \pi_K)$, where $\pi_J$
is an R-matrix ordering for the submatrix A(J ; J ) of A induced by J , and $\pi_I$
and $\pi_K$ are the restrictions of some R-matrix ordering for A to I and K, respectively.
Proof. From Theorem 3.2 we know that for any R-matrix ordering A - , x - is
monotone, so elements in I must appear before (after) elements from J and elements
from K must appear after (before) elements from J . By Lemma 4.3, we have $a_{i,k} = a_{j,k}$
for all $i, j \in J$ and $k \notin J$. Hence the orderings of elements inside J must be indifferent
to the ordering outside of J and vice versa. Consequently, the R-matrix ordering of
elements in J depends only of A(J ; J ).
Algorithmically, this theorem means that we can break ties in the Fiedler vector
by recursing on the submatrix A(J ; J ) where J corresponds to the set of repeated
values. The distinct values in the Fiedler vector of A constrain R-matrix orderings,
but repeated values need to be handled recursively. In the language of PQ-trees,
the distinct values are combined via a Q-node, and the components (subtrees) of the
Q-node must then be expanded recursively.
Proof of Lemma 4.4. First we recall that the Fiedler value is the value obtained
by
$$\lambda_2 \;=\; \min_{x^T e = 0,\ x^T x = 1} \; \sum_{i<j} a_{i,j}\,(x_i - x_j)^2 \qquad (2)$$
and a Fiedler vector is a vector that achieves this minimum. We note that if we
replace A by a matrix that is at least as large on an elementwise comparison then
x T LAx cannot decrease for any vector x.
We consider A(J ; J ), the diagonal block of A indexed by J . By the definition
of an R-matrix, all values in A(J ; J ) must be at least as large as a r;s . However, a r;s
must be greater than zero. Otherwise, by the R-matrix property a
then by the statement of the theorem
a which would make the matrix
reducible.
The remainder of the proof will proceed in two stages. First we will force all the
off-diagonal values in A(J ; J ) to be a r;s and show the result for this modified matrix.
We will then extend the result to our original matrix.
Stage 1:
We define the matrix B to be identical to A outside of B(J ; J ), but all off-diagonal
values of B within B(J ; J ) are set to ff = a r;s . It follows from the hypotheses that B
is an R-matrix. We define note that, by the R-matrix property,
We now define ~
ff)I and consider the eigenvalue equation ~
~
This matrix has the same eigenvectors as LB with eigenvalues shifted by
rows of ~
LB in J are identical. Consequently,
either all elements of x in J are equal, or ~ - (which is equivalent to -
We will show that irreducibility and a which will complete
the proof of Stage 1.
We assume otherwise and look for a contradiction. We introduce a new matrix
B as follows
ff otherwise.
Since B is an R-matrix, -
B is at least as large as B elementwise, so - 2 ( -
We define the vector -
y by
and -
x to be the unit vector in the direction of -
y. We note that -
and that
We have the following chain of inequalities.
The last inequality is strict since - b
then we can combine an inequality due to Fiedler [10],
l ii ;
with the observation that min i l i;i - ffi to obtain - 2 - n
. This can
only be true if equality holds throughout, implying that
But this contradicts (3), so - 2 ff and the proof of Stage 1 is complete.
Stage 2:
We will now show that A and B have the same Fiedler vectors. Since A is elementwise
at least as large as B, for any vector z, z T LA z - z T LB z. From Stage 1 we know that
any Fiedler vector x of B satisfies $x_i = x_j$ for all $i, j \in J$. In this vector, $x_i - x_j = 0$
for $i, j \in J$, so the contribution to the sum in (2) from B(J ; J ) is zero. But this
contribution will also be zero when applied to A(J ; J ). Since A and B are identical
outside of A(J ; J ) and B(J ; J ), we now have that a Fiedler vector of B gives an
upper bound for the Fiedler value of A; that is, $\lambda_2(A) \le \lambda_2(B)$; combined with $\lambda_2(A) \ge \lambda_2(B)$ it follows that the
Fiedler vectors of B are also Fiedler vectors of A and vice versa.
5. A spectral algorithm for the seriation problem. We can now bring all
the preceding results together to produce an algorithm for well-posed instances of the
seriation problem. Specifically, given a well-posed correlation function we will generate
all consistent orderings. Given a pre-R matrix, our algorithm constructs a PQ-tree
for the set of permutations that produce an R-matrix.
Our Spectral-Sort algorithm is presented in Fig. 1. It begins by translating all
the correlations so that the smallest is 0. It then separates the irreducible blocks (if
there are more than one) into the children of a P-node and recurses. If there is only
one such block, it sorts the elements into the children of a Q-node based on their
values in a Fielder vector. If there are ties in the entries of the Fiedler vector, the
algorithm is invoked recursively.
Input: A, an n \Theta n pre-R matrix
U , a set of indices for the rows/columns of A
Output: T , a PQ-tree that encodes the set of all permutations -
such that A - is an R-matrix
begin
(1)  α := min_{i≠j} a_{i,j}
(1)  A := A - α ee^T
(2)  A_1, . . . , A_p := the irreducible blocks of A
(2)  U_1, . . . , U_p := the corresponding index sets
(2)  if p > 1 then T := a P-node with children Spectral-Sort(A_1, U_1), . . . , Spectral-Sort(A_p, U_p)
     else
(3)  if n = 1 then T := the single leaf in U
(3)  else if n = 2 then T := a P-node whose children are the two leaves in U
     else
(4)  x := a Fiedler vector for L_A
(4)  d := number of distinct values in x
(4)  V_1, . . . , V_d := indices of elements in x with jth value, ordered by increasing value
(5)  T := a Q-node with children Spectral-Sort(A(V_1, V_1), V_1), . . . , Spectral-Sort(A(V_d, V_d), V_d)
end
Fig. 1. Algorithm Spectral-Sort.
We now prove that the algorithm is correct. Step (1) is justified by Lemma 4.1,
and requires time proportional to the number of nonzeros in the matrix. The identification
of irreducible blocks in step (2) can be performed with a breadth-first or
depth-first search algorithm, also requiring time proportional to the number of nonze-
ros. Combining the permutations of the resulting blocks with a P-node is correct by
Lemma 4.2.
Step (3) handles the boundary conditions of the recursion, while in step (4) the
Fiedler vector is computed and sorted. If there are no repeated elements in the Fiedler
vector then the Q-node for the permutation is correct by Theorem 3.3. Steps (3) and
(4) are the dominant computational steps and we will discuss their run time below.
The recursion in step (5) is justified by Theorem 4.7.
Note that this algorithm produces a tree whether A is pre-R or not. To determine
whether A is pre-R, simply apply one of the generated permutations. If the result
is an R-matrix then all permutations in the PQ-tree will solve the seriation problem,
otherwise the problem is not well posed.
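The following compact Python sketch mirrors the structure of Fig. 1; it is our illustration rather than the authors' code, represents the PQ-tree as nested tuples, groups tied Fiedler-vector entries with a numerical tolerance, and does not enforce the proper-PQ-tree arity conventions of x2.2.

```python
import numpy as np

def fiedler_vector(A):
    L = np.diag(A.sum(axis=1)) - A
    _, V = np.linalg.eigh(L)              # ascending eigenvalues; column 1 is a Fiedler vector
    return V[:, 1]

def blocks(A, tol=1e-12):
    """Connected components of the graph with an edge wherever a_ij > tol."""
    n, seen, comps = A.shape[0], set(), []
    for s in range(n):
        if s in seen:
            continue
        stack, comp = [s], []
        seen.add(s)
        while stack:
            i = stack.pop()
            comp.append(i)
            for j in range(n):
                if j not in seen and j != i and A[i, j] > tol:
                    seen.add(j)
                    stack.append(j)
        comps.append(sorted(comp))
    return comps

def spectral_sort(A, U, tol=1e-9):
    """Return a nested-tuple PQ-tree for the pre-R matrix A with labels U."""
    A = np.asarray(A, dtype=float)
    n = len(U)
    if n == 1:
        return ('leaf', U[0])
    off_min = A[~np.eye(n, dtype=bool)].min()
    A = A - off_min * (np.ones((n, n)) - np.eye(n))     # step (1): shift smallest off-diagonal to 0
    comps = blocks(A)
    if len(comps) > 1:                                  # step (2): P-node over the irreducible blocks
        return ('P', [spectral_sort(A[np.ix_(c, c)], [U[i] for i in c]) for c in comps])
    x = fiedler_vector(A)                               # step (4)
    order = np.argsort(x)
    groups, current = [], [order[0]]
    for i in order[1:]:
        if abs(x[i] - x[current[-1]]) <= tol:
            current.append(i)
        else:
            groups.append(current)
            current = [i]
    groups.append(current)
    if len(groups) == 1:                                # degenerate fallback, should not occur for pre-R input
        return ('P', [('leaf', u) for u in U])
    children = [spectral_sort(A[np.ix_(g, g)], [U[i] for i in g]) for g in groups]   # step (5)
    return ('Q', children)

R = np.array([[9., 3, 2, 1], [3, 9, 3, 2], [2, 3, 9, 3], [1, 2, 3, 9]])
print(spectral_sort(R, ['a', 'b', 'c', 'd']))   # a Q-node over the four leaves, in order or reversed
```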
The most expensive steps in algorithm Spectral-Sort are the generation and
sorting of the eigenvector. Since the algorithm can invoke itself recursively, these
operations can occur on problems of size n, n-1, . . . , 1 in the worst case. Thus, if the time for an eigenvector
calculation on a matrix of size n is T(n), the runtime of algorithm Spectral-Sort is bounded by $O(n\,(T(n) + n \log n))$.
A formal analysis of the complexity of the eigenvector calculation can be simplified
by noting that for a Pre-R matrix, all that matters is the dominance relationships between
matrix entries. So, without loss of generality, we can assume that all entries are
integers less than n 2 . With this observation, it is possible to compute the components
of the Fiedler vector to a sufficient precision that the components can be correctly
sorted in polynomial time. We now sketch one way this can be done, although we
don't recommend this procedure in a real-world implementation.
Let - denote a specific eigenvalue of L, in our case the Fiedler value. This can
be computed in polynomial time as discussed in [25]. Then we can compute the
corresponding eigenvector x symbolically by solving $(L - zI)\,x = 0$ over the field $\mathbb{Q}[z]/(p(z))$,
where p(z) is the characteristic polynomial of L. Gaussian elimination over a field is in
P [21], so if p(z) is irreducible we obtain a solution x where each component x i is given
by a polynomial in z with bounded integer coefficients. We note that letting z be any
eigenvalue will force x to be a true eigenvector. If p(z) is reducible, we try the above.
If we fail to solve the equation, we will instead find a factorization of p(z) and proceed
by replacing p(z) with the factor containing - as a root. This yields a polynomial
formula for each x i and we can identify equal elements by e.g. the method in [22]. To
decide the order of the remaining components, we evaluate the root - to a sufficient
precision and then compute the $x_i$'s numerically and sort. Since $\lambda$ is algebraic, the
distinct $x_i$'s cannot be arbitrarily close [22] and polynomial precision is sufficient.
In practice, eigencalculations are a mainstay of the numerical analysis community.
To calculate eigenvectors corresponding to the few highest or lowest eigenvalues (like
the Fiedler vector), the method of choice is known as the Lanczos algorithm. This is
an iterative algorithm in which the dominant cost in each iteration is a matrix-vector
multiplication which requires O(m) time. The algorithm generally converges in many
fewer than n iterations, often only $O(\sqrt{n})$ [26]. However, a careful analysis reveals a
dependence on the difference between the distinct eigenvalues.
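As an illustration of this practical route (ours, assuming SciPy is available), the sketch below builds a sparse banded R-matrix, forms its Laplacian, and obtains the Fiedler pair with ARPACK's Lanczos-type solver in shift-invert mode.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

n = 500
# Sparse banded R-matrix: a_ij = 2 for |i-j| = 1, a_ij = 1 for |i-j| = 2, else 0.
d1, d2 = np.full(n - 1, 2.0), np.full(n - 2, 1.0)
A = sp.diags([d1, d2, d1, d2], [1, 2, -1, -2], format='csr')
deg = np.asarray(A.sum(axis=1)).ravel()
L = sp.diags(deg) - A                        # the Laplacian L_A

# Shift-invert around 0 so the Lanczos iteration targets the smallest eigenvalues.
vals, vecs = eigsh(L, k=2, sigma=-1e-6, which='LM')
idx = np.argsort(vals)
fiedler_val, fiedler_vec = vals[idx[1]], vecs[:, idx[1]]
order = np.argsort(fiedler_vec)
monotone = np.all(np.diff(order) == 1) or np.all(np.diff(order) == -1)
print(fiedler_val, monotone)                 # the Fiedler vector of an R-matrix is monotone
```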
6. The consecutive ones problem. Ordering an R-matrix is closely related
to the consecutive ones problem. As mentioned in x1, a (0; 1)-matrix C has the
consecutive ones property if there exists a permutation matrix \Pi such that for each
column in \PiC, all the ones form a consecutive sequence. 3 A matrix that has this
property without any rearrangement (i.e. $\Pi = I$) is in Petrie form 4 and is called a P-
matrix. Analogous to R-matrices, we say a matrix with the consecutive ones property
is pre-P. The consecutive ones problem can be restated as: Given a pre-P matrix C,
find a permutation matrix \Pi such that \PiC is a P-matrix.
3 Some authors define this property in terms of rows instead of columns.
4 Sir William M. F. Petrie was an archaeologist who studied mathematical methods for seriation
in the 1890's.
There is a close relationship between P-matrices and R-matrices. The following
results are due to D.G. Kendall and are proved in [19] and [33].
Lemma 6.1. If C is a P-matrix, then $CC^T$ is an R-matrix.
Lemma 6.2. If C is pre-P and $CC^T$ is an R-matrix, then C is a P-matrix.
Theorem 6.3. Let C be a pre-P matrix, let A = CC T , and let \Pi be a permutation
matrix. Then \PiC is a P-matrix if and only if \PiA\Pi T is an R-matrix.
This theorem allows us to use algorithm Spectral-Sort to solve the consecutive
ones problem. First construct $A = CC^T$ and apply our algorithm to A (note that
the elements of A are small non-negative integers). Now apply one of the permutations
generated by the algorithm to C. If the result is a P-matrix then all the permutations
produce C1P orderings. If not, then C has no C1P orderings.
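A sketch of this reduction (ours): build $A = CC^T$, order the rows by the Fiedler vector of $L_A$, and verify the consecutive ones property of the permuted matrix. It assumes a generic instance in which the Fiedler vector has no ties.

```python
import numpy as np

def has_consecutive_ones_columns(C):
    """True iff in every column of C the ones are consecutive."""
    for col in C.T:
        idx = np.flatnonzero(col)
        if idx.size and (idx[-1] - idx[0] + 1 != idx.size):
            return False
    return True

def c1p_order(C):
    A = (C @ C.T).astype(float)           # Theorem 6.3: order A as an R-matrix
    L = np.diag(A.sum(axis=1)) - A
    _, V = np.linalg.eigh(L)
    return np.argsort(V[:, 1])            # Fiedler-vector order of the rows

# A P-matrix (consecutive ones in each column), rows shuffled to make it pre-P.
P = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 0],
              [0, 1, 1],
              [0, 0, 1]])
rng = np.random.default_rng(3)
C = P[rng.permutation(P.shape[0])]
order = c1p_order(C)
print(has_consecutive_ones_columns(C))         # typically False after shuffling
print(has_consecutive_ones_columns(C[order]))  # True, up to reversal of the order
```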
The run time for this technique is not competitive with the linear time algorithm
for this problem due to Booth and Lueker [5]. However, unlike their approach, our
Spectral-Sort algorithm does not break down in the presence of errors and can
instead serve as a heuristic.
Several other combinatorial problems have been shown to be equivalent to the
consecutive ones problem. Among these are recognizing interval graphs [5, 12] and
finding dense envelope orderings of matrices [5].
One generalization of P-matrices is to matrices with unimodal columns (a uni-modal
sequence is a sequence that is non-decreasing until it reaches its maximum,
then non-increasing). These matrices are called unimodal matrices [32]. Kendall [20]
showed that the results 6.1 - 6.3 are also valid for unimodal matrices if the regular
matrix product is replaced by the matrix circle product, defined by $(A \circ B)_{i,j} = \sum_k \min(a_{i,k},\, b_{k,j})$.
Note that P-matrices are just a special case of unimodal matrices, and that the circle
product is equivalent to matrix product for (0; 1)-matrices. Kendall's result implies
that our spectral algorithm will correctly identify and order unimodal matrices.
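The circle-product formula given above is our reconstruction, so the following short sketch should be read with that caveat; it implements that formula and checks that, on (0,1)-matrices, the circle product coincides with the ordinary matrix product, as claimed.

```python
import numpy as np

def circle_product(A, B):
    """(A o B)_{ij} = sum_k min(a_{ik}, b_{kj}); assumed form of Kendall's circle product."""
    A, B = np.asarray(A), np.asarray(B)
    return np.minimum(A[:, :, None], B[None, :, :]).sum(axis=1)

C = np.array([[1, 0, 0],
              [1, 1, 0],
              [0, 1, 1]])
# On (0,1)-matrices, min(a, b) = a*b, so the circle product equals the matrix product.
print(np.array_equal(circle_product(C, C.T), C @ C.T))   # True
```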
Acknowledgements
. We are indebted to Robert Leland for innumerable discussions
about spectral techniques and to Sorin Istrail for his insights into the consecutive
ones problem and his constructive feedback on an earlier version of this paper. We
are further indebted to David Greenberg for his experimental testing of our approach
on simulated genomic data, and to Nabil Kahale for showing us how to simplify the
proof of Theorem 3.2. We also appreciate the highly constructive feedback provided
by an anonymous referee.
--R
Interior point methods in semidefinite programming with applications to combinatorial optimization.
A spectral technique for coloring random 3-colorable graphs
Graph coloring using eigenvalue decomposition.
A spectral algorithm for envelope reduction of sparse matrices.
Testing for the consecutive ones property
A near optimal algorithm for edge seperators (preliminary version).
Recent Results in the Theory of Graph Spectra.
Spectra of Graphs: Theory and Application.
Lower bounds for the partitioning of graphs.
Algebraic connectivity of graphs.
A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory.
Incidence matrices and interval graphs.
An analysis of spectral envelope-reduction via quadratic assignment problems
Physical mapping by STS hybridization: Algorithmic strategies and the challenge of software evaluation.
Geometric Algorithms and Combinatorial Optimiza- tion
Optimal linear labelings and eigenvalues of graphs.
Approximate graph coloring by semidefinite program- ming
Incidence matrices
Abundance matrices and seriation in archaeology.
The Design and Analysis of Algorithms.
of algebraic numbers.
The Laplacian spectrum of graphs.
Laplace eigenvalues of graphs - a survey
Algebraic complexity of computing polynomial zeros.
The Lanczos algorithm with selective orthogonalization.
Zur Theorie der Matrizen.
Partitioning sparse matrices with eigenvectors of graphs.
A method for chronologically ordering archaeological deposits.
Matrix Iterative Analysis.
Approximation of the consecutive ones matrix augmentation problem.
Mathematics in the Archaeological and Historical Sciences
Techniques of data analysis and seriation theory.
--TR
--CTR
Antonio Robles-Kelly , Edwin R. Hancock, Graph Edit Distance from Spectral Seriation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.3, p.365-378, March 2005
Jordi Petit, Experiments on the minimum linear arrangement problem, Journal of Experimental Algorithmics (JEA), 8,
Aristides Gionis , Teija Kujala , Heikki Mannila, Fragments of order, Proceedings of the ninth ACM SIGKDD international conference on Knowledge discovery and data mining, August 24-27, 2003, Washington, D.C.
Anil Menon, Aspects of the Binary CMAC: Unimodularity and Probabilistic Reconstruction, Neural Processing Letters, v.22 n.3, p.263-276, December 2005
Antti Ukkonen , Mikael Fortelius , Heikki Mannila, Finding partial orders from unordered 0-1 data, Proceeding of the eleventh ACM SIGKDD international conference on Knowledge discovery in data mining, August 21-24, 2005, Chicago, Illinois, USA
Huaijun Qiu , Edwin R. Hancock, Graph matching and clustering using spectral partitions, Pattern Recognition, v.39 n.1, p.22-34, January, 2006
Sebastian Lange , Martin Middendorf, Design Aspects of Multi-level Reconfigurable Architectures, Journal of Signal Processing Systems, v.51 n.1, p.23-37, April 2008 | consecutive ones property;eigenvector;analysis of algorithms;fiedler vector;seriation |
298560 | New Collapse Consequences of NP Having Small Circuits. | We show that if a self-reducible set has polynomial-size circuits, then it is low for the probabilistic class ZPP(NP). As a consequence we get a deeper collapse of the polynomial-time hierarchy PH to ZPP(NP) under the assumption that NP has polynomial-size circuits. This improves on the well-known result in Karp and Lipton [Proceedings of the 12th ACM Symposium on Theory of Computing, ACM Press, New York, 1980, pp. 302--309] stating a collapse of PH to its second level $\Sigma^p_2$ under the same assumption. Furthermore, we derive new collapse consequences under the assumption that complexity classes like UP, FewP, and C=P have polynomial-size circuits. Finally, we investigate the circuit-size complexity of several language classes. In particular, we show that for every fixed polynomial s, there is a set in ZPP(NP) which does not have O(s(n))-size circuits. | Introduction
The question of whether intractable sets can be efficiently
decided by non-uniform models of computation has motivated much work in structural
complexity theory. In research from the early 1980's to the present, a variety of results
has been obtained showing that this is impossible under plausible assumptions (see,
e.g., the survey [18]). A typical model for non-uniform computations is given by circuit
families. In the notation of Karp and Lipton [22], sets decidable by polynomial-size
circuits are precisely the sets in P/poly, i.e., they are decidable in polynomial time
with the help of a polynomial length bounded advice function [32].
Karp and Lipton (together with Sipser) [22] proved that no NP-complete set
has polynomial size circuits (in symbols NP 6' P/poly) unless the polynomial time
hierarchy collapses to its second level. The proof given in [22] exploits a certain kind
of self-reducibility of the well-known NP-complete problem SAT. More generally, it is
shown in [8, 7] that every (Turing) self-reducible set in P/poly is low for the second
level \Sigma P
2 of the polynomial time hierarchy. Intuitively speaking, a set is low for a
relativizable complexity class if it gives no additional power when used as an oracle
for that class.
In this paper, we show that every self-reducible set in P/poly is even low for the
probabilistic class ZPP(NP), meaning that ZPP(NP(A)) = ZPP(NP) for every such set A. Since
ZPP(NP) $\subseteq \Sigma^P_2$, lowness for ZPP(NP) implies lowness for $\Sigma^P_2$. As
a consequence of our lowness result we get a deeper collapse of the polynomial-time
hierarchy to ZPP(NP) under the assumption that NP has polynomial-size circuits. At
Abteilung f?r Theoretische Informatik, Universit?t Ulm, Oberer Eselsberg, D-89069 Ulm, Germany
([email protected]).
y Department of Computer Science, Tokyo Institute of Technology, Meguro-ku, Tokyo 152, Japan
([email protected]). Part of this work has been done while visiting the University of Ulm
(supported in part by the guest scientific program of the University of Ulm).
least in some relativized world, the new collapse level is quite close to optimal: there
is an oracle relative to which NP is contained in P/poly but PH does not collapse to
P(NP) [17, 39].
We also derive new collapse consequences from the assumption that complexity
classes like UP, FewP, and C=P have polynomial-size circuits. Furthermore, our
lowness result implies new relativizable collapses for the case that Modm P, PSPACE,
or EXP have polynomial-size circuits. As a final application, we derive new circuit-
size lower bounds. In particular, it is shown (by relativizing proof techniques) that for
every fixed polynomial s, there is a set in ZPP(NP) which does not have O(s(n))-size
circuits. This improves on the result of Kannan [21] that for every polynomial s, the
class $\Sigma^P_2$ contains such a set. It further follows that in every relativized world,
there exist sets in the class ZPEXP(NP) that do not have polynomial-size circuits.
It should be noted that there is a non-relativizing proof for a stronger result. As a
corollary to the result in [4], which is proved by a non-relativizing technique, it is
provable that MA exp " co-MA exp (a subclass of ZPEXP(NP)) contains non P/poly
sets [12, 36].
Some explanation of how our work builds on prior techniques is in order. The
proof of our lowness result heavily uses the universal hashing technique [13, 34] and
builds on ideas from [2, 14, 24]. For the design of a zero error probabilistic algorithm
which, with the help of an NP oracle, simulates a given ZPP(NP(A)) computation
(where A is a self-reducible set in P/poly) we further make use of the newly defined
concept of half-collisions. More precisely, we show how to compute on input 0 n in
expected polynomial time a hash family H that can be used to decide all instances
of A of length up to n by a strong NP computation. The way H is used to decide
(non)membership to A is by checking whether H leads to a half-collision on certain
sets. Very recently, Bshouty, Cleve, Gavald'a, Kannan, and Tamon [11] building on
a result from [19] have shown that the class of all circuits is exactly learnable in
(randomized) expected polynomial time with equivalence queries and the aid of an
NP oracle. This immediately implies that for every set A in P/poly an advice function
can be computed in FZPP(NP(A)), i.e., by a probabilistic oracle transducer T in
expected polynomial time under an oracle in NP(A). More precisely, since the circuit
produced by the probabilistic learning algorithm of [11] depends on the outcome of
the coin flips, T computes a multi-valued advice function, i.e., on input 0 n , T accepts
with probability at least 1/2, and on every accepting path, T outputs some circuit
that correctly decides all instances of length n w.r.t. A. Using the technique in [11] we
are able to show that every self-reducible set A in P/poly even has an advice function
in FZPP(NP). Although this provides a different way to deduce the ZPP(NP) lowness
of all self-reducible sets in P/poly, we prefer to give a self-contained proof using the
"half-collision technique" that does not rely on the mentioned results in [11, 19].
The paper is organized as follows: Section 2 introduces notation and defines the
self-reducibility that we use. In Section 3 we prove the ZPP(NP) lowness of all self-
reducible sets in P/poly. In Section 4 we state the collapse consequences, and the
new circuit-size lower bounds are derived in Section 5.
2. Preliminaries and notation. All languages are over the binary alphabet
1g. As usual, we denote the lexicographic order on \Sigma by -. The length
of a string x 2 \Sigma is denoted by jxj. \Sigma -n (\Sigma !n ) is the set of all strings of length
at most n (resp., of length smaller than n). For a language A, A
. The cardinality of a finite set A is denoted by jAj. The characteristic
function of A is defined as otherwise. For a class
C of sets, co-C denotes the class f\Sigma \Gamma A j A 2 Cg. To encode pairs (or tuples) of
strings we use a standard polynomial-time computable pairing function denoted by
$\langle \cdot, \cdot \rangle$ whose inverses are also computable in polynomial time. Where intent is clear,
we write $f(x_1, \ldots, x_k)$ in place of $f(\langle x_1, \ldots, x_k \rangle)$. N denotes the set of non-negative
integers. Throughout the paper, the base of log is 2.
The textbooks [9, 10, 25, 31, 33] can be consulted for the standard notations used
in the paper and for basic results in complexity theory. For definitions of probabilistic
complexity classes like ZPP see also [15].
An NP machine M is a polynomial-time nondeterministic Turing machine. We
assume that each computation path of M on a given input x either accepts, rejects, or
outputs "?". M accepts on input x, if M performs at least one accepting computation,
otherwise M rejects x. M strongly accepts (strongly rejects) x [26] if
ffl there is at least one accepting (resp., rejecting) computation path and
ffl there are no rejecting (resp., accepting) computation paths.
If M strongly accepts or strongly rejects x, M is said to perform a strong computation
on input x. An NP machine that on every input performs a strong computation is
called a strong NP machine. It is well known that exactly the sets in NP " co-NP are
accepted by strong NP machines [26].
Next we define the kind of self-reducibility that we use in this paper.
Definition 2.1. Let $\prec$ be an irreflexive and transitive order relation on $\Sigma^*$. A
sequence $x_0, x_1, \ldots, x_k$ of strings is called a $\prec$-chain (of length k) from $x_0$ to $x_k$ if
$x_{i-1} \prec x_i$ for $i = 1, \ldots, k$. Relation $\prec$ is called length checkable if there is a polynomial q
such that
1. for all $x, y \in \Sigma^*$, $x \prec y$ implies $|y| \le q(|x|)$,
2. the language $\{\langle x, y, 0^k \rangle \mid$ there is a $\prec$-chain of length k from x to y$\}$ is in NP.
Definition 2.2. A set A is self-reducible, if there is a polynomial-time oracle
machine $M_{self}$ and a length checkable order relation $\prec$ such that $A = L(M_{self}, A)$ and,
on any input x, $M_{self}$ queries the oracle only about strings $y \prec x$.
It is straightforward to check that the polynomially related self-reducible sets introduced
by Ko [23] as well as the length-decreasing and word-decreasing self-reducible
sets of Balc'azar [6] are self-reducible in our sense. Furthermore, it is well-known (see,
for example, [9, 6, 29]) that complexity classes like NP, \Sigma P
PSPACE, and EXP have many-one complete self-reducible sets.
Karp and Lipton [22] introduced the notion of advice functions in order to characterize
non-uniform complexity classes. A function $h : \mathbb{N} \to \Sigma^*$ is called a polynomial-
length function if for some polynomial p and for all $n \ge 0$, $|h(n)| \le p(n)$. For a
class C of sets, let C/poly be the class of sets A such that there is a set $I \in C$ and a
polynomial-length function h such that for all n and for all $x \in \Sigma^{\le n}$,
$$x \in A \iff \langle x, h(n) \rangle \in I.$$
Function h is called an advice function for A, whereas I is the corresponding interpreter
set.
In this paper we will heavily make use of the "hashing technique" which has been
very fruitful in complexity theory. Here we review some notations and facts about
hash families. We also extend the notion of "collision" by introducing the concept of
a "half-collision" which is central to our proof technique.
Sipser [34] used universal hashing, originally invented by Carter and Wegman
[13], to decide (probabilistically) whether a finite set X is large or small. A linear
function h from $\Sigma^m$ to $\Sigma^k$ is given by a Boolean (k, m)-matrix $(a_{ij})$ and maps
any string $x = x_1 \cdots x_m$ to a string $h(x) = y_1 \cdots y_k$, where $y_i$ is the inner product (over GF(2))
of the i-th row $a_i$ and x.
Let $x \in \Sigma^m$, $Y \subseteq \Sigma^m$, and let h be a linear hash function from $\Sigma^m$ to $\Sigma^k$. Then
we say that x has a collision on Y w.r.t. h if there exists a string $y \in Y$, different
from x, such that $h(x) = h(y)$. More generally, if X is a subset of $\Sigma^m$ and H is a
family $H = (h_1, \ldots, h_l)$ of linear hash functions from $\Sigma^m$ to $\Sigma^k$, then we say that X has
a collision on Y w.r.t. H (Collision(X, Y, H) for short) if there is some $x \in X$ that
has a collision on Y w.r.t. every $h_i$ in H. That is, there is an $x \in X$ such that for all
$i \in \{1, \ldots, l\}$ there exists a $y \in Y$, $y \ne x$, with $h_i(x) = h_i(y)$.
If X has a collision on itself w.r.t. H, we simply say that X has a collision w.r.t.
H. Next we extend the notion of "collision" in the following way. For any $X, Y \subseteq \Sigma^m$ and any
family $H = (h_1, \ldots, h_l)$ of linear hash functions, we say that X has a
half-collision on Y w.r.t. H (Half-Collision(X, Y, H) for short) if there is some $x \in X$
that has a collision on Y w.r.t. at least $\lceil l/2 \rceil$ many of the hash functions $h_i$ in H.
That is,
$$\mathrm{Half\text{-}Collision}(X, Y, H) \iff \exists x \in X:\ |\{\, i \mid \exists y \in Y,\ y \ne x:\ h_i(x) = h_i(y) \,\}| \ge \lceil l/2 \rceil.$$
An important relationship between collisions and half-collisions is the following one:
If X has a collision w.r.t. H on $Y_1 \cup Y_2$, then X must have a half-collision w.r.t.
H either on $Y_1$ or on $Y_2$.
Note that the predicate Collision(X; Y; H) can be decided in NP provided that
membership in X and Y can be tested in NP. More precisely, the language $\{\langle v, H \rangle \mid \mathrm{Collision}(X_v, Y_v, H)\}$
(as well as the set $\{\langle v, H \rangle \mid \mathrm{Half\text{-}Collision}(X_v, Y_v, H)\}$) belongs
to NP, if the sets $X_v$ and $Y_v$ are succinctly represented in such a way that the languages
$\{\langle v, x \rangle \mid x \in X_v\}$ and $\{\langle v, y \rangle \mid y \in Y_v\}$ are in NP.
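To make these definitions concrete, here is a small brute-force Python sketch (ours, exponential and purely illustrative): a linear hash function is a random Boolean k x m matrix acting on bit vectors over GF(2), and the collision and half-collision predicates are evaluated by direct enumeration.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)

def random_hash_family(l, m, k):
    """A family H = (h_1, ..., h_l) of linear hash functions from {0,1}^m to {0,1}^k,
    each represented by a random Boolean k x m matrix."""
    return [rng.integers(0, 2, size=(k, m)) for _ in range(l)]

def h_apply(M, x):
    return tuple((M @ x) % 2)                 # inner products over GF(2)

def collides(x, Y, M):
    """x has a collision on Y w.r.t. the single hash function M."""
    return any(not np.array_equal(x, y) and h_apply(M, x) == h_apply(M, y) for y in Y)

def collision(X, Y, H):
    return any(all(collides(x, Y, M) for M in H) for x in X)

def half_collision(X, Y, H):
    need = (len(H) + 1) // 2                  # at least ceil(l/2) of the functions
    return any(sum(collides(x, Y, M) for M in H) >= need for x in X)

m, k, l = 6, 3, 5
H = random_hash_family(l, m, k)
X = [np.array(bits) for bits in product([0, 1], repeat=m)]   # |X| = 64 > l * 2^k = 40
print(collision(X, X, H))        # True, as Theorem 2.3 below guarantees
print(half_collision(X, X, H))   # also True: a collision implies a half-collision
```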
We denote the set of all families l ) of l linear hash functions from
\Sigma m to \Sigma k by H(l; m; k). The following theorem is proved by a pigeon-hole argument.
It says that every sufficiently large set must have a collision w.r.t. any hash family.
Theorem 2.3. [34] For any hash family $H \in H(l, m, k)$ and any set $X \subseteq \Sigma^m$ of
cardinality $|X| > l \cdot 2^k$, X must have a collision w.r.t. H.
On the other hand, we get from the next theorem (called Coding Lemma in [34])
an upper bound on the collision probability for sufficiently small sets.
Theorem 2.4. [34] Let X ' \Sigma m be a set of cardinality at most 2 k\Gamma1 . If we
choose a hash family H uniformly at random from H(k; m; k), then the probability
that X has a collision w.r.t. H is at most 1=2.
We will also make use of the following extension of Theorem 2.4 which can be
proved along the same lines.
Theorem 2.5. Let X ' \Sigma m be a set of cardinality at most 2 k\Gammas . If we choose a
family H uniformly at random from H(l; m; k), then the probability that X has
a collision w.r.t. H is at most 2 k\Gammas(l+1) .
Gavald'a [14] extended Sipser's Coding Lemma (Theorem 2.4) to the case of a
collection C of exponentially many sets. The following theorem has a similar flavor.
Theorem 2.6. Let C be a collection of at most 2 n subsets of \Sigma m , each of which
has cardinality at most 2 k\Gammas . If we choose a hash family H uniformly at random from
$H(l, m, k)$, then the probability that some $X \in C$ has a collision w.r.t. H is at most $2^{n+k-s(l+1)}$.
Proof. By Theorem 2.5, we have that for every fixed X 2 C, the probability
that it has a collision w.r.t. a randomly chosen hash family H 2 H(l; m; k) is at
most $2^{k-s(l+1)}$. Hence, the probability that there exists such a set $X \in C$ is at most $2^{n+k-s(l+1)}$.
In this paper we make use of a corresponding result for the case of half-collisions.
Theorem 2.7. Let X ' \Sigma m and let C be a collection of at most 2 n subsets of
, each of which has cardinality at most 2 k\Gammas\Gamma2 . If we choose a hash family H
uniformly at random from H(l; m; k), then the probability that X has a half-collision
on some Y 2 C w.r.t. H is at most jXj \Delta 2 n\Gammasl=2 .
Proof. For every fixed Y 2 C and every fixed x 2 X, the probability that x has
a collision on Y w.r.t. a randomly chosen h is at most 2 \Gammas\Gamma2 . Hence, the probability
that x has a collision on Y w.r.t. at least half of the functions in a randomly chosen
hash family $H \in H(l, m, k)$ is at most
$$\sum_{i=\lceil l/2 \rceil}^{l} \binom{l}{i} \left(2^{-s-2}\right)^{i} \;\le\; 2^{l} \cdot 2^{-(s+2)\,l/2} \;=\; 2^{-s l/2}.$$
That is, the probability that x has a half-collision on Y w.r.t. a randomly chosen hash
family H is bounded by 2 \Gammasl=2 . Hence, the probability that there exists a Y 2 C and
an x 2 X such that x has a half-collision on Y w.r.t. H is at most jXj \Delta 2 n\Gammasl=2 .
3. Lowness of self-reducible sets in P/poly. In this section, we show that every
self-reducible set A in (NP " co-NP)=poly is low for ZPP(NP). Let I 2 NP " co-NP
be an interpreter set and h be an advice function for A. We construct a probabilistic
algorithm T and an NP oracle O having the following two properties:
a) The expected running time of T is polynomially bounded.
b) On every computation path on input 0 n , T with oracle O outputs some
information that can be used to determine the membership to A of any x
up to length n by some strong NP computation (in the sense of [26]).
Using these properties, we can prove the lowness of A for ZPP(NP) as follows: In
order to simulate any NP(A) computation, we first precompute the above mentioned
information for A (up to some length) by T O , and then by using this information,
we can simulate the NP(A) computation by some NP(NP " co-NP) computation.
Note that the precomputation (performed by T O ) can be done in ZPP(NP), and
since NP(NP ∩ co-NP) = NP, the remaining computation can be done in NP. Hence,
NP(A) $\subseteq$ ZPP(NP), which implies further that ZPP(NP(A)) $\subseteq$ ZPP(ZPP(NP)) (=
ZPP(NP) [41]).
We will now make the term "information" precise. For this, we need some additional
notation. Let the self-reducibility of A be witnessed by a polynomial-time
oracle machine M self , a length checkable order relation -, and a polynomial q. We
assume that $|h(n)| = p(n)$ for some fixed polynomial $p > 0$. In the following, we
fix n and consider instances of length up to q(n) as well as advice strings of length
exactly p(n).
• A sample is a sequence $S = \langle x_1, b_1 \rangle, \ldots, \langle x_t, b_t \rangle$ of pairs, where the $x_i$'s are
instances of length up to q(n) and $b_i \in \{0, 1\}$.
• For any sample S, let Consistent(S) be the set of all
advice strings $w \in \Sigma^{p(n)}$ that are consistent with S, i.e.,
$$\mathrm{Consistent}(S) = \{\, w \in \Sigma^{p(n)} \mid I(x_i, w) = b_i \text{ for } i = 1, \ldots, t \,\}.$$
The cardinality of Consistent(S) is denoted by c(S).
• For any sample S and any instance x, let Accept(x, S) (resp., Reject(x, S))
be the set of all consistent advice strings that accept x (resp., reject x):
$$\mathrm{Accept}(x, S) = \{\, w \in \mathrm{Consistent}(S) \mid I(x, w) = 1 \,\}$$
and
$$\mathrm{Reject}(x, S) = \{\, w \in \mathrm{Consistent}(S) \mid I(x, w) = 0 \,\}.$$
• Let Correct(x, S) be the set $\{w \in \mathrm{Consistent}(S) \mid I(x, w) = A(x)\}$ of consistent
advice strings that decide x correctly, and let Incorrect(x, S) be the
complementary set $\{w \in \mathrm{Consistent}(S) \mid I(x, w) \ne A(x)\}$.
Note that the sets Accept(x, S) and Reject(x, S) (as well as Correct(x, S) and
Incorrect(x, S)) form a partition of the set Consistent(S), and that Correct(x, S)
equals Accept(x, S) if $x \in A$ and equals Reject(x, S) otherwise.
The above condition b) can now be precisely stated as follows:
b) On every computation path on input 0 n , T O outputs a pair hS; Hi consisting
of a sample S and a linear hash family H such that for all x up to length
n, Consistent(S) has a half-collision w.r.t. H on Correct(x; S), but not on
Incorrect(x; S).
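The next paragraph describes in prose how such a pair hS; Hi is used; the following brute-force sketch (ours, exponential in the advice length and purely illustrative) spells the same test out in code. The interpreter I, the sample S and the hash functions in H are hypothetical stand-ins to be supplied externally.

```python
from itertools import product

def consistent(S, I, p):
    """All advice strings w in {0,1}^p with I(x_i, w) = b_i for every pair in the sample S."""
    return [w for w in product([0, 1], repeat=p) if all(I(x, w) == b for x, b in S)]

def half_collision(X, Y, H):
    """X has a half-collision on Y w.r.t. H; the hash functions are given as callables."""
    need = (len(H) + 1) // 2
    return any(sum(any(y != x and h(x) == h(y) for y in Y) for h in H) >= need for x in X)

def decide(x, S, H, I, p):
    """Strong-NP-style decision from a pair (S, H) satisfying condition b): accept or
    reject x according to whether Consistent(S) has a half-collision on Accept(x, S)
    or on Reject(x, S)."""
    cons = consistent(S, I, p)
    accept = [w for w in cons if I(x, w) == 1]
    reject = [w for w in cons if I(x, w) == 0]
    if half_collision(cons, accept, H):
        return 1
    if half_collision(cons, reject, H):
        return 0
    raise ValueError("the pair (S, H) does not satisfy condition b) for this input")
```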
Once we have a pair hS; Hi satisfying condition b), we can determine whether an
instance x of length up to n is in A by simply checking whether Consistent(x; S)
has a half-collision w.r.t. H on Accept(x; S) or on Reject(x; S). Since condition b)
guarantees that the half-collision can always be found, this checking can be done by
a strong NP computation. Let us now prove our main lemma.
Lemma 3.1. For any self-reducible set A in (NP ∩ co-NP)/poly, there exist a
probabilistic transducer T and an oracle O in NP satisfying the above two conditions.
Proof. We use the notation introduced so far. Recall that q(n) is a length bound
on the queries occuring in the self-reduction tree produced by M self on any instance of
length n and that p(n) is the advice length for the set of all instances of length up to
q(n). Let l be the polynomial defined as 1). Further, we denote
by $\hat{\Sigma}^{\le n}$ the set $\{\, y \mid \exists x \in \Sigma^{\le n}:\ y \preceq x \,\}$. Then it is clear that $\Sigma^{\le n} \subseteq \hat{\Sigma}^{\le n} \subseteq \Sigma^{\le q(n)}$. A
description of T is given below.
input 0^n;
S := the empty sample;
loop
    choose $H_1, \ldots, H_{p(n)}$, where each $H_k$ is chosen uniformly at random from H(l(n), p(n), k);
    $k_{max}$ := max{ k | 1 <= k <= p(n) and Consistent(S) has a collision w.r.t. $H_k$ };
    if there exists an $x \in \hat{\Sigma}^{\le n}$ such that Consistent(S) has
         a half-collision on Incorrect(x, S) w.r.t. $H_{k_{max}}$
    then
         use oracle O to find such a string x and to determine A(x);
         S := S#$\langle x, A(x) \rangle$
    else exit(loop) end
end loop
output $\langle S, H_{k_{max}} \rangle$
Starting with the empty sample, T enters the main loop. During each execution
of the loop, T first randomly guesses a series of p(n) many hash families $H_1, \ldots, H_{p(n)}$,
where $H_k \in H(l(n), p(n), k)$, and computes the integer $k_{max}$ as the maximum
$k \in \{1, \ldots, p(n)\}$ such that Consistent(S) has a collision w.r.t. $H_k$.
by a padding trick we can assume that c(S) is always larger than 2l(n), implying that
Consistent(S) must have a collision w.r.t. H 1 . Since, in particular, Consistent(S) has
a collision w.r.t. H kmax , it follows that for every instance x 2 \Sigma -n , Consistent(S) has
a half-collision w.r.t. H kmax on either Correct(x; S) or Incorrect(x; S). If there exists
a string x 2 \Sigma -n such that Consistent(S) has a half-collision on Incorrect(x; S) w.r.t.
H kmax , then this string is added to the sample S and T continues executing the loop.
(We will describe below how T uses the NP oracle O to find x in this case.) Otherwise,
the pair hS; H kmax i fulfills the properties stated in condition b) and T halts.
We now show that the expected running time of T is polynomially bounded. Since
the initial size of Consistent(S) is 2 p(n) , and since Consistent(S) never becomes empty,
it suffices to prove that for some polynomial r, T eliminates in each single execution
of the main loop with probability at least 1=r(n) at least an 1=r(n)-fraction of the
circuits in Consistent(S). In fact, we will show that each single extension of S by a
reduces the size of Consistent(S) with probability at least by a
factor smaller than can only perform more than 2 7 l(n)p(n) loop
iterations, if during some iteration of the main loop T extends S by a pair hx; A(x)i
which does not shrink the size of Consistent(S) by a factor smaller than
the probability for this event is bounded by 2 7 l(n)p(n)
Let S be a sample and let kmax be the corresponding integer as determined by T
during some specific execution of the loop. We first derive a lower bound for kmax .
be the smallest integer k - 1 such that c(S) - l(n)2 k+1 . Since either
p(n) or Consistent(S) does not have a collision w.r.t. the hash family H kmax
1), we have (using Theorem 2.3) that c(S) - l(n)2 kmax +1 . Hence,
Since T expands S only by strings x 2 \Sigma -n such that Consistent(S) has a
half-collision on Incorrect(x; S) w.r.t. H kmax , and since Consistent(S#hx;
the probability that the size of Consistent(S) does
not decrease by a factor smaller than bounded by the probability that,
w.r.t. H kmax , Consistent(S) has a half-collision on some set Incorrect(x; S) of size at
most c(S)=2 7 l(n). Let
it follows from
Theorem 2.7 that the probability of Consistent(S) having a half-collision on some
w.r.t. a uniformly at random chosen hash family H 2 H(l(n);
at most
Thus the probability that for some k - 0, Consistent(S) has a half-collision w.r.t.
H k0+k on some set Incorrect(x; S) which is of size at most c(S)=2 7 l(n) is bounded by
We finally show how T determines an instance x 2 \Sigma -n (if it exists) such that
Consistent(S) has a half-collision on Incorrect(x; S) w.r.t. H kmax . Intuitively, we use
the self-reducibility of A to test the "correctness" w.r.t. A of the "program" hS; H kmax i,
where we say that
ffl a pair hS; Hi accepts an instance x if Consistent(S) has a half-collision on
ffl hS; Hi rejects x if Consistent(S) has a half-collision on Reject(x; S) w.r.t. H.
Notice that an (incorrect) program might accept and at the same time reject an
instance. The main idea to find out whether hS; H kmax i is incorrect on some instance
(meaning that w.r.t. H kmax Consistent(S) has a half-collision on
Incorrect(x; S)) is to test whether the program hS; H kmax i is in accordance with the
output of M self when the oracle queries of M self are answered according to the program
To be more precise, consider the NP set
$B = \{\langle z, S, H \rangle \mid$ there is a computation path $\pi$ of M self on input z fulfilling
the following properties:
- if a query q is answered 'yes', then hS; Hi accepts q,
- if a query q is answered 'no', then hS; Hi rejects q,
if - is accepting, then hS; Hi rejects z, and
if - is rejecting, then hS; Hi accepts z g.
Then, as shown by the next claim, the correctness of hS; H kmax i on an instance z can
be decided by asking whether hz; belongs to B, provided that hS; H kmax i is
correct on all potential queries of M self on input z.
Claim. Assume that hS; H kmax i is correct on all y OE z. Then hS; H kmax i is
incorrect on z if and only if $\langle z, S, H_{k_{max}} \rangle$ belongs to B.
Proof. Using the fact that for every instance x 2 \Sigma -n , Consistent(S) has a half-
collision w.r.t. H kmax on either Correct(x; S) or Incorrect(x; S), it is easy to see that if
is incorrect on z, then the computation path - followed by M self (z) under
oracle A witnesses hz; B. For the converse, assume that hz;
belongs to B and let - be a computation path witnessing this fact. Note that all
queries q on - are answered correctly w.r.t. A, since otherwise hS; H kmax i were incorrect
on q OE z. Hence, - is the path followed by M self (z) under oracle A and therefore
decides z correctly. On the other hand, since - witnesses hz;
indeed is incorrect on z.
Now we can define the oracle set O as C \Phi D, where
Hi j there is a -chain of length (at least) k from some
string y 2 \Sigma -n to some string z - x such that hz;
and
Hi j there is an accepting computation path - of M self on input
x such that any query q is only answered 'yes' (`no') if Consistent(S)
has a half-collision on Accept(q; S) (resp., Reject(q; S)) w.r.t. H g.
Note that the proof of the claim above also shows that for any z 2 \Sigma -n such
that is correct on all y OE z, z 2 A if and only if hz;
to D. Now we can complete the description of T . T first asks whether the
string belongs to C. It is clear that a negative answer implies
that is correct on \Sigma -n . Otherwise, by asking queries of the form
computes by binary search i max as the maximum value
belongs to C (a similar idea is used
OBLER AND O. WATANABE
input
loop
randomly from H(l(n); p(n); k),
has a collision w.r.t. H k g
if
else exit(loop) end
loop
output
in [27]). Knowing i determines the lexicographically smallest string xmin
such that h0 is in C. Since hq; holds for all
instances q OE xmin , it follows inductively from the claim that hS; H kmax i is correct on
all q OE xmin . Hence, must be incorrect on xmin , and furthermore, T can
determine the membership of xmin to A by asking whether the string hx min ;
belongs to D.
Theorem 3.2. Every self-reducible set A in the class (NP " co-NP)=poly is low
for ZPP(NP).
Proof. We first show that NP(A) ' ZPP(NP). Let L be a set in NP(A), and let
M be a deterministic polynomial-time oracle machine such that for some polynomial
t,
Let s(n) be a polynomial bounding the length of all oracle queries of M on some
input hx; yi where x is of length n. Then L can be accepted by a probabilistic oracle
machine N using the following NP oracle
O Hi j there is a y 2 \Sigma t(jxj) such that M on input hx; yi has an
accepting path - on which each query q is answered 'yes' (`no') only
if Consistent(S) has a half-collision on Accept(q; S) (resp., Reject(q; S))
w.r.t. H g.
Here is how N accepts L. On input x, N first simulates T on input 0 s(jxj) to compute
a pair hS; H kmax i as described above (T asks questions to some NP oracle O). Then
N asks the query hx; O 0 to find out whether x is in L.
This proves that NP(A) ' ZPP(NP). Since via a proof
that relativizes, it follows that ZPP(NP(A)) is also contained in ZPP(NP), showing
that A is low for ZPP(NP).
4. Collapse consequences. As a direct consequence of Theorem 3.2 we get an
improvement of Karp, Lipton, and Sipser's result [22] that NP is not contained in
P/poly unless the polynomial-time hierarchy collapses to \Sigma P
.
Corollary 4.1. If NP is contained in (NP " co-NP)=poly then the polynomial-time
hierarchy collapses to ZPP(NP).
Proof. Since the NP-complete set SAT is self-reducible, the assumption that NP
is contained in (NP " co-NP)=poly implies that SAT is low for ZPP(NP), and hence
the polynomial-time hierarchy collapses to ZPP(NP).
The collapse of the polynomial-time hierarchy deduced in Corollary 4.1 is quite
close to optimal, at least in some relativized world [17, 39]: there is an oracle relative to
which NP is contained in P/poly but the polynomial-time hierarchy does not collapse
to P(NP).
In the rest of this section we report some other interesting collapses which can be
easily derived using (by now) standard techniques, and which have also been pointed
out independently by several researchers to the second author. First, it is straightforward
to check that Theorem 3.2 relativizes: For any oracle B, if A is a self-reducible
set in the class (NP(B) " co-NP(B))=poly, then NP(A) is contained in ZPP(NP(B)).
Consequently, Theorem 3.2 generalizes to the following result.
Theorem 4.2. If A is a self-reducible set in the class (\Sigma P
)=poly, then
As a direct consequence of Theorem 4.2 we get an improvement of results in
[1, 20] stating (for
k is not contained in (\Sigma P
)=poly unless the
polynomial-time hierarchy collapses to \Sigma P
.
Corollary 4.3. Let k - 1. If \Sigma P
k is contained in (\Sigma P
)=poly, then the
polynomial-time hierarchy collapses to ZPP(\Sigma P
Proof. Since \Sigma P
contains complete self-reducible languages, the assumption that
k is contained in (\Sigma P
)=poly implies that \Sigma P
Yap [40] proved that \Pi P
k is not contained in \Sigma P
=poly unless the polynomial-time
hierarchy collapses to \Sigma P
k+2 . As a further consequence of Theorem 4.2 we get the
following improvement of Yap's result.
Corollary 4.4. For k - 1, if \Pi P
=poly, then
Proof. The assumption that \Pi P
k is contained in \Sigma P
=poly implies that \Sigma P
k+1 is
contained in \Sigma P
=poly ' (\Sigma P
)=poly. Hence we can apply Corollary 4.3.
As corollaries to Theorem 4.2, we also have similar collapse results for many other
complexity classes. What follows are some typical examples.
Corollary 4.5. For K 2 co-NP)=poly then K is low
for ZPP(NP).
Proof. It is well-known that for every set A in UP (FewP), the left set of A [30] is
word-decreasing self-reducible and in UP (resp., FewP). Thus, under the assumption
that UP ' (NP " co-NP)=poly (resp., FewP ' (NP " co-NP)=poly) it follows by
Theorem 3.2 that the left set of A (and since A is polynomial-time many-one reducible
to its left set, also is low for ZPP(NP).
Corollary 4.6. For every k - 1, if C=P ' (\Sigma P
)=poly then
Proof. First, since C=P has complete word-decreasing self-reducible languages
)=poly implies C=P ' ZPP(\Sigma P
OBLER AND O. WATANABE
)=poly implies PH ' (\Sigma P
k )=poly and therefore
PH collapses to ZPP(\Sigma P
k ) by Corollary 4.3. Finally, since C=P(PH) ' BPP(C= P)
[37], it follows that C=P(PH) ' PH, and since
[38]), we get inductively that CH ' PH (' ZPP(\Sigma P
Corollary 4.7. Let K 2. If for some k - 1,
)=poly, then K ' PH and PH collapses to ZPP(\Sigma P
Proof. The proof for K 2 fEXP;PSPACEg is immediate from Theorem 4.2 since
PSPACE has complete (length-decreasing) self-reducible languages, and since EXP
has complete (word-decreasing) self-reducible languages [6].
The proof for K 2 is analogous to the one of Corollary 4.6
using the fact that ModmP has complete word-decreasing self-reducible languages
[29], and that PH ' BPP(Modm P) [37, 35].
Since our proof technique is relativizable, the above results hold for every relativized
world. On the other hand, it is known that for some classes stronger collapse
consequences can be obtained by using non-relativizable arguments.
Theorem 4.8. [28, 4, 3] For K 2 fPP; ModmP;PSPACE;EXPg, if K ' P/poly
then K ' MA.
Harry Buhrman pointed out to us that Corollary 4.7 can also be derived from
Theorem 4.8.
5. Circuit complexity. Kannan [21] proved that for every fixed polynomial s,
there is a set in \Sigma P
which cannot be decided by circuits of size s(n). Using a
padding argument, he obtained the existence of sets in NEXP(NP) " co-NEXP(NP)
not having polynomial-size circuits.
Theorem 5.1. [21]
1. For every polynomial s, there is a set in \Sigma P
2 that does not have circuits
of size s(n).
2. For every increasing time-constructible super-polynomial function f(n), there
is a set in NTIME[f(n)](NP)"co-NTIME[f(n)](NP) that does not have polynomial
size circuits.
As an application of our results in Section 3, we can improve Kannan's results
in every relativized world from the class \Sigma P
2 to ZPP(NP), and from the class
" co-NTIME[f(n)](NP) to ZPTIME[f(n)](NP), respectively. Here
ZPTIME[f(n)](NP) denotes the class of all sets that are accepted by some zero error
probabilistic machine in expected running time O(f(n)) relative to some NP oracle.
Note that for all sets in the class P/poly we may fix the interpreter set to some
appropriate one in P. Let I univ denote such a fixed interpreter set. Furthermore,
P/poly remains the same class, if we relax the notion of an advice function h (w.r.t.
I univ ) as follows: For every x, I univ (x; h(jxj)), i.e., h(n) has to decide correctly
only A =n (instead of A -n ).
A sequence of circuits Cn , n - 0, is called a circuit family for A, if for every n - 0,
Cn has n input gates, and for all n-bit strings x 1
It is well-known (see, e.g., [9]) that I univ can be chosen in such a way that advice
length and circuit size (i.e., number of gates) are polynomially related to each other.
More precisely, we can assume that there is a polynomial p such that the following
holds for every set A.
ffl If h is an advice function for A w.r.t. I univ , then there exists a circuit family
Cn , n - 0, for A of size jCn j - p(n
ffl If Cn , n - 0, is a circuit family for A, then there exists an advice function h
for A w.r.t. I univ of length jh(n)j - p(jC n j).
Moreover, we can assume that for every polynomial-time interpreter set I there is a
constant c I such that if h is an advice function for A w.r.t. I, then there exists an
advice function h 0 for A w.r.t. I univ of length jh 0 (n)j I for all n.
The following lemma is obtained by a direct diagonalization (cf. the corresponding
result in [21]). A set S is called -printable (see [16]) if there is a polynomial-time
oracle transducer T and an oracle set A 2 C such that on any input 0 n , T A outputs
a list of all strings in S -n .
Lemma 5.2. For every fixed polynomial s, there is a \Delta P
3 -printable set A such that
every advice function h for A is of length jh(n)j - s(n), for almost all n.
Proof. For a given n, be the sequence of strings of length n,
enumerated in lexicographic order. Consider the two sets Have-Advice and Find -A
defined as follows:
Have-Advice ,
9 a j+1 \Delta \Delta \Delta a
Since there are only 2 strings w in \Sigma !s(n) , at least one pair of the
form hn; a 1 \Delta \Delta \Delta a s(n) i is not contained in Have-Advice (provided that s(n) - 2 n ). Let
ff n denote the lexicographically smallest such pair hn; a 1 \Delta \Delta \Delta a s(n) i, i.e., there is no
advice of length smaller than s(n) that accepts the strings x according to
A as the set of all strings x i (jx n) such that 1 - i - s(n) - 2 n and the
ith bit of ff n (i.e., a i ) is 1. By a binary search using oracle Find -A, ff n is computable
in polynomial time. Since Have-Advice is in NP and thus Find -A is in NP(NP), it
follows that A is P(NP(NP))-printable. Since furthermore, for almost all n, A =n has
no advice of length smaller than s(n), the lemma follows.
Corollary 5.3. For every fixed polynomial s, there is a set A in ZPP(NP) that
does not have circuits of size s(n).
Proof. If NP does not have polynomial-size circuits, then we can take
Otherwise, by Corollary 4.1, and thus the theorem easily follows from
Lemma 5.2.
Corollary 5.4. Let f be an increasing, time-constructible, super-polynomial
function. Then ZPTIME[f(n)](NP) contains a set A that does not have polynomial-size
circuits.
14 J. K -
OBLER AND O. WATANABE
Proof. If NP does not have polynomial-size circuits, then we can take
Otherwise, by Corollary 4.1, and thus it follows from Lemma 5.2
that there is a set B in ZPTIME[n k ](NP) such that every advice function h for B
is of length jh(n)j - n for almost all n. By the proof technique of Lemma 5.2, we
can assume that in all length n strings of B, 1's only occur at the O(log n) rightmost
positions. Now consider the following set (where n denotes jxj)
and the interpreter set
Clearly, A belongs to ZPTIME[f(n)](NP) and I belongs to P. Furthermore, if h is
an advice function for A, then we have for every y of the form 0 bf(n) 1=k c\Gamman x,
that
where h 0 (n) is a suitable advice function of length jh 0 (n)j I . Thus, it
follows for almost all n that
This shows that the length of h is super-polynomial.
Corollary 5.5. In every relativized world, ZPEXP(NP) contains sets that do
not have polynomial-size circuits.
We remark that the above results are proved by relativizable arguments. On the
other hand, Harry Buhrman [12] and independently Thomas Thierauf [36] pointed out
to us that Theorem 4.8 (which is proved by a non-relativizable proof technique) can be
used to show that MA exp " co-MA exp contains non P/poly sets. Here, MA exp denotes
the exponential-time version of Babai's class MA [5]. That is, MA
where a language L is in MA[f(n)], if there exists a set B 2 DTIME[O(n)] such that
for all x of length n,
where z is chosen uniformly at random from \Sigma f(n) .
Corollary 5.6. [12, 36] MA exp " co-MA exp contains sets that do not have
polynomial size circuits.
Since there exist recursive oracles relative to which all sets in EXP(NP) have polynomial
size circuits [39, 17], it is not possible to extend Corollary 5.5 by relativizing
techniques to the class EXP(NP).
6. Concluding remarks. An interesting question concerning complexity classes
C that are known to be not contained in P/poly but are not known to have complete
sets is whether the existence of sets in C \Gamma P/poly can be constructively shown. For
example, by Corollary 5.5 we know that the class ZPEXP(NP) contains sets that
do not have polynomial-size circuits. But we were not able to give a constructive
proof of this fact. To the best of our knowledge, no explicit set is known even in
Acknowledgments
For helpful discussions and suggestions regarding this work we are very grateful to
H. Buhrman, R. Gavald'a, L. Hemaspaandra, M. Ogihara, U. Sch-oning, R. Schuler,
and T. Thierauf. We like to thank H. Buhrman, L. Hemaspaandra, and M. Ogihara
for permitting us to include their observations in the paper.
--R
On hiding information from an oracle
Queries and concept learning
Arithmetization: A new method in structural complexity
a randomized proof system and a hierarchy of complexity classes
Structural Complexity Theory I
Introduction to the Theory of Complexity
Oracles and queries that are sufficient for exact learning
Universal classes of hash functions
Bounding the complexity of advice functions
Computational complexity of probabilistic complexity classes
Computation times of NP sets of different densities
On relativized exponential and probabilistic complexity classes
How hard are sparse sets?
Random generation of combinatorial structures from a uniform distribution
Some connections between nonuniform and uniform complexity classes
Journal of Computer and System Sciences
Strong nondeterministic polynomial-time reducibilities
Algebraic methods for interactive proof systems
On sparse hard sets for counting classes
On polynomial-time bounded truth-table reducibility of NP sets to sparse sets
Computational Complexity
On simultaneous resource bounds
A complexity theoretic approach to randomness
Probabilistic polynomials
Counting classes are at least as hard as the polynomial-time hierarchy
Complexity classes defined by counting quantifiers
Relativized circuit complexity
Some consequences of non-uniform conditions on uniform classes
Robustness of probabilistic computational complexity classes under definitional perturbations
--TR
--CTR
Christian Glaer , Lane A. Hemaspaandra, A moment of perfect clarity II: consequences of sparse sets hard for NP with respect to weak reductions, ACM SIGACT News, v.31 n.4, p.39-51, Dec. 2000
Valentine Kabanets , Jin-Yi Cai, Circuit minimization problem, Proceedings of the thirty-second annual ACM symposium on Theory of computing, p.73-79, May 21-23, 2000, Portland, Oregon, United States
Lane A. Hemaspaandra , Mitsunori Ogihara , Gerd Wechsung, Reducing the number of solutions of NP functions, Journal of Computer and System Sciences, v.64 n.2, p.311-328, March 2002
Rahul Santhanam, Circuit lower bounds for Merlin-Arthur classes, Proceedings of the thirty-ninth annual ACM symposium on Theory of computing, June 11-13, 2007, San Diego, California, USA
Lane A. Hemaspaandra, SIGACT News complexity theory column 32, ACM SIGACT News, v.32 n.2, June 2001
Jin-Yi Cai , Venkatesan T. Chakaravarthy , Lane A. Hemaspaandra , Mitsunori Ogihara, Competing provers yield improved Karp-Lipton collapse results, Information and Computation, v.198 n.1, p.1-23, April 10, 2005
Piotr Faliszewski , Lane Hemaspaandra, Open questions in the theory of semifeasible computation, ACM SIGACT News, v.37 n.1, March 2006
Lance Fortnow, Beyond NP: the work and legacy of Larry Stockmeyer, Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, May 22-24, 2005, Baltimore, MD, USA
Johannes Kbler , Rainer Schuler, Average-case intractability vs. worst-case intractability, Information and Computation, v.190 n.1, p.1-17, April 10, 2004 | polynomial-size circuits;lowness;advice classes;randomized computation |
298690 | Optimal Parallel Algorithms for Finding Proximate Points, with Applications. | AbstractConsider a set P of points in the plane sorted by x-coordinate. A point p in P is said to be a proximate point if there exists a point q on the x-axis such that p is the closest point to q over all points in P. The proximate point problem is to determine all the proximate points in P. Our main contribution is to propose optimal parallel algorithms for solving instances of size n of the proximate points problem. We begin by developing a work-time optimal algorithm running in O(log log n) time and using ${{n \over {\log \log n}}}$ Common-CRCW processors. We then go on to show that this algorithm can be implemented to run in O(log n) time using ${{n \over {\log n}}}$ EREW processors. In addition to being work-time optimal, our EREW algorithm turns out to also be time-optimal. Our second main contribution is to show that the proximate points problem finds interesting, and quite unexpected, applications to digital geometry and image processing. As a first application, we present a work-time optimal parallel algorithm for finding the convex hull of a set of n points in the plane sorted by x-coordinate; this algorithm runs in O(log log n) time using ${{n \over {\log \log n}}}$ Common-CRCW processors. We then show that this algorithm can be implemented to run in O(log n) time using ${{n \over {\log n}}}$ EREW processors. Next, we show that the proximate points algorithms afford us work-time optimal (resp. time-optimal) parallel algorithms for various fundamental digital geometry and image processing problems. Specifically, we show that the Voronoi map, the Euclidean distance map, the maximal empty circles, the largest empty circles, and other related problems involving a binary image of size nn can be solved in O(log log n) time using$${{{n^2} \over {\log \log n}}}$$Common-CRCW processors or in O(log n) time using ${{{n^2} \over {\log n}}}$ EREW processors. | Introduction
Consider a parallel algorithm that solves an instance of size n of some problem in T p (n) time
using p processors. Traditionally, the main complexity measure for assessing the performance of
Work supported in part by NSF grant CCR-9522093, by ONR grant N00014-97-1-0526, and by Grant-in-Aid for
Encouragement of Young Scientists (08780265) from Ministry of Education, Science, Sports, and Culture of Japan
y Dept. of Electrical and Computer Engineering, Nagoya Institute of Technology, Showa-ku, Nagoya 466, JAPAN,
z Department of Computer Science, Old Dominion University, Norfolk, Virginia 23529, USA, [email protected]
the algorithm is the amount W (n) of work performed by the algorithm, defined as the product
(n). The algorithm is termed work-optimal if W (n) 2 \Theta(T (n)), where T (n) is the running
time of the fastest sequential algorithm for the problem. The algorithm is work-time optimal [20] if
it is work-optimal and, in addition, its running time T p (n) is best possible among the work-optimal
algorithms in that model. Needless to say that one of the challenges of parallel algorithm design is
to produce not only work-optimal but, indeed, whenever possible, work-time optimal algorithms.
Occasionally, an even stronger complexity metric is being used - the so-called time-optimality.
Specifically, an algorithm is time-optimal in a given model, if the problem cannot be solved faster
in that model, even if an unbounded number of processors were available.
In this paper we assume the Parallel Random Access Machine (PRAM, for short) which consists
of synchronous processors, each having access to a common memory. We refer the interested reader
to [20] for an excellent discussion of the PRAM model.
Let P be a set of points in the plane sorted by x-coordinate. A point p is a proximate point
of P if there exists a point on the x-axis closer to p than to any other point in P . The proximate
points problem asks to determine all proximate points in P . Clearly, the proximate points problem
can be solved, using an algorithm for finding the Voronoi diagram. However, as argued in [16],
the computation of the Voronoi diagram
log n) time even if the n points are sorted
by x-coordinate. Thus, this naive approach does not yield an optimal solution to the proximate
points problem. Recently, Breu et al. [6] proposed a linear-time algorithm for the proximate points
problem. In spite of its optimality, the algorithm of Breu et al. [6] relies in crucial ways on stack
operations, notoriously hard to parallelize.
Our first main contribution is to propose parallel algorithms for solving instances of size n of
the proximate points problem. Specifically, we first exhibit an algorithm running in O(log log n)
time using n
log log n Common-CRCW processors. We then go on to show that this algorithm can be
implemented to run in O(log n) time using n
log n EREW processors. Our Common-CRCW algorithm
is work-time optimal; the EREW algorithm turns out to also be time-optimal. We establish the
work-time optimality of our Common-CRCW algorithm by a reduction from the minimum finding
problem; the time-optimality of our EREW algorithm follows by a reduction from the OR problem.
Our second main contribution is to show that the proximate points problem has interesting,
and quite unexpected, applications to digital geometry and image processing. To begin, we present
a work-time optimal parallel algorithm for computing the convex hull of a set of n points in the
plane sorted by x-coordinate. This algorithm runs in O(log log n) time using n
log log n Common-
CRCW processors or in O(log n) time using n
log n EREW processors. We show that this algorithm
is work-time optimal in the CRCW model and, in addition, time-optimal in the EREW.
Numerous parallel algorithms have been proposed for computing the convex hull of sorted points
in the plane [4, 7, 12, 13, 21]. Recently, Chen [7] presented an O(log n)-time algorithm using n
log n
processors. Chen et al. [12] presented work-optimal algorithms running in O(log n)-time
algorithm and using n
log n EREW processors, and in an O(log log n)-time algorithm using n
log log n
Common-CRCW processors. Quite recently, Berkman et al. [3] presented an O(log log n)-time
algorithm using n
log log n Common-CRCW processors. Our algorithm features the same performance
as those in [3, 7, 12]. However, our algorithm is much simpler and more intuitive. Further, to the
best of our knowledge, the work-time optimality of the CRCW version and the time-optimality of
the EREW version algorithm has not been solved yet.
Given a binary image the Voronoi map assigns to each pixel in the image the position of the
nearest black pixel. The Euclidean distance map assigns to each pixel the Euclidean distance to the
nearest black pixel. An empty circle of the image is a circle whose interior contains only white pixels.
A maximal empty circle is an empty circle contained in no other empty circle. A largest empty circle
is an empty circle of the largest radius. We refer the reader to Figure 1 for an illustration. The
largest square, diamond, n-gon, etc. are defined similarly. These computations are known to have
numerous applications ranging from clustering and shape analysis [2, 17] to handoff management
in cellular systems [26] to image compression, decomposition, and reconstruction [5, 23, 27, 28, 31].
As further applications, we propose algorithms for computing the Voronoi map, the Euclidean
distance map, the maximal empty circles, and the largest empty circles of a binary image of size
n \Theta n. We begin by presenting a work-time optimal algorithm that computes the Voronoi map
and the Euclidean distance map of a binary image of size n \Theta n in O(log log n) time using n 2
log log n
Common-CRCW processors or in O(log n) time using n 2
log n EREW processors. We also show that
the distance map for various metrics including the well known L k metrics, (k - 1), can also be
computed in the same manner. We then go on to show that all the maximal empty circles and
a largest empty circle of an n \Theta n binary image can be found in O(log log n) time using n 2
log log n
Common-CRCW processors or in O(log n) time using n 2
log n EREW processors. As it turns out,
with minimal changes, this algorithm is applicable to various other kinds of empty figures including
squares, diamonds, n-gon etc.
Recently, Chen et al. [8, 11] and Breu et al. [6] presented O(n 2 )-time sequential algorithms for
computing the Euclidean distance map. Roughly at the same time, Hirata [19] presented a simpler
sequential algorithm to compute the distance map for various distance metrics including
Euclidean, 4-neighbor, 8-neighbor, chamfer, and octagonal. A number of parallel algorithms for
computing the Euclidean distance map have been developed for various parallel models [1,9,10,14].
In particular, the following results have been reported in the recent literature. Lee et al. [22]
Figure
1: Illustrating the Euclidean distance map and the largest empty circle
presented an O(log 2 n)-time algorithm using n 2 EREW processors. Pavel and Akl [24] presented an
algorithm running in O(log n) time and using n 2 EREW processors. Clearly, these two algorithms
are not work-optimal. Chen [8] presented a work-optimal O( n 2
)-time algorithm using p, (p log p -
n), EREW processors. This yields an O(n log n)-time algorithm using n
log n EREW processors.
Fujiwara et al. [18] presented a work-optimal algorithm running in O(log n) time and using n 2
log n
EREW processors and in O( log n
log log n ) time using n 2 log log n
log n Common-CRCW processors. Although
Fujiwara et al. [18] claim that their algorithm is applicable to various distance maps, a closer
analysis reveals that it only applies to a few distance metrics. The main problem seems to be
that their algorithm uses a geometric transform that depends in a crucial way on properties of the
Euclidean distance and, therefore, does not seem to generalize. As we see it, our Euclidean distance
map algorithm has three major advantages over Fujiwara's algorithm. First, the performance of
our algorithm for the CRCW is superior; second, our algorithm applies to a large array of distance
finally, our algorithm is much simpler and more intuitive.
The remainder of this paper is organized as follows: Section 2 introduces the proximate points
problem for the Euclidean distance metric and discusses a number of technicalities that will be
crucial ingredients in our subsequent algorithms. Section 3 presents our parallel algorithms for the
Common-CRCW and the EREW. Section 4 proves that these algorithms are work-time, respectively
time-optimal. Section 5 presents a work-time optimal parallel algorithm for computing the convex
hull of sorted points in the plane. Section 6 uses the proximate points algorithm to computing the
Voronoi map, the Euclidean distance map, the maximal empty circles, and the largest empty circles
of a binary image. Section 7 offers concluding remarks and open problems. Finally, the Appendix
discusses other distance metrics to which the algorithms presented in Section 3 apply.
2 The proximate points problem: a first look
In this section we introduce the proximate points problem along with a number of geometric results
that will lay the foundation of our subsequent algorithms. Throughout, we assume that a point p
is represented by its Cartesian coordinates (x(p); y(p)). As usual, we denote the Euclidean distance
between the planar points p and q by d(p;
Consider a collection of n points sorted by x-coordinate, that is, such that
We assume, without loss of generality that all the points in P have
distinct x-coordinates and that all of them lie above the x-axis. The reader should have no difficulty
confirming that these assumptions are made for convenience only and do not impact the complexity
of our algorithms.
Recall that for every point p i of P the locus of all the points in the plane that are closer to
than to any other point in P is referred to as the Voronoi polygon associated with p i and is
denoted by V (i). The collection of all the Voronoi polygons of points in P partitions the plane into
the Voronoi diagram of P (see [25] p. 204). Let I i , (1 - i - n), be the locus of all the points q
on the x-axis for which d(q; In other words, q 2 I i if and
only if q belongs to the intersection of the x-axis with V (i), as illustrated in Figure 2. In turn, this
implies that I i must be an interval on the x-axis and that some of the intervals I i ,
may be empty. A point p i of P is termed a proximate point whenever the interval I i is nonempty.
Thus, the Voronoi diagram of P partitions the x-axis into proximate intervals. Since the points of
are sorted by x-coordinate, the corresponding proximate intervals are ordered, left to right, as
point q on the x-axis is said to be a boundary point between p i and p j if q is
equidistant to p i and p j , that is, d(p It should be clear that p is a boundary point
between proximate points p i and p j if and only if the q is the intersection of the (closed) intervals
I i and I j . To summarize the previous discussion we state the following result.
Proposition 2.1 The following statements are satisfied:
ffl Each I i is an interval on the x-axis;
ffl The intervals I 1 ; I lie on x-axis in this order, that is, for any non-empty I i and I j
lies to the left of I
I 1 I 2 I 4 I 6 I 7
Figure
2: Illustrating proximate intervals
I 1 I 2 I 3 I 4
I 0
Figure
3: Illustrating the addition of p to g.
ffl If the non-empty proximate intervals I i and I j are adjacent, then the boundary point between
Referring again to Figure 2, among the seven points, five points are proximate
points, while the others are not. Note that the leftmost point p 1 and the rightmost point p n are
always proximate points.
Given three points we say that dominated by p i and p k whenever
fails to be a proximate point of the set consisting of these three points. Clearly, p j is dominated
by p i and p k if the boundary of p i and p j is to the right of that of p j and p k . Since the boundary
of any two points can be computed in O(1) time, the task of deciding for every triple (p
whether p j is dominated by p i and p k takes O(1) time using a single processor.
Consider a collection of points in the plane sorted by x-coordinate, and a
point p to the right of P , that is, such that x(p 1 x(p). We are interested
in updating the proximate intervals of P to reflect the addition of p to P as illustrated in Figure 3.
We assume, without loss of generality, that all points in P are proximate points and let
I n be the corresponding proximate intervals. Further, let I 0
p be the up-dated
proximate intervals of P [ fpg. Let p i be a point such that I 0
i and I 0
are adjacent. By (iii) in
Proposition 2.1, the boundary point between p i and p separates I 0
i and I 0
. As a consequence, (ii)
implies that all the proximate intervals I 0
n must be empty. Furthermore, the addition of p
to P does not affect any of the proximate intervals I j , 1 In other words, for all 1
I 0
are empty, the points p are dominated by p i and p. Thus, every
point n), is dominated by otherwise, the boundary between
would be to the left of that of that between p j and p. This would imply that the non-empty
interval between these two boundaries corresponds to I 0
j , a contradiction. To summarize, we have
the following result.
Lemma 2.2 There exists a unique point p i of P such that:
ffl The only proximate points of P [ fpg are
ffl For the point p j is not dominated by
I 0
ffl For dominated by and the interval I 0
j is empty.
i and I 0
are consecutive on the x-axis and are separated by the boundary point between
and p,
be a collection of proximate points sorted by x-coordinate and let p
be a point to the left of P , that is, such that x(p) ! x(p 1
reference we now take note of the following companion result to Lemma 2.2. The proof is identical
and, thus, omitted.
Lemma 2.3 There exists a unique point p i of P such that:
ffl The only proximate points of P [ fpg are
ffl For not dominated by p and p j+1 . Moreover, for
I 0
ffl For the point p j is dominated by p and p j+1 and the interval I 0
j is empty.
p and I 0
are consecutive on the x-axis and are separated by the boundary point between p and
The unique point p i whose existence is guaranteed by Lemma 2.2 is termed the contact point
between P and p. The second statement of Lemma 2.2 suggests that the task of determining the
unique contact point between P and a point p to the right or left of P reduces, essentially, to binary
search.
Now, suppose that the set
into two subsets g. We are interested
in updating the proximate intervals in the process or merging PL and PR . For this purpose, let
I 2n be the proximate intervals of PL and PR , respectively. We as-
sume, without loss of generality, that all these proximate intervals are nonempty. Let I 0
be the proximate intervals of . We are now in a position to state and prove the next
result which turns out to be a key ingredient in our algorithms.
Lemma 2.4 There exists a unique pair of proximate points PR such that
ffl The only proximate points in PL [ PR are
are empty, and I 0
ffl The proximity intervals I 0
i and I 0
are consecutive and are separated by the boundary point
between
Proof. Let i be the smallest subscript for which p i 2 PL is the contact point between PL and a
point in PR . Similarly, let j be the largest subscript for which the point PR is the contact
point between PR and some point in PL . Clearly, no point in PL to the left of p i can be a proximate
point of P . Likewise, no point in PR to the left of p j can be a proximate point of P .
Finally, by Lemma 2.2 every point in PL to the left of p i must be a proximate point of P .
Similarly, by Lemma 2.3 every point in PR to the right of p i must be a proximate point of P , and
the proof of the lemma is complete.
The points p i and p j whose existence is guaranteed by Theorem 2.4 are termed the contact
points between PL and PR . We refer the reader to Figure 4 for an illustration. Here, the contact
points between PL and PR are p 4 and p 8 .
Next, we discuss a geometric property that enables the computation of the contact points p i
and p j between PL and PR . For each point p k of PL , let q k denote the contact point between p k
and PR as specified by Lemma 2.3. We have the following result.
Lemma 2.5 The point p k is not dominated by p k\Gamma1 and q k if 2 - k - i, and dominated otherwise.
I 1 I 2 I 3 I 4 I 5
I 6 I 7 I 8 I 9 I 10
I 0
9 I 0
Figure
4: Illustrating the contact points between two sets of points
Proof. If dominated by p k\Gamma1 and q k , then I 0
k must be empty. Thus, Lemma 2.4
guarantees that p k , (2 - k - i), is not dominated by p k\Gamma1 and q k . Suppose that p k , (i
is not dominated by p k\Gamma1 and q k . Then, the boundary point between p k and q k is to the right of
that between p k\Gamma1 and p k . Thus, the non-empty interval between these two boundaries corresponds
to I 0
k , a contradiction. Therefore, p k , (i n), is dominated by p k\Gamma1 and q k , completing the
proof.
Lemma 2.5 suggests a simple, binary search-like, approach to finding the contact points p i
and between two sets PL and PR . In fact, using a similar idea, Breu et al. [6] proposed a
sequential algorithm that computes the proximate points of an n-point planar set in O(n) time.
The algorithm in [6] uses a stack to store the proximate points found and, consequently, seems very
hard to parallelize.
3 Parallel algorithms for the proximate points problem
We begin by discussing a parallel algorithm for solving the proximate points problem on the
Common-CRCW. The algorithm will then be converted to run on the EREW. We rely, in part, on
the solution to the well-known LEFTMOST-ONE problem: given a sequence b 1
determine the smallest i, (1 - i - n), such that b
Lemma 3.1 [20] An instance of size n of the LEFTMOST-ONE problem can be solved in O(1)
time using n Common-CRCW processors.
Consider a set of points such that x(p 1 To capture
the neighboring proximate points of each point use three indices c i , l i and r i
defined as follows:
non-proximate points
proximate points
Figure
5: Illustrating indices l i , c i , and r i for a point p i
1. is an proximate pointg;
2. l is an proximate pointg;
3. r is an proximate pointg.
We refer the reader to Figure 5 for an illustration. Note that we must have l
there is no proximate point p j such that l i
then c
Next, we are interested in finding the contact point between the set and a
new point p with x(p n We assume that for every i, (1 - i - n), c i , l i , and r i are available,
and that m, (m - n), processors are at our disposal. The algorithm is essentially performing m-ary
search using Lemma 2.2.
Algorithm Find-Contact-Point(P; p)
Extract a sample S(P ) of size m consisting of the points p c
in P . For
every k, (k - 0), check whether the point p c k n
is dominated by p l k n
and p, and whether
is dominated by p c k n
and p. If p c k n
is not dominated but p r k n
is dominated,
then
is the desired contact point.
such that the point p r k n
is not dominated by p c k n
and p, and p c (k+1) n
is dominated by p l (k+1) n
and p.
Step 3 Execute recursively this algorithm for the set of points P
g to find the contact point.
1, the set P 0 contains at most l (k+1) n
points. Hence, the depth of the recursion is O( log n
log m ). Notice, further, that
algorithm Find-Contact-Point does not perform concurrent reading or writing. Thus, we have
the following result.
Lemma 3.2 Given a set of n points in the plane sorted by x-coordinate and a
point p, with x(p n the task of finding the contact point between P and p can be performed
in O( log n
log using m EREW processors.
Next, consider two sets of points in the
plane such that x(p 1 Assume that for every i the indices c i , l i and r i are
given and that m processors are available to us. The following algorithm finds the contact points
of PL and PR by
m-ary search using Lemma 2.5.
Algorithm Find-Contact-Points-Between-Sets(PL ; PR )
sample points S(PL
from PL . By using the algorithm
Find-Contact-Point and p m of the processors available each, determine for each
sample point
1), the corresponding contact point q c k n
in PR .
Step 2 For each k, (0 - k -
check whether the point p c k n
is dominated by p l k n
and q c k n
, and whether the point p r k n
is dominated p c k n
and q c k n
. If p c k n
is not dominated, yet p r k n
is, output
and q c k n
as the desired contact points.
Step 3 Find k such that the point p r k n
is not dominated by p c k n
and q c k n
is dominated by p l (k+1) n
and q c (k+1) n
Step 4 Execute recursively algorithm Find-Contact-Points-Between-Sets for the sets P 0
and PR and return the desired contact points.
It is not hard to see that algorithm Find-Contact-Points-Between-Sets involves concurrent
reads (because several processors may access a point concurrently), but does not involve concurrent
operations. By Lemma 3.2, Step 1 can be takes O( log n
log m ) time on the CREW model. Steps 2
and 3 run, clearly, in O(1) time. Since P 0
L contains at most n
the depth of recursion
is O( log n
log m ). Thus, altogether, algorithm Find-Contact-Points-Between-Sets runs in O( log 2 n
using m CREW processors.
Lemma 3.3 Given the sets of points in the
plane such that x(p 1 the task of finding the contact points between PL and
PR can be performed in O( log 2 n
using m CREW processors.
Next, we are interested in designing an algorithm to compute the proximate points of a set P
on n points in the plane sorted by x-coordinate in O(log log n) time on the Common-CRCW. We
assume that n processors are available to us. We begin by determining for every i, the indices c i ,
l i , and r i . With this information available, all that remains to be done is to retain all the points p i
for which c i. The details follow.
Algorithm Find-Proximate-Points(P)
Partition the set P into n 1=3 subsets such that for every k, (0 - k -
g. For every point p i in P k , (0 - k - n 1=3 \Gamma 1),
determine the indices c i , l i , and r i local to P k .
Compute the contact points of each pair of sets P i and using
n 1=3 of the processors available. Let q i;j 2 P i denote the contact point between P i and P j .
Step 3 For every P i , find the rightmost contact point p rc i
among all the points q i;j with
and find the leftmost contact point p lc i
over all points q i;j with j ? i. Clearly, x(p rc i
ig.
Step 4 For each set P i , the proximate points lying between rc i and lc i (inclusive) are proximate
points of P . Update each c
It is clear that Step 2 can be performed in O( log n 2=3
log runs in O(1)
time using Lemma 3.1. At this moment, the reader may wonder how the updating of the indices
can be performed efficiently. In fact, as it turns out, this update can be done
in O(1) time. Since the task of updating l i and r i is, essentially, the same as that of updating c i ,
we will only focus on c i . In each P i , for all the points p j , (rc the value of c j is not
changed. For all the points p j with lc i ! j, the value of c j must be changed to lc i . For all points p j
with the value of c j is changed to lc has an proximate point. However, if P
contains no proximate points, we have to find the nearest subset that contains a proximate point.
To do this, first check whether each P i has a proximate point using n 2=3 processors each. Thus,
totally, processors are used for this task. Next, using Lemma 3.1 we determine
k such that contains a proximate pointg for each P i . Since P has n 1=3
groups, this task can be done in O(1) time and n 1=3 processors each and, totally, n 1=3 \Delta n
processors are used. Thus, Step 4 can be done in O(1) time using n processors.
Let TCRCW (n) be the running time of this algorithm. To find the recurrence describing the worst
case running time of algorithm Find-Proximate-Points, we note that Step 1 executes recursively
this algorithm for n 2=3 points, while Steps 2, 3, and 4 run in O(1) time. Thus, we have
confirming that TCRCW (n) 2 O(log log n). Thus, we have:
Lemma 3.4 An instance of size n of the proximate points problem can be solved in O(log log n)
time using n Common-CRCW processors.
Next, we show that the number of processors can be reduced by a factor of log log n without increasing
the running time. The idea is as follows: begin by partitioning the set P into n
log log n subsets
log log n
each of size log log n. Next, using algorithm Sequential-Proximate-Points
find the proximate points within each subset in O(log log n) sequential time and, in the process,
remove from P all the points that are not proximate points. For every i, (1 - i - n
log log n ), let
proximate points in the set P i .
At this moment, execute algorithm Find-Proximate-Point on P 1
log log n
. Since n
processors are required in order to update the indices c i , l i , and r i in O(1), we will proceed slightly
differently. The idea is the following: while executing the algorithm, some of the (currently)
proximate points will cease to be proximate points. To maintain this information efficiently, we
use ranges
log log n
log log n
such that for each P i , fp i;L
are the current proximate points. While executing the algorithm, P i may contain no proximate
points. To find the neighboring proximate points, we use the pointers L 0
log log n
and
log log n
such that
and the set P j contains a proximate pointg,
and the set P j contains a proximate pointg.
By using this strategy, we can find the contact point between a point and P in O( log n
log using
processors as discussed in Lemma 3.2. Thus, the contact points between two subsets can be
found in the same manner as in Lemma 3.3. Finally, the algorithm for Lemma 3.1 can update
i in Step 4 in O(1) time by using O( n
log log n ) processors. To summarize, we have
the following result.
Theorem 3.5 An instance of size n of the proximate points problem can be solved in O(log log n)
time using n
log log n Common-CRCW processors.
We close this section by pointing out that algorithm Find-Proximate-Points can be implemented
efficiently on the EREW. For this purpose, we rely, in part, on the following well known
result [20].
Lemma 3.6 A single step execution of the m-processor CRCW can be simulated by an m-processor
EREW in O(log m) time.
By Lemma 3.6, Steps 2, 3, and 4 of the algorithm can be performed in O(log n) time using
processors, as the CRCW performs these steps in O(1) time using n processors. Let
TEREW (n) be the worst-case running time on the EREW. Then, the recurrence describing the
confirming that T (n) 2 O(log n). Consequently, we have:
Lemma 3.7 An instance of size n of the proximate points problem can be solved in O(log n) time
using n EREW processors.
Using, essentially, the same idea as for the Common-CRCW, we can reduce the number of
processors by a factor of log n without increasing the computing time. Specifically, in case of the
EREW, the n points are partitioned into n
log n subsets each of size log n. Thus, we have
Theorem 3.8 An instance of size n of the proximate points problem can be solved in O(log n) time
using n
log n EREW processors.
4 Lower Bounds
The main goal of this section is to show that the running time of the Common-CRCW algorithm
for the proximate points problem developed in Section 3 cannot be improved while retaining work-
optimality. This, in effect, will prove that our Common-CRCW algorithm is work-time optimal.
We then show that our EREW algorithm is time-optimal.
The work-optimality of both algorithms is obvious; in order to solve the proximate points
problem every point must be accessed at least once.
n) work is required of any algorithm
solving the problem.
Our lower bound arguments rely, in part, on the following fundamental result of Valiant [30].
Lemma 4.1 The task of finding the minimum (maximum) of n real numbers
log n)
time on the CRCW provided that n log O(1) n processors are available.
We now show that the lower bound of Lemma 4.1 holds even if all the item are non-negative.
Lemma 4.2 The task of finding the minimum (maximum) of n non-negative (non-positive) real
numbers requires
log n) time on the CRCW provided that n log O(1) n processors are available.
Proof. Assume that the minimum (maximum) of n non-negative numbers can be computed in
o(log log n) time using n log O(1) n CRCW processors.
With this assumption, we can find the minimum of n real numbers in o(log log n) time as follows:
first, in O(1) time, check whether there are negative numbers in the input. If not, the minimum
of input items can be computed in o(log log n) time. If negative numbers exist, replace every non-positive
number by 0 and find the maximum of their absolute values in the resulting sequence in
o(log log n) time. The maximum thus computed corresponds to the minimum of the original input.
Thus, the minimum of n real numbers can be computed in o(log log n) time, contradicting Lemma
4.1.
Further, we rely on the following classic result of Cook et al. [15].
Lemma 4.3 The task of finding the minimum (maximum) of n real numbers
on the CREW (therefore, also on the EREW) even if infinitely many processors are available.
We shall reduce the task of finding the minimum of a collection A of n non-negative a 1 ; a
to the proximate points problem. Our plan is to show that an instance of size n of the problem of
finding the minimum of a collection of non-negative numbers can be converted, in O(1) time, to an
instance of size 2n of the the proximate points problem involving sorted points in the plane.
For this purpose, let be a set of arbitrary non-negative real numbers that
are input to the minimum problem. We construct a set of points in the plane
by setting for every i, (1
Notice that this construction guarantees that the points in P are sorted by x-coordinate and that
for every i, (1 - i - n), the distance between the point p i and the origin is exactly
Intuitively, our construction places the 2n points circles centered at
the origin. More precisely, for every i, (1 - i - n), the points p i and p n+i are placed on such a
circle C i with radius
. It is very important to note that the construction above can be
carried out in O(1) time using n EREW processors.
In our subsequent arguments, we find it convenient to rely on the next technical result.
Lemma 4.4 Both p i and p i+n are proximate points if and only if a i is the minimum of A.
Proof. Let a i be the minimum of A and refer to Figure 6. Clearly, C i is the circle of smallest
radius containing p i and p i+n , while all the other points lie outside C i . Hence, p i and p i+n are the
closest points of P from the origin. Thus, the boundary between
lies to the left of the origin: were this not true, p j would be closer to the origin than p i+n . The
following simple facts are proved in essentially the same way.
a i
a i+n
I i+n
I i
O
Figure
Illustrating P for Lemma 4.4
1. The boundary point between p j and p i+n lies to the left of the origin if 1 and to
the right if i
2. The boundary point between p j and p i lies to the left of the origin if 1 and to the
right if
for each point n), the boundary between p j and p i lies to the left of that
between p j and p n+i , p j is not proximate point. Thus, for j 6= i, either p j or p j+n fails to be a
proximate point. Further, for the point p i the boundary with lies to the left of the
origin, and that with n) lies to the right of the origin (or is the origin itself). Thus, p i
is a proximate point. The fact that p i+n is a proximate point follows by a mirror argument. This
completes the proof.
Lemma 4.4 guarantees that the minimum of A can be determined in O(1) time once the proximate
points of P are known. Now, Lemma 4.1 implies the following important result.
Theorem 4.5 Any algorithm that solves an instance of size n of the proximate points problem on
the CRCW must take \Omega\Gammake/ log n) time in the worst case, provided that n log O(1) n processors are
available.
Using exactly the same construction, in combination with Lemma 4.3 we obtain the following lower
bound for the CREW.
Theorem 4.6 Any algorithm that solves an instance of size n of the proximate points problem on
the CREW (also on the EREW) must take \Omega\Gammake/ n) time, even if an infinite number of processors
are available.
Notice that the EREW algorithm for the proximate points problem presented in Section 3
running in O(log n) time using n
log n processors features the same work and time performance on
the CREW-PRAM. By Theorem 4.6 the corresponding CREW algorithm is also time-optimal.
It is straightforward to extend the previous arguments to handle the case of the L k metric.
Specifically, in this case, for every i, (1 - i - n), the points
allow us to find the minimum of A. Thus, Theorems 4.5 and
4.6 provide lower bounds for solving the proximate points problem for the distance metric L k .
5 Computing the convex hull
The main goal of this section is to show that the proximate points algorithms developed in Section 3
yield a work-time optimal (resp. time-optimal) algorithm for computing the convex hull of a set of
points in the plane sorted by x-coordinate. We begin by discussing the details of this algorithm.
In the second subsection we establish its work-time (resp. time) optimality.
5.1 The convex hull algorithm
be a set of n points in the plane with x(p 1
line segment partitions the convex hull of P into the lower hull, lying below the segment, and
the upper hull, lying above it. We focus on the computation of the lower hull only, the computation
of the upper hull being similar.
For a sequence a 1 , a of items, the prefix maxima is the sequence a 1 , maxfa 1 ; a
g. For later reference, we state the following result [20, 29].
Lemma 5.1 The task of computing the prefix maxima (prefix minima) of an n-item sequence can
be performed in O(log n) time using n
log n EREW processors or in O(log log n) time using n
log log n
Common-CRCW processors.
be a set of n points in the plane sorted by x-coordinate as x(p 1
We define a set let of n points by setting
for every i, (1 - i - n), q
It is important to note that the points in Q
Figure
7: Illustrating the proof of Lemma 5.2
are also sorted by x-coordinate. The following surprising result captures the relationship between
the sets P and Q we just defined.
Lemma 5.2 For every j, (1 - j - n), p j is an extreme point of the lower hull of P if and only if
q j is a proximate point of Q.
Proof. If is an extreme point of P and q j is a proximate point of
Q. Thus, the lemma is correct for Now consider an arbitrary j in the range
be arbitrary subscripts such that 1 -
be the boundaries between q i and q j , and between q j and q k , respectively, and refer to Figure 7.
Clearly,
Thus, we have
Similarly, we obtain
It is easy to see that the point q j is not dominated by q i and q k if and only if x(b
Notice that the slopes of the line segments are 2x(b i ) and 2x(b k ), respectively.
Thus, the point p j lies below the segment p i p k if and only if 2x(b Consequently, the
point lies below the segment p i p k if and only if the point q j is not dominated by q i and q k . In
other words, the point p j is an extreme point of the lower hull of P if and only if q j is a proximate
point of Q.
Lemma 5.2 suggests the following algorithm for determining the extreme points of the lower
hull of g.
Algorithm Find-Lower-Hull(P )
Construct the set by setting for every i, (1 - i - n),
Determine the proximate points of Q and report p i as an extreme point of the lower hull
of P whenever q i is a proximate point of Q.
The preprocessing in Step 0 amounts to translating the set P vertically in such a way that for
every This affine transformation does not affect the convex hull
of P . The correctness of this simple algorithm follows directly from Lemma 5.2. To argue for the
running time, we note that by Lemma 5.1 Step 0 takes O(log log n) time and optimal work on the
Common-CRCW or O(log n) time and optimal work on the EREW. Step 1 runs in O(1) time using
optimal work on either the Common-CRCW or the EREW. By Theorems 3.5 and 3.8, Step 2 takes
O(log log n) time and optimal work on the Common-CRCW or O(log n) time and optimal work on
the EREW. Thus, we have proved the following result.
Theorem 5.3 The task of computing the convex hull of a set of n points sorted by x-coordinate
can be performed in O(log log n) time using n
log log n Common-CRCW processors or in O(log n) time
using n
log n EREW processors.
5.2 The optimality of the convex hull algorithm
The main goal of this subsection is to show that the convex hull algorithm described in the previous
subsection is work-time optimal on the Common-CRCW and, in addition, time-optimal on the
CREW and EREW.
Clearly, every point must be read at least once to solve the proximate points problem. Thus,
O(n)-time is required to solve the problem, and our convex hull algorithms (Common-CRCW or
are work-optimal.
Next, we show that given a set of n non-negative integers their maximum
can be determined by using any algorithm for computing the convex hull of a set of sorted points
in the plane. For this purpose, we exhibit an O(1)-time reduction of the maximum problem to the
convex hull problem. The proof technique is similar to the one employed for the proximate points
problem.
With A given construct a set of points in the plane by setting for every i,
is a set of point in
the plane sorted by x-coordinate. The following result relates the sets A and P .
Lemma 5.4 The item a i is the maximum of A if and only if both p i and p i+n are points on the
upper hull of P .
Proof. Let a i be the maximum of A. By construction, both p i and p i+n are points of the upper
hull of P . Further, none of the points p can belong to the upper hull of P .
Thus, there exist no subscript j, (j 6= i), for which both p j and p n+i belong to the upper hull of P .
This completes the proof.
Consequently, to find the maximum of A all we need do is to find an index i such that both p i
and p n+i are points of the upper hull. Therefore, the problem of finding the upper hull of 2n sorted
points in the plane is at least as hard as the problem of finding the maximum of n non-negative
numbers. Thus, we have the following important result.
Theorem 5.5 The task of finding the convex hull of n points sorted by x-coordinate requires Ω(log log n) time on the CRCW, provided that n log^{O(1)} n processors are available.
Similarly, we have the following companion result.
Theorem 5.6 The task of finding the convex hull of n points sorted by x-coordinate requires Ω(log n) time on the CREW, even if infinitely many processors are available.
By Theorems 5.5 and 5.6 the convex hull algorithms developed in the previous section are work-time
optimal. In addition, the EREW algorithm is both work-time and time-optimal.
6 Applications to image processing
A binary image I of size n × n is maintained in an array b_{i,j}, (1 ≤ i, j ≤ n). It is customary to refer to pixel (i, j) as black if b_{i,j} = 1 and as white if b_{i,j} = 0. The rows of the image will be numbered
bottom up starting from 1. Likewise, the columns will be numbered left to right, with column 1
being the leftmost. In this notation pixel b 1;1 is in the south-west corner of the image.
The Voronoi map associates with every pixel in I the closest black pixel to it (in the Euclidean metric). More formally, the Voronoi map of I is a function V that assigns to every pixel (i, j), (1 ≤ i, j ≤ n), a black pixel V(i, j) = (k, l) such that d((i, j), (k, l)) ≤ d((i, j), (k′, l′)) for every black pixel (k′, l′), where d((i, j), (k, l)) = √((i − k)² + (j − l)²) is the Euclidean distance between pixels (i, j) and (k, l).
The Euclidean distance map of image I associates with every pixel in I the Euclidean distance to the closest black pixel. Formally, the Euclidean distance map is a function m such that, for every (i, j), (1 ≤ i, j ≤ n), m_{i,j} = d((i, j), V(i, j)).
In our subsequent arguments we find it convenient to rely on the solution to the NEAREST-ONE problem: given a sequence A = a_1, a_2, ..., a_n of 0's and 1's, determine for every item of A the position of the closest 1, if any. As a direct corollary of Lemma 5.1 we have
Lemma 6.1 An instance of size n of the NEAREST-ONE problem can be solved in O(log log n) time using n/log log n Common-CRCW processors or in O(log n) time using n/log n EREW processors.
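A sequential version of NEAREST-ONE illustrates what Lemma 6.1 computes; two linear scans suffice. The function below is our own illustrative sketch, not the PRAM algorithm of the lemma.

def nearest_one(A):
    """For every position i of the 0/1 sequence A, return the index of the
    closest 1 (ties broken toward the left), or None if A contains no 1."""
    n = len(A)
    INF = float("inf")
    left = [None] * n          # nearest 1 at position <= i
    last = None
    for i in range(n):
        if A[i] == 1:
            last = i
        left[i] = last
    result = [None] * n
    nxt = None                 # nearest 1 at position >= i
    for i in range(n - 1, -1, -1):
        if A[i] == 1:
            nxt = i
        dl = i - left[i] if left[i] is not None else INF
        dr = nxt - i if nxt is not None else INF
        result[i] = left[i] if dl <= dr else nxt
    return result

print(nearest_one([0, 1, 0, 0, 1, 0]))   # [1, 1, 1, 4, 4, 4]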
We assume a binary image I of size n × n as discussed above and the availability of n²/T(n) processors, where T(n) = log log n for the Common-CRCW and T(n) = log n for the EREW.
We now outline the basic idea of our algorithm for computing the Voronoi map and the Euclidean
distance map of image I. We begin by determining, for every pixel in row j, (1 ≤ j ≤ n), the nearest black pixel, if any, in the same column of the image I. More precisely, with every pixel (i, j) we associate the value d_{i,j} = min{|j − k| : b_{i,k} = 1, 1 ≤ k ≤ n}, the vertical distance from (i, j) to the nearest black pixel in column i. Next, we construct an instance of the proximate points problem for every row j, (1 ≤ j ≤ n), in the image I involving the set P_j of points in the plane defined as P_j = {p_{i,j} = (i, d_{i,j}) : 1 ≤ i ≤ n}.
Having solved, in parallel, all these instances of the proximate points problem, we determine, for every proximate point p_{i,j} in P_j its corresponding proximity interval I_i. With j fixed, we determine for every pixel (i, j) (that we perceive as a point on the x-axis) the identity of the proximity interval to which it belongs. This allows each pixel (i, j) to determine the identity of the nearest black pixel to it. The same task is executed for all rows in parallel, to determine for every pixel (i, j) in row j the nearest black pixel. The details are spelled out in the following algorithm.
Algorithm Voronoi-and-Euclidean-Distance-Map(I)
Step 1 For each pixel (i, j), compute the distance d_{i,j} = min{|j − k| : b_{i,k} = 1, 1 ≤ k ≤ n} to the nearest black pixel in the same column as (i, j) in the image I.
Step 2 For every j, (1 ≤ j ≤ n), let P_j = {(i, d_{i,j}) : 1 ≤ i ≤ n}. Compute the proximate points E(P_j) of P_j.
Step 3 For every point p in E(P_j) determine its proximity interval of P_j.
Step 4 For every i, (1 ≤ i ≤ n), determine the proximity interval of P_j to which the point (i, 0) (corresponding to pixel (i, j)) belongs.
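A sequential rendering of these four steps may help fix the idea. In the sketch below (ours; variable names and the indexing convention are assumptions), the per-row proximate-points computation of Steps 2-4 is replaced by a direct minimization over columns, which is exactly what the proximity intervals encode on the PRAM.

import math

def voronoi_and_distance_map(b):
    """b is an n x n 0/1 array with b[i][j] = 1 for black pixels (column i, row j).
    Returns (V, m): V[i][j] is a nearest black pixel to (i, j), m[i][j] its distance."""
    n = len(b)
    INF = float("inf")
    # Step 1: d[i][j] = vertical distance from (i, j) to the nearest black pixel in column i.
    d = [[INF] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):
                if b[i][k] == 1:
                    d[i][j] = min(d[i][j], abs(j - k))
    V = [[None] * n for _ in range(n)]
    m = [[INF] * n for _ in range(n)]
    # Steps 2-4, done directly: for row j, pixel (i, j) picks the column ip minimizing
    # (i - ip)^2 + d[ip][j]^2; the PRAM algorithm finds the same ip via proximity intervals.
    for j in range(n):
        for i in range(n):
            best, best_col = INF, None
            for ip in range(n):
                if d[ip][j] < INF:
                    val = (i - ip) ** 2 + d[ip][j] ** 2
                    if val < best:
                        best, best_col = val, ip
            if best_col is not None:
                k = best_col
                # the nearest black pixel in column k is at vertical distance d[k][j]
                up = j - d[k][j]
                row = up if up >= 0 and b[k][up] == 1 else j + d[k][j]
                V[i][j] = (k, row)
                m[i][j] = math.sqrt(best)
    return V, m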
The correctness of this algorithm being easy to see, we turn to the complexity. Step 1 can be performed in O(T(n)) time using the processors available by using Lemma 6.1. Theorems 3.5 and 3.8 guarantee that Step 2 takes O(T(n)) time using n²/T(n) processors. By Lemma 6.1, Steps 3 and 4 can be performed in the same complexity. Thus, we have the following important result.
Theorem 6.2 The task of computing the Voronoi map and the Euclidean distance map of a binary image of size n × n can be performed in O(log log n) time using n²/log log n Common-CRCW processors or in O(log n) time using n²/log n EREW processors.
Recall that an empty circle in the image I is a circle filled with white pixels. The task of
computing the largest empty circles in an image is a recurring theme in pattern recognition, robotics,
and digital geometry [17]. An empty circle is said to be maximal if it is contained in no other empty
circle. An empty circle is said to be maximum if its radius is as large as possible. It is clear that a maximum empty circle is also maximal, but not conversely. We now turn to the task of
determining all maximal (resp. maximum) empty circles in an input image I.
Algorithm All-Maximal-Empty-Circles(I)
Step 1 Compute the Euclidean distance map m of I.
Step 2 For each pixel (i, j), (1 ≤ i, j ≤ n), of I compute the smallest distance u_{i,j} from (i, j) to the border of the image. Then, compute from m_{i,j} and u_{i,j} the value r_{i,j}, which is the largest radius of an empty circle centered at the pixel (i, j).
Step 3 For each pixel (i, j), check whether there exists a neighboring pixel (i′, j′), (|i − i′| ≤ 1, |j − j′| ≤ 1), such that the circle with radius r_{i,j} and origin (i, j) is included by the circle with radius r_{i′,j′} and origin (i′, j′). If no such circle exists, label the circle of radius r_{i,j} centered at (i, j) as a maximal empty circle.
Step 4 Compute r = max{r_{i,j} : 1 ≤ i, j ≤ n}. Every pixel (i, j) in I for which r_{i,j} = r labels its empty circle as the largest empty circle of I.
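Steps 2-4 can be sketched sequentially as follows. The radius and border conventions below are assumptions on our part (we take an empty circle to be bounded both by the distance-map value and by the distance to the image border); the inclusion test in Step 3 uses the standard fact that one circle contains another exactly when the larger radius is at least the smaller radius plus the distance between centers.

def maximal_and_largest_empty_circles(m, n):
    """m[i][j]: Euclidean distance map of an n x n binary image.  Returns the set of
    centers of maximal empty circles and the radius of a largest empty circle."""
    r = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            border = min(i, j, n - 1 - i, n - 1 - j)      # Step 2: distance to the border
            r[i][j] = min(m[i][j], border)                # largest empty radius at (i, j)
    maximal = set()
    for i in range(n):                                    # Step 3: neighbourhood inclusion test
        for j in range(n):
            included = False
            for di in (-1, 0, 1):
                for dj in (-1, 0, 1):
                    if (di, dj) == (0, 0):
                        continue
                    ii, jj = i + di, j + dj
                    if 0 <= ii < n and 0 <= jj < n:
                        # is the circle at (i, j) contained in the circle at (ii, jj)?
                        if r[ii][jj] >= r[i][j] + (di * di + dj * dj) ** 0.5:
                            included = True
            if not included:
                maximal.add((i, j))
    largest = max(r[i][j] for i in range(n) for j in range(n))   # Step 4
    return maximal, largest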
Clearly, all the steps of this simple algorithm can be performed in O(log log n) time using n²/log log n Common-CRCW processors or in O(log n) time using n²/log n EREW processors. Thus we have
Corollary 6.3 The task of labeling all the maximal empty circles and of reporting a maximum empty circle of a binary image of size n × n can be performed in O(log log n) time using n²/log log n Common-CRCW processors or in O(log n) time using n²/log n EREW processors.
Conclusions
Our first main contribution is to propose optimal parallel algorithms for solving instances of size
n of the proximate points problem. Our first algorithm runs in O(log log n) time and uses n/log log n Common-CRCW processors. This algorithm can, in fact, be implemented to run in O(log n) time using n/log n EREW processors. The Common-CRCW algorithm is work-time optimal; the EREW algorithm is, in addition, time-optimal.
Our second main contribution is to show that the proximate points problem finds interesting,
and quite unexpected, applications to digital geometry and image processing. As a first application
we presented a work-time optimal parallel algorithm for finding the convex hull of a set of n points
in the plane sorted by x-coordinate; this algorithm has the same complexity as the proximate points
algorithm. Next, we showed that the proximate points algorithms afford us work-time optimal (resp.
time-optimal) parallel algorithms for various fundamental digital geometry and image processing
problems. Specifically, we showed how to compute the Voronoi map, the Euclidean distance map, all maximal empty circles, a largest empty circle, and how to solve other related problems within these bounds.
Further, we have proved the work-time, respectively, the time optimality of our proximate points
and convex hull algorithms. However, for the image processing problems discussed, it is not known
whether the algorithms developed are optimal. We conjecture that, for these problems, Ω(log log n) is a time lower bound on the CRCW, provided that the algorithms are work-time optimal. For the CREW and EREW, the logical-OR problem can be reduced to these image processing problems quite easily. Therefore, Ω(log n) is a time lower bound for both the CREW and the EREW.
--R
Euclidean distance transform on polymorphic processor array.
Computer Vision.
A fast parallel algorithm for finding the convex hull of a sorted point set.
Centres of maximal discs in the 5-7-11 distance transform
Linear time Euclidean distance transform algorithms.
Efficient geometric algorithms on the EREW PRAM.
Optimal algorithm for complete Euclidean distance transform.
Designing systolic architectures for complete Euclidean distance transform
An efficient algorithm for complete Euclidean distance transform on mesh-connected SIMD
A fast algorithm for Euclidean distance maps of a 2-d binary image
Optimal parallel algorithms for computing convex hulls.
A parallel method for the prefix convex hulls problem.
SIMD hypercube algorithm for complete Euclidean distance transform.
Upper and lower time bounds for parallel random access machines without simultaneous writes.
On computing Voronoi diagrams for sorted point sets.
Pattern Classification and Scene Analysis
An optimal parallel algorithm for the Euclidean distance maps.
A unified linear-time algorithm for computing distance maps
An Introduction to Parallel Algorithms.
Efficient parallel geometric algorithms on a mesh of trees.
Parallel computation of exact Euclidean distance transform.
Modified distance transform with raster scanning value propagation.
Efficient algorithms for the Euclidean distance transform.
Computational Geometry: An Introduction.
A skeletonization algorithm by maxima tracking on Euclidean distance transform.
Finding the maximum
Parallelism in comparison problem.
On the generation of skeletons from discrete Euclidean distance maps.
--TR
--CTR
Ling Chen , Yi Pan , Xiao-hua Xu, Scalable and Efficient Parallel Algorithms for Euclidean Distance Transform on the LARPBS Model, IEEE Transactions on Parallel and Distributed Systems, v.15 n.11, p.975-982, November 2004
Amitava Datta , Subbiah Soundaralakshmi, Fast and scalable algorithms for the Euclidean distance transform on a linear array with a reconfigurable pipelined bus system, Journal of Parallel and Distributed Computing, v.64 n.3, p.360-369, March 2004 | parallel algorithms;proximate points;digital geometry;convex hulls;pattern recognition;largest empty circles;cellular systems;image analysis |
298705 | Basic Operations on the OTIS-Mesh Optoelectronic Computer. | AbstractIn this paper, we develop algorithms for some basic operationsbroadcast, window broadcast, prefix sum, data sum, rank, shift, data accumulation, consecutive sum, adjacent sum, concentrate, distribute, generalize, sorting, random access read and writeon the OTIS-Mesh [1] model. These operations are useful in the development of efficient algorithms for numerous applications [2]. | Introduction
The Optical Transpose Interconnection System ( OTIS ), proposed by Marsden et al. [4], is a hybrid
optical and electronic interconnection system for large parallel computers. The OTIS architecture
space optics to connect distant processors and electronic interconnect to connect
nearby processors. Specifically, to maximize bandwidth, power efficiency, and to minimize system
area and volume [1], the processors of an N 2 processor OTIS computer are partitioned into N
groups of N processors each. Each processor is indexed by a tuple (G; G;
G is the group index ( i.e., the group the processor is in ), and P the processor index within a
group. The inter group interconnects are optical while the intra group interconnects are electronic.
The optical or OTIS interconnects connect pairs of processors of the form [(G; P ); (P; G)]; that is,
the group and processor indices are transposed by an optical interconnect. The electrical or intra
group interconnections are according to any of the well studied electronic interconnection networks
- mesh, hypercube, mesh of trees, and so forth. The choice if the electronic interconnection network
defines a sub-family of OTIS computers - OTIS-Mesh, OTIS-Hypercube, and so forth. Figure 1
shows a 16 processor OTIS-Mesh. Each small square represents a processor. The number inside a
processor square is the processor index P . Some processor squares have a pair (P
The pair gives the row and column index of the processor P within its
N \Theta
N mesh. Each large
This work was supported, in part, by the Army Research Office under grant DAA H04-95-1-0111.
group 3
Figure
1:
square encloses a group of processors. A group index G may also be given as a pair (G x ; G y ) where
G x and G y are the row and column indices of the group assuming a
N \Theta
N layout of groups.
Zane et al. [11] have shown that an N 2 processor OTIS-Mesh can simulate each move of a
N \Theta
N \Theta
N \Theta
four-dimensional ( 4D ) mesh computer using either one electronic move or
one electronic and two OTIS moves ( depending on which dimension of the 4D mesh we are to move
along ). They have also shown that an N 2 processor OTIS-Hypercube can simulate each move of
an N 2 processor hypercube using either one electronic move or one electronic and two OTIS moves.
Sahni and Wang [10, 9] have developed efficient algorithms to rearrange data according to bit-
permute-complement permutations on OTIS-Mesh and OTIS-Hypercube computers,
respectively. Rajasekaran and Sahni [7] have developed efficient randomized algorithms for routing,
selection, and sorting on an OTIS-Mesh.
In this paper, we develop deterministic OTIS-Mesh algorithms for the basic data operations for
parallel computation that are studied in [8]. As shown in [8], algorithms for these operations can
be used to arrive at efficient parallel algorithms for numerous applications, from image processing,
computational geometry, matrix algebra, graph theory, and so forth.
We consider both the synchronous SIMD and synchronous MIMD models. In both, all processors
operate in lock-step fashion. In the SIMD model, all active processors perform the same
operation in any step and all active processors move data along the same dimension or along OTIS
connections. In the MIMD model, processors can perform different operations in the same step
and can move data along different dimensions.
2 Basic Operations
2.1 Data Broadcast
Data broadcast is, perhaps, the most fundamental operation for a parallel computer. In this
operation, data that is initially in a single processor (G; P ) is to be broadcast or transmitted to all
processors of the OTIS-Mesh. Data broadcast can be accomplished using the following three
step algorithm:
Step 1: Processor (G, P) broadcasts its data to all other processors in group G.
Step 2: Perform an OTIS move.
Step 3: Processor G of each group broadcasts the data within its group.
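The three steps can be checked with a toy simulation in which the intra-group mesh broadcasts are collapsed into single assignments; the dictionary-based model below is our own sketch and ignores the move counts.

def otis_move(data):
    # The OTIS interconnect sends the content of processor (G, P) to processor (P, G).
    return {(p, g): v for (g, p), v in data.items()}

def broadcast(N, src_g, src_p, value):
    data = {(g, p): None for g in range(N) for p in range(N)}
    data[(src_g, src_p)] = value
    # Step 1: intra-group broadcast within group src_g (a mesh broadcast on the real machine).
    for p in range(N):
        data[(src_g, p)] = value
    # Step 2: OTIS move; processor src_g of every group now holds the value.
    data = otis_move(data)
    # Step 3: intra-group broadcast from processor src_g of each group.
    for g in range(N):
        for p in range(N):
            data[(g, p)] = data[(g, src_g)]
    return data

assert all(v == 7 for v in broadcast(4, 2, 3, 7).values())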
Following Step 2, one processor of each group has a copy of the data, and following Step 3
each processor of the OTIS-Mesh has a copy. In the SIMD model, Steps 1 and 3 take 2(
electronic moves each, and Step 2 takes one OTIS move. The SIMD complexity is 4(
electronic moves and 1 OTIS move, or a total of 4
moves. Note that our algorithm is
optimal because the diameter of the OTIS-Mesh is 4
example, if the data to be
broadcast is initially in processor (0,0), the data needs to reach processor which is
at a distance of 4
3. In the MIMD model, the complexity of Steps 1 and 3 depends on the
value of ranges from a low of approximately
to a high of 2(
The overall complexity is at most 4(
moves and one OTIS move. By contrast,
simulating the 4D-mesh broadcast algorithm using the simulation method of [11] takes 4(
electronic moves and 4(
moves in the SIMD model and up to this many moves in
the MIMD model.
2.2 Window Broadcast
In a window broadcast, we start with data in the top left w \Theta w submesh of a single group G.
Here w divides
N . Following the window broadcast operation, the initial w \Theta w window tiles all
groups; that is, the window is broadcast both within and across groups. Our algorithm for window
broadcast is:
Step 1: Do a window broadcast within group G.
Step 2: Perform an OTIS move.
Step 3: Do an intra group data broadcast from processor G of each group.
Step 4: Perform an OTIS move.
Following Step 1 the initial window properly tiles group G and we are left with the task of
broadcasting from group G to all other groups. In Step 2, data d(G; P ) from (G; P ) is moved to
In Step 3, d(G; P ) is broadcast to all processors
moved to (i; P
Step 1 of our window broadcast algorithm takes 2(
moves in both the SIMD
and MIMD models, and Step 3 takes 2(
moves in the SIMD model and up to
moves in the MIMD model. The total cost is 4
moves in the SIMD model and up to this many moves in the MIMD model. A simulation
of the 4D mesh window broadcast algorithm takes the same number of electronic moves, but also
takes 4(
moves.
2.3 Prefix Sum
The index (G; P ) of a processor may be transformed into a scalar I = GN+P with 0 - I ! N 2 . Let
D(I) be the data in processor I, 0 - I ! N 2 . In a prefix sum, each processor I computes
I
. A simple prefix sum algorithm results from the following observation:
where SD(I) is the sum of D(i) over all processors i that are in a group smaller than the group of
I and LP (I) is the local prefix sum within the group of I. The simple prefix sum algorithm is:
Step 1: Perform a local prefix sum in each group.
Step 2: Perform an OTIS move of the prefix sums computed in Step 1 for all processors (G; N \Gamma 1).
Step 3: Group modified prefix sum of the values, A, received in Step 2. In this
modification, processor P computes
rather than
Step 4: Perform an OTIS move of the modified prefix sums computed in Step 3.
Step 5: Each group does a local broadcast of the modified prefix sum received by its
processor.
Step Each processor adds the local prefix sum computed in Step 1 and the modified prefix sum
it received in Step 5.
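A direct sequential simulation of these six steps is given below; it is our own sketch, tracks values only, and does not model the electronic move counts.

def otis_prefix_sum(D, N):
    """D[(G, P)] holds the value of processor I = G*N + P; returns the inclusive
    prefix sums S(I) = D(0) + ... + D(I), indexed the same way."""
    # Step 1: local prefix sums within each group.
    LP = {}
    for G in range(N):
        s = 0
        for P in range(N):
            s += D[(G, P)]
            LP[(G, P)] = s
    # Step 2: OTIS move of the group totals held by processors (G, N-1);
    # processor (N-1, G) now holds the total of group G.
    A = {G: LP[(G, N - 1)] for G in range(N)}
    # Step 3: group N-1 computes the modified (exclusive) prefix sums of the totals.
    SD, s = {}, 0
    for G in range(N):
        SD[G] = s
        s += A[G]
    # Steps 4-5: OTIS move back and intra-group broadcast of SD to every processor.
    # Step 6: add the local prefix sum and the modified prefix sum.
    return {(G, P): LP[(G, P)] + SD[G] for G in range(N) for P in range(N)}

N = 3
D = {(G, P): G * N + P for G in range(N) for P in range(N)}
assert otis_prefix_sum(D, N)[(2, 2)] == sum(range(9))   # last processor holds the total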
The local prefix sums of Steps 1 and 3 take 3(
moves in both the SIMD
and MIMD models, and the local data broadcast of Step 5 takes 2(
moves.
The overall complexity is 8(
moves and 2 OTIS moves. This can be reduced to
moves and 2 OTIS moves by deferring some of the Step 1 moves to Step 5 as
below.
Step 1: In each group, compute the row prefix sums R.
Step 2: Column
of each group computes the modified prefix sums of its R values.
Step 3: Perform an OTIS move on the prefix sums computed in Step 2 for all processors (G; N \Gamma 1).
Step 4: Group modified prefix sum of the values, A, received in Step 3.
Step 5: Perform an OTIS move of the modified prefix sums computed in Step 4.
Step Each group broadcasts the modified prefix sum received in Step 5 along column
of its mesh.
Step 7: The column
processors add the modified prefix sum received in Step 6 and the
prefix sum of R values computed in Step 2 minus its own R value computed in Step 1.
Step 8: The result computed by column
processors in Step 7 is broadcast along mesh
rows.
Step 9: Each processor adds its R value and the value it received in Step 8.
If we simulate the best 4D mesh prefix sum algorithm, the resulting OTIS mesh algorithm takes
moves.
2.4 Data Sum
In this operation, each processor is to compute the sum of the D values of all processors. An
optimal SIMD data sum algorithm is:
Step 1: Each group performs the data sum.
Step 2: Perform an OTIS move.
Step 3: Each group performs the data sum.
In the SIMD model Steps 1 and 3 take 4(
moves, and step 2 takes 1 OTIS
move. The total cost is 8(
moves. Note that since the distance
between processors (0;
moves and since
each needs to get information from the other, at least 8(
are needed ( the moves needed to send information from (0; 0) to (N \Gamma and those from
cannot be overlapped in the SIMD model ). Also, note that a simulation
of the 4D mesh data sum algorithm takes 8(
moves.
The MIMD complexity can be reduced by computing the group sums in the middle processor
of each group rather than in the bottom right processor. The complexity now becomes 4(
electronic and 1 OTIS moves when
N is odd and 4
N electronic and 1 OTIS moves when
is even. The simulation of the 4D mesh, however, takes 4(
moves. Notice that the MIMD algorithm is near optimal as the diameter of the OTIS-Mesh isp
2.5 Rank
In the rank operation, each processor I has a flag S(I) 2 f0; 1g, 0 - I ! N 2 . We are to compute
the prefix sums of the processors with This operation can be performed in 7(
electronic and 2 OTIS moves using the prefix sum algorithm of Section 2.3.
2.6 Shift
Although there are many variations of the shift operation, the ones we believe are most useful in
application development are:
(a) mesh row shift with zero fill - in this we shift data from processor (G x
N . The shift is done with zero fill and end discard ( i.e.,
if
or P y the data from P y is discarded ).
(b) mesh column shift with zero fill - similar to (a), but along mesh column P x .
(c) circular shift on a mesh row - in this we shift data from processor (G x
(d) circular shift on a mesh column - similar to (c), but instead P x is used.
row shift with zero fill - similar to (a), except that G y is used in place of P y .
(f) group column shift with zero fill - similar to (e), but along group column G x .
circular shift on a group row - similar to (c), but with G y rather than P y .
circular shift on a group column - similar to (g), with G x in place of G y .
Shifts of types (a) through (d) are done using the best mesh algorithms while those of types (e)
through (h) are done as below:
1: Perform an OTIS move.
Step 2: Do the shift as a P x ( if originally a G x shift ) or a P y ( if originally a G y shift ) shift.
Step 3: Perform an OTIS move.
Shifts of types (a) and (b) take s electronic moves on the SIMD and MIMD models; (c) and
(d) take
electronic moves on the SIMD model and maxfjsj;
moves on the
MIMD model; (e) and (f) take s electronic and 2 OTIS moves on both SIMD and MIMD models;
and (g) and (h) take
N electronic and 2 OTIS moves on the SIMD model and maxfjsj;
electronic and 2 OTIS moves on the MIMD model.
If we simulate the corresponding 4D mesh algorithms, we obtain the same complexity for (a)
- (d), but (e) and (f) take an additional 2s \Gamma 2 OTIS moves, and (g) and (h) take an additional
2 \Theta maxfjsj;
moves.
2.7 Data Accumulation
Each processor is to accumulate M ,
, values from its neighboring processors along
one of the four dimensions G x , G y , be the data in processor
In a data accumulation along the G x dimension ( for example ), each processor
accumulates in an array A the data values from ((G x
Specifically, we have
Accumulation in other dimensions is similar.
The accumulation operation can be done using a circular shift of \GammaM in the appropriate dimen-
sion. The complexity is readily obtained from that for the circular shift operation ( see Section 2.6
2.8 Consecutive Sum
The N 2 processor OTIS-Mesh is tiled with one-dimensional blocks of size M . These blocks may
align with any of the four dimensions G x , G y , P x , and P y . Each processor has M values X[j],
. The ith processor in a block is to compute the sum of the X[i]s in that block.
Specifically, processor i of a block computes
where i and j are indices relative to a block.
When the one-dimensional blocks of size M align with the P x or P y dimensions, a consecutive
sum can be performed by using M tokens in each block to accumulate the M sums S(i),
Assume the blocks align along P x . Each processor in a block initiates a token labeled with the
processor's intra block index. The tokens from processors 0 through are right bound and
that from M \Gamma 1 is left bound. In odd time steps, right bound tokens move one processor right
along the block, and in even time steps left bound tokens move one processor left along the block.
When a token reaches the rightmost or leftmost processor in the block, it reverses direction. Each
token visits each processor in its block twice - once while moving left and once while moving right.
During the rightward visits it adds in the appropriate X value from the processor. After
time steps ( and hence moves ), all tokens return to their originating processors,
and we are done.
In the MIMD model, the left and right moves can be done simultaneously, and only
electronic moves are needed.
When the one-dimensional size M blocks align with G x or G y , we first do an OTIS move; then
run either a P x or P y consecutive sum algorithm; and then do an OTIS move. The number of
electronic moves is the same as for P x or P y alignment. However, two additional OTIS moves are
needed.
Simulation of the corresponding 4D mesh algorithm takes an additional
for the case of G x or G y alignment in the SIMD model and an additional moves in
the MIMD model.
2.9 Adjacent Sum
This operation is similar to the data accumulation operation of Section 2.7 except that the M
accumulated values are to be summed. The operation can be done with the same complexity as
data accumulation using a similar algorithm.
2.10 Concentrate
A subset of the processors contain data. These processors have been ranked as in Section 2.5. So
the data is really a pair (D; r); D is the data in the processor and r is its rank. Each pair (D; r) is
to be moved to processor r, 0 - r ! b, where b is the number of processors with data. Using the
(G; P ) format for a processor index, we see that (D; r) is to be routed from its originating processor
to processor (br=Nc; r mod N ). We accomplish this using the steps:
Step 1: Each pair (D; r) is routed to processor r mod N within its current group.
Step 2: Perform an OTIS move.
Step 3: Each pair (D; r) is routed to processor br=Nc within its current group.
Step 4: Perform an OTIS move.
Theorem 1 The four step algorithm given above correctly routes every pair (D; r) to processor
Proof Step 1 does the routing on the second coordinate. This step does not route two pairs to
the same processor provided no group has two pairs (D
Since each group has at most N pairs and the ranks of these pairs are contiguous integers, no group
can have two pairs with r 1 mod each processor has at most
one pair and each pair is in the correct processor of the group, though possibly in the wrong group.
To get the pairs to their correct groups without changing the within group index, Step 2 performs
an OTIS move, which moves data from processor (G; P ) to processor (P; G). Now all pairs in a
group have the same r mod N value and different br=Nc values. The routing on the br=Nc values,
as in Step 3, routes at most one pair to each processor. The OTIS move of Step 4, therefore, gets
every pair to its correct destination processor. 2
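The four routing steps can be checked with the following sketch (our own; it tracks logical positions only and collapses each intra-group routing into index arithmetic). Because the ranks are distinct, the final processors are all distinct, as the proof guarantees.

def concentrate(pairs, N):
    """pairs maps a source processor (G, P) to its (data, rank); ranks are distinct
    and contiguous.  Returns the placement after the four steps."""
    placement = {}
    for (g0, p0), (data, r) in pairs.items():
        pos = (g0, r % N)                    # Step 1: to processor r mod N within the group
        pos = (pos[1], pos[0])               # Step 2: OTIS move
        pos = (pos[0], r // N)               # Step 3: to processor r div N within the group
        pos = (pos[1], pos[0])               # Step 4: OTIS move
        assert pos == (r // N, r % N) and pos not in placement
        placement[pos] = (data, r)
    return placement

# pairs with ranks 0..3 scattered over a 16 processor OTIS-Mesh (N = 4)
pairs = {(0, 1): ("a", 0), (0, 3): ("b", 1), (2, 0): ("c", 2), (3, 2): ("d", 3)}
concentrate(pairs, 4)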
In group 0, Step 1 is a concentrate localized to the group, and in the remaining groups, Step
1 is a generalized concentrate in which the ranks have been increased by the same amount. In all
groups we may use the mesh concentrate algorithm of [6] to accomplish the routing in 4(
moves. Step 3 is also a concentrate as the br=Nc values of the pairs are in ascending order
from 0;
moves each in the SIMD model and
in the MIMD model [6]. Therefore, the overall complexity of concentrate is 8(
electronic and 2 OTIS moves in the SIMD model and 4(
moves in
the MIMD model.
We can improve the SIMD time to 7(
moves by using a better
mesh concentrate algorithm than the one in [6]. The new and simpler algorithm is given below for
the case of a generalized concentration on a
N \Theta
mesh.
Step 1: Move data that is to be in a column right of the current one rightwards to the proper
processor in the same row.
Step 2: Move data that is to be in a column left of the current one leftwards to the proper processor
in the same row.
Step 3: Move data that is to be in a smaller row upwards to the proper processor in the same
column.
Step 4: Move data that is to be in a bigger row downwards to the proper processor in the same
column.
In a concentrate operation on a square mesh data that begins in two processors of the same row
ends up in different columns as the rank of these two data differs by at most
and 2 do not leave two or more data in the same processor. Steps 3 and 4 get data to the proper
row and hence to the proper processor. Note that it is possible to have up to two data items in
a processor following Step 1 and Step 3. The complexity of the above concentrate algorithm is
on a SIMD mesh and 2(
on an MIMD mesh ( we can overlap Steps 1 and 2 as
well as Steps 3 and 4 on an MIMD mesh ).
For an ordinary concentrate in which the ranks begin at 1, Step 4 can be omitted as no data
moves down a column to a row with bigger index. So an ordinary concentrate takes only 3(
moves. This improves the SIMD concentration algorithm of [6], which takes 4(
moves to
do an ordinary concentrate.
Actually, we can show that the four step concentration algorithm just stated is optimal for the
SIMD model. Consider the ordinary concentrate instance in which the selected elements are in
processors (0;
0). The ranks are 0, 1, \Delta \Delta \Delta,
1. So the data
in processor (0;
is to be moved to processor (0,0). This requires moves that yield a net of
moves. Also, the data in processor (
is to be moved to processor (0;
This requires a net of
moves and
moves. None of these moves
can be overlapped in the SIMD model. So every SIMD concentrate algorithm must take at least
moves in each of the directions left, right, and up; a total of at least 3(
moves.
For the generalized concentrate algorithm, the ranks need not start at zero. Suppose we have
two elements to concentrate. One is at processor (0,0) and has rank N \Gamma 1, and the other is at
processor (
has rank N . The data in (0,0) is to be moved to (
at a cost of
right and down moves. The data in (
is to be moved to
(0,0) at a cost of
net left and up moves. So at least 4(
are needed.
Theorem 2 The OTIS-Mesh data concentration algorithm described above is optimal for both the
SIMD and MIMD models; that is, (a) every SIMD concentration algorithm must make 7(
electronic and 2 OTIS moves in the worst case, and (b) every MIMD concentration algorithm must
make 4(
moves.
Proof (a) Suppose that the data to be concentrated are in the processors shown in Table 1. Let
a denote processor (
and let c denote processor (0,1,0,0). The ranks of a, b, and c are N 3=2 , N 3=2
respectively. Therefore, following the concentration the data D(a), D(b), and D(c) initially
in processors a, b, and c will be in processors (0,1,0,0), (0;
respectively. Figure 2 shows the initial and concentrated data layout for the case when
The change in G x , G y , P x , and P y values between the final and initial locations of D(a), D(b), and
D(c) is shown in Table 2.
a
c
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
(b)
a
c
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
x
(a)
Figure
2: Data Configuration: (a) Initial; (b) Concentrated
Table
1: Processors with data to concentrate
data G x G y
D(a) \Gamma(
Table
2: Net change in G x , G y , P x , and P y
The maximum net negative change in each of G x , G y , P x , and P y is \Gamma(
1). Since a net
negative change in G x can only be overlapped with a net negative change in P x and since D(b)
needs \Gamma(
negative change in both G x and P x , we must make at least 2(
moves that decrease the row index within a mesh. Similarly, because of D(a)'s requirements, at
least 2(
moves that increase the column index within a
N \Theta
mesh must be
made. Turning our attention to net positive changes, we see that because of D(b)'s requirements
there must be at least 2(
moves that increase the column index. D(c) requires
moves that increase the row index. Since positive net moves cannot be overlapped
with negative net moves, and since net moves along G x and P x cannot be overlapped with net moves
along G y and P y , the concentration of the configuration of Table 1 must take at least 7(
moves.
In addition to 7(
moves, we need at least 2 OTIS moves to concentrate the
data of Table 1. To see this consider the data initially in group (0,1). This data is in group (0,0)
following the concentration. At least one OTIS move is needed to move the data out of group
(0,1). A nontrivial OTIS-Mesh has - 2 processors on a row of a
N \Theta
N submesh. For such
an OTIS-Mesh, at least two pieces of data must move from group (0,1) to group (0,0). A single
OTIS move scatters data from group (0,1) to different groups with each data going to a different
group. At least one additional OTIS move must be made to get the data back into the same group.
Therefore the concentration of the configuration of Table 1 cannot be done with fewer than 2 OTIS
moves.
(b) Consider the initial configuration of Table 1. Since the shortest path between processor
b and its destination processor is 4(
and one OTIS move, at least that many
electronic moves are made, in the worst case, by every concentration algorithm. The reason that
at least 2 OTIS moves are needed to complete the concentration is the same as for (a). 2
2.11 Distribute
This is the inverse of the concentrate operation of Section 2.10. We start with pairs (D
in the first q +1 processors 0; and are to route pair (D i ; d i ) to processor
q. The algorithm of Section 2.10 tells us how to start with pairs (D i ; i) in processor
move them so that D i is in i. By running this backwards, we can start with
D i in i and route it to d i . The complexity of the distribute operation is the same as that of the
concentrate operation. We have shown that the concentrate algorithm of Section 2.10 is optimal;
it follows that the distribute algorithm is also optimal.
2.12 Generalize
We start with the same initial configuration as for the distribute operation. The objective is to
have D i in all processors j such that d i we simulate the 4D
mesh algorithm for generalize using the simulation strategy of [11], it takes 8(
and 8(
moves to perform the generalize operation on an SIMD OTIS-Mesh. We can
improve this to 8(
moves if we run the generalize algorithm of [6]
adapted to use OTIS moves as necessary. The outer loop of the algorithm of [6] examines processor
index bits from 2p \Gamma 1 to 0 where . So in the first p iterations we are moving along bits
of the G index and in the last p iterations along bits of the P index. On an OTIS-Mesh we would
break this into two parts as below:
1: Perform an OTIS move.
Step 2: Run the GENERALIZE procedure of [6] from bit while maintaining the original
index.
Step 3: Perform an OTIS move.
Step 4: Run the GENERALIZE algorithm of [6] from bit
On an MIMD OTIS-Mesh the above algorithm takes 4(√N − 1) electronic and 2 OTIS moves. We can reduce the SIMD complexity to 7(√N − 1) electronic and 2 OTIS moves by using a
better algorithm to do the generalize operation on a 2D SIMD mesh. This algorithm uses the
same observation as used by us in Section 2.10 to speed the 2D SIMD mesh concentrate algorithm;
that is, of the four possible move directions, only three are possible. When doing a generalize on
a 2D
N \Theta
N mesh the possible move directions for data are to increasing row indexes and to
decreasing and increasing column indexes. With this observation, the algorithm to generalize on a
2D mesh becomes:
Step 1: Move data along columns to increasing row indexes if the data is needed in a row with
higher index.
Step 2: Move data along rows to increasing column indexes if the data is needed in a processor in
that row with higher column index.
Step 3: Move data along rows to decreasing column indexes if the data is needed in a processor
in that row with smaller column index.
The correctness of the preceding generalize algorithm can be established using the argument of
Theorem 1, and its optimality follows from Theorem 2 and the fact that the distribute operation,
which is the inverse of the concentrate operation, is a special case of the generalize operation.
The new and more efficient generalize algorithm may be used in Step 2 of the OTIS-Mesh
generalize algorithm. It cannot be used in Step 4 because the generalize of this step requires the
full capability of the code of [6] which permits data movement in all four directions of a mesh.
When we use the new generalize algorithm for Step 2 of the OTIS-Mesh generalize algorithm,
we can perform a generalize on a SIMD OTIS-Mesh using 7(
moves.
The new algorithm is optimal for both SIMD and MIMD models. This follows from the lower
bound on a concentrate operation established in Theorem 2 and the observation made above that
the distribute operation, which is a special case of the generalize operation, is the inverse of the
concentrate operation and so has the same lower bound.
2.13 Sorting
As was the case for the operations considered so far, an O(√N) time algorithm to sort can be obtained by simulating a similar complexity 4D mesh algorithm. For sorting on a 4D mesh, the algorithm of Kunde [2] is the fastest.
Figure 3: Row-Column Transformation of Leighton's Column Sort
Its simulation will sort into snake-like row-major order using
N) electronic and 12
OTIS moves on the SIMD model and 7
electronic and 6
OTIS moves on the MIMD model. To sort into row-major order,
additional moves to reverse alternate dimensions are needed. This means that an OTIS-Mesh
simulation of Kunde's 4D mesh algorithm to sort into row-major order will take
electronic and 16
OTIS moves on the SIMD model. We show that Leighton's column
sort [3] can be implemented on an OTIS-Mesh to sort into row-major order using 22
electronic and O(N 3=8 ) OTIS moves on the SIMD model and 11
N) electronic and O(N 3=8 )
OTIS moves on the MIMD model.
Our OTIS-Mesh sorting algorithm is based on Leighton's column sort [3]. This sorting algorithm
sorts an r \Theta s array, with r - using the following seven steps:
Step 1: Sort each column.
Step 2: Perform a row-column transformation.
Step 3: Sort each column.
Step 4: Perform the inverse transformation of Step 2.
Step 5: Sort each column in alternating order.
Step Apply two steps of comparison-exchange to adjacent rows.
Step 7: Sort each column.
Figure
3 shows an example of the transformation of Step 2, and its inverse. Figure 4 shows a
step by step example of Leighton's column sort.
\Gamma! 11
\Gamma!
Figure
4: Example of Leighton's Column Sort
Although Leighton's column sort is explicitly stated for r × s arrays with r ≥ 2(s − 1)², it can be used to sort arrays with s ≥ 2(r − 1)² into row-major order by interchanging the roles of
rows and columns. We shall do this and use Leighton's method to sort an N 1=2 \Theta N 3=2 array. We
interpret our N 2 OTIS-Mesh as an N 1=2 \Theta N 3=2 array with G x giving the row index and G y
giving the column index of an element processor. We shall further subdivide G x ( G y , P x , P y
, and G x 4
from left to right. We use G x 2\Gamma4
, for example, to
. Since bits and G x i
has p=8 bits. These notations
are helpful in describing the transformations in Steps 2 and 4 of the column sort, as we use the
BPC permutations of [5] to realize these transformations. A BPC permutation [5] is specified by a
vector
(a) A_i ∈ {±0, ±1, ..., ±(2p − 1)}, 0 ≤ i < 2p, and
(b) [|A_{2p−1}|, |A_{2p−2}|, ..., |A_0|] is a permutation of [0, 1, ..., 2p − 1].
The destination for the data in any processor may be computed in the following manner. Let
be the binary representation of the processor's index. Let d be that
of the destination processor's index. Then, d_{|A_i|} = m_i if A_i ≥ 0, and d_{|A_i|} is the complement of m_i if A_i < 0, for every i.
In this definition, −0 is to be regarded as < 0, while +0 is ≥ 0. Table 3 shows an example
of the BPC permutation defined by the permutation vector \Gamma3] on a 16 processor
OTIS-Mesh.
Source Destination
Processor (G; P ) Binary Binary (G; P ) Processor
9 (2,1) 1001 0000 (0,0) 0
Table 3: Source and destination of the BPC permutation [−0, 1, 2, −3] in a 16 processor OTIS-Mesh
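The destination rule can be captured in a few lines; the function below is our own sketch and reproduces the row of Table 3.

def bpc_destination(index, A):
    """A = [A_{k-1}, ..., A_0] describes a BPC permutation on 2^k processors.
    Each entry is ('+', b) or ('-', b): source bit i goes to destination bit b,
    complemented when the sign is '-'.  Returns the destination index."""
    k = len(A)
    dest = 0
    for i in range(k):                       # i indexes source bits, least significant first
        sign, b = A[k - 1 - i]               # the vector is written most significant entry first
        bit = (index >> i) & 1
        if sign == '-':
            bit ^= 1
        dest |= bit << b
    return dest

# The permutation [-0, 1, 2, -3] of Table 3 on a 16 processor OTIS-Mesh:
A = [('-', 0), ('+', 1), ('+', 2), ('-', 3)]
assert bpc_destination(9, A) == 0            # source 1001 -> destination 0000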
In describing our sorting algorithm, we shall, at times, use a 4D array interpretation of an
OTIS-Mesh. In this interpretation, processor of the OTIS-Mesh corresponds to
processor of the 4D mesh. We use g x to denote the bit positions of G x , that is
the leftmost p=2 bits in a processor index, g x1 to represent the leftmost p=8 bit positions, p y to
represent the rightmost p=2 bit positions, p y 3\Gamma4
to represent the rightmost p=4 bit positions, and
so on. Our strategy for the sorting steps 1, 3, 5, and 7 of Leighton's method is to collect each row
( recall that since we are sorting an N 1=2 \Theta N 3=2 array, the column-sort steps of Leighton's method
become row-sort steps ) of our N 1=2 \Theta N 3=2 array into an N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 4D submesh
of the OTIS-Mesh, and then sort this row by simulating the 4D mesh sort algorithm of [2]. This
strategy translates into the following sorting algorithm:
rows of the N 1=2 \Theta N 3=2 array into N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 4D submeshes
Perform the BPC permutation P
2: [ Sort each row of the N 1=2 \Theta N 3=2 array
Sort each 4D submesh of size N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 .
3: [ Do the inverse of Step 1, perform a column-row transformation, and move rows into
3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 submeshes
Perform the BPC permutation P
each row of the N 1=2 \Theta N 3=2 array
Sort each 4D submesh of size N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 .
5: [ Do the inverse of Step 1, perform a row-column transformation, and move rows into
3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 submeshes
Perform the BPC permutation P 0
x 1\Gamma3
each row in alternating order ]
Sort each 4D submesh of size N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 .
7: [ Move rows back from 4D submeshes
Perform the BPC permutation P 0
Step 8: Apply two steps of comparison-exchange to adjacent rows.
into submeshes of size N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 ]
Perform the BPC permutation P
each row of the N 1=2 \Theta N 3=2 array
Sort each 4D submesh of size N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 .
rows back from 4D submeshes
Perform the BPC permutation P 0
y 2\Gamma4
Notice that the row to 4D submesh transform is accomplished by the BPC permutation P
y 2\Gamma4
Elements in the same row of our N 1=2 \Theta N 3=2 array interpretation
have the same G x value; but in our 4D mesh interpretation, elements in the same
3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8 submesh have the same G x 1
value. P a results in this prop-
erty. To go from Step 2 to Step 3 of Leighton's method, we need to first restore the N 1=2 \Theta N 3=2
array interpretation using the inverse permutation of P a , that is, perform the BPC permutation
y 2\Gamma4
]; then perform a column-row transform using BPC permutation
finally map the rows of our N 1=2 \Theta N 3=2 array into 4D submeshes of
size N 3=8 \ThetaN 3=8 \ThetaN 3=8 \ThetaN 3=8 using the BPC permutation P a . The three BPC permutation sequence
a a is equivalent to the single BPC permutation P
The preceding OTIS-Mesh implementation of column sort performs 6 BPC permutations, 4
4D mesh sorts, and two steps of comparison-exchange on adjacent rows. Since the sorting steps
take O(N 3=8 ) time each ( use Kunde's 4D mesh sort [2] followed by a transform from snake-like
row-major to row-major ), and since the remaining steps take O(N 1=2 ) time, we shall ignore the
complexity of the sort steps.
We can reduce the number of BPC permutations from 6 to 3 as follows. First note that the P a
of Step 1 just moves elements from rows of the N 1=2 \Theta N 3=2 array into N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8
4D submeshes. For the sort of Step 2, it doesn't really matter which N 3=2 elements go to each 4D
submesh as the initial configuration is an arbitrary unsorted configuration. So we may eliminate
note that the BPC permutations of Steps 7 and 9 cancel each other and we
can perform the comparison-exchange of Step 8 by moving data from one N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8
4D submesh to an adjacent one and back in O(N 3=8 ) time.
With these observations, the algorithm to sort on an OTIS-Mesh becomes:
1: Sort in each subarray of size N 3=8 \Theta N 3=8 \Theta N 3=8 \Theta N 3=8
Step 2: Perform the BPC permutation P c .
Step 3: Sort in each subarray.
Step 4: Perform the BPC permutation P 0
c .
Step 5: Sort in each subarray.
Step Apply two steps of comparison-exchange to adjacent subarrays.
Step 7: Sort in each subarray.
Step 8: Perform the BPC permutation P 0
a .
Using the BPC routing algorithm of [10], the three BPC permutations can be done usingp
N electronic and 3 log moves on the SIMD model and
N electronic and
moves on the MIMD model. A more careful analysis based on the development
in [5] and [10] reveals that the permutations P 0
a , P c , and P 0
c can be done with 28
N electronic and
log moves on the SIMD model and 14
N electronic and 3 log
on the MIMD model. By using p 0
y 2\Gamma4
], the permutation
cost becomes 22√N electronic and log₂ N + 5 OTIS moves on the SIMD model and 11√N electronic and log₂ N + 5 OTIS moves on the MIMD model. The total number of moves is thus 22√N + O(N^{3/8}) electronic and O(N^{3/8}) OTIS moves on the SIMD model and 11√N + O(N^{3/8}) electronic and O(N^{3/8}) OTIS moves on the MIMD model. This is superior to the cost of the sorting algorithm that results from simulating the 4D row-major mesh sort of Kunde [2].
2.14 Random Access Read ( RAR )
In a random access read (RAR) [8] processor I wishes to read data variable D of processor d I ,
. The steps suggested in [8] for this operation are:
Step 0: Processor I creates a triple (I; D; d I ) where D is initially empty.
1: Sort the triples by d I .
Step 2: Processor I checks processor I +1 and deactivates if both have triples with the same third
coordinate.
Step 3: Rank the remaining processors.
Step 4: Concentrate the triples using the ranks of Step 3.
Step 5: Distribute the triples according to their third coordinates.
Step Load each triple with the D value of the processor it is in.
Step 7: Concentrate the triples using the ranks in Step 3.
Step 8: Generalize the triples to get the configuration we had following Step 1.
Step 9: Sort the triples by their first coordinates.
Using the SIMD model, the RAR algorithm of [8] takes 79(√N − 1) electronic moves and O(N^{3/8}) OTIS moves. On the MIMD model, it takes 45(√N − 1) electronic and O(N^{3/8}) OTIS moves.
2.15 Random Access Write ( RAW )
Now processor I wants to write its D data to processor d I , 0 - I ! N 2 . The steps in the RAW
algorithm of [8] are:
Step 0: Processor I creates the tuple (D(I); d I
1: Sort the tuples by their second coordinates.
Step 2: Processor I deactivates if the second coordinate of its tuple is the same as the second
coordinate of the tuple in I
Step 3: Rank the remaining processors.
Step 4: Concentrate the tuples using the ranks of Step 3.
Step 5: Distribute the tuples according to their second coordinates.
This algorithm implements the arbitrary write method for a concurrent write. In this, any one of the
processors wishing to write to the same location is permitted to succeed. The priority model may
be implemented by sorting in Step 1 by d I and within d I by priority. The common and combined
models can also be implemented, but with increased complexity.
On the SIMD model, an RAW takes 43(
moves while on
the MIMD model, it takes 26(
moves.
3 Conclusion
We have developed OTIS-Mesh algorithms for the basic parallel computing algorithms of [8]. Our
algorithms run faster than the simulation of the fastest algorithms known for 4D meshes. Table 4
summarizes the complexities of our algorithms and those of the corresponding ones obtained by
simulating the best 4D-mesh algorithms. Note that the worst case complexities are listed for the
broadcast and window broadcast operations, and that of the case when √N is even is presented for the data sum operation on the MIMD model. Also, the complexities listed for circular shift, data accumulation, and adjacent sum assume that the shift distance is at most √N/2 on the MIMD model. Table 4 gives only the dominating √N terms for sorting. Our algorithms for data broadcast, data sum, concentrate, distribute, and generalize are optimal.
--R
Routing and sorting on mesh-connected arrays
Tight bounds on the complexity of parallel sorting.
Optical transpose interconnection system architectures.
An optimal routing algorithm for mesh-connected parallel computers
Data broadcasting in SIMD computers.
Randomized routing
Hypercube Algorithms with Applications to Image processing and Pattern Recognition.
BPC permutations on the OTIS-Hypercube optoelectronic computer
BPC permutations on the OTIS-Mesh optoelectronic computer
Scalable network architectures using the optical transpose interconnection system (OTIS).
--TR
--CTR
A. Al-Ayyoub , A. Awwad , K. Day , M. Ould-Khaoua, Generalized methods for algorithm development on optical systems, The Journal of Supercomputing, v.38 n.2, p.111-125, November 2006
Behrooz Parhami, The Hamiltonicity of swapped (OTIS) networks built of Hamiltonian component networks, Information Processing Letters, v.95 n.4, p.441-445, 31 August 2005
Ahmad M. Awwad, OTIS-star an attractive alternative network, Proceedings of the 4th WSEAS International Conference on Software Engineering, Parallel & Distributed Systems, p.1-6, February 13-15, 2005, Salzburg, Austria
Khaled Day , Abdel-Elah Al-Ayyoub, Topological Properties of OTIS-Networks, IEEE Transactions on Parallel and Distributed Systems, v.13 n.4, p.359-366, April 2002
Xiaofan Yang , Graham M. Megson , David J. Evans, An oblivious shortest-path routing algorithm for fully connected cubic networks, Journal of Parallel and Distributed Computing, v.66 n.10, p.1294-1303, October 2006
Behrooz Parhami, Swapped interconnection networks: topological, performance, and robustness attributes, Journal of Parallel and Distributed Computing, v.65 n.11, p.1443-1452, November 2005
Khaled Day, Optical transpose k-ary n-cube networks, Journal of Systems Architecture: the EUROMICRO Journal, v.50 n.11, p.697-705, November 2004
Prasanta K. Jana, Polynomial interpolation and polynomial root finding on OTIS-mesh, Parallel Computing, v.32 n.4, p.301-312, April 2006
Chih-fang Wang , Sartaj Sahni, Matrix Multiplication on the OTIS-Mesh Optoelectronic Computer, IEEE Transactions on Computers, v.50 n.7, p.635-646, July 2001
Chih-Fang Wang , Sartaj Sahni, Image Processing on the OTIS-Mesh Optoelectronic Computer, IEEE Transactions on Parallel and Distributed Systems, v.11 n.2, p.97-109, February 2000 | optoelectronic;random access write;distribute;adjacent sum;random access read;prefix sum;OTIS-Mesh;concentrate;data accumulation;window broadcast;consecutive sum;sorting;broadcast;data sum;generalize;shift |
298770 | Co-Evolution in the Successful Learning of Backgammon Strategy. | Following Tesauros work on TD-Gammon, we used a 4,000 parameter feedforward neural network to develop a competitive backgammon evaluation function. Play proceeds by a roll of the dice, application of the network to all legal moves, and selection of the position with the highest evaluation. However, no backpropagation, reinforcement or temporal difference learning methods were employed. Instead we apply simple hillclimbing in a relative fitness environment. We start with an initial champion of all zero weights and proceed simply by playing the current champion network against a slightly mutated challenger and changing weights if the challenger wins. Surprisingly, this worked rather well. We investigate how the peculiar dynamics of this domain enabled a previously discarded weak method to succeed, by preventing suboptimal equilibria in a meta-game of self-learning. | Introduction
It took great chutzpah for Gerald Tesauro to start wasting computer cycles on temporal
difference learning in the game of Backgammon (Tesauro, 1992). Letting a machine
learning program play itself in the hopes of becoming an expert, indeed! After all, the
dream of computers mastering a domain by self-play or "introspection" had been
around since the early days of AI, forming part of Samuel's checker player
(Samuel, 1959) and used in Donald Michie's MENACE tic-tac-toe learner (Michie, 1961);
but such self-conditioning systems had later been generally abandoned by the field due
to problems of scale and weak or non-existent internal representations. Moreover, self-
playing learners usually develop eccentric and brittle strategies which appear clever but
fare poorly against expert human and computer players.
Yet Tesauro's 1992 result showed that this self-play approach could be powerful,
and after some refinement and millions of iterations of self-play, his TD-Gammon program
has become one of the best backgammon players in the world (Tesauro, 1995). His
derived weights are viewed by his corporation as significant enough intellectual property
to keep as a trade secret, except to leverage sales of their minority operating system
(International Business Machines, 1995). Others have replicated this TD result in backgammon
both for research purposes (Boyan, 1992) and commercial purposes.
While reinforcement learning has had limited success in other areas (Zhang and
Dietterich, 1996, Crites and Barto, 1996, Walker et al., 1994), with respect to the goal of a
self-organizing learning machine which starts from a minimal specification and rises to
great sophistication, TD-Gammon stands alone. How is its success to be understood,
explained, and replicated in other domains?
Our hypothesis is that the success of TD-gammon is not principally due to the
back-propagation, reinforcement, or temporal-difference technologies, but to an inherent
bias from the dynamics of the game of backgammon, and the co-evolutionary setup of
the training, by which the task dynamically changes as the learning progresses. We test
this hypothesis by using a much simpler co-evolutionary learning method for backgammon
namely hill-climbing.
2. Implementation Details
We use a standard feedforward neural network with two layers and the sigmoid transfer function,
set up in the same fashion as (Tesauro, 1992) with 4 units to represent the number
of each player's pieces on each of the 24 points, plus 2 units each to indicate how
many are on the bar and off the board. In addition, we added one more unit which
reports whether or not the game has reached the endgame or "race" situation, making a
total of 197 input units. These are fully connected to 20 hidden units, which are then connected
to one output unit that judges the position. Including bias on the hidden units,
this makes a total of 3980 weights. The game is played by generating all legal moves,
converting them into the proper network input, and picking the position judged as best
by the network. We started with all weights set to zero.
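A minimal sketch of the evaluation network and of the greedy move selection described above follows. We assume NumPy; the encoding of positions and the move generator are placeholders, and only the 197-20-1 shapes are taken from the text (197 x 20 + 20 + 20 = 3980 weights, matching the count above).

import numpy as np

HIDDEN = 20
N_INPUTS = 197          # 4 x 24 x 2 board units + 2 x 2 bar/off units + 1 race flag

def init_player():
    # all weights start at zero, as in the text
    return {"w1": np.zeros((HIDDEN, N_INPUTS)), "b1": np.zeros(HIDDEN),
            "w2": np.zeros(HIDDEN)}

def evaluate(player, x):
    """x: length-197 encoding of a candidate position; returns a scalar score."""
    h = 1.0 / (1.0 + np.exp(-(player["w1"] @ x + player["b1"])))   # sigmoid hidden layer
    return 1.0 / (1.0 + np.exp(-(player["w2"] @ h)))               # sigmoid output unit

def choose_move(player, legal_positions):
    """legal_positions: length-197 encodings of the positions reachable with the
    current dice roll; pick the one the network rates highest."""
    return max(legal_positions, key=lambda x: evaluate(player, x))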
Our initial algorithm was hillclimbing:
1. add gaussian noise to the weights
2. play the network against the mutant for a number of games
3. if the mutant wins more than half the games, select it for the next generation.
The noise was set so each step would have a 0.05 RMS distance (which is the euclidean
distance divided by the square root of the number of weights).
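The initial hill-climbing loop can be written down directly, as sketched below. This is our own skeleton: play_games stands in for the self-play match, the four-game bout length is illustrative, and only the 0.05 RMS mutation scale is taken from the text.

import numpy as np

def mutate(weights, rms=0.05):
    # Gaussian noise whose RMS (euclidean distance / sqrt(number of weights)) is about 0.05.
    return {k: v + np.random.normal(0.0, rms, size=v.shape) for k, v in weights.items()}

def hill_climb(champion, play_games, generations, games_per_bout=4):
    """play_games(a, b, n) is assumed to return how many of n games player a wins.
    This is the initial algorithm, before the paired-game and annealing refinements."""
    for _ in range(generations):
        challenger = mutate(champion)
        wins = play_games(challenger, champion, games_per_bout)
        if wins > games_per_bout / 2:        # challenger must win more than half
            champion = challenger            # (later replaced by a small blend)
    return champion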
Surprisingly, this worked reasonably well. The networks so evolved improved rapidly
at first, but then sank into mediocrity. The problem we perceived is that comparing
two close backgammon players is like tossing a biased coin repeatedly: it may take dozens
or even hundreds of games to find out for sure which of them is better. Replacing a
well-tested champion is dangerous without enough information to prove the challenger
is really a better player and not just a lucky novice. Rather than burden the system with
so much computation, we instead introduced the following modifications to the algorithm
to avoid this "Buster Douglas Effect": 2
Firstly, the games are played in pairs, with the order of play reversed and the same
random seed used to generate the dice rolls for both games. This washes out some of the
unfairness due to the dice rolls when the two networks are very close - in particular, if
they were identical, the result would always be one win each - though, admittedly, if
they make different moves early in the game, what is a good dice roll at a particular
move of one game may turn out to be a bad roll at the corresponding move of the parallel
game. Secondly, when the challenger wins the contest, rather than just replacing the
champion by the challenger, we instead make only a small adjustment in that direction:
champion := 0.95 · champion + 0.05 · challenger.
This idea, similar to the "inertia" term in back-propagation (Rumelhart et al., 1986)
was introduced on the assumption that small changes in weights would lead to small
changes in decision-making by the evaluation function. So, by just "biting the ear" off
the challenger and adding it to the champion, most of the current decisions are preserved
, and we would be less likely to have a catastrophic replacement of the champion
by a lucky novice challenger. In the initial stages of evolution, two pairs of parallel
games were played and the challenger was required to win 3 out of 4 of these games.
Although we would have liked to rank our players against the same players
used - Neurogammon and Gammontool - these were not available to us.
Figure
1 shows the first 35,000 players rated against PUBEVAL, a moderately good public-domain
player trained by Tesauro using human expert preferences. There are three
things to note: (1) the percentage of wins against PUBEVAL increases from 0% to about
33% by 20,000 generations, (2) the frequency of successful challengers increases over
time as the player improves, and (3) there are epochs (e.g. starting at 20,000) where the
performance against PUBEVAL begins to falter. The first fact shows that our simple self-playing hill-climber is capable of learning.
2. Buster Douglas was world heavyweight boxing champion for 9 months in 1990.
The second fact is quite counter-intuitive - we
expected that as the player improved, it would be harder to challenge it! This is true with
respect to a uniform sampling of the 4000 dimensional weight space, but not true for a
sampling in the neighborhood of a given player: once the player is in a good part of weight
space, small changes in weights can lead to mostly similar strategies, ones which make
mostly the same moves in the same situations. However, because of the few games we
were using to determine relative fitness, this increased rate of change allows the system
to drift, which may account for the subsequent degrading of performance. To counteract
the drift, we decided to change the rules of engagement as the evolution proceeds
according to the following "annealing schedule": after 10,000 generations, the number of
games that the challenger is required to win was increased from 3 out of 4 to 5 out of 6;
after 70,000 generations, it was further increased to 7 out of 8 (of course each bout was
abandoned as soon as the champion won more than one game, making the average
number of games per generation considerably less than 8). The numbers 10,000 and
70,000 were chosen on an ad hoc basis from observing the frequency of successful challenges
and the Buster Douglas effect in this particular run, but later experiments showed
how to determine the annealing schedule in a more principled manner (see Section 3.2
below).
After 100,000 games using this simple hill-climb, we have developed a surprising
player, capable of winning 40% of the games against PUBEVAL. The networks were sampled
every 100 generations in order to test their performance. Networks at generation
1,000, 10,000 and 100,000 were extracted and used as benchmarks. Figure 2 shows the
percentage of wins for the sampled players against the three benchmark networks. Note
that the three curves cross the 50% line at generations 1,000, 10,000, and 100,000 respectively, and show a general improvement over time.
The end-game of backgammon, called the "bear-off," can be used as another yardstick
of the progress of learning. The bear-off occurs when all of a player's pieces are in
their home board, or first 6 points, and then the dice rolls can be used to remove pieces
Figure 1: Percentage of wins of our first 35,000 generation players against PUBEVAL. Each match consisted of 200 games. (Axes: generation vs. % wins.)
from the board. To test our network's ability at the end-game, we set up a racing board
with two pieces on each player's 1 through 7 point and one piece on the 8 point. The
graph in Figure 3 shows the average number of rolls to bear-off for each network playing
itself using a fixed set of 200 random dice-streams. We note that PUBEVAL is stronger at
16.6 rolls, and will discuss its strengths and those of Tesauro's 1992 results in Section 5.
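The bear-off measurement itself is straightforward to sketch; setup_race_board and play_race are assumed helpers, and the code below only shows the shape of the benchmark, not the code actually used.

typedef struct Board Board;                  /* opaque board type (assumed) */
typedef struct Net Net;                      /* the evaluation network sketched earlier */

Board *setup_race_board(void);               /* assumed: two pieces on points 1-7, one on 8, per side */
int    play_race(const Net *net, Board *b, int dice_stream);  /* assumed: returns rolls to bear off */

/* Average number of rolls the network needs to bear off against itself,
 * measured over 200 fixed dice streams. */
double bearoff_benchmark(const Net *net)
{
    long total_rolls = 0;
    for (int s = 0; s < 200; s++)
        total_rolls += play_race(net, setup_race_board(), s);
    return total_rolls / 200.0;
}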
Figure 2: Percentage of wins against benchmark networks 1,000 [upper], 10,000 [middle] and 100,000 [lower]. This shows a noisy but nearly monotonic increase in player skill as evolution proceeds. (Axes: generation vs. % wins.)
Figure 3: Average number of rolls to bear off by each generation, sampled with 200 dice-streams. PUBEVAL averaged 16.6 rolls for the task.
3. Analysis
3.1. Learnability and Unlearnability
Learnability can be formally defined as a time constraint over a search space. How
hard is it to randomly pick 4000 floating-point weights to make a good backgammon
evaluator? It is simply impossible. How hard is it to find weights better than the current champion's? Initially, when all weights are random, it is quite easy. As the playing improves, we
would expect it to get harder and harder, perhaps similar to the probability of a tornado
constructing a 747 out of a junkyard. However, if we search in the neighborhood of the current
weights, we will find many similar players which make mostly the same moves but
which can capitalize on each other's slightly different choices and exposed weaknesses
in a tournament. Note that this is a different point than Tesauro originally made - that
the feedforward neural network could exploit similarity of positions.
Although the setting of parameters in our initial runs involved some guesswork,
now that we have a large set of "players" to examine, we can try to understand the phe-
nomenon. Taking the champion networks at generation 1,000, 10,000, and 100,000 from
our run, we sampled random players in their neighborhoods at different RMS distances
to find out how likely is it to find a winning challenger. A thousand random neighbors at
each of 11 different RMS distances played 8 games against the corresponding champion,
and
Figure
4 plots the fraction of games won by these challengers, as a function of RMS
distance. This graph shows that as the players improve over time, the probability of finding
good challengers in their neighborhood increases, which accounts for why the frequency
of successful challenges goes up. 3 Each successive challenger is only required to
3. But why does the number of good challengers in a neighborhood go up, and if so, why does our algorithm
falter nonetheless? There are several factors which require further study. It may be due to the general
growth in weights, to less variability in strategy among mature players, or less ability simply to tell expert
players apart with a few games.
Figure 4: Distance versus probability of a random challenger winning against champions at generation 1,000, 10,000 and 100,000. (Axes: RMS distance from champion vs. % wins for the challenger.)
take the small step of changing a few moves of the champion in order to beat it. The
hope, for co-evolution, is that what was apparently unlearnable becomes learnable as we
convert from a single question to a continuous stream of questions, each one dependent
on the previous answer.
3.2. Replication Experiments
After our first successful run, we tried to evolve ten more players using the same
parameters and the same annealing schedule (10,000 and 70,000), but found that only
one of these ten players was even competitive. Closer examination suggested that the
other nine runs were failing because they were being annealed too early, before the frequency
of successful challenges had reached an appropriate level. This premature
annealing then made the task of the challengers even harder, so the challenger success
rate fell even lower. We therefore abandoned the fixed annealing schedule and instead
annealed whenever the challenger success rate exceeded 15% when averaged over 1000
generations. All ten players evolved under this regime were competitive (though not
quite as good as our original player, which apparently benefitted from some extra inductive
bias due to having its own tailor-made annealing schedule). Refining other heuristics
and schedules could lead to superior players, but was not our goal.
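The 15% rule can be stated compactly in code. The sketch below is a paraphrase under assumed names: after each window of 1000 generations the recent challenger success rate is checked and, if it exceeds 15%, one more pair of games is added to the contest.

/* Adaptive annealing rule: after every WINDOW generations, if the fraction of
 * successful challengers exceeded 15%, require one more pair of games, all but
 * one of which must be won (3-of-4 -> 5-of-6 -> 7-of-8 -> ...). */
#define WINDOW 1000

void maybe_anneal(int successes_in_window, int *n_pairs, int *wins_needed)
{
    if ((double)successes_in_window / WINDOW > 0.15) {
        *n_pairs += 1;
        *wins_needed = 2 * (*n_pairs) - 1;
    }
}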
3.3. Relative versus Absolute Expertise
Does Backgammon allow relative expertise or is there some absolutely optimal
strategy? Theoretically there exists a perfect "policy" for backgammon which would
deliver the minimax optimal move for any position, and this perfect policy could exactly
rate every other player on a linear scale. In practice, however, and especially without running 10,000 games to verify, it seems there are many relative cycles, and these help prevent early convergence.
In cellular studies of iterated prisoner's dilemma following (Axelrod, 1984) a stable
population of "tit for tat" can be invaded by "all cooperate," which then allows exploitation
by "all defect". This kind of relative-expertise dynamics, which can be seen clearly
in the simple game of rock/paper/scissors (Littman, 1994) might initially seem very bad
for self-play learning, because what looks like an advance might actually lead to a cycle
of mediocrity. A small group of champions in a dominance circle can arise and hold a
temporal oligopoly preventing further advance. On the other hand, it may be that such a
basic form of instability prevents the formation of sub-optimal oligopolies and allows
learning to progress. These problems are specific to non-zero-sum games; in zero-sum games, appropriate use of self-play can be shown to converge to optimal play for both parties.
4. Discussion
We believe that our evidence of success in learning backgammon using simple hill-climbing
in a relative fitness environment indicates that the reinforcement and temporal
difference methodology used by Tesauro in his 1992 paper which led to TD-Gammon,
while providing some advantage, was not essential for its success. Rather, a major contribution
came from the co-evolutionary learning environment and the dynamics of back-
gammon. Our result is thus similar to the bias found by Mitchell et al in Packard's
evolution of cellular automata to the "edge of chaos" (Packard, 1988, Mitchell
et al., 1993).
Obviously, we are not suggesting that 1+1 hillclimbing is an advanced machine
learning technique which others should bring to many tasks! Without internal cognition
about an opponent's behavior, co-evolution usually requires a population. Therefore,
there must be something about the domain itself which is helpful because it permitted
both TD learning and hill-climbing to succeed through self-play, where they would
clearly fail on other problem-solving tasks of this scale. In this section we discuss some
issues about co-evolutionary learning and the dynamics of backgammon which may be
critical to learning success.
4.1. Evolution versus Co-evolution
TD-Gammon is a major milestone for a kind of evolutionary machine learning in
which the initial specification of the model is far simpler than expected because the
learning environment is specified implicitly, and emerges as a result of the co-evolution
between a learning system and its training environment: the learner is embedded in an
environment which responds to its own improvements - hopefully in a never-ending
spiral, though this is an elusive goal to achieve in practice. While this co-evolutionary
effect has been seen in population models, it is completely unexpected for a "1+1" hill-climbing
evolution. Co-evolution has been explored on the sorting network problem
(Hillis, 1992), on tic-tac-toe and other strategy games (Angeline and Pollack, 1994, Rosin
and Belew, 1995, Schraudolph et al., 1994), on predator/prey games (Cliff and
Miller, 1995, Reynolds, 1994) and on classification problems such as the intertwined spirals
problem (Juille and Pollack, 1995). However, besides Tesauro's TD-Gammon, which
has not to date been viewed as an instance of co-evolutionary learning, Sims' artificial
robot game (Sims, 1994) is the only other domain as complex as backgammon to have
had substantial success.
Since a weak player can sometimes defeat a strong one, it should in theory be possible
for a network to learn backgammon in a static evolutionary environment (playing
against a fixed opponent) rather than a co-evolutionary one (playing against itself). Of
course this is not as interesting an achievement as learning without an expert on hand,
and if TD-gammon had simply learned from Neurogammon, it wouldn't have been as
startling a result. In order to further isolate the contribution of co-evolutionary learning,
we had to modify our training setup because our original algorithm was only appropriate
to self-play. In this new setup the current champion and mutant both play a number
of games against the same opponent (called the foil) with the same dice-streams, and the
weights are adjusted only if the champion loses all of these games while the mutant wins
all of them. The number of pairs of games was initially set to 1 and incremented whenever
the challenger success rate exceeded 15% when averaged over 1000 generations.
The lower three plots in Figure 5, which track the performance of this algorithm with
each of the three benchmark networks from our original experiments acting as foil, seem
to show a relationship between learning rate and probability of winning.
Against a weak foil (1k) learning is fast initially, when the probability of winning is
around 50%, then tapers off as this probability increases. Against a strong foil (100k)
learning is very slow initially, when the probability of winning is small, but speeds up as
it increases towards 50%. All of these evolutionary runs were outperformed by a co-evolutionary
version of the foil algorithm (co-ev) in which the champion network itself
plays the role of the foil. Co-evolution seems to maintain a high learning rate throughout
the run by automatically providing, for each new generation player, an opponent of the
appropriate skill level to keep the probability of winning near 50%. Moreover, weaknesses
in the foil are less likely to bias the learning process because they can be automatically
corrected as the co-evolution proceeds (see also Section 4.3).
4.2. The Dynamics of Backgammon
In general, the problem with learning through self-play discovered repeatedly in
early AI and ML is that the learner could keep playing the same kinds of games over and
over, only exploring some narrow region of the strategy space, missing out on critical
areas of the game where it would then be vulnerable to other programs or human
experts. This problem is particularly prevalent in deterministic games such as chess or
tic-tac-toe. Tesauro (1992) pointed out some of the features of backgammon that make it
suitable for approaches involving self-play and random initial conditions. Unlike chess,
a draw is impossible and a game played by an untrained network making random
moves will eventually terminate (though it may take much longer than a game between
competent players). Moreover the randomness of the dice rolls leads self-play into a
much larger part of the search space than it would be likely to explore in a deterministic
game. We have worked on using a population to get around the limitations of self-play
(Angeline and Pollack, 1994). Schraudolph et al., 1994 added non-determinism to the
game of Go by choosing moves according to the Boltzmann distribution of statistical
mechanics. Others, such as Fogel, 1993, expanded exploration by forcing initial moves.
Epstein, 1994, has studied a mix of training using self-play, random testing, and playing
against an expert in order to better understand these aspects of game learning.
Figure 5: Performance against PUBEVAL of players evolved by playing benchmark networks from our original run at generation 1k, 10k and 100k, compared with a co-evolutionary variant of the same algorithm. Each of these plots is an average over four runs. The performance of our original algorithm is included for comparison. (Axes: generation vs. % wins against PUBEVAL.)
We believe it is not enough to add randomness to a game or to force exploration
through alternative training paradigms. There is something critical about the dynamics
of backgammon which sets it apart from other games with random elements like
Monopoly - namely, that the outcome of the game continues to be uncertain until all contact
is broken and one side has a clear advantage. In Monopoly, an early advantage in
purchasing properties leads to accumulating returns. What many observers find exciting
about backgammon, and what helps a novice sometimes overcome an expert, is the
number of situations where one dice roll, or an improbable sequence, can dramatically
reverse which player is expected to win.
In order to quantify this "reversibility" effect we collected some statistics from
games played by our 100,000th generation network against itself. For each n between 0
and 120 we collected 100 different games in which there was still contact at move n, and,
for n>6, 100 other games which had reached the racing stage by move n (but were still in
Figure 6: (a) Standard deviation in the probability of winning for contact positions and racing positions, as a function of move number. (b) Probability of a game still being in the contact or racing stage (or already over) at move n.
Figure 7: Smoothed distributions of the probability of winning as a function of move number, for contact positions (left) and racing positions (right).
progress). We then estimated the probability of winning from each of these 100 positions
by playing out 200 different dice-streams. Figure 6 shows the standard deviation of this
probability (assuming a mean of 0.5) as a function of n, as well as the probability of a
game still being in the contact or racing stage at move n. Figure 7 shows the distribution
in the probability of winning, as a function of move number, symmetrized and smoothed
out by convolution with a gaussian function.
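The win-probability estimate used for these statistics is a plain Monte Carlo rollout; a minimal sketch is given below, with rollout() an assumed helper that plays the position to completion under a fixed dice stream with the network on both sides.

typedef struct Board Board;                  /* opaque board type (assumed) */
typedef struct Net Net;

/* Assumed helper: plays the position out to the end of the game using dice
 * stream s and returns 1 if the side to move from pos eventually wins. */
int rollout(const Net *net, const Board *pos, int s);

#define PLAYOUTS 200

/* Monte Carlo estimate of the probability of winning from a mid-game position. */
double win_probability(const Net *net, const Board *pos)
{
    int wins = 0;
    for (int s = 0; s < PLAYOUTS; s++)
        wins += rollout(net, pos, s);
    return (double)wins / PLAYOUTS;
}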
These data indicate that the probability of winning tends to hover near 50% in the
early stages of the game, gradually moving out as play proceeds, but typically remaining
within the range of about 15% to 85% as long as there is still contact, thus allowing a reasonable
chance for a reversal. These numbers could be different for other players, less
reversibility for stronger players perhaps and more for weaker ones, but we believe the
effect remains an integral part of the game dynamics regardless of expertise. Our conjecture
is that these dynamics facilitate the learning process by providing, in almost every
situation, a nontrivial chance of winning and a nontrivial chance of losing, and therefore the potential to learn from the consequences of the current move. This is in deep contrast to many
other domains in which early blunders could lead to a hopeless situation from which
learning is virtually impossible because the reward has already become effectively unat-
tainable. It seems this feature of backgammon may also be shared by other tasks for
which TD-learning has been successful (Zhang and Dietterich, 1996, Crites and
Barto, 1996, Walker et al., 1994).
4.3. Avoiding Suboptimal Equilibria in the Meta-Game of Learning
A learning system can be viewed as an interaction between teacher and student in
which the teacher's goal is to expose the student's weaknesses and correct them, while
the student's goal is to placate the teacher and avoid further correction.
We can build a model of this teacher/student interaction as a formal game, which
we will call the Meta-Game of Learning (MGL) to avoid confusion with the game being
learned. In this meta-game, the teacher T presents the student S with a sequence of questions Q i , prompting responses R i from the student. (In the backgammon domain, all the
questions and responses would be legal positions, rolls and moves). S and T each receive
payoffs in the process, which they attempt to maximize through their choices of questions
and answers, and their limited abilities at self-modification.
We generally assume the goal of learning is to prepare the student for interaction
with a complex environment E that will provide an objective measure of its perfor-
mance. 4 E and T thus play similar roles but are not assumed to be identical. The question
then is: Can we find a payoff matrix for S and T which will enable the performance
of S to continually improve (as measured by E)? If the rewards for T are too closely correlated
with those for S, T may be tempted to ask questions that are too easy. If they are
anti-correlated (for example if T=E), the questions might be too difficult. In either case it
will be hard for S to learn (see Section 4.1).
4. For a general theory of evolution or self-organization, E is not necessary.
An attractive solution to this problem is to have two or more students play the role
of teacher for each other, or indeed a single student act as its own teacher, thus providing
itself with questions that are always at the appropriate level of difficulty. The dynamics
of the MGL, under such a self-teaching or co-evolutionary situation, would hopefully
lead to a continuing spiral of improvement but may instead get bogged down by antagonistic
or collusive dynamics, depending on the payoff structure.
In our hillclimbing setup we may think of the mutant (teacher) trying to gain
advantage (adjustment in the weights) by exploiting weaknesses in the champion, while
the champion (student) is trying to avoid such an adjustment by not allowing its weaknesses
to be exploited. Since the student and teacher are of approximately equal ability, it
is to the advantage of the student to narrow the scope of the search, thus limiting the
domain within which the teacher is able to look for a weakness. In most games, such as
chess or tic-tac-toe, the student could achieve this by aiming for a draw instead of a win,
or by always playing a particular style of game. If draws are not allowed, the teacher and
student may figure out some other way to collude with each other - for example, each
"throwing" alternate games (Angeline, 1994) by making a suboptimal sequence of early
moves. These effects in self-learning systems, which may appear as early convergence in
evolutionary algorithms, narrowing of scope, drawing or other collusion between
teacher and student, are in fact Nash equilibria in the MGL, which we call Mediocre Stable
States. 5
Our hypothesis is that certain features of backgammon operate against the formation
of mediocre stable states in the MGL: backgammon is ergodic in the sense that any
position can be reached from any other position 6 by some sequence of moves, and the
dice rolls apparently create enough randomness to prevent either player from following
a strategy that narrows the scope of the game appreciably. Moreover, early suboptimal
moves are unlikely to provide the opponent with an easy win (see Section 4.2), so collusion
by the throwing of alternate games is prevented.
Mediocre stable states can also arise in human education systems, for example
when the student gets all the answers right and rewards the teacher with positive teaching
evaluations for not asking harder questions. In further work, we hope to apply the
same kind of MGL equilibrium analysis to issues in human education.
5. Conclusions
TD-Gammon remains a tremendous success in Machine Learning, but the causes
for its success have not been well understood. The fundamental research in Tesauro's 1992 paper, which was the basis for TD-Gammon, reportedly beat Sun's Gammontool 60-
65% of the time (depending on number of hidden units) and achieved parity against
Neurogammon 1.0.
Following this seminal 1992 paper, Tesauro incorporated a number of hand-crafted
expert-knowledge features, eventually engineering a network which achieved world master level play (Tesauro, 1995). These features included concepts like existence of a prime, probability of blots being hit, and probability of escape from behind the opponent's barrier. The evaluation function was also improved using multiple ply search.
5. MSS follows Maynard Smith's ESS (Maynard Smith, 1982).
6. With the exception of racing situations and positions with some pieces out of play.
The best players we've been able to evolve can win about 45% of the time against
PUBEVAL, which we believe to be at about the same level as Tesauro's 1992 networks.
Because Tesauro had never compared his 1992 networks to PUBEVAL, and because he
used Gammontool's heuristic endgame in the ratings, the level of play achieved by these
players has been somewhat murky:
"the testing procedure is to play out the game with the network until it becomes a
race, and then use Gammontool's algorithm to move for both sides until the end.
This also does not penalize the TD net for having learned rather poorly the racing
phase of the game."(p 272)
When we compare our network's performance to PUBEVAL, it must be noted that
we use our network's own (weak) endgame, rather than substituting in a much stronger
expert system like Gammontool. Gerald Tesauro, in a commentary in this issue, has graciously
cleared up the matter of comparing PUBEVAL to his 1992 results, and differs
somewhat from our conclusions below.
There are two other phenomena from the 1992 paper which are relevant to our
work:
"Performance on the 248-position racing test set reached about 65%. (This is substantially
worse than the racing specialists described in the previous section.)" (p.
"The training times .were on the order of 50,000 training games for the networks
with games for the 20-hidden unit net, and 200,000
games for the 40-hidden unit net." (p. 273)
Because we achieve similar levels of skills, and observe these same phenomena in
training, endgame weakness, and convergence, we believe we have achieved results
substantially similar to Tesauro's 1992 result, without any advanced learning algorithms.
We could make stronger players by tuning the learning parameters, and adding more
input features, but that is not our point.
We do not claim that our 100,000th generation player is anywhere near as good as
the current enhanced versions of TD-Gammon, ready to challenge the best humans, but
it is surprisingly good considering its humble origins from hill-climbing with a relative
fitness measure. Tuning our parameters or adding more input features would make
more powerful players, but that was not the point of this study.
We also do not claim there is anything "wrong" with TD learning, or that hillclimbing
is just as good as reinforcement learning in general! Of course it isn't! Our point is
that once an environment and representation have been refined to work well with a
machine learning method, it should be benchmarked against the weakest possible algorithm
so that credit for learning power can be properly distributed.
We have noticed several weaknesses in our player that stem from the training
which does not yet reward or punish the double and triple costs associated with severe
losses ("gammoning" and "backgammoning") nor take into account the gambling process
of "doubling." We are continuing to develop the player to be sensitive to these
issues in the game. Interested players can challenge our 100,000th network using a web
browser through our home page at:
In conclusion, replicating some of Tesauro's 1992 TD-Gammon success under a
much simpler learning paradigm, we find that the reinforcement and temporal difference
methods are not the primary cause for success; rather it is the dynamics of backgammon
combined with the power of co-evolutionary learning. If we can isolate the
features of the backgammon domain which enable co-evolutionary and reinforcement
learning to work so well, it may lead to a better understanding of the conditions neces-
sary, in general, for complex self-organization.
Acknowledgments
This work was supported by ONR grant N00014-96-1-0418 and a Krasnow Foundation
Postdoctoral fellowship. Thanks to Gerry Tesauro for providing PUBEVAL and subsequent
means to calibrate it, Jack Laurence and Pablo Funes for development of the
WWW front end to our evolved player, and comments from the Brandeis DEMO group,
the anonymous referees, Justin Boyan, Tom Dietterich, Leslie Kaelbling, Brendan Kitts,
Michael Littman, Andrew Moore, Rich Sutton and Wei Zhang.
--R
Competitive environments evolve better solutions for complex tasks.
An alternate interpretation of the iterated prisoner's dilemma and the evolution of non-mutual cooperation
The evolution of cooperation.
Modular neural networks for learning context-dependent game strategies
Tracking the red queen: Measurements of adaptive progress in co-evolutionary simulations
Improving elevator performance using reinforcement learning.
Massively parallel genetic programming.
Markov games as a framework for multi-agent reinforcement learning
Algorithms for Sequential Decision Making.
Revisiting the edge of chaos: Evolving cellular automata to perform computations.
Adaptation towards the edge of chaos.
Some studies in machine learning using the game of checkers.
Temporal difference learning of position evaluation in the game of Go.
Evolving 3D morphology and behavior by competition.
Learning to predict by the methods of temporal differences.
Connectionist learning of expert preferences by comparison training.
Practical issues in temporal difference learning.
Temporal difference learning and TD-Gammon.
Temporal difference
coevolution;reinforcement;temporal difference learning;self-learning;backgammon
298803 | Locality Analysis for Parallel C Programs. | AbstractMany parallel architectures support a memory model where some memory accesses are local and, thus, inexpensive, while other memory accesses are remote and potentially quite expensive. In the case of memory references via pointers, it is often difficult to determine if the memory reference is guaranteed to be local and, thus, can be handled via an inexpensive memory operation. Determining which memory accesses are local can be done by the programmer, the compiler, or a combination of both. The overall goal is to minimize the work required by the programmer and have the compiler automate the process as much as possible. This paper reports on compiler techniques for determining when indirect memory references are local. The locality analysis has been implemented for a parallel dialect of C called EARTH-C, and it uses an algorithm inspired by type inference algorithms for fast points-to analysis. The algorithm statically estimates when an indirect reference via a pointer can be safely assumed to be a local access. The locality inference algorithm is also used to guide the automatic specialization of functions in order to take advantage of locality specific to particular calling contexts. In addition to these purely static techniques, we also suggest fine-grain and coarse-grain dynamic techniques. In this case, dynamic locality checks are inserted into the program and specialized code for the local case is inserted. In the fine-grain case, the checks are put around single memory references, while in the coarse-grain case the checks are put around larger program segments. The static locality analysis and automatic specialization has been implemented in the EARTH-C compiler, which produces low-level threaded code for the EARTH multithreaded architecture. Experimental results are presented for a set of benchmarks that operate on irregular, dynamically allocated data structures. Overall, the techniques give moderate to significant speedups, with the combination of static and dynamic techniques giving the best performance overall. | Introduction
One of the key problems in parallel processing is to provide
a programming model that is simple for the pro-
grammer. One would like to give the programmer a
familiar programming language, and have the programmer
focus on high-level aspects such as coarse-grain parallelism
and perhaps some sort of static or dynamic data
distribution. Compiler techniques are then required to
effectively map the high-level programs to actual parallel
architectures. In this paper we present some compiler
techniques that simplify the programmer's job when expressing
locality of pointer data structures.
As reported previously, we have developed a high-level
parallel language called EARTH-C [1], and an associated
compiler that translates EARTH-C programs to
low-level threaded programs that execute on the EARTH
multithreaded architecture [2, 3]. Our main emphasis is
on the effective compilation of programs that use irreg-
ular, dynamically-allocated data structures. Our initial
approach provided high-level parallel constructs, and
type extensions to express locality. The compiler then
used the type declarations and dependence analysis to
automatically produce low-level threads.
Although our initial approach did provide a good,
high-level, basis for programming the EARTH multi-threaded
architecture, we found that the programmer
was forced to make many function specializations, and
to declare the appropriate pointer parameters and locally-
scoped pointer variables as local pointers. Thus, in order
to experiment with various locality approaches, the
programmer needed to edit many places in his/her pro-
gram, and to make several copies of the same function,
each copy specialized for a particular type of locality. In
order to ease the burden on the programmer we have developed
some new compiler techniques to infer the locality
of pointer variables, and then automatically produce
the specialized versions of the functions. This allows the
programmer to make very minimal changes to his/her
high-level program in order to try various approaches
to a problem. It also leads to shorter source programs
because the programmer does not need to make several
similar copies of the same function.
The main idea behind our approach is that we use
the information about the context of function calls and
memory allocation statements to infer when indirect
memory references must refer to local memory. We then
automatically create specializations of functions with
the appropriate parameters and locally-scoped variables
explicitly declared as local pointers. This information is
then used by the thread generator to reduce the number
of remote operations required in the low-level threads.
In order to test our approach we implemented the
techniques in the EARTH-McCAT C compiler, and we
experimented with a collection of pointer-based bench-
marks. We present experimental measurements on the
EARTH-MANNA machine to compare the performance
of benchmarks without locality analysis, with locality
analysis, and hand-coded versions with "the best" locality
The rest of the paper is organized as follows. Section
presents an overview of the EARTH-C language, the
EARTH-McCAT compiler, and the EARTH-MANNA
architecture. Section 3 provides some examples to motivate
the locality analysis, and Section 4 describes the
analysis itself. In Section 5 we give experimental results
for our set of benchmarks programs. Finally, in Section
6 we discuss some related work, and in Section 7 we give
conclusions and some suggestions on further work.
2 The EARTH-C Language
The EARTH-C compiler has been designed to accept
a high-level parallel C language called EARTH-C, and
to produce a low-level threaded-C program that can be
executed on the EARTH-MANNA multithreaded archi-
tecture. In this section we provide an overview of the
important points about the language, and the target ar-
chitecture. More complete descriptions of the EARTH
project can be found elsewhere [2].
2.1 The EARTH-C Language
The EARTH-C language has been designed with simple
extensions to C. These extensions can be used to express
parallelism via parallel statement sequences and a
general type of forall loop; to express concurrent access
via shared variables; and to express data locality via
data declarations of local pointers. Any C program is
a valid EARTH-C program, and the compiler will automatically
produce a correct low-level threaded program.
However, usually the programmer will make some minimal
modifications to the program to expose coarse-grain
parallelism, and to add information about data locality.
Figure
1 gives two sample list processing functions,
written in EARTH-C. In both cases the functions take
a pointer to a list head, and a pointer to a node x, and
return the number of times x occurs in the list. Figure
1(a) uses a forall loop to indicate that all interactions
of the loop body may be performed in parallel. Since a
loop must not have any loop-carried dependences
on ordinary variables, we have used the shared variable
count to accumulate the counts. Shared variables must
always be accessed via atomic functions and in this case
we have used the built-in functions writeto, addto and
valueof. Figure 1(b) presents an alternative solution
using recursion. In this example we use a parallel sequence
(denoted using -" . "), to indicate that the
call to equal node and the recursive call to count rec
can be performed in parallel.
The EARTH-C compiler captures coarse-grain parallelism
at the level of function invocations. Ordinary C
function calls are translated into lower-level threaded-
C TOKEN calls. Such TOKEN calls are handled by the
EARTH runtime load balancer, and the call will be
mapped to a processor at runtime. However, in EARTH-
C, it is also possible for the programmer to explicitly
specify where the invocation should be executed using
the syntax p(.)@expr. In this case the underlying
threaded-C INVOKE mechanism is used to explicitly
map the invocation to the processor specified by expr.
For example, in Figure 1(b), the call to equal node is
mapped to the processor owning node x, whereas the
recursive call to count rec is not explicitly mapped to
a processor, and it will be assigned by the runtime load
balancer.
In both cases, using the TOKEN and INVOKE mecha-
nisms, the activation frame is allocated on the processor
assigned the invocation, and the invocation will remain
on the same processor for the lifetime of the invoca-
tion. Thus, the EARTH-C compiler can assume that all
parameters and locally-scoped variables are local memory
accesses. On the contrary, since the invocations are
mapped at runtime (either using the runtime load bal-
ancer, or according to an expression that is evaluated at
runtime), the compiler must assume that all accesses to
global variables and all memory accesses via pointer in-
directions, are to remote memory. Using these assumptions
for function count rec in Figure 1(b), we can see
that accesses to head, x, c1, and c2 are local accesses,
but the access to head-?next is a remote memory ac-
cess. Note that to make the locality easier to see, we
underline all remote memory accesses, and we put local
pointer declarations in bold type.
As the target architecture for EARTH is a distributed-
memorymachine, this distinction between local memory
accesses and remote memory accesses is very important.
Local memory accesses are expressed in the generated
lower-level threaded-C program as ordinary C variables
that are handled efficiently, and they may be assigned
to registers or stored in the local data cache. However,
remote memory references must be resolved by calls to
the underlying EARTH runtime system. Thus, for remote
memory accesses, there is the additional cost of
the call to the appropriate EARTH primitive operation,
plus the cost of accessing the communication network.
If the remote memory access turns out to be actually on
the same processor as the request, the communication
time will be minimal, but it is still significantly more
expensive than making a direct local memory access.
Even though multithreaded architectures can hide
some communication costs, it is clearly advantageous
to maximize the use of local memory, whenever possi-
ble. In order to expose more locality to the compiler,
EARTH-C has the concept of local pointers. If the programmer
knows that a pointer always points to local
memory, then the keyword local may be added to the
pointer type declaration. In Figure 1(a), all calls to
equal node were made to the owner of the first argu-
ment. Thus, in the declaration of equal node we have
declared the first parameter to have type node local
*p, which reading right to left, says that p is a pointer
to a local node. Thus, in the body of equal node, the
EARTH compiler may assume that p-?value is a local
memory reference, but q-?value is potentially a remote
memory reference. Figure 1(b) illustrates the opposite
case, where the second parameter of equal node is a local
pointer, and in this case the EARTH compiler must
assume that p-?value is potentially remote, whereas
q-?value is local.
EARTH-C also includes another form of function
declaration that also expresses locality. Functions may
be declared using the keyword basic. These basic
functions must only reference local memory, and they
may not call any ordinary (remote) functions. Basic
functions are translated into very cheap function invocations
in the target threaded-C code, and all memory
references within their bodies are ordinary C variable
references. Thus, sometimes programmers use basic
functions to indicate locality for all variable references
int count(node *head, node *x)
shared int count;
node *p;
if (equalnode(p,x)@OWNEROF(p))
int equalnode(node local *p, node *q)
return (p->value == q->value);
int countrec(node *head, node *x);
node *next;
int c1, c2;
if (head != NULL)
else
int equalnode(node *p, node local *q)
(a) iterative solution (b) recursive solution
Figure
1: Example functions written in EARTH-C
within a function body.
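As a small, hypothetical illustration of these declarations (a fragment written for this discussion, not taken from the benchmarks), a comparison helper whose arguments are known to live on the executing node can be written as a basic function whose parameters are local pointers, so that both indirections compile to ordinary local loads:

/* Hypothetical EARTH-C example: a basic function may only touch local
 * memory, so both parameters are declared as local pointers and both
 * field accesses become plain local loads. */
basic int equal_values(node local *p, node local *q)
{
    return p->value == q->value;
}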
The purpose of this paper is to help automate the
generation of the local pointer declarations and to automatically
provide specialized versions of functions for
different calling contexts. Thus, the programmer concentrates
on expressing where the computation should
be mapped, and the compiler infers the locality information
for pointers, and inserts the correct local pointer
declarations. This reduces the burden on the program-
mer, leads to shorter source programs, and makes changing
the source program less error prone. In the examples
in
Figure
1, the programmer would only need to declare
one version of equal node, and the compiler would automatically
generate the appropriate specializations depending
on the calling context.
2.2 The EARTH-McCAT C Compiler
This paper builds upon the existing EARTH-McCAT C
compiler. The overall structure of the compiler is given
in
Figure
2. The compiler is split into three phases.
Phase I contains our standard transformations and anal-
yses. The important points are that the source program
is simplified into an AST-based SIMPLE intermediate
representation [4]. At this point programs have been
made structured via goto-elimination, and each statement
has been simplified into a series of simple, basic
statements. For each statement (including assignment
statements, conditionals, loops, and function calls), we
have the results of side-effect analysis that gives that set
of locations read/written by the statement. The availability
of this read/write information allows our locality
analysis to be simple, and efficient.
The methods presented in this paper are found in
Phase II, where parallelization and locality enhancement
is done. In these phases we use the results of
the analyses from Phase I in order to transform the
SIMPLE program representation into a semantically-
equivalent program. The transformations presented in
this paper introduce locality declarations, and produce
new specialized versions of some functions.
Phase III takes the transformed SIMPLE program
from Phase II, generates threads, and produces the tar-
Figure 2: Overall structure of the compiler. Phase I (analyses and transformations): Simplify, Goto-Elimination, Local Function Inlining, Points-to Analysis, Heap Analysis, R/W Set Analysis, Array Dependence Tester. Phase II (parallelization and locality enhancement): Function Specialization, Loop Partitioning, Locality Analysis. Phase III: Build Hierarchical DDG, Thread Generation, Code Generation.
get threaded-C code. By exposing more locality in Phase
II, we allow the thread generator to deal with fewer remote
memory accesses, and this should lead to fewer
threads, fewer calls to EARTH primitives, and more efficient
parallel programs.
2.3 The EARTH-MANNA Architecture
In the EARTH model, a multiprocessor consists of multiple
EARTH nodes and an interconnection network [2,
3]. As illustrated in Figure 3, each EARTH node consists
of an Execution Unit (EU) and a Synchronization
Unit (SU), linked together by buffers. The SU and EU
share a local memory, which is part of a distributed
shared memory architecture in which the aggregate of
the local memories of all the nodes represents a global
memory address space. The EU processes instructions
in an active thread, where an active thread is initiated
for execution when the EU fetches its thread id from the
ready queue. The EU executes a thread to completion
before moving to another thread. It interacts with the
SU and the network by placing messages in the event
queue. The SU fetches these messages, plus messages
coming from remote processors via the network. The
SU responds to remote synchronization commands and
requests for data, and also determines which threads are
to be run and adds their thread ids to the ready queue.
Figure 3: The EARTH architecture (each node consists of an EU and an SU and is connected to the other nodes by a network).
Our experiments have been performed on a multi-threaded
emulator built on top of the MANNA parallel
machine[5]. Each MANNA node consists of two Intel i860 XP
CPUs, clocked at 50 MHz, 32MB of dynamic
RAM and a bidirectional network interface capable of
transferring 50MB/S in each direction. The two processors
on each node are mapped to the EARTH EU
and SU. The EARTH runtime system supports efficient
remote operations. Sequentially, loading a remote word
takes about 7 μs, calling a remote function can be performed
in 9 μs, and spawning a new remote thread takes
about 4 μs. When issued in a pipeline, these operations
take only one third of these times.
3 Motivating Examples
In the preceding section (Figure 1) we presented an example
where locality analysis could be used to make
specialized versions of the equal node function. In this
section we present some more typical examples of where
locality information is used in EARTH-C programs, and
we show how locality analysis and specialization can
lead to better programs. These examples should give
the intuitive ideas behind the actual locality analysis as
presented in Section 4. In each of the example programs,
all remote variable references are underlined. Thus, a
program with fewer underlined references exhibits more
locality, and will be more efficient.
3.1 Pointers to Local Variables and Parameters
As outlined in Section 2.1, the underlying EARTH run-time
system maps a function's activation frame to the
processor executing the invocation, and thus it is safe to
assume that parameters and locally-scoped variables are
references to local memory. This assumption can be extended
to pointer variables, if it can be shown that the
pointer must point to locally-scoped variables and/or
parameters. Figure 4(a) gives a somewhat contrived
example that serves to illustrate the basic point. In
function foo, pointer p points-to x, and x is a parame-
ter. Since all parameters are allocated in local memory,
it is safe to assume that *p points to local memory.
Pointer q points either to parameter x, or to locally-
scoped variable y. Since both x and y are local, we can
assume that *q is local as well. Figure 4(b) gives the
localized version of function foo. Note that the indirections
*p and *q are remote (underlined) references in
the original version of foo, but are local references in
the localized version.
int foo(int x)
int y, *p,
int *q;
if (expr())
else
int foo(int x)
int y, local *p;
int local *q;
if (expr())
else
(a) no locality inference (b) after locality inference
Figure
4: Locality for Pointers
3.2 Dynamic Memory Allocation
A rich source of locality information comes from the fact
that dynamic memory allocation always allocates memory
on the processor from which the allocator is called.
Thus, if function f calls a memory allocation function
like malloc, then the memory returned by malloc is
local within the body of f. Consider the example in
Figure
5(a). Without any locality inference, or type
declarations, the compiler must assume that pointer t
may refer to remote memory. Thus, as indicated by the
underlined sections, all indirect references via t must be
assumed to be possibly remote. However, one can note
that t only points to the memory returned from malloc,
and thus t can safely be declared as a local pointer, as
illustrated in Figure 5(b). In this case, all memory accesses
in the body of alloc point can be assumed to
be local.
3.3 Mapping Computation to the OWNER OF Data
The most common kind of locality information comes
from the programmer mapping function invocations to
the owner of a piece of data, using the @OWNER OF ex-
pression. A typical example is given in Figure 6(a). The
function count equal recursively descends through a binary
tree t, counting the number of nodes with value
v. The first recursive call, to the left sub-tree, is not
explicitly mapped to any particular processor, and so
there is no locality information for it. However, the
second recursive call is explicitly sent to the owner of
the right sub-tree. This means that these invocations
can assume that all references via pointer t are local.
As illustrated in Figure 6(b), to express this properly, a
specialized copy of count equal must be created (called
node *allocpoint(double x, double y, int colour)
node * t;
node *allocpoint(double x, double y, int colour)
node local *t;
(a) no locality inference (b) after locality inference
Figure
5: Locality for Dynamic Memory Allocation
count equal spec in the example), and in that copy the
parameter t is declared to be a local pointer, and thus
all memory accesses in the body are local.
3.4 Mapping Computation to HOME
Another common method for mapping computation to
specific processors is to use function calls of the form
f(.)@HOME. This indicates that f should be invoked
on the processor executing the call. From a locality
standpoint, this gives us two kinds of information. First,
if f returns a pointer value that is local within f, it
must also be local within the body of function calling f.
Second, if an argument to f is local in the caller, then
the corresponding parameter must be local in the body
of f.
Figure
7(a) gives an example, and Figure 7(b) gives
the result of applying locality analysis. First note that
we can infer that t is a local pointer in the function
newnode using the ideas presented for dynamic allocation
given in section 3.2. Thus, the two calls to newnode
in f must return local pointers, and both p and q must
be local pointers. Now, consider the call to lessthan
in f. Since both arguments, p and q, are local point-
ers, the corresponding formals, a and b, in the body of
lessthan must also be local pointers.
4 Locality Analysis
In the last section, we identified the language/program
features which are the sources of locality information. In
this section we present the complete algorithm and associated
analysis rules for locality analysis. The overall
algorithm is presented in Figure 8. It works iteratively
in two inter-related intra- and inter-procedural steps.
At the beginning of the analysis, all the functions
in the program are considered as candidates for specialization
and put in the set spclPool. Further, the
locality attribute for all formal parameters and global
variables is initialized to Remote, unless the programmer
has given explicit local pointer declarations. For
pointers explicitly declared as local in the program, the
locality attribute is initialized to Local. After this ini-
tialization, the analysis proceeds in the following two
steps:
Step I: This step individually analyzes each function
in the pool of functions to be specialized (spclPool).
It starts with the current locality attribute of variables,
and propagates this information throughout the procedure
using a flow-insensitive intraprocedural approach.
The details of this step are given in Subsection 4.1.
Step II: This step performs interprocedural propagation
of locality, and procedure specialization when ap-
plicable. It looks at each call site in the functions belonging
to spclPool, which is called with either HOME or
OWNER OF primitive. Based on the locality information
at the call-site, it infers locality information for formal
parameters of the callee function as illustrated in sections
3.3 and 3.4. If a specialized version of the callee
function with this locality already exists, the call-site is
modified to invoke this function instead. Otherwise, a
newly specialized version of the callee function is created
for the given call-site. The locality attributes of
the parameters of the newly-created function are appropriately
initialized. As call-sites within the specialized
function can trigger further specializations, the newly-created
function is put in spclPool.
At the end of this step, if spclPool is non-empty
we go back to the first step. Clearly this process will
terminate because we have only a finite number of func-
tions/parameters that can be specialized, and specializations
always add locality information.
In the actual implementation, we just create a new
locality context to represent a specialized function, and
we do not actually create the complete new function.
The decision to actually create specialized functions is
taken after the analysis, depending upon the benefit
achievable from a particular specialization. The details
of the specialization step are given in Subsection 4.2.
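The overall iteration can be summarised by the following C-style sketch. It is a paraphrase of the algorithm with invented helper names, and it folds the two steps into a single worklist loop; the actual analysis alternates a full intraprocedural pass over spclPool with a full interprocedural pass.

typedef struct Function Function;
typedef struct CallSite CallSite;
typedef struct Worklist Worklist;

/* Assumed helpers over the compiler's intermediate representation. */
int       worklist_empty(const Worklist *w);
Function *worklist_pop(Worklist *w);
void      worklist_push(Worklist *w, Function *f);
void      propagate_locality(Function *f);               /* Step I: flow-insensitive propagation */
CallSite *first_mapped_callsite(Function *f);             /* call sites using HOME / OWNER OF */
CallSite *next_mapped_callsite(CallSite *cs);
Function *get_or_create_specialization(CallSite *cs, int *created);
void      retarget_call(CallSite *cs, Function *callee);

/* Sketch of the iterative analysis: propagate locality inside each candidate
 * function, then specialize callees at explicitly mapped call sites; newly
 * created specializations are analyzed in turn until nothing changes. */
void locality_analysis(Worklist *spclPool)
{
    while (!worklist_empty(spclPool)) {
        Function *f = worklist_pop(spclPool);
        propagate_locality(f);                            /* Step I */
        for (CallSite *cs = first_mapped_callsite(f); cs; cs = next_mapped_callsite(cs)) {
            int created = 0;                              /* Step II */
            Function *callee = get_or_create_specialization(cs, &created);
            retarget_call(cs, callee);
            if (created)
                worklist_push(spclPool, callee);          /* may trigger further specialization */
        }
    }
}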
4.1 Intraprocedural Locality Propagation
We perform intraprocedural propagation of locality information
using type-inference techniques [6], which have
been previously adapted to perform almost linear points-to
analysis[7]. The basic idea of the type inference
algorithm is to partition program variables into a set
of equivalence classes. To achieve this classification, a
merging-based approach is used.
For example, a simple assignment x = y leads to
assignment of the same type class to variables x and
y, or in more general terms merging of the current type
classes of x and y. If one wants to collect points-to information
instead, the assignment would lead to merging
of the points-to classes of variables x and y, where the
points-to class of a variable contains the set of locations
it may point to at runtime. Fast union/find data structures
can be used to make merging fast. This is the
technique used by Steensgaard [7].

Figure: Locality generated using @OWNER_OF -- (a) no locality inference:
int countequal(tree *t, int v); (b) after locality inference and specialization:
int countequalspec(tree local *t, int v).
To collect locality information, we enhance this technique
by attaching an additional locality attribute with
each points-to class. The locality attribute can have one
of the three possible values: (i) ?: indicating that the
locality information is not yet determined, (ii) Local: all
locations are definitely allocated on local memory, and
(iii) Remote: some locations may be allocated on remote
memory. When two points-to classes are merged,
the new locality attribute is obtained by merging the
locality attributes of the two classes using the merge
operator ./ defined as follows:

    ./     |  ?       Local    Remote
    -------+----------------------------
    ?      |  ?       Local    Remote
    Local  |  Local   Local    Remote
    Remote |  Remote  Remote   Remote
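As an illustration (not the authors' implementation), the merge of locality attributes can be layered on top of a standard union/find structure roughly as follows; the names Locality, pt_class, merge_loc and merge_class are ours.

typedef enum { LOC_UNKNOWN, LOC_LOCAL, LOC_REMOTE } Locality;   /* ?, Local, Remote */

typedef struct pt_class {
    struct pt_class *parent;   /* union/find parent pointer               */
    Locality loc;              /* locality attribute of the whole class   */
} pt_class;

pt_class *find(pt_class *c) {
    while (c->parent != c)
        c = c->parent;         /* path compression omitted for brevity */
    return c;
}

/* The merge operator ./ : a join over the order ? < Local < Remote. */
Locality merge_loc(Locality a, Locality b) {
    return (a > b) ? a : b;
}

/* Merging two points-to classes also merges their locality attributes. */
pt_class *merge_class(pt_class *a, pt_class *b) {
    a = find(a);
    b = find(b);
    if (a == b)
        return a;
    b->parent = a;
    a->loc = merge_loc(a->loc, b->loc);
    return a;
}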
Below we give a small program fragment, and the
points-to and locality information obtained by the type-inference
based algorithm for it.
int *a, *b, *c;
int x, y, z;
[The statements S1-S3 of the fragment and the table of points-to classes
at each program point were lost in extraction; the surviving entries show a
points-to class containing x and y with locality attribute Local after each
statement.]
The locality attribute is Local for all three pointers
as they contain addresses of local variables. One
can also note that the information provided is flow-
insensitive, and there is no kill information (otherwise
x should not be in the points-to class of a or b after
statement S2). Thus the final information after S3 is
conservatively valid for the entire program fragment.
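Since the original fragment did not survive extraction, the following hypothetical fragment (our own, not necessarily the authors') illustrates the kind of result the type-inference algorithm produces:

int demo(void) {
    int x = 1, y = 2, z = 3;
    int *a, *b, *c;

    a = &x;      /* S1: PointsToClass(a) = {x}, locality Local                      */
    a = &y;      /* S2: flow-insensitive, no kill: the classes of x and y are       */
                 /*     merged, so PointsToClass(a) = {x, y}, Local                 */
    b = a;       /* S3: b joins the same class: PointsToClass(b) = {x, y}, Local    */
    c = &z;      /*     PointsToClass(c) = {z}, Local                               */

    return *a + *b + *c;   /* all three pointers dereference local memory */
}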
Our locality analysis uses the above type-inference
based algorithm in an intraprocedural setting. The focus
of the analysis is on accurately computing locality
attributes, and not on computing complete points-to
information. Thus, our analysis does not account for
points-to information that holds due to aliasing between
parameters and globals. However, since we make the
worst case assumption about the locality of parameters
and globals, this loss of information does not affect the
correctness of our technique. Further, we have found
that this loss of information does not affect the quality
of the locality information that we find. Thus, it appears
that this very inexpensive intraprocedural propagation
is a good choice.
In the following subsections we provide detailed rules
for the intraprocedural locality analysis. Our analysis
is performed at the SIMPLE intermediate representation
of the EARTH-McCAT compiler. The SIMPLE
representation provides eight basic statements that can
affect points-to/locality information. Below, we provide
locality analysis rules for each statement.

Figure 7: Locality generated using @HOME -- (a) no locality inference; (b) after
locality inference, the pointers p and q in f(), t in newnode(), and a and b in
lessthan() are declared with the local qualifier.
4.1.1 Address Assignment
For the statement x = &y, x points to variable y, so we merge
the points-to class of x with the class to which variable
y belongs.
4.1.2 Dynamic Allocation
For each statement S of the form x = malloc() (or a
related memory allocation call), we create
a new variable called Heap_S, and also create a class
for it. The locality attribute of this class is initialized
as Local. This is done because the EARTH programming
model requires a malloc call to always be allocated
memory from the local processor. After creating
this new class, we merge it with the points-to class of
the assigned variable x.
4.1.3 Pointer Assignments
The statements belonging to this category include
x = y op z, *x = y, and x = *y; the rules for analyzing
them are discussed below.
For the statement x = y op z, we merge the points-to class of x
with the classes of the operands y and z on the right-hand side. This is conservatively
safe, and the result does not depend on the
operation being performed, or the type of the operands.
For the statement *x = y, we need to follow an additional
level of indirection on the left-hand side. We need
to know what x points to to perform the appropriate
mergings, i.e. find the points-to class of the points-to
class of x. If such a class does not yet exist, we simply
create such a class, which gets filled in as the analysis
proceeds. The same argument applies to statement x =
*y with respect to its right-hand side. In summary:
for x = y op z, merge PointsToClass(x) with PointsToClass(y) and PointsToClass(z);
for *x = y, merge PointsToClass(PointsToClass(x)) with PointsToClass(y);
for x = *y, merge PointsToClass(x) with PointsToClass(PointsToClass(y)).
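These rules can be sketched in C as follows; this is a hypothetical illustration building on the pt_class helpers sketched earlier, and class_of, pt_of, deref and fresh_local_heap_class are assumed lookup helpers, not part of the EARTH-McCAT compiler.

typedef struct pt_class pt_class;                  /* as in the earlier sketch */

extern pt_class *class_of(const char *var);        /* class containing the variable         */
extern pt_class *pt_of(const char *var);           /* points-to class of the variable       */
extern pt_class *deref(pt_class *c);               /* points-to class of a class, created   */
                                                   /* on demand                             */
extern pt_class *fresh_local_heap_class(int stmt_id);  /* Heap_S class, locality Local      */
extern pt_class *merge_class(pt_class *a, pt_class *b);

void rule_addr_assign(const char *x, const char *y)                  /* x = &y        */
{ merge_class(pt_of(x), class_of(y)); }

void rule_malloc_assign(const char *x, int stmt_id)                  /* x = malloc()  */
{ merge_class(pt_of(x), fresh_local_heap_class(stmt_id)); }

void rule_binop_assign(const char *x, const char *y, const char *z)  /* x = y op z    */
{ merge_class(pt_of(x), pt_of(y)); merge_class(pt_of(x), pt_of(z)); }

void rule_store(const char *x, const char *y)                        /* *x = y        */
{ merge_class(deref(pt_of(x)), pt_of(y)); }

void rule_load(const char *x, const char *y)                         /* x = *y        */
{ merge_class(pt_of(x), deref(pt_of(y))); }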
4.1.4 Function Calls
A function call can considerably
affect locality information. By using pointer
arguments and global variables, it can modify the locality
attribute, of any set of points-to classes. To avoid
always making worst-case assumptions at function calls,
locality analysis uses the results of interprocedural read-write
(mod/ref) analysis, which is computed by our
read-write set analysis in Step I of the compiler (re-
fer to
Figure
2 in Section 2.2). Based on the read-write
information we have two important cases:
Case I: The function call does not write to any pointer
variable visible in the caller (including globals). This
guarantees that the call does not affect the points-to and
hence also locality information in the caller. In this case
a call statement of the form x = f(...)@expr can
only affect the points-to relationship/locality attribute
of variable x.
class of x is updated depending upon the locality attribute
of the points-to class of return f (return f is a
symbolic name which represents the value returned by
a function f), and the optional @expr used for the call.
If the function f is a basic function, or called @HOME,
and f returns a pointer (return f) pointing to Local,
i.e. Locality(PointsToClass(return f)) is Local, it implies
that the location pointed to by return f resides
fun locality_analysis(prog) =
  spclPool := all functions in prog;          /* need to analyze all functions initially */
  initialize_locality(prog);
  while (spclPool is not empty)               /* set of functions to be analyzed is non-empty */
    propagate_locality(prog, spclPool);

fun propagate_locality(prog, spclPool) =
  foreach func in spclPool                    /* intraprocedural propagation */
    propagate_intraprocedural_locality(func);
    deletFromSplcPool(func, spclPool);        /* no need to be analyzed again */
  foreach callSite in prog                    /* interprocedural propagation */
    if (process_callSite(callSite) != NULL)   /* newly specialized function is created */
      addToSpclPool(callSite.func, spclPool); /* new func needs to be analyzed */

fun propagate_intraprocedural_locality(func) =
  foreach assignmentStmt in func
    locality_analyze_stmt(assignmentStmt, localitySet);
  foreach callStmt in func
    locality_analyze_call(callStmt, localitySet);

fun process_callSite(callSite) =
  if (callSite.type == @HOME || callSite.type == @OWNER_OF)
    localitySet := locality of actuals;       /* find which params will be Local in the callee function */
    if (specialFuncExists(callSite.func, localitySet))  /* specialized version already exists with same locality set */
      retarget callSite to that version; return NULL;   /* no new function is created */
    else
      newFunc := specialize(callSite.func, localitySet);/* new func for callSite */
      record localitySet for newFunc;         /* locality for new func */
      return newFunc;                         /* new specialized function is created */

fun initialize_locality(prog) =
  foreach func in prog
    mark formal parameters and globals Remote;/* conservatively assume all parameters and globals point-to remote memory */

Figure 8: Overall Algorithm for Locality Analysis
on the same processor. In this case, the new locality
attribute of points-to class of x is obtained by merging
it with Locality(PointsToClass(return f)). Otherwise
it is simply assigned Remote, as per the rule below.
if (expr == @HOME or IsBasicFunc(f)) and Locality(PointsToClass(return_f)) == Local
then Locality(PointsToClass(x)) := Locality(PointsToClass(x)) ./ Locality(PointsToClass(return_f))
else Locality(PointsToClass(x)) := Remote
Case II: The other alternative is that the function
call possibly writes to pointer variables of the caller. In
this case we make worst-case assumptions and set the
locality attribute of points-to classes of all arguments
as Remote. We do it recursively for points-to classes of
variables reachable via indirection on arguments: i.e.
we also set Locality(PointsToClass(PointsToClass(arg))), and so on, to Remote.
We do not need to consider globals here as we already
initialize their locality attribute as Remote at the start
of the analysis.
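A hypothetical sketch of this call-site handling (Cases I and II) follows; the helper names such as callee_writes_visible_pointer and set_remote_transitively are ours, and Locality/pt_class are as in the earlier sketches.

typedef struct pt_class pt_class;
typedef enum { LOC_UNKNOWN, LOC_LOCAL, LOC_REMOTE } Locality;

extern int       callee_writes_visible_pointer(const char *callee);  /* from read-write sets */
extern int       callee_is_basic(const char *callee);
extern pt_class *pt_of_var(const char *var);
extern pt_class *pt_of_return(const char *callee);                   /* class of return_f    */
extern Locality  locality_of(pt_class *c);
extern void      set_locality(pt_class *c, Locality l);
extern void      set_remote_transitively(pt_class *c);  /* class and everything reachable    */

/* Handling of  x = f(...)@expr  at a call site, following Cases I and II above. */
void analyze_call(const char *x, const char *callee, int called_at_home,
                  const char **args, int nargs)
{
    if (callee_writes_visible_pointer(callee)) {         /* Case II: worst-case assumptions  */
        for (int i = 0; i < nargs; i++)
            set_remote_transitively(pt_of_var(args[i]));
        return;
    }
    /* Case I: only x can be affected */
    pt_class *xc  = pt_of_var(x);
    pt_class *ret = pt_of_return(callee);
    if ((called_at_home || callee_is_basic(callee)) && locality_of(ret) == LOC_LOCAL) {
        /* merge the locality of x's class with that of return_f */
        Locality merged = locality_of(xc) > locality_of(ret) ? locality_of(xc)
                                                             : locality_of(ret);
        set_locality(xc, merged);
    } else {
        set_locality(xc, LOC_REMOTE);
    }
}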
4.2 Specializing Functions
After completion of Step II of the algorithm, we have
computed a set of possible specializations, and the associated
locality. We use static estimates of the number
of remote accesses saved to decide which specialized
functions to actually create. For a given locality context
of a function, we compute the following data: (i)
weight of the corresponding call-site that reflects its potential
execution frequency, and (ii) count of the remote
accesses that can be eliminated by creating the specialized
function. A specialized function is created for the
given locality context if we find its (Weight * Count)
estimate greater than our threshold, which we set to 20
by default.
To compute the weight for call-sites, we first initialize
the weight of all call sites to one. For each loop or recursion
cycle, in which a call-site is embedded, its weight is
multiplied by ten. The count of remote accesses saved
is similarly estimated. A simple remote access saved
is counted as one, while a remote access saved inside
a loop is counted as ten. Further if some call sites inside
the specialized function can also be specialized, we
also add the number of remote accesses saved from this
chain-specialization to the count.
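A hypothetical sketch of this estimate (the threshold of 20 and the factor-of-ten weighting are taken from the description above; the function and parameter names are ours):

#define SPCL_THRESHOLD 20   /* default threshold from the text */

/* Static benefit estimate for creating a specialized version of a callee
   for one locality context, as described above. */
int worth_specializing(int call_site_loop_depth,
                       int remote_saved_outside_loops,
                       int remote_saved_inside_loops,
                       int chained_specialization_savings)
{
    int weight = 1;
    for (int d = 0; d < call_site_loop_depth; d++)
        weight *= 10;                          /* x10 per enclosing loop or recursion cycle */

    int count = remote_saved_outside_loops     /* each plain remote access saved counts 1   */
              + 10 * remote_saved_inside_loops /* ...and 10 if saved inside a loop          */
              + chained_specialization_savings;

    return weight * count > SPCL_THRESHOLD;
}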
5 Experimental Results
In order to evaluate our approach, we have experimented
with five benchmarks from the Olden suite [8], described
in
Table
1. All benchmarks use dynamic data structures
(trees and lists) except quicksort, which uses
dynamically-allocated arrays. The benchmark suite is
suitable to evaluate our locality analysis focused on pointers
Benchmark    Description                                         Problem Size
power        Optimization problem based on a variable            10,000 leaves
             k-nary tree
perimeter    Computes the perimeter of a quad-tree               Maximum tree-depth 11
             encoded raster image
quicksort    Parallel version of quicksort                       256K integers
tsp          Finds a sub-optimal tour for the traveling          cities
             salesman problem
health       Simulates the Colombian health-care system          6 levels and 100 iterations
             using a 4-way tree

Table 1: Benchmark Programs
For each benchmark we provide results for three ver-
sions: a simple EARTH-C version, a localized EARTH-
C version and an advanced EARTH-C version. Hence-
forth, we will refer to them respectively as simple, localized
and advanced versions. The simple version implements
the benchmark with the best data distribution
that we have discovered to date for these benchmarks,
and exploits locality using the @OWNER OF and @HOME
primitives. It uses neither local pointers nor basic functions
for this purpose. However, it can use basic functions
for performing computations. The localized version
is the benchmark obtained by applying our locality
analysis and subsequent function specialization on the
simple version. This version tries to find as many local
pointers as possible. The advanced version is the hand-coded
benchmark where the user tries to optimally exploit
locality using local pointers, basic functions and
other possible tricks. This version is based on the best
efforts of our group to produce good speedups. None of
these advanced versions were implemented by the authors
Note that all three versions use the same general dynamic
data distribution. However, the generated low-level
program exploits locality to different degrees. Stat-
ically, we can divide memory references into those that
must be local, and those that might be remote. Each
reference that must be local is translated into an ordinary
C variable reference, which may be allocated to
registers by the target compiler, or may be cached by
the architecture. Each reference that might be remote
is translated into a call to the EARTH runtime system.
These calls to the runtime system may be resolved into
memory requests to local memory, or to remote mem-
ory, depending on the calling context. If at runtime,
it resolves to a local memory reference, then we call
this a pseudo-remote memory access. Note that pseudo-
remote references are much more expensive than local
references, but not as expensive as real remote references
that must read or write data over the communication
network.
Our locality analysis and specializations effectively
introduce more static declarations for local pointer vari-
ables, and thus at runtime we execute fewer pseudo-
remote references and more local references. As explained
in the previous sections, this is done by automatically
introducing local pointer declarations, and by
introducing specialized versions of functions that capture
locality for specific calling contexts.
In
Table
2 we summarize the static effect of applying
our locality analysis and specializations to the simple
versions of our benchmarks. For each benchmark, the
first two columns list the number of local pointer declarations
introduced, and the number of function specializations
made in producing the localized version of
the benchmark. The third column gives the relative
sizes (lines of code) of the simple and advanced versions.
Note that simple versions are all shorter, and sometimes
substantially shorter than the advanced, hand-
specialized programs.
Table 2: Static Measurements -- for each benchmark, the number of local pointer
declarations introduced (#locals), the number of function specializations
(#spcls), and the relative sizes of the simple and advanced versions
(size(Simple)/size(Advanced)); the numeric entries were lost in extraction.
In
Table
3 we provide data on the actual execution
time, in milliseconds, for all three versions of each
benchmark. These experiments were performed on the
EARTH-MANNA architecture as described in Section
2.3. The last two columns give the percentage speedup
obtained over the simple version for the localized and
advanced versions respectively. For example, the column
labeled "Localized vs. Simple" reports (T simple \Gamma
localized )=T simple 100, and the column labelled "Ad-
vanced vs. Simple" reports (T simple \GammaT advanced )=T simple
100. We provide data for benchmark runs on 1, 8 and
processors.
In
Table
4 we present the actual number of remote
data accesses and remote calls performed by different
versions of each benchmark. The last two columns of
this table give the percentage reduction in the number
of remote data accesses/remote calls over the simple version
for the localized and advanced versions respectively.
Benchmark             Simple     Localized   Advanced   Localized    Advanced
                      EARTH-C    EARTH-C     EARTH-C    vs. Simple   vs. Simple
                      (msec)     (msec)      (msec)     (% impr)     (% impr)
power      1 proc     67158.06   64659.42    63482.45    3.72         5.47
           8 procs     9132.92    8846.54     8651.86    3.14         5.27
perimeter  1 proc      7095.55    5966.37     5255.03   15.91        25.90
           8 procs     1220.71     894.86      872.59   26.70        28.50
              procs     748.31     546.23      523.32   27.00        30.06
           8 procs     5394.66    5020.13     4587.74    6.94        15.00
           8 procs    17193.55   15104.78     1116.16   12.10        93.50

Table 3: Execution Time (several rows and row labels were lost in extraction)
Note that the number of remote accesses performed is
independent of the number of processors used for a given
program, when we do not differentiate between
a real remote access and a pseudo-remote access.
The data in Tables 3 and 4 indicates that the localized
version always performs better than the simple
version, i.e. our locality analysis is always able to identify
some additional locality. Further, the percentage
improvement can vary a lot, depending upon the bench-
mark. The localized version comes very close to the
advanced version for the first three of the five bench-
marks. In the last two cases the localized version does
give an improvement, but does not compete with the
hand-coded advanced version. We analyze these results
in detail individually for each benchmark below.
Power: This benchmark implements the power system
optimization problem [9]. It uses a four-level tree structure
with different branching widths at each level.
Our locality analysis achieves 3-4% improvement over
the simple version in execution time, which is quite
close to the advanced version (5%). However, it is able
to achieve an 80% reduction in the number of remote
data accesses (Table 4). This happens because function
calls in this benchmark are typically of the format
compute_node(node)@OWNER_OF(node), and the function
performs numerous scalar data accesses of the form
(*node).item. Our analysis captures the locality of the
pointer node and eliminates all remote data accesses
with respect to it.
The significant reduction in remote accesses does not
reflect in the execution time, as the benchmark spends
most of the time in performing floating point operations,
which far exceeds the time spent in data accesses. The
advanced version achieves 93% reduction in the number
of remote calls over both the simple and localized
version, by using basic functions. This factor enables
it to achieve slightly better speedup than the localized
version.
Perimeter: This benchmark computes the perimeter of
a quad-tree encoded raster image [9]. The unit square
image is recursively divided into four quadrants until
each one has only one point. The tree is then traversed
bottom-up to compute the perimeter of each quadrant.
The localized version achieves 15-27% speedup and
comes very close to the advanced version for the 8 and
processor runs. The localized version has 32% fewer remote
accesses. This reduction more significantly affects
the execution time than for power benchmark, because
the benchmark does not involve much computation and
spends most of its time in traversing the quad-tree and
hence performing data accesses.
It is an irregular benchmark, and each computation
requires accesses to tree nodes which may not be physically
close to each other. Due to this characteristic,
the advanced version cannot exploit additional locality
using basic functions. The localized version thus competes
very well with it.
Quicksort: This benchmark is a parallel version of the
standard quicksort algorithm. The two recursive calls
to quicksort are executed in parallel, with the call for
the bigger subarray invoked @HOME. Because the size of
subarrays in each recursive sorting phase is unknown in
advance, dynamically-allocated arrays are used.
In the simple version, function qsort copies the in-coming
array to a local array using a blkmov, and at
the end copies back the local array to the incoming
array using another blkmov. Our locality analysis is
able to identify the locality of the incoming array for
the @HOME call. It generates a specialized version of the
recursive qsort function, with the incoming array declared
as local pointer, and the two blkmov instructions
substituted by calls to the basic function memcpy. This
transformation enables the localized version to achieve a
significant 80% speedup over the simple version, which
is within 1% of the speedup obtained by the advanced
version. The advanced version uses some additional basic
functions to completely eliminate remote calls, but
performs just a little better than the localized version.
Benchmark     Simple      Localized    Advanced   Localized    Advanced
              EARTH-C     EARTH-C      EARTH-C    vs. Simple   vs. Simple
                                                  (% red)      (% red)
power          2294179      451179       204179    80.33        91.10
perimeter      2421800     1635111      1586323    32.48        34.49
quicksort      8498635       29128          216    99.65        99.99
tsp            4421050     2672068       829790    39.56        81.23
health        41409726    33148575          606    19.94        99.99

Table 4: Remote Accesses Saved
Tsp: This benchmark solves the traveling salesman problem
using a divide-and-conquer approach based on close-
point algorithm [9]. This algorithm first searches a sub-optimal
tour for each subtree(region) and then merges
subtours into bigger ones. The tour found is built as
a circular linked list sitting on top of the root nodes of
subtrees.
Similar to perimeter, this benchmark is irregular in
nature and spends a significant amount time on data
accesses. The localized version achieves 6-8% speedup
from a 39% reduction in remote data accesses.
The localized version, however, fails to compete with
the advanced version, which achieves upto 20% speedup
resulting from an 81% reduction in remote data ac-
cesses. This happens because the linked lists representing
tours, are distributed in segments and there are only
very few links across processors. With the knowledge
that an entire sublist is local, the advanced version exploits
significant data locality by using basic functions
to traverse these local sublists. Our locality analysis is
not designed to identify locality of recursive fields. This
kind of locality is implicit in the programmer's organization
of the data, and is very difficult to find with
compiler analyses.
Health: This benchmark simulates the Colombian health-care
system using a 4-way tree [9]. Each village has four
child villages, and a village hospital, treating patients
from the villages in the same subtree. At each time
step, the tree is traversed, and patients, once assessed,
are either treated or passed up to the parent tree node.
The 4-way tree is evenly distributed among the processors
and only top-level tree nodes have their children
spread among different processors.
With the locality analysis, we are able to achieve
up to 12% speedup, resulting from a 19% reduction
in remote data accesses. This benchmark is similar to
power in its call pattern. However, for a call of the format
foo(village_node)@OWNER_OF(village_node), it accesses
recursive data structures through village_node
(a linked list of patients) as opposed to scalar data items
like in power. Our analysis is able to eliminate all remote
accesses with respect to village_node, like
list = (*village_node).list, but not further accesses
like (*list).patient.
Thus the localized version gets decent speedup over
the simple version, but does not compete with the advanced
version. With the knowledge that only top level
tree nodes can have remote children, the advanced version
eliminates almost all the remote data accesses and
calls. In this regard, this benchmark is similar to tsp.
5.1
Summary
In summary, we find that our locality analysis does give
significant improvements in all cases. However, in some
cases locality analysis cannot compete with hand-coded
locality mapping and declarations provided by the pro-
grammer. As the programmer may have some implicit
knowledge of data locality, it is important to retain the
ability to explicitly declare local pointers.
Since in many cases the declarations can be auto-
mated, and the locality analysis and specialization is
a source-to-source transformation, one could imagine
that the programmer could use the output of the compiler
to produce a localized version of the program,
and then test the program to see if acceptable speedup
is achieved. If there is more locality in the program,
then the programmer could add further locality declarations
in order to further improve the program. Finally,
note that our experiments are performed on EARTH-
MANNA. Compared with other distributed memory systems
like the IBM SP2, or a network of workstations,
EARTH-MANNA has a much smaller memory latency,
which sometimes can be further hidden by multi-threading
techniques. Therefore, we can expect better speedup
from locality analysis for those machines with larger
memory latencies and those that do not support multi-threading
techniques.
6 Related Work
Our intraprocedural locality propagation approach is
similar to Steensgaard's [7] linear points-to analysis al-
gorithm. Other related work in the area of parallelizing
programs with dynamic data structures is that of
Carlisle [9] for distributed memory machines, and of
Rinard and Diniz [10] for shared memory machines.
In particular, Carlisle's [9] affinity analysis has similar
goals as our locality analysis, albeit we target different
kinds of locality.
Steensgaard proposed a type-inference based algorithm
for points-to analysis with almost linear time com-
plexity. His algorithm is both flow-insensitive and context-
insensitive. On encountering a call-site he simply merges
the formal parameters with their respective actual argu-
ments. Thus his algorithm cannot distinguish between
information arriving at a function from different calling
contexts.
For our locality analysis, the calling context information
is crucial. The invocation specification (@HOME,
@OWNER OF) of the call site is a major source of locality
information. Thus we do not want to merge locality
information arriving from different calling contexts.
On the contrary, we want to create specialized versions
of a function for each calling context that provides us
substantial locality. To this end, we use type-inference
to only propagate information intraprocedurally, and
employ a different technique for interprocedural propagation
as explained in section 4. However, to ensure
that our intraprocedural propagation collects conservatively
safe information, we need to make conservative
assumptions at the start of the procedure and on encountering
call-sites (using read-write set information).
Thus, although our approach was originally inspired by
the points-to analysis, it is really specifically tailored to
capture the information most relevant to locality analysis
Carlisle's affinity analysis is designed to exploit the
locality with respect to linked fields. His analysis relies
upon the information regarding the probability (affin-
ity) that nodes accessible by traversing a linked field
are residing on the same processor. If the affinity is
high, then he puts runtime checks to eliminate remote
accesses. However, the affinity information is not infered
by the compiler, but is provided via programmer
annotations.
Our locality analysis is not designed to exploit the
locality achievable via linked fields (as discussed in section
because we wanted our analysis to be an automatic
compiler analysis, and we did not want to burden
the user for additional detailed information. Locality
of recursive fields can be explicitly declared using local
pointer declarations in EARTH-C.
7 Conclusions and Further Work
In this paper we have presented a locality analysis for
parallel C programs, based on type-inference techniques.
Our analysis tries to exploit additional fine-grain locality
from the coarse-grain locality information already
provided by the user and from other program characteristics
like malloc sites. We evaluated its effectiveness
for a set of benchmarks and found that it can eliminate
significant number of pseudo-remote accesses and provide
speedups ranging from a modest 4% up to 80% over
the original parallel program. For several benchmarks,
this speedup also comes very close to the speedup obtained
by an advanced hand-coded version. Further-
more, we found that the locality analysis reduced the
burden on the programmer, and allowed us to develop
shorter, more general benchmarks.
Based on the encouraging results from this paper, we
plan to evaluate our analysis on a wider set of bench-
marks. We also plan to use flow-sensitive locality propagation
techniques, that can exploit the locality of a
pointer, even if it is local within only a specific section
of the program. Another goal is to automatically identify
basic functions.
Finally, as the locality information for linked fields
can sometimes provide significant speedups (for benchmarks
health and tsp), we plan to extend our analysis to
capture this type of locality, using profile information,
and efficiently-scheduled runtime checks.
Acknowledgements
We gratefully acknowledge the support from people in
the EARTH group, specially Prof. Guang R. Gao and
Olivier Maquelin. This research was funded in part by
NSERC, Canada.
--R
"Compiling C for the EARTH multithreaded architecture,"
"A study of the EARTH-MANNA multithreaded system,"
"Polling Watchdog: Combining polling and interrupts for efficient message handling,"
"Designing the McCAT compiler based on a family of structured intermediate representations,"
"La- tency hiding in message-passing architectures,"
"Efficient type inference for higher-order binding-time analysis,"
"Points-to analysis in almost linear time,"
"Supporting dynamic data structures on distributed-memory machines,"
Olden: Parallelizing Programs with Dynamic Data Structures on Distributed-Memory Machines
"Commutativity analysis: A new analysis framework for parallelizing compilers,"
--TR
--CTR
Francisco Corbera , Rafael Asenjo , Emilio L. Zapata, A Framework to Capture Dynamic Data Structures in Pointer-Based Codes, IEEE Transactions on Parallel and Distributed Systems, v.15 n.2, p.151-166, February 2004
Oscar Plata , Rafael Asenjo , Eladio Gutirrez , Francisco Corbera , Angeles Navarro , Emilio L. Zapata, On the parallelization of irregular and dynamic programs, Parallel Computing, v.31 n.6, p.544-562, June 2005 | compiling for parallel architectures;locality analysis;multithreaded architectures |
298815 | An Index-Based Checkpointing Algorithm for Autonomous Distributed Systems. | AbstractThis paper presents an index-based checkpointing algorithm for distributed systems with the aim of reducing the total number of checkpoints while ensuring that each checkpoint belongs to at least one consistent global checkpoint (or recovery line). The algorithm is based on an equivalence relation defined between pairs of successive checkpoints of a process which allows us, in some cases, to advance the recovery line of the computation without forcing checkpoints in other processes. The algorithm is well-suited for autonomous and heterogeneous environments, where each process does not know any private information about other processes and private information of the same type of distinct processes is not related (e.g., clock granularity, local checkpointing strategy, etc.). We also present a simulation study which compares the checkpointing-recovery overhead of this algorithm to the ones of previous solutions. | Introduction
Checkpointing is one of the techniques for providing
fault-tolerance in distributed systems [6]. A global check-point
consists of a set of local checkpoints, one for each pro-
cess, from which a distributed computation can be restarted
after a failure. A local checkpoint is a state of a process
saved on stable storage. A global checkpoint is consistent
if no local checkpoint in that set happens before [9] another
one [4, 10].
Three classes of algorithms have been proposed in the
literature to determine consistent global checkpoints: un-
coordinated, coordinated and communication induced [6].
In the first class, processes take local checkpoints independently
on each other and upon the occurrence of a failure,
a procedure of rollback-recovery tries to build a consistent
global checkpoint. Note that, a consistent global checkpoint
might not exist producing a domino effect [1, 12] which, in
the worst case, rollbacks the computation to its initial state.
In the second class, an initiator process forces other pro-
cesses, during a failure-free computation, to take a local
checkpoint by using control messages. The coordination
can be either blocking [4] or non-blocking [8]. However, in
both cases, the last local checkpoint of each process belongs
to a consistent global checkpoint.
(This work is partially supported by the Scientific Cooperation Network
of the European Community "OLOS" under contract No. ERB4050PL932483.)
In the third class, the coordination is done in a lazy
fashion by piggybacking control information on application
messages. Communication-induced checkpointing algorithms
can be classified in two distinct categories: model-based
and index-based [6]. Algorithms in the first cate-
gory, for example [2, 14], have the target to mimic a piece-wise
deterministic behavior for each process [7, 13] as well
as providing the domino-free property. Index-based algorithms
associate each local checkpoint with a sequence
number and try to enforce consistency among local checkpoints
with the same sequence number [3, 5, 11]. Index-based
algorithms ensure domino-free rollback with, gener-
ally, less overhead, in terms of number of checkpoints and
control information, than model-based ones.
In this paper we present an index-based checkpointing
protocol that reduces the checkpointing overhead in terms
of number of checkpoints compared to previous index-based
algorithms. Our protocol is well suited for autonomous
environments where each process does not have
any private information of other processes.
To design our algorithm, we extract the rules, used by
index-based algorithms, to update the sequence number.
This points out that checkpoints are due to the process of
increasing of the sequence numbers. Hence, we derive an
algorithm that, by using an equivalence relation between
pair of successive checkpoints of a process, allows a recovery
line to advance without increasing its sequence number.
In the worst case, our algorithm takes the same number of
checkpoints as in [11]. The advantages of our algorithm are
quantified by a simulation study showing that the check-pointing
overhead can be reduced up to 30% compared to
the best previous solution. The price we pay is that each
application message piggybacks more control information
(one vector of integers) compared to previous proposals.
The paper is organized as follows. Section 2 presents the
system model. Section 3 shows the class of index-based
checkpointing algorithms in the context of autonomous en-
vironments. Section 4 describes the equivalence relation
and the proposed algorithm. In Section 5 a simulation study
is presented.
2. Model of the Distributed Computation
We consider a distributed computation consisting of n
processes P 1 , P 2 , ..., P n which interact by exchanging
messages. Each pair of processes is connected by a two-way
reliable channel whose transmission delay is unpredictable
but finite.
Processes are autonomous in the sense that they do not
share memory, do not share a common clock value 1 and do
not have access to private information of other processes
such as clock drift, clock granularity, clock precision and
speed. Recovery actions due to process failures are not considered
in this paper.
A process produces a sequence of events; each event
moves the process from one state to another. We assume
events are produced by the execution of internal, send or
receive statements. Moreover, for simplicity, we consider
a checkpoint C as a particular type of internal event of a
process, which dumps the current process state onto stable
storage. The send and receive events of a message m
are denoted respectively with send(m) and receive(m). A
distributed execution can be modeled as a partial order of
events Ê = (E, →), where E is the set of all events and →
is the happened-before relation [9].
A checkpoint in process P i is denoted as C i;sn where sn
is called the index, or sequence number, of a checkpoint.
Each process takes checkpoints either at its own pace (ba-
sic checkpoints) or induced by some communication pattern
(forced checkpoints). We assume that each process P i takes
an initial basic checkpoint C i;0 and that, for the sake of sim-
plicity, basic checkpoints are taken by a periodic algorithm.
We use the notation next(C i;sn ) to indicate the successive
checkpoint, taken by P i , after C i;sn . A checkpoint interval
I i;sn is the set of events between C i;sn and next(C i;sn ).
A message m sent by P i to P j is called orphan with
respect to a pair (C i;sn i , C j;sn j ) iff its receive event occurred
before C j;sn j while its send event occurred after C i;sn i . A global
checkpoint C is a set of local checkpoints {C 1;sn 1 , ..., C n;sn n },
one for each process. A global
checkpoint C is consistent if no orphan message exists in
any pair of local checkpoints belonging to C. In the fol-
lowing, we denote with C sn a global checkpoint formed by
checkpoints with sequence number sn 2 .
1 The index-based algorithm presented in [5] assumes, for example, a
standard clock synchronization algorithm, which provides a common clock
value to each process.
We use the term consistent global checkpoint Csn and recovery line
Lsn interchangeably.
3. Index-based Checkpointing Algorithms
The simplest way to form a consistent global checkpoint
is, each time a basic checkpoint C i;sn is taken by process
P i , to start an explicit coordination. This coordination results
in a consistent global checkpoint C sn associated to that
local checkpoint. This strategy induces n checkpoints
(one for each process) per basic checkpoint.
Briatico at al. [3] argued that the previous "centralized"
strategy can be "decentralized" in a lazy fashion by piggy-backing
on each application message m the index sn of the
last checkpoint taken (denoted m:sn).
Let us assume each process P i endows a variable sn i
which represents the sequence number of the last check-
point. Then, the Briatico-Ciuffoletti-Simoncini (BCS) algorithm
can be sketched by using the following rules associated
with the action to take a local checkpoint:
take-basic(BCS): When a basic checkpoint is scheduled, sn i is increased by
one and a checkpoint C i;sn i is taken;
take-forced(BCS): Upon the receipt of a message m, if sn i < m:sn then a
checkpoint C i;m:sn is taken and sn i is set to m:sn, then the message is
processed.
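A minimal C sketch of the two BCS rules (the variable and function names are ours, and the checkpointing primitive is assumed to exist):

/* Per-process BCS state: sn is the sequence number of the last checkpoint. */
static int sn = 0;

extern void save_checkpoint(int index);        /* assumed: dump state with index sn */

void bcs_take_basic(void) {                    /* take-basic(BCS)  */
    sn = sn + 1;
    save_checkpoint(sn);
}

void bcs_on_receive(int m_sn /*, payload */) { /* take-forced(BCS) */
    if (sn < m_sn) {
        sn = m_sn;
        save_checkpoint(sn);                   /* forced checkpoint C_{i,m.sn} */
    }
    /* ... process the message ... */
}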
By using the above rules, it has been proved that
C sn is consistent [3]. Note that, due to the rule
take-forced(BCS), there could be some gap in the index
assigned to checkpoints by a process. Hence, if a process
has not assigned the index sn, the first local checkpoint
of the process with sequence number greater than sn can be
included in the consistent global checkpoint C sn .
In the worst case of BCS algorithm, the number of forced
checkpoints induced by a basic one is n \Gamma 1. In the best case,
if all processes take a basic checkpoint at the same physical
time, the number of forced checkpoints per basic one
is zero. However, in an autonomous environment, local indices
of processes may diverge due to many causes (process
speed, different period of the basic checkpoint etc). This
pushes the indices of some processes higher and each time
one of such processes sends a message to another one, it is
extremely likely that a number of forced checkpoints, close
to n - 1, will be induced.
To reduce the number of checkpoints, an interesting observation
comes from the Manivannan-Singhal algorithm
[11] which has been designed for non-autonomous distributed
systems. There is no reason to take a basic check-point
if at least one forced checkpoint has been taken during
the current checkpoint interval. So, let us assume process
P i has a boolean flag skip i which indicates if at least one
forced checkpoint is taken in the current checkpoint period
(this flag is set to FALSE each time a basic checkpoint is
scheduled, and set to TRUE each time a forced checkpoint
is taken). A version of Manivannan-Singhal (MS) algorithm
well suited for autonomous environment can be sketched by
the following rules:
take-basic(MS): When a basic checkpoint is scheduled, if skip i = FALSE
then sn i is increased by one and a checkpoint C i;sn i is taken, otherwise
skip i is set to FALSE (the basic checkpoint is skipped);
take-forced(MS): Upon the receipt of a message m, if sn i < m:sn then a
checkpoint C i;m:sn is taken, sn i is set to m:sn and skip i is set to TRUE, then
the message is processed.
Even though MS algorithm produces a reduction
of the total number of checkpoints, the number of
forced checkpoints caused by a basic one is equal
to BCS as take-forced(MS) is actually similar to
take-forced(BCS).
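A corresponding C sketch of the MS rules with the skip flag (again, names and the checkpointing primitive are assumed):

static int sn = 0;
static int skip = 0;                            /* set when a forced checkpoint was taken */

extern void save_checkpoint(int index);

void ms_take_basic(void) {                      /* take-basic(MS)  */
    if (skip) {
        skip = 0;                               /* a forced checkpoint already covers this period */
    } else {
        sn = sn + 1;
        save_checkpoint(sn);
    }
}

void ms_on_receive(int m_sn) {                  /* take-forced(MS) */
    if (sn < m_sn) {
        sn = m_sn;
        save_checkpoint(sn);                    /* forced checkpoint */
        skip = 1;
    }
    /* ... process the message ... */
}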
In this section we propose an algorithm that includes the
take-basic(MS) rule, however, when a basic check-point
is taken, the local sequence number is updated only
if there was the occurrence of a particular checkpoint and
communication pattern. The rationale behind this solution
is that each time a basic checkpoint is taken without increasing
the sequence number, it does not force checkpoints and
this reduces the total number of checkpoints. At this end, let
us first introduce a relation of equivalence defined on pairs
of successive checkpoints of a process.
4.1. Equivalence Relation Between Checkpoints
Definition 4.1
Two local checkpoints C i;sn and next(C i;sn ) of process P i
are equivalent with respect to the recovery line L sn , denoted
C i;sn ≡ Lsn next(C i;sn ), iff no message received by P i in the
checkpoint interval I i;sn has been sent after the checkpoint of the
sender belonging to L sn (i.e., replacing C i;sn with next(C i;sn ) in
L sn does not create any orphan message).

Figure 1. Examples of pairs of equivalent
checkpoints.
As an example, let us consider the recovery line L sn depicted
in Figure 1. If in I 2;sn process P 2 executes either
send events or receive events of messages which have been
sent before the checkpoints included in the recovery line
L sn , then C 2;sn ≡ Lsn next(C 2;sn ) and a new
recovery line L' sn
is created by replacing C 2;sn with next(C 2;sn ) from L sn .
Figure 1 also shows the construction of the recovery line
L'' sn starting from L' sn by using the equivalence between
C 1;sn and next(C 1;sn ) with respect to L' sn .
As shown in the above examples, the equivalence relation
has a simple property (see Lemmas 4.1 and 4.2 of Section
4.5): if C i;sn ≡ Lsn next(C i;sn ), then the set
L' sn = (L sn \ {C i;sn }) ∪ {next(C i;sn )} is a recovery line. Hence, the presence
of a pair of equivalent checkpoints allows a process
to locally advance a recovery line without updating the sequence
number.
4.2. Sequence and Equivalence Numbers of a Recovery
line
We suppose process P i owns two local variables: sn i
(sequence number) and en i (equivalence number). The
variable sn i stores the number of the current recovery line.
The variable en i represents the number of equivalent local
checkpoints with respect to the current recovery line (both
sn i and en i are initialized to zero).
Hence, we denote as C i;sn;en the checkpoint of P i with
the sequence number sn and the equivalence number en;
the pair < sn, en > is also called the index of a check-
point. Thus, the initial checkpoint of process P i will be
denoted as C i;0;0 . The index of a checkpoint is updated according
to the following rule: if C i;sn;en ≡ Lsn next(C i;sn;en )
then next(C i;sn;en ) = C i;sn;en+1 , otherwise next(C i;sn;en ) =
C i;sn+1;0 .
Process P i also endows a vector EQ i of n integers. The
j-th entry of the vector represents the knowledge of P i
about the equivalence number of P j with the current sequence
number sn i (thus the i-th entry corresponds to en i ).
EQ i is updated according to the following rule: each
application message m sent by process P i piggybacks the
current sequence number sn i (m:sn) and the current EQ i
vector (m:EQ). Upon the receipt of a message m, if
m:sn = sn i , EQ i is updated from m:EQ by taking a
component-wise maximum. If m:sn > sn i , the values in
m:EQ and m:sn are copied in EQ i and sn i .
Let us remark that the set L = {C j;sn i ;EQ i [j] : j = 1, ..., n} is a
recovery line (a sketch proof of this property is given in
Lemma 4.4). So, to the knowledge of P i , the vector EQ i
actually represents the most recent recovery line with sequence
number sn i .
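The control information carried by each message and its handling on receipt can be sketched in C as follows (N, the array layout, and the function name are ours):

#define N 8                      /* number of processes, assumed */

static int sn_i;                 /* current sequence number                    */
static int EQ_i[N];              /* known equivalence numbers, EQ_i[i] = en_i  */

/* Update of sn_i and EQ_i on receipt of a message carrying (m_sn, m_EQ). */
void update_on_receive(int m_sn, const int m_EQ[N]) {
    if (m_sn == sn_i) {
        for (int h = 0; h < N; h++)              /* component-wise maximum */
            if (m_EQ[h] > EQ_i[h])
                EQ_i[h] = m_EQ[h];
    } else if (m_sn > sn_i) {
        sn_i = m_sn;                             /* adopt the newer recovery line */
        for (int h = 0; h < N; h++)
            EQ_i[h] = m_EQ[h];
    }
    /* if m_sn < sn_i the control information is simply ignored */
}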
4.3. Tracking the Equivalence Relation
Upon the arrival of a message m, sent by P j , at P i in
the checkpoint interval I i;sn;en , one of the following three
cases is true:
1. m has been sent from the left side of the recovery line
formed by the checkpoints C j;sn;EQ i [j] , for all j;
2. m:sn = sn i and m has been sent
from the right side of the recovery line formed by the checkpoints C j;sn;EQ i [j] , for all j;
3. m:sn > sn i , i.e., m has
been sent from the right side of a
recovery line of which P i was not aware.
According to previous cases, at the time of the check-point
next(C i;sn;en ), in one of the following three
alternatives:
(i) If no message m is received in I i;sn;en that falls in
case 2 or 3, then C i;sn;en ≡ Lsn
next(C i;sn;en ). That equivalence
can be tracked by a process using its local context
at the time of the checkpoint next(C i;sn;en ). Thus
next(C i;sn;en ) = C i;sn;en+1 (the equivalence between
C 2;sn;0 and next(C 2;sn;0 ), shown in Figure 2, is an example
of such a behavior).
(ii) If there exists a message m which falls in case 3
then C i;sn;en is not equivalent to next(C i;sn;en ) and thus
next(C i;sn;en ) cannot be assigned the index < sn, en+1 >.
(iii) If no message falls in case 3 and there exists at least
a message m received in I i;sn;en which falls in case 2,
then the checkpoint next(C i;sn;en ) is causally related to
one checkpoint belonging to the recovery line formed by
the checkpoints C j;sn;EQ i [j] (such a communication pattern is shown in
Figure 2, where, due to m, C 2;sn;0 → next(C 1;sn;0 )).
The consequence is that process P i cannot determine,
at the time of taking the checkpoint next(C i;sn;en ), if
C i;sn;en ≡ Lsn next(C i;sn;en ). In this case, P i assumes optimistically
(and provisionally) that C i;sn;en ≡ Lsn next(C i;sn;en ).
As provisional indices cannot be propagated in the system,
if at the time of the first send event after next(C i;sn;en ) the
equivalence is still undetermined, then next(C i;sn;en ) is permanently assigned
the index < sn+1, 0 >. Otherwise, the provisional index becomes permanent.
Figure
2 shows a case in which message m' brings the information
(encoded in m':EQ) to P 1 (before the first send event of P 1
following next(C 1;sn;0 )) that C 2;sn;0 ≡ Lsn next(C 2;sn;0 )
and that the recovery line was
advanced, by P 2 , from L sn to L' sn . In such a case, P 1 can
determine C 1;sn;0 is equivalent to next(C 1;sn;0 ) with respect
to L' sn and, then, advances the recovery line to L'' sn .
4.4. The Algorithm
The checkpointing algorithm we propose (BQF) takes
basic checkpoints by using the take-basic(MS) rule.
However, it does not update the sequence number by optimistically
assuming that a basic checkpoint is equivalent to
the previous one. So we have:
Figure 2. Upon the receipt of m', P 1 detects
C 1;sn;0 ≡ L' sn next(C 1;sn;0 ).
take-basic(BQF): When a basic checkpoint is scheduled,
if skip i then skip i := FALSE (the basic checkpoint is skipped)
else en i := en i + 1 and a checkpoint C i;sn i ;en i is taken with
index provisionally set to < sn i , en i >;
Due to the presence of provisional indices caused by the
equivalence relation, our algorithm needs a rule, when sending
a message, in order to disseminate only permanent indices
of checkpoints.
send-message(BQF): before sending a message m in I i;sn i ;en i ,
if there has been no send event in I i;sn i ;en i and the index is provisional then
if the equivalence C i;sn i ;en i -1 ≡ L sn i C i;sn i ;en i has been verified
then the index < sn i , en i > becomes permanent
else sn i := sn i + 1, en i := 0, and the index of the last checkpoint
is replaced permanently with < sn i , 0 >;
the message m is sent (piggybacking sn i and EQ i );
The last rule of our algorithm take-forced(BQF)
refines BCS's one by using a simple observation. Upon
the receipt of a message m such that m:sn > sn i , there
is no reason to take a forced checkpoint if there has been
no send event in the current checkpoint interval I i;sn;en . In-
deed, no causal relation can be established between the last
checkpoint C i;sn i ;en i and any checkpoint belonging to the
recovery line L m:sn and, thus, the index of C i;sn i ;en i can
be replaced permanently with the index < m:sn, 0 >.
take-forced(BQF): Upon the receipt of a message m in I i;sn i ;en i :
(a) If sn i < m:sn and there has been at least a send event in I i;sn i ;en i then
a forced checkpoint C i;m:sn;0 is taken and its index is permanent;
(b) If sn i < m:sn and there has been no send event in I i;sn i ;en i then
the index of the last checkpoint C i;sn i ;en i is replaced permanently
with < m:sn, 0 >;
in both cases sn i and EQ i are updated from m:sn and m:EQ, and then
the message m is processed;
For example, in Figure 3.a, the local checkpoint
C 3;sn;en 3 can belong to the recovery line L sn+1 (so its index
can be replaced with < sn+1, 0 > and no
forced checkpoint is needed). On
the contrary, due to the send event in I 3;sn;en 3 depicted in
Figure 3.b, a forced checkpoint with index < sn+1, 0 >
has to be taken before the processing of message m.

Figure 3. Upon the receipt of m, C 3;sn;en 3 can
be a part of L sn+1 (a); C 3;sn;en 3 cannot belong
to L sn+1 (b).
Point (b) of take-forced(BQF) decreases the number
of forced checkpoints compared to BCS. The else alternative
of send-message(BQF) and the part (a) of
take-forced(BQF), represent the cases in which the
action to take a basic checkpoint leads to update the sequence
number with the consequent induction of checkpoints
in other processes.
Data Structures and Process Behavior. We assume each
process P i has the following data structures:
after_first_send i , skip i , provisional i : boolean;
past i , present i : array [1..n] of integer (in addition to sn i , en i and EQ i introduced above).
The boolean variable after first send i is set to TRUE if
at least one send event has occurred in the current check-point
interval. It is set to FALSE each time a checkpoint is
taken.
The boolean variable provisional i is set to TRUE whenever
a provisional index assignement occurs. It is set to
FALSE whenever the index becomes permanent.
present i [j] represents the maximum equivalence number
en j piggybacked on a message m received in the current
checkpoint interval by P i and that falls in the case 2
of Section 4.3. Upon taking a checkpoint or when updating
the sequence number, present i is initialized to -1. If the
checkpoint is basic, present i is copied in past i before its
initialization. Each time a message m is received such that
past i [j] < m:EQ[j], past i [j] is set to -1. So, the predicate
(∃ j : past i [j] ≠ -1) indicates that there is a message
received in the past checkpoint interval that has been sent
from the right side of the recovery line (case 2 of Section
4.3) currently seen by P i .
Below the process behavior is shown (the procedures and
the message handler are executed in atomic fashion). This
implementation assumes that there exist at most one provisional
index in each process. So each time two successive
provisional indices are detected, the first index is permanently
replaced with a definitive value (see the procedure below).
init =
  sn i := 0; en i := 0; after_first_send i := FALSE; skip i := FALSE; provisional i := FALSE;
  ∀h EQ i [h] := 0; ∀h past i [h] := -1; ∀h present i [h] := -1;

when (m) arrives at P i from P j =
begin
  if (m:sn > sn i ) then
  begin
    if after_first_send i then
    begin
      take a checkpoint C; % forced checkpoint %
      skip i := TRUE;
      after_first_send i := FALSE;
    end;
    sn i := m:sn; ∀h EQ i [h] := m:EQ[h];
    assign the index < sn i , 0 > to the last checkpoint C;
    provisional i := FALSE; % the index is permanent %
    ∀h past i [h] := -1; ∀h present i [h] := -1;
    present i [j] := m:EQ[j];
  end
  else if (m:sn = sn i ) then
  begin
    ∀h if EQ i [h] < m:EQ[h] then EQ i [h] := m:EQ[h]; % component-wise maximum %
    if present i [j] < m:EQ[j] then present i [j] := m:EQ[j];
    ∀h if past i [h] < m:EQ[h] then past i [h] := -1;
  end;
  process the message m;
end

when P i sends data to P j =
begin
  if provisional i then % resolve the provisional index at the first send %
    if (∃h : past i [h] ≠ -1) then
    begin % the equivalence could not be verified %
      sn i := sn i + 1; en i := 0;
      assign the index < sn i , 0 > to the last checkpoint C;
      provisional i := FALSE; % the index is permanent %
      ∀h past i [h] := -1; ∀h present i [h] := -1; ∀h EQ i [h] := 0;
    end
    else
    begin % the equivalence has been verified %
      provisional i := FALSE; EQ i [i] := en i ; % the index < sn i , en i > is permanent %
    end;
  m:sn := sn i ; m:EQ := EQ i ; % packet the message %
  send (m) to P j ;
  after_first_send i := TRUE;
end

when a basic checkpoint is scheduled from P i =
  if skip i then skip i := FALSE % skip the basic checkpoint %
  else
  begin
    if provisional i then % two successive provisional indices %
      if (∃h : past i [h] ≠ -1) then
      begin % the first index cannot be kept: advance the sequence number %
        sn i := sn i + 1; en i := 0; ∀h EQ i [h] := 0;
        ∀h past i [h] := -1;
        assign the index < sn i , 0 > to the last checkpoint C; % the index is permanent %
      end
      else
      begin % the first provisional index becomes permanent %
        EQ i [i] := en i ;
        ∀h past i [h] := present i [h];
      end
    else ∀h past i [h] := present i [h];
    take a checkpoint C; % taking a basic checkpoint %
    en i := en i + 1;
    assign the index < sn i , en i > to the last checkpoint C;
    provisional i := TRUE; % the index is provisional %
    ∀h present i [h] := -1;
    after_first_send i := FALSE;
  end
4.5. Correctness Proof
Let us first introduce the following simple observations
that derive directly from the algorithm:
Observation 1 For any checkpoint C i;sn;0 , there does not
exist a message m with m:sn ≥ sn such that
receive(m) occurs in P i before C i;sn;0 (this observation derives from rule
take-forced(BQF) when considering C i;sn;0 is the
first checkpoint with sequence number sn).
Observation 2 For any checkpoint C i;sn;en , there does not
exist a message m with m:sn > sn such that m is
received in I i;sn;en (this observation derives from rule
take-forced(BQF)).
Observation 3 For any message m sent by P i in I i;sn;en , m:sn = sn (this observation
derives from the rule send-message(BQF)).
Lemma 4.1 The set S = {C 1;sn;0 , ..., C n;sn;0 }
with sn ≥ 0 is a recovery line. If process P i does not
have a checkpoint with index < sn, 0 >, the first check-point
of P i with sequence number greater than sn must be included in the set
S.
Proof If sn = 0, S is a recovery line by definition. Otherwise
suppose, by the way of contradiction that S is not
a recovery line. Then, there exists a message m, sent by
some process P j to a process P k , that is orphan with respect
to the pair (C j;sn;0 , C k;sn;0 ). Hence, we have: send(m) occurs after C j;sn;0 ,
receive(m) occurs before C k;sn;0 , and m:sn ≥ sn by observation 3. This
contradicts observation 1.
Suppose process P k does not have a checkpoint with sequence
number sn, in this case, from lemma's assumption,
we replace C k;sn;0 with C k;sn 0 ;0 where sn 0 > sn. As m
is orphan wrt the pair (C j;sn;0 , C k;sn 0 ;0 ), m is received by
P k in a checkpoint interval I k;sn 00 ;en such that m:sn > sn 00 ,
contradicting observation 2.
Suppose process P j does not have a checkpoint with sequence
number sn, in this case, from lemma's assumption,
we replace C j;sn;0 with C j;sn 0 ;0 where sn 0 > sn. As m
is orphan wrt the pair (C j;sn 0 ;0 , C k;sn;0 ), m:sn ≥ sn 0 > sn and m is received before C k;sn;0 .
This contradicts observation 1.
Hence, in all cases the assumption is contradicted and
the claim follows. 2
Lemma 4.2 Let C i;sn;en i and next(C i;sn;en i ) be two local
checkpoints such that C i;sn;en i ≡ Lsn next(C i;sn;en i ).
If the set S = {C 1;sn;en 1 , ..., C i;sn;en i , ..., C n;sn;en n },
with en i ≥ 0, is a recovery line L sn then the set
S' = (S \ {C i;sn;en i }) ∪ {next(C i;sn;en i )}
is a recovery line.
Proof If C i;sn;en i ≡ Lsn next(C i;sn;en i ), then from definition
4.1, for each message m sent by P j such that receive(m) belongs to I i;sn;en i ,
send(m) occurs before C j;sn;en j ;
thus no orphan message can ever exist with respect
to any pair of checkpoints in S 0 . 2
From Lemma 4.1, it trivially follows:
Lemma 4.3 The set L = {C 1;sn i ;EQ i [1] , ..., C n;sn i ;EQ i [n] } with sn i ≥ 0
and ∀j EQ i [j] = 0 is a recovery line.
From Lemma 4.1 and Lemma 4.2, we have each check-point
belongs to at least one recovery line. In particular,
belongs to
all recovery lines having sequence number sn 00 such that
Lemma 4.4 The set S = {C 1;sn i ;EQ i [1] , ..., C n;sn i ;EQ i [n] } is a recovery
line.
Proof (Sketch) Let us assume, by the way of contradiction,
S is not a recovery line. If ∀j EQ i [j] = 0 then, by Lemma 4.3,
the assumption is contradicted. Otherwise, there exists a
message m, sent by some process P j to a process P k , that is
orphan with respect to the pair (C j;sn;EQ i [j] , C k;sn;EQ i [k] ),
and there exists a causal message chain - that brings this
information to P i encoded in EQ i . Hence, we have:
upon the arrival
of message m, it falls in case 2 of Section 4.3. In this
case, the index associated to C k;sn;EQk [j] is provisional (see
the third point of Section 4.3). Before P k sends the first
message m 0 forming the causal message chain -, the index
has to be permanent. Hence, according to the algorithm,
the index is replaced by ! sn is reset and
piggybacked on m 0 . As soon as the last message of the
causal message chain m 00 arrives at
which is consistent by lemma 4.3. So the
initial assumption is contradicted and the claim follows. 2
5. A Performance Study
The Simulation Model. The simulation compares BCS,
MS and the proposed algorithm (BQF) in an uniform point-
to-point environment in which each process can send a message
to any other and the destination of each message is a
uniformly distributed random variable. We assume a system
with processes, each process executes internal, send
and receive operations with probability
respectively. The time to execute an operation
in a process and the message propagation time are
exponentially distributed with mean value equal to 1 and 10
time units respectively.
We also consider a bursted point-to-point environment in
which a process with probability enters a burst
state and then executes only internal and send events (with
probability
interval (when we have the uniform point-to-
point environment described above).
Basic checkpoints are taken periodically. Let bcf (basic
checkpoint frequency) be the percentage of the ratio t=T
where t is the time elapsed between two successive periodic
checkpoints and T is the total execution time. For example,
bcf= 100% means that only the initial local checkpoint is a
basic one, while bcf= 0.1% means that each process takes
1000 basic checkpoints.
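For instance (our own illustration, not from the paper), the basic-checkpoint period and the number of basic checkpoints per process follow directly from bcf:

#include <stdio.h>

/* bcf is the percentage t/T * 100, where t is the basic-checkpoint period
   and T the total execution time (see the definition above). */
int main(void) {
    double T   = 1000000.0;   /* total execution time, arbitrary time units (assumed) */
    double bcf = 0.1;         /* basic checkpoint frequency in percent                */

    double t = (bcf / 100.0) * T;       /* time between two successive basic checkpoints  */
    double basic_ckpts = T / t;         /* basic checkpoints per process (~1000 for 0.1%) */

    printf("period t = %.1f, basic checkpoints per process = %.0f\n", t, basic_ckpts);
    return 0;
}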
We also consider a degree of heterogeneity among processes
H . For example, means
all processes have the same checkpoint period
means 25%
(resp. 75%) of processes have the checkpoint period
while the remaining 75% (resp. 25%) has a checkpoint period
A first series of simulation experiments were conducted
by varying bcf from 0:1% to 100% and we measured the
ratio Tot between the total number of checkpoints taken by
an algorithm and the total number of checkpoints taken by
BCS.
In a second series of experiments we varied the degree
of heterogeneity H of the processes and then we measured
the ratio E between the total number of checkpoints taken
by BQF and MS.
As we are interested only in counting how many local
states are recorded as checkpoints, the overhead due to the
taking of checkpoints is not considered. Each simulation
run contains 8000 message deliveries and for each value of
bcf and H , we did several simulation runs with different
seeds and the result were within 4% of each other, thus,
variance is not reported in the plots.
Results of the Experiments. Figure 4 shows the ratio Tot
of MS and BQF in an uniform point-to-point environment.
For small values of bcf (below 1.0%), there are only few
send and receive events in each checkpoint interval, leading
to high probability of equivalence between checkpoints.
Thus BQF saves from 2% to 10% of checkpoints compared
to MS. As the value of bcf is higher than 1.0%, MS and BQF
takes the same number of checkpoints as the probability that
two checkpoints are equivalent tends to zero.
The reduction of the total number of checkpoints is amplified
by the bursted environment (Figure 5) in which the
equivalences between checkpoints on processes running in
the burst mode are disseminated to the other processes causing
other equivalences. In this case, for all values of bcf,
BQF saves from a 7% to 18% checkpoints compared to MS.
Performance of BQF are particularly good in an heterogeneous
environment in which there are some processes
with a shorter checkpointing period. These processes would
push higher the sequence number leading to a very high
checkpointing overhead using either MS or BCS.
In
Figure
6, the ratio E as a function of the degree of heterogeneity
H of the system is shown in the case of the uniform
and the bursted point-to-point environments.
The best performance (about 30% less checkpointing than
MS) are obtained when only one
process has a checkpoint frequency ten times greater than
the others) and 2.
In
Figure
7 we show the ratio Tot as a function of bcf
in the case of which is the environment
where BQF got the maximum gain (see Figure 6).
Due to the heterogeneity, bcf is in the range between 1%
and 10% of the slowest processes. We would like to remark
that in the whole range the checkpointing overhead of BQF
is constantly around 30% less than MS.
Figure 4. Tot versus bcf (% checkpoint period / total execution time) in the
uniform point-to-point environment.
Figure 5. Tot versus bcf (% checkpoint period / total execution time) in the
bursted point-to-point environment.
6. Conclusion
In this paper we presented an index-based checkpointing
algorithm well suited for autonomous distributed systems
that reduces the checkpointing overhead compared to previous
algorithms. It relies on an equivalence relation that allows
the recovery line to be advanced without increasing its sequence
number. The algorithm optimistically (and provisionally)
assumes that a basic checkpoint C in a process is equivalent
to the previous one in the same process by assigning
a provisional index.
Figure 6. E versus H in both the uniform point-to-point environment and the bursted point-to-point environment
Figure 7. Tot versus bcf of the slowest processes in a bursted point-to-point environment
Hence, if at the time of the first send
event after C that equivalence is verified, the provisional index
becomes permanent. Otherwise the index is increased,
as in [3, 11], and this directs forced checkpoints in other
processes.
We presented a simulation study which quantifies the
saving of checkpoints in different environments compared
to previous proposals. The price to pay is that each application
message piggybacks some additional control information,
compared to the single integer used by previous algorithms.
Acknowledgements. The authors would like to thank
Bruno Ciciani, Michel Raynal, Jean-Michel Helary, Achour
Mostefaoui and the anonymous referees for their helpful
comments and suggestions.
--R
On Modeling Consistent Checkpoints and the Domino Effect in Distributed Systems
A Communication-Induced Checkpointing Protocol that Ensures Rollback-Dependency Trackability
A Distributed Domino-Effect Free Recovery Algorithm
Determining Global States of Distributed Systems
A Timestamp-Based Check-pointing Protocol for Long-Lived Distributed Computations
A Survey of Rollback-Recovery Protocols in Message-Passing Systems
Manetho: Transparent Rollback Recovery with Low Overhead
Checkpointing and Rollback-Recovery for Distributed Systems
Finding Consistent Global Checkpoints in a Distributed Computa- tion
System Structure for Software Fault Tolerance
Volatile Logging in n-Fault-Tolerant Distributed Systems
Consistent Global Checkpoints that Contains a Set of Local Checkpoints
--TR
--CTR
D. Manivannan , M. Singhal, Asynchronous recovery without using vector timestamps, Journal of Parallel and Distributed Computing, v.62 n.12, p.1695-1728, December 2002
B. Gupta , S. K. Banerjee, A Roll-Forward Recovery Scheme for Solving the Problem of Coasting Forward for Distributed Systems, ACM SIGOPS Operating Systems Review, v.35 n.3, p.55-66, July 1 2001 | checkpointing;rollback-recovery;performance evaluation;global snapshot;distributed systems;timestamp management;causal dependency;fault tolerance;protocols |
298827 | Dynamically Configurable Message Flow Control for Fault-Tolerant Routing. | Fault-tolerant routing protocols in modern interconnection networks rely heavily on the network flow control mechanisms used. Optimistic flow control mechanisms, such as wormhole switching (WS), realize very good performance, but are prone to deadlock in the presence of faults. Conservative flow control mechanisms, such as pipelined circuit switching (PCS), ensure the existence of a path to the destination prior to message transmission, achieving reliable transmission at the expense of performance. This paper proposes a general class of flow control mechanisms that can be dynamically configured to trade off reliability and performance. Routing protocols can then be designed such that, in the vicinity of faults, protocols use a more conservative flow control mechanism, while the majority of messages that traverse fault-free portions of the network utilize a WS-like flow control to maximize performance. We refer to such protocols as two-phase protocols. This ability provides new avenues for optimizing message passing performance in the presence of faults. A fully adaptive two-phase protocol is proposed, and compared via simulation to those based on WS and PCS. The architecture of a network router supporting configurable flow control is also described. | Introduction
Modern multiprocessor interconnection networks feature the use of message pipelining coupled
with virtual channels to improve network throughput and insure deadlock freedom
[6,9,21,24]. Messages are broken up into small units called flits or flow control digits [9]. In
wormhole switching (WS), data flits immediately follow the routing header flit(s) into the network
[9]. Routing algorithms using WS can be characterized as optimistic. Network resources (e.g.,
buffers and channels) are committed as soon as they become available. This optimistic nature
leads to high network throughput and low average message latencies. However, in the presence of
I. This research was supported in part by a grant from the National Science Foundation under grant CCR-9214244
and by a grant from Spanish CICYT under grant TIC94-0510-C02-01. A preliminary version of this paper was presented
in part at the 22nd Annual International Symposium on Computer Architecture, Santa Margherita Ligure,
Italy, June 1995.
faults, this behavior can lead to situations where the routing header can become blocked, no
longer make progress, and hence cause the network to become deadlocked. Typically, additional
routing restrictions and/or network resources are required to ensure deadlock freedom in the presence
of faults [4,5,8,11]. For example, fault rings are constructed around convex faulty regions
using additional virtual channels and attendant routing restrictions [4]. Additionally, source hardware
synchronization mechanisms have been proposed to change routing decisions in the presence
of faults [20], and partially adaptive routing around convex fault regions with no additional
channels are feasible [5], while more recently the use of time-outs and deadlock recovery mechanisms
have been proposed [22].
Alternatively, in the pipelined circuit switching (PCS) flow control mechanism, the path setup
and data transmission stages are decoupled [15]. The header flit(s) is first routed to construct a
path. In the presence of faults, the header may perform controlled and limited backtracking. As
opposed to WS, routing algorithms based on PCS are conservative in nature, not committing data
into the network until a complete path has been established. The result is an extremely robust and
reliable communication protocol. However, path setup can exact significant performance penalties
in the form of increased message latencies and decreased network throughput, especially for short
messages.
This paper proposes the use of configurable flow control mechanisms for fully adaptive routing
in pipelined networks. The paper contributes dynamically configurable flow control mechanisms
at the lowest level, and two-phase routing protocols at the routing layer. Routing protocols
can be designed such that in the vicinity of faulty components messages use PCS style flow con-
trol, where controlled misrouting and backtracking can be used to avoid faults and deadlocked
configurations. At the same time messages use WS flow control in fault-free portions of the net-work
with the attendant performance advantages. Such protocols will be referred to as Two-Phase
protocols. A fully adaptive, deadlock-free, two-phase protocol for fault-tolerant routing in
meshes and tori is proposed and analyzed in this paper. Formal properties of Two-Phase routing
are established and the results of experimental evaluation are presented. The evaluation establishes
the performance impact of specific design decisions, addresses the choice of conservative
vs. configurable flow control for fault-tolerant routing, and discusses related deadlock/livelock
freedom issues. Finally, the paper describes the architecture and operations of a single chip router
for implementing Two-Phase routing protocols.
The distinguishing features of this approach are: i) it does not rely on additional virtual channels
beyond those already needed for fully adaptive routing, ii) the performance is considerably better
than conservative fault-tolerant routing algorithms with equivalent reliability, iii) it is based on a
more flexible fault model, i.e., supports link and/or node faults and does not require convex fault
regions, iv) supports existing techniques for recovery from dynamic or transient failures of links
or switches, and vi) it provides routing protocols with greater control over hardware message flow con-
trol, opening up new avenues for optimizing message passing performance in the presence of
faults.
The following section introduces a few definitions, and the network, channel, and fault mod-
els. A new class of flow control mechanisms is introduced in Section 3. Section 4 introduces fault
tolerant routing while Section 4.1 provides an analysis of routing properties required for deadlock
freedom. Section 4.2 introduces a fully adaptive two-phase routing protocol for meshes and tori.
Architectural support is discussed in Section 5 and the results of simulation experiments are presented
in Section 6. The paper concludes with plans for implementation of the router and future
research directions.
2 Preliminaries
2.1 Network Model
Although Two-Phase routing can be used in any topology, the theoretical results are generally
topology specific. The class of networks considered in this paper are the torus connected, bidirec-
tional, k-ary n-cubes and multi-dimensional meshes. A k-ary n-cube is a hypercube with n dimensions
and k processors in each dimension. In torus connected k-ary n-cubes, each processor is
connected to its immediate neighbors modulo k in every dimension. A multidimensional mesh is
similar to a k-ary n-cube, without the wrap around connections. A message is broken up into small
units referred to as flow control digits or flits. A flit is the smallest unit on which flow control is
performed, and represents the smallest unit of communication in a pipelined network. Each processing
element (PE) in the network is connected to a routing node. The PE and its routing node
can operate concurrently. We assume that one of the physical links of the routing node is used for
the PE connection. The network communication links are full-duplex links, and the channel width
and flit size are assumed to be equivalent. A number of virtual channels are implemented in each
direction over each physical channel. Each virtual channel is realized by independently managed
flit buffers, and share the physical channel bandwidth on a flit-by-flit basis. A mechanism as
described in [6] is used to allocate physical channel bandwidth to virtual channels in a demand-driven
manner. Flits are moved from input channel buffers to output channel buffers within a node
by an internal crossbar switch.
Given a header flit that is being routed through the network, at any intermediate node a routing
function specifies the set of candidate output virtual channels that may be used by the message.
The selection function is used to pick a channel from this set [12]. A profitable link is a link over
which a message header moves closer to its destination. A backtracking protocol is one which
may acquire and release virtual channels during path setup. Releasing a virtual channel that is
used corresponds to freeing buffers and crossbar ports used by the message on that channel.
2.2 Virtual Channel Model
The following virtual channel model is used in this paper. A unidirectional virtual channel, v_i,
is composed of a data channel v_i^d, a corresponding channel v_i^*, and a complementary channel v_i^c; this triple
is referred to as a virtual channel trio [15]. The routing header will traverse the corresponding channel v_i^*,
while the subsequent data flits will traverse the data channel v_i^d. The complementary channel is reserved for
use by special control flits. The corresponding channels and complementary channels essentially
form a control network for coordinating fault recovery and adaptive routing of header flits including
limited and controlled backtracking of header flits. The complementary channel of a trio
traverses the physical channel in the direction opposite to that of its associated data channel. The
channel model is illustrated in Figure 1(a). There are two virtual channels v_i and v_j from R1 to R2
(v_r and v_s from R2 to R1). Only one message can be in progress over a data channel. Therefore, compared
to existing channel models, this model requires exactly 2 extra flit buffers for each data
channel - one each for the corresponding channel and complementary channel respectively.
Since control flit traffic is a small percentage of the overall flit traffic, in practice all control channels
across a physical link are multiplexed through a single virtual control channel [1] as shown in
Figure
1(b). For example, control channel c_1 in Figure 1(b) corresponds to flit buffers v_r^*, v_s^*, v_j^c, and v_i^c.
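As an illustration of this buffer organization, the following minimal Python sketch models one virtual channel trio and the shared control channel of a physical link; the class and method names are illustrative assumptions and are not taken from the paper or from any router implementation.
from collections import deque

class VirtualChannelTrio:
    """One unidirectional virtual channel: data, corresponding and complementary flit buffers."""
    def __init__(self, data_depth=2):
        self.data = deque(maxlen=data_depth)   # v^d: carries the data flits
        self.corresponding = deque(maxlen=1)   # v^*: carries the routing header (control network)
        self.complementary = deque(maxlen=1)   # v^c: control flits, opposite physical direction

class PhysicalLink:
    """Two data virtual channels per direction; control buffers are multiplexed per direction."""
    def __init__(self):
        self.r1_to_r2 = [VirtualChannelTrio() for _ in range(2)]   # v_i, v_j
        self.r2_to_r1 = [VirtualChannelTrio() for _ in range(2)]   # v_r, v_s

    def control_channel_towards_r1(self):
        # Analogue of c_1 in Figure 1(b): the corresponding buffers of the R2->R1 channels
        # plus the complementary buffers of the R1->R2 channels share one control channel.
        return ([vc.corresponding for vc in self.r2_to_r1] +
                [vc.complementary for vc in self.r1_to_r2])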
2.3 Fault Model
On-line fault detection is a difficult problem. In this paper we assume the existence of fault
detection mechanisms, and focus on how such information may be used for robust, reliable com-
munication. The detection mechanisms identify two different types of faults. Either the entire processing
element and its associate router can fail or a communication channel may fail. When a
physical link fails, all virtual channels on that particular physical link are marked as faulty. When
a PE and its router fail, all physical links incident on the failed PE are also marked as being faulty.
In addition to marking physical channels incident on the failed PE as being faulty, physical channels
incident on PEs which are adjacent to the failed PEs and/or communication channel may be
marked as unsafe. The unsafe channel [23] designation is useful because routing across them may
lead to an encounter with a failed component. Some of the protocols we will present in
Section 4.2 use unsafe channels. Figure 2 shows failed PEs, failed physical links and unsafe channels
in a two dimensional mesh network. The failed PE can no longer send or receive any messages
and thus is removed from the multi-processor network.
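The marking rules just described can be sketched as follows; this is only an illustrative fragment (the dictionaries, status strings and function name are assumptions, not part of the paper), showing how a failed PE makes its incident channels faulty and the channels of its neighbors unsafe.
def mark_faults(channels, neighbors, failed_pes, failed_links):
    """channels maps a directed physical channel (u, v) to 'safe', 'unsafe' or 'faulty';
    neighbors maps each PE to the list of its adjacent PEs."""
    # A failed physical link makes all virtual channels on it (both directions) faulty.
    for (u, v) in failed_links:
        channels[(u, v)] = channels[(v, u)] = "faulty"
    for pe in failed_pes:
        # All physical links incident on a failed PE are also marked faulty.
        for nb in neighbors[pe]:
            channels[(pe, nb)] = channels[(nb, pe)] = "faulty"
        # Channels incident on PEs adjacent to the failure may be marked unsafe.
        for nb in neighbors[pe]:
            for nb2 in neighbors[nb]:
                for link in ((nb, nb2), (nb2, nb)):
                    if channels.get(link) == "safe":
                        channels[link] = "unsafe"
    return channels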
Failures can be either static or dynamic. Static failures are present in the network when the
system is powered on. Dynamic failures occur at random during operation. Both types of failures
are considered to be permanent, i.e., they remain in the system until repaired. For static failures
and dynamic failures that occur on idle links and routers, only header flits encounter failed links
and routing protocols can attempt to find alternative paths.
Figure
1. Inter-router virtual channel model: a) logical channel model for two virtual channels between routers R1 and R2; b) implementation of the logical channel model
However, dynamic failures can occur on busy links and interrupt a message transmission. Fur-
thermore, failure during the transmission of a flit across a channel can cause the flit to be lost.
Since only header flits contain routing information, data flits whose progress is blocked by a failure
cannot progress. They will remain in the network, holding resources, and can eventually cause
deadlock. We rely on the existence of a recovery mechanism for removing such "dead" flits from
the network. There exist at least two techniques for implementing distributed recovery [16, 22]
under dynamic faults. In both cases, the failure of a link will generate control information that is
propagated upstream and/or downstream along the message path. All resources along the path can
be recovered. Alternatively, a third approach to recovering from messages interrupted by a fault
can be found in [8]. All of these schemes are non-trivial, require hardware support, and have been
developed elsewhere [8, 22, 16]. We will assume the existence of such a technique and evaluate its
performance impact in Section 6.
3 Scouting Switching - A Family of Flow Control Mechanisms
Scouting switching (SS) is a flow control mechanism that can be configured to provide specific
trade-offs between fault tolerance and performance. In SS, the first data flit is constrained to
remain K links behind the routing header. When K = 0, the flow control is equivalent to wormhole
switching, while large values of K can ensure path setup prior to data transmission (if a path exists).
Figure
3 illustrates a time-space diagram for messages being pipelined over five links using SS
mechanisms. The parameter, K, is referred to as the scouting
distance or probe lead. Every time a channel is successfully reserved by the routing header, it
returns a positive acknowledgment. As acknowledgments flow in the direction opposite to the
Figure
2. Failed nodes and unsafe channels
Faulty Node
Faulty Channel
Unsafe Channel
routing header, the gap between the header and the first data flit can grow up to 2K - 1 links while
the header is advancing. If the routing header backtracks, it must send a negative acknowledg-
ment. Associated with each virtual channel is a programmable counter. A virtual channel reserved
by a header increments its counter every time it receives a positive acknowledgment and it decrements
its counter every time it receives a negative acknowledgment. When the value of the
counter is equal to K, data flits are allowed to advance. For performance reasons, when K = 0, no
acknowledgments are sent across the channels. In this case, data flits immediately follow the
header flit. For example, in Figure 4, the header is blocked by faulty links at node A. The first data
flit is constrained to remain K links behind the header at node B. From the figure, we can
see that header can backtrack, releasing link A, and establish an alternate path across link C. By
statically fixing the value of K, we fix the trade-off between network performance (overhead of
positive and negative acks) and fault tolerance (the ability of the header to backtrack and be routed
around faults). By dynamically modifying K, we can gain improved run-time trade-offs between
fault tolerance and performance.
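A minimal Python sketch of the per-virtual-channel counter logic described above is given below; the class is an illustration only (its names and the exact advance condition are assumptions), with K = 0 reproducing wormhole switching since no acknowledgments are exchanged.
class ScoutingCounter:
    """Acknowledgment counter associated with one virtual channel reserved by a header."""
    def __init__(self, scouting_distance):
        self.k = scouting_distance
        self.count = 0

    def on_ack(self, positive):
        # A positive ack is returned each time the header reserves the next channel;
        # a negative ack is sent when the header backtracks over a channel.
        self.count += 1 if positive else -1

    def data_may_advance(self):
        # With K = 0 no acks are sent at all and data flits simply follow the header (WS).
        return self.k == 0 or self.count >= self.k
For example, a channel created with ScoutingCounter(3) releases its data flit only after three positive acknowledgments have been received.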
If L is the message length in flits, l the number of links in the path, and K the scouting distance,
we can derive expressions for the minimum message latency for each type of routing mechanism.
Figure 3. Time-space diagram of WS, Scouting, and PCS (route setup and data transmission phases for wormhole switching, scouting, and pipelined circuit switching; legend: routing header, data flit, PCS acknowledgment, scouting acknowledgment)
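As a rough illustration only (these are not the paper's exact expressions), assuming unit link and flit delays, no contention and no faults, the minimum latencies behave approximately as
t_{WS} \approx l + L, \qquad t_{scouting}(K) \approx l + 2K + L, \qquad t_{PCS} \approx 3l + L,
so that scouting with K = 0 coincides with WS, while K on the order of l approaches the path-setup cost of PCS.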
4 Fault-tolerant Routing
The basic idea proposed in this paper is for messages to be routed in one of two phases. When
messages are traversing fault-free segments of the network, they are routed using protocols based
on WS. When messages traverse a segment of the network with faults, a more conservative flow
control mechanism and associated fault-tolerant routing protocol is employed. The use of SS flow
control allows this choice to be made dynamically, by simply modifying the value of K.
The design of effective two-phase protocols is dependent upon the relationships between the i)
scouting distance (K), ii) the number of faults (f), iii) the number of links a header flit may be
forced to backtrack in routing around faults (b), and iv) the number of steps a header may be
routed along non-minimal paths (m). The analysis in the following subsection establishes these
relationships for k-ary n-cubes and multi-dimensional meshes. Section 4.2 describes a fully adaptive
two-phase, fault-tolerant, routing protocol.
4.1 Analysis
Figure 4. Backtracking out of a faulty region (legend: data flit progress, routing header progress, failed channel)
Messages are assumed to always follow shortest paths in the absence of faults. Further, when
a header encounters a faulty link, it is allowed to either misroute or backtrack, with the preference
given to misrouting.
Theorem 1 In the absence of any previous misrouting, the maximum number of consecutive links
that a header flit will backtrack over in a torus connected k-ary n-cube in a single source-destination
path is is the number of faulty components.
Proof: If there have been no previous misroutes, the header flit is allowed to misroute in the presence
of faults even when the number of misroutes is limited. Thus, the header will only backtrack
when the only healthy channel is the one previously used to reach the node (Figure 5). In the case
of a k-ary n-cube, every node has 2n channels, incident on a distinct PE. Since the header arrived
from a non-faulty PE, it will be forced to backtrack if 2n - 1 channels are faulty. At the next node,
since the header has backtracked from a non-faulty PE and originally arrived from a non-faulty
PE, it will be forced to backtrack if the remaining 2n - 2 channels are faulty. Each additional back-tracking
step will be forced by 2n - 2 additional failed channels. Thus we have f >= (2n - 1) + (b - 1)(2n - 2) faults to force b consecutive backtracking steps, i.e., b <= (f - 1) div (2n - 2).
Consider the second case shown in Figure 5 where there is a turn at the end of the alley. In order to
cause the routing header to backtrack initially, there needs to be 2n - 1 faulty channels, the second
backtrack requires 2n - 2 faulty channels while the third backtrack is necessitated by 2n - 3 node
Figure
5. Node faults causing backtracking
case 1
case 2
Faulty Node
Faulty Link
faults or 2n - 2 channel faults. All subsequent backtracks require 2n - 2 additional faults. Thus we have f >= b(2n - 2) in this case, i.e., at most f div (2n - 2) consecutive backtracking steps.
Theorem 2 In the absence of any previous misrouting, the maximum number of consecutive links
that a header flit will backtrack over in an n-dimensional mesh in a single source-destination path
is (f + 1) div (n - 1), where f is the number of faulty components.
Proof: If there have been no previous misrouting operations, the message is allowed to misroute
in the presence of faults, even if the maximum number of misrouting operations is limited. There
are several possible cases:
- The routing probe is at a node with 2n channels. This is the same case as with a torus connected
k-ary n-cube. Hence, the number of faults required to force the first backtrack is 2n - 1. To
force additional backtracks, 2n - 2 additional faults are required per additional backtrack.
- The probe is at a node with less than 2n channels. As with the earlier cases, all channels except
the one used to reach the node can be used in case of faults (either for routing or misrouting). The
worst case (Figure 6(a)) occurs when the node has the minimum number of channels. In an n-dimensional
mesh, nodes located at the corners only have n channels. One of the channels was
used by the probe to reach the node. Hence, the failure of n - 1 channels or nodes causes the routing
probe to backtrack. The probe is now on the edge of the mesh, where each node has n + 1
channels. One channel was already used to reach the node the first time and another one for the
previous backtracking operation, therefore, only n - 1 channels are available for routing. These
channels must all be faulty to force a backtrack operation. Thus, the maximum number of mandatory
backtrack operations is f div (n - 1), where f is the number of faults.
- Consider the second case shown in Figure 6(b) where a turn at the end of the alley exists. In
order to cause the initial backtrack, there needs to be n faults. n - 2 faults are required to cause a
backtrack at the corner processing element. Each additional backtrack requires n - 1 II faults.
Hence, the maximum number of backtracking operations is (f +1) div (n - 1).
II. n -1 faulty channels or n - 2 faulty nodes for the first additional backtracks.
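The counting arguments used in the two proofs can be summarized by the small Python helper below, which returns the minimum number of faulty channels needed to force b consecutive backtracking steps; it is a sketch of the argument under the stated worst-case configurations, not code from the paper.
def min_faults_torus(b, n):
    """k-ary n-cube (Theorem 1): 2n - 1 faults force the first backtrack,
    each additional backtrack needs 2n - 2 more faults."""
    return 0 if b <= 0 else (2 * n - 1) + (b - 1) * (2 * n - 2)

def min_faults_mesh_corner(b, n):
    """n-dimensional mesh, worst case starting at a corner node (Theorem 2):
    n - 1 faults force the first backtrack, then n - 1 per additional backtrack."""
    return 0 if b <= 0 else (n - 1) + (b - 1) * (n - 1)

# Example: in a 2-D torus, forcing three consecutive backtracks needs
# min_faults_torus(3, 2) == 7 faulty channels.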
The above theorems establish a relationship between the number of backtracking operations
and the number of faults for both meshes and tori. Now consider the relationship between the
number of misrouting operations, number of faults, and number of backtracking steps. This is
determined by the configuration of faults and is specified by the following theorem. It will be useful
in determining the scouting distance.
Figure 6. Faults causing backtracking in a mesh: cases (a) and (b) (legend: faulty link, faulty node)
Figure 7. Fault configuration showing the misrouting required to search all inputs in one plane (legend: source/destination node, failed node, failed channel)
Theorem 3 In a torus connected k-ary n-cube with less than 2n faults, the maximum number of
consecutive backtracking steps, b, before the header can make forward progress is 3 (see footnote III) if
i) the maximum number of misroutes allowed is 6,
ii) misrouting is preferred over backtracking,
iii) when necessary, the output channel selected by the routing function for misrouting the mes-
sage, is in the same dimension as the input channel of the message.
Proof: Consider Figure 7, where all of the adjacent nodes to the destination in one plane are
faulty. The routing header would have to take a maximum of six misroutes to check all of the possible
input links to the destination lying within a plane. This will eliminate two dimensions to
search out of the n possible dimensions. If all permitted misroutes have been used or the routing
header arrives at a previously visited node, the routing header must backtrack. Backtracking over
a misroute removes it from the path and decrements the misroute count. The routing header backtracks
two hops to point A in Figure 7. From this point, the routing header can take one misroute
into any of the n - 2 remaining dimensions, j for example (where j is not one of the two dimensions
forming the plane in Figure 7). The routing header is now two hops away from the node
adjacent to the destination lying along dimension j. The routing header can check to see if that
node is faulty with one profitable hop. If that node is faulty, then the routing header is forced to
backtrack two hops back to point A. Alternatively, in two hops the header can check if the link
adjacent to the destination is faulty. In this case the maximum backtrack distance is three hops
back to point A. From point A, with one misroute and two profitable routes, the routing header
can check the status of every node one hop away from the destination and/or every link adjacent to
the destination. Since the number of faults allowed in the system is limited to 2n - 1, the existence
of one healthy node and one healthy channel adjacent to the destination is guaranteed. Hence, the
maximum number of backtracks that the routing header has to perform is three.
Theorem 4 In a n-dimensional mesh with less than n faults, the maximum number of consecutive
backtracking steps, b, before the header can make forward progress is 3 if
i) the maximum number of misroutes allowed is 6,
ii) misrouting is preferred over backtracking,
iii) when necessary, the output channel selected by the routing function for misrouting the message
is in the same dimension as the input channel of the message.
Proof: Consider the case when the destination node cannot be surrounded by faults in any plane.
III. If only node failures are considered, the number of backtracks required per backtracking operation is 2.
Figure 8 shows the corner of a mesh where n = 3. At the corner node of the mesh, two of the three
input/output channels of the corner node are faulty. The routing probe entering the corner node is
forced to backtrack one step. However, since there cannot be any additional faulty links or nodes
in the network (due to the limit in the number of faults), the routing probe can reach the destination
without any further backtracking operations. If the routing probe is not at a corner node, but
at a node on the edge of the mesh, then since each node on the edge of a mesh has at least n + 1 channels
and since a maximum of n - 1 faults are allowed, no backtracking will be required because misrouting
is preferred over backtracking.
Consider the case when the destination node can be surrounded by faults in some plane. This
means that a situation similar to that shown in Figure 7 occurs, even in the nodes at the edge of the
mesh. If the number of misroutes is limited to 6, then the results of Theorem 3 can be applied and
the maximum number of consecutive backtracking steps is 3.
Only 2n and n faults are required to disconnect the network in a k-ary n-cube and n-dimensional
mesh respectively. However, in practice, the network can often remain connected with a
considerably larger number of failed nodes and channels. If the total number of faults was allowed
to be greater than 2n or n, then it is possible that some messages may be undeliverable. If allowed
to remain in the network, these messages impact performance and may lead to deadlock. Techniques
such as those described in Section 2.3 can be used to detect and remove such messages from the network.
Figure 8. Backtracking in corner node of mesh (legend: failed link, failed node)
4.2 Two-Phase Routing Protocol
Routing protocols operate in two phases: an optimistic phase for routing in fault-free segments
and a conservative phase for routing in faulty segments. The former uses an existing fully adap-
tive, minimal, routing algorithm [12]. In this section we propose two candidates for the conservative
phase. The candidates differ primarily in the impact on performance as a function of the
number of faults.
The proposed Two-Phase (TP) protocol is shown in Figure 9 and operates as follows: In the
absence of faults, TP uses a deadlock-free routing function based on Duato's Protocol (DP) [12].
In DP, the virtual channels on each physical link are partitioned into restricted and unrestricted
partitions. Fully adaptive minimal routing is permitted on the unrestricted partition (adaptive
while only deterministic routing is allowed on the restricted partition (deterministic
channels). The selection function uses a priority scheme in selecting candidate output channels at
a router node. First, the selection function examines the safe adaptive channels. If one of these
channels is not available, either due to it being faulty or busy, the selection function examines the
/* Structure of Two-Phase Routing */
IF detour complete THEN /* completed detour (destination reached or detour completed)*/
reset header to DP mode;
END IF
IF DP THEN /* route using DP routing restrictions with unsafe channels */
select safe profitable adaptive channel; RETURN;
select safe deterministic channel; RETURN;
IF NOT (safe deterministic channel faulty) THEN
RETURN; /* blocks progress */
END IF
select unsafe profitable adaptive channel; /* acks sent or not sent depending on */
switch to SS mode & set ack counter; /* which one of the two different */
/* conservative phases of TP routing is used */
select unsafe deterministic channel;
switch to SS mode & set ack counter;
set header to detour mode;
END IF
IF detour THEN /* route with no restrictions in detour mode */
select profitable channel; RETURN;
IF #_misroutes < m THEN
misroute; RETURN;
ELSE
backtrack; RETURN;
END IF
END IF
Figure 9. Structure of Two-Phase routing
safe deterministic channel (if any). If the safe deterministic channel is busy, the routing header
must block and wait for that channel to become free. If a safe adaptive channel becomes free
before the deterministic channel is freed, then the header is free to take the adaptive channel. If the
deterministic channel is faulty, the selection function will try to select any profitable adaptive
channel, regardless of it being safe or unsafe. The selection function will not select an unsafe
channel over an available safe channel. An unsafe channel is selected only if it is the only alternative
other than misrouting or backtracking. When an unsafe profitable channel is selected as an
output channel, the message enters the vicinity of a faulty network region. This is indicated by
setting a status bit in the routing header. Subsequently, the counter values of every output channel
traversed by the header is set to K. Values of K > 0 will permit the routing header to backtrack to
avoid faults if the need arises. Message flow control is now more conservative, supporting more
flexible protocols in routing around faulty regions. If no unsafe profitable channel is available, the
header changes to detour mode.
In detour mode, no positive acknowledgments are generated and with no positive acknowledg-
ments, data flits do not advance. During the construction of the detour, the routing header performs
a depth-first, backtracking search of the network using a maximum of m misroutes. Only
adaptive channels are used to construct a detour. The detour is complete when all the misroutes
made during the construction of the detour have been corrected or when the destination node is
reached. When the detour is complete, SS acknowledgments flow again, and data flits resume
progress. Note that all channels (or none) in a detour are accepted before the data flits resume
progress. This is required to ensure deadlock-freedom. The detour mode is identified by setting a
status bit in the header.
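The priority order of Figure 9 can be mirrored by the following Python sketch of one routing decision; the function, its arguments and the HeaderState fields are illustrative assumptions and do not correspond to an actual router interface.
from dataclasses import dataclass

@dataclass
class HeaderState:
    K: int = 3            # scouting distance applied near faults (illustrative value)
    ss_mode: bool = False
    detour: bool = False

def select_output(header, safe_adaptive, safe_det_free, safe_det_faulty,
                  unsafe_adaptive, unsafe_det):
    """One routing step in DP mode; each list holds free, non-faulty candidate channels."""
    if safe_adaptive:                  # 1. safe profitable adaptive channel
        return safe_adaptive[0]
    if safe_det_free:                  # 2. safe deterministic (escape) channel
        return safe_det_free[0]
    if not safe_det_faulty:            # escape channel exists but is busy: block and wait
        return "BLOCK"
    if unsafe_adaptive or unsafe_det:  # 3./4. unsafe channels: switch to SS flow control,
        header.ss_mode = True          #       counters on traversed channels are set to K
        return (unsafe_adaptive or unsafe_det)[0]
    header.detour = True               # 5. nothing profitable left: construct a detour
    return "DETOUR"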
While it is desirable to remain with WS for the fault-free routing (optimistic phase), alternatives
are possible for the conservative phase. In the conservative phase of TP (Figure 9), the
header enters SS mode when an unsafe channel is selected. Alternatively, in the conservative
phase we may choose to continue optimistic WS flow control across unsafe channels. In
this case, it is not necessary to mark channels as unsafe. When WS forward progress is stopped due
to faults, then detours can be constructed using increased misrouting as necessary. When a detour
is completed, one acknowledgment is sent to resume the flow of the data flits. Note that in this case
we always have K = 0, and no positive or negative acknowledgments are transmitted.
When larger values of K are used (as in Figure 9), the increased ability to backtrack and route
around fault regions reduces the probability of constructing detours. Thus we see that the choice
of K is a trade-off between acknowledgment traffic, and the increased misrouting/backtracking
that occurs in detour construction. We expect that the choice of an appropriate value of K is
dependent upon the network load and failure patterns. The trade-offs are evaluated in Section 6.
Note that the proofs of deadlock freedom do not rely on unsafe channels. Therefore the designer
has some freedom in configuring the appropriate mechanisms as a function of the failure patterns.
Figure
10 shows a routing example using the Two-Phase routing protocol (as shown in Figure 9) with seven node failures and K initially set to 0. The
routing header routes to node B where it is forced to cross an unsafe channel. The value of K is
increased to 3 and the header routes profitably to node A, with the data flits advancing until node
B. At node A, the routing header cannot make progress towards the destination and enters detour
mode, so it is misrouted upwards. After two additional misroutes, it can no longer be misrouted due
to the limit on m. The routing header is then forced to backtrack to node A. Since there are no
other output channels to select, the routing header is forced to backtrack to node C. From there, it
is misrouted twice downwards and then finds profitable links to the destination. In this case, the
detour is completed when the destination is reached. Also, notice that data flits do not advance
while the header is in detour mode. Thus, the first data flit is still at node B.
Figure 10. Routing example (legend: failed channel, failed node)
For comparison purposes, consider the use of an alternative conservative phase as described
above where unsafe channels are not used and K is always 0. Referring to Figure 10, the routing
header is routed profitably to node A. In this case, K = 0. Thus, the first data flit also reaches node
A. Since it cannot be routed profitably from node A, a detour is constructed. The header is misrouted
upwards three links, cannot find a path around the fault region, and therefore is forced to
backtrack back to node A. The routing header is then forced to misroute to node C. From node C,
it misroutes downwards, and traverses a path to the destination. Notice that in this case the message follows a path that
is two hops longer, since the data flits now pass through node A. However, while the header is
routed from node B to node A, no acknowledgment flits are generated. These two examples indicate
that the specific choice of flow control/routing protocol for the conservative phase is a trade-off
that is dictated by the fault patterns and network load.
The theorems in Section 4.1 cover networks with a fixed number of faults. For an arbitrary
number of faults, f, small values of m, and destination node failures, it is possible that the header
may backtrack to the location of the first data flit. In fact, this may occur if the links are simply
busy rather than being faulty. One solution is to re-try from this point. However, it is possible that
this also will not succeed. At this point, we rely on the recovery mechanism referenced in
Section 2.3 to tear down the path and, if designed to do so, re-try from the source. With successive
failures to establish a path from the source, some higher level protocol is relied upon to take
appropriate action. This behavior particularly addresses messages destined for failed nodes. After
a certain number of attempts, the higher level protocol may mark the node as unreachable from
the source. While livelock is addressed in this fashion, the following theorem establishes the
deadlock freedom of TP.
Theorem 5 Two-Phase routing is deadlock-free.
Proof: Let C be set of all virtual channels, C 1 be set of deterministic channels and C 2 be set of
adaptive channels. The following situations can occur during the message routing:
- If the routing header does not encounter any faulty nodes or channels, TP routing uses DP routing
restrictions which have been shown to be deadlock-free in the fault-free network [12].
- If the routing header encounters an unsafe channel and selects a safe channel over the unsafe
channel, then no deadlock can occur since the safe adaptive channel still is contained in the set of
virtual channels C 2 and routing in this set cannot induce deadlock.
- If the routing header is forced to take an unsafe adaptive channel, then no deadlock can occur
since the unsafe channels are still in channel set C 2 and routing in C 2 cannot induce deadlock.
- If the routing header encounters a faulty node or channel and cannot route profitably and cannot
take a deterministic channel from C 1 , because it is faulty, then the routing header constructs a
detour. No deadlock can occur while building the detour because the probe can always backtrack
up to the node where the first data flit resides. No deadlock can occur in the attempt to construct a
detour because if after several re-tries, the detour cannot be constructed, the recovery mechanism
will tear down the path, thus releasing the channels being occupied by the message.
- As the detour uses only adaptive channels, channels from C 2 , no deadlock can arise in routing
the message after the detour has been constructed because, taking into account the condition to
complete a detour, the ordering between channels in the deterministic channels, C 1 , is still preserved
- Finally, the detour only uses adaptive channels from C 2 . Thus, building a detour does not prevent
other messages from using deterministic channels to avoid deadlock.
5 Architectural Support
Figure
11 illustrates the block diagram of a router that implements Two-Phase routing. This is
a modified version of a PCS router described in [1]. Each input and output physical channel has
associated with it a link control unit (LCU). The input LCU's feed a first-in-first-out (FIFO) data
input buffer (DIBU) for each virtual channel. All input control channels are multiplexed over a
single virtual channel and therefore feed a single FIFO control input buffer (CIBU). The data
FIFO's feed the inputs of the crossbar. The control FIFO's arbitrate for access to the routing control
unit (RCU). The RCU implements the two-phase routing protocol to select an output link, and
maps the appropriate input link of the crossbar to the selected output link. The modified control
flit is now sent out the RCU output arbitration unit to the appropriate control output virtual chan-
nel. The LCUs and DIBUs support SS flow control as described later in this section.
A single chip version of this router with only PCS flow control has been implemented in a
metal layer CMOS process and fabricated by MOSIS [1]. The overall design contains
over 14,000 transistors and is 0.311 cm square. The chip has 88 pins. The core logic of the
router chip consumes 55% of the chip area and the crossbar occupies 14% of the area dedicated to
the core circuitry. An additional 10% of the logic payload is devoted to the RCU.
The routing header (Figure 12) for the Two-Phase protocol consists of six fields. The first field
is the header bit field which identifies the flit as a routing header. The second field is the backtrack
field. This bit signifies whether the routing header is going towards the source (i.e., backtracking)
or towards the destination. The next field is the misroute field. It records the number of misrouting
operations performed by the routing header. Since the Two-Phase protocol must be allowed a
maximum of 6 misroutes to ensure the delivery of the message (in a network with up to 2n - 1
node faults), this field is three bits in size. The fourth field is the detour bit. This bit is used by the
control logic to determine if the message is in detour mode. If the bit is clear and the SS bit is set,
Figure
11. Overview of router chip
LCU
CPU
CPU
Data Buffer (Input/Output)
Control Buffer (Input/Output)
Data Input Bus
Data Output Bus
Control Input Bus
Control Output Bus
LEGEND
CIBU/COBU
DIBU/DOBU
LCU LCU
LCU LCU
LCU LCU
LCU
CROSSBAR
RCU
RCU
RCU
INPUT
Enable Buffers
Figure
12. Format of header flit(s)
Bit
Header Back-track
Misroute Detour Xn-offset
X2-offset
X1-offset
SS
the router generates an acknowledgment flit every time the routing header advances. Acknowledgments
are propagated over the complementary control channel. Following the detour field is the
SS bit. When the SS bit is set, SS flow control is used across every channel traversed by the
header thus setting the counter to K. The next field is actually a set of offsets, one offset for each
of the n dimensions in the k-ary n-cube. Their size depends on the size of the interconnection net-work
(i.e., the value of k).
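For illustration, the fields of Figure 12 could be packed into a single header flit as in the sketch below; the bit ordering and the 5-bit offset width (sufficient for k = 16) are assumptions made here for concreteness rather than the actual encoding of the router.
def pack_header(misroutes, backtrack, detour, ss, offsets, offset_bits=5):
    """Pack the Figure 12 fields into one integer, least significant bits first:
    bit 0 header flag, bit 1 backtrack, bits 2-4 misroute count (at most 6),
    bit 5 detour, bit 6 SS, then one offset field per dimension."""
    assert 0 <= misroutes <= 6
    flit = 1                                   # header bit
    flit |= (1 if backtrack else 0) << 1
    flit |= misroutes << 2
    flit |= (1 if detour else 0) << 5
    flit |= (1 if ss else 0) << 6
    for dim, off in enumerate(offsets):        # X1-offset ... Xn-offset
        flit |= (off & ((1 << offset_bits) - 1)) << (7 + dim * offset_bits)
    return flit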
Depending upon how the conservative phase is implemented, each physical channel will
require an unsafe channel status bit maintained in the RCU. When a routing header enters the
RCU, the input virtual channel address is used to access the unsafe channel store and the history
store. The history store maintains a record of output channels that have been searched by a back-tracking
header. Figure 13 shows the organization of the RCU. The major distinguishing features
of this router architecture are due to the support for the backtracking search done by a header. A
detailed discussion of architectural requirements for such routing can be found in [15, 1].
Associated with each virtual channel is a counter for recording acknowledgments and a register
with the value of K, the scouting distance. For a two bit counter is required for each virtual
channel. All counters are maintained in the counter management unit (CMU) in the RCU.
Figure 13. Routing control unit (decision unit, history store, unsafe store, counter management unit, inc/dec banks, channel mappings)
When a positive (negative) acknowledgment flit arrives for a virtual circuit, the CMU increments
(decrements) the counter that corresponds to the data virtual channel. If the counter value is K,
data flits are allowed to flow. Otherwise they are blocked at the DIBU as shown in Figure 14. This
is achieved by providing DIBU output enables from the RCU. Finally, the RCU does not propagate
the acknowledgment beyond the first data flit of a message.
6 Performance Evaluation
The performance of the fault-tolerant protocols was evaluated with simulation studies of message
passing in a 16-ary 2-cube with messages. The routing header was 1 flit long. The simulator
performs a time-step simulation of network operation at the flit level. The message
destination traffic was uniformly distributed. Simulation runs were made repeatedly until the 95%
confidence intervals for the sample means were acceptable (less than 5% of the mean values). The
simulation model was validated [14] using deterministic communication patterns. We use a congestion
control mechanism (similar to [3]) by placing a limit on the size of the buffer (eight buffers
per injection channel) on the injection channels. If the input buffers are filled, messages cannot
be injected into the network until a message in the buffer has been routed. A flit crosses a link in
one cycle.
The performance of TP was compared to the performance of Duato's Protocol (DP) [12]. DP
is a wormhole based routing protocol which partitions the virtual channels into two sets, adaptive
and escape. The adaptive channels permit fully adaptive minimal routing while the escape channels
are used to implement a deadlock-free sub-network. To measure the fault tolerance of TP, it
was compared with Misrouting, Backtracking with m misroutes (MB-m) [15]. MB-m is a PCS
based routing protocol which allows fully adaptive routing and up to m misroutes per virtual circuit
Figure
14. Data flit flow control (DIBU/DOBU data buffers and CIBU/COBU control buffers between routers A and B; DIBU enable lines from the RCU)
The metrics used to measure the performance of TP are average message latency and network
throughput. Average message latency is the average of the time that messages spend in the net-work
after their respective routing headers have been injected into the network until the time when
the tail flit is consumed by the destination node. Network throughput is defined as the total number
of flits delivered divided by the number of nodes in the network and the total simulation time
in clock cycles.
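In symbols, with F the total number of flits delivered, N the number of nodes and T the simulated time in clock cycles, the second metric is simply
\text{throughput} = F / (N \cdot T) \ \text{flits/node/cycle}.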
When no faults are present in the network, TP routing uses the DP routing restrictions and K = 0.
This results in performance that is identical to that of DP. The fault performance of TP is
evaluated with a configuration of TP which uses WS flow control (K = 0) up to
the faulty regions, i.e., does not use unsafe channels, and then uses a misrouting backtracking
search to construct detours when the header cannot advance.
6.1 Static Faults
Figure
15 is a plot of the latency-throughput curves of TP and MB-m with 1, 10, and 20 failed
nodes randomly placed throughout the network. While the theorems developed in this paper
depend on the number of faults being less than the degree of processing elements (i.e.,
connected k-ary n-cube), the plots show the performance of TP for larger
values of faults because the faults are randomly distributed throughout the network. When randomly
placed, 2n - 1 faults do not perturb the system significantly.
Figure 15. Latency versus throughput of TP and MB-m with 1, 10, and 20 node faults (latency in clock cycles, throughput in flits/cycle/node)
The performance of both routing protocols drops as the number of failed nodes increases, since the number of undeliverable
messages increases as the number of faults increase. However, the latency of TP routed messages
for a given network load remains 30 to 40% lower than that of MB-m routed messages.
MB-m degrades gracefully with steady but small drops in the network saturation traffic load
(the saturation traffic is the network load above which the average message latency increases dramatically
with little or no increase in network throughput) as the number of faults increases.
Figure
16(a) shows that the latency of messages successfully routed via MB-m remains relatively
flat regardless of the number of faults in the system. The number in parenthesis indicates the number
of messages offered/node/5000 clock cycles. However, with the network offered load at 0.2
flits/node/cycle (30 msgs/node/5000 cycles), the latency increased considerably as the number of
faults increased. This is because with a low number of faults in the system, an offered load of
0.2 flits/node/cycle is at the saturation point of the network. With the congestion control mechanism
provided in the simulator, any additional offered load is not accepted. However, at the saturation
point, any increases in the number of faults will cause the aggregate bandwidth of the
network to increase beyond saturation and therefore cause the message latency to increase and the
network throughput to drop. When the offered load was at 0.32 flits/node/cycle, the network was
already beyond saturation so the increase in the number of faults had a lesser effect.
At low to moderate loads and with a lower number of faults, the latency and throughput characteristics
of TP are significantly superior to that of MB-m. The majority of the benefit is derived
from messages in fault-free segments of the network transmitting with
trol). TP, however, performed poorly as the number of faults increased.
Figure 16. Latency and throughput of TP and MB-m as a function of node faults (curve labels give the offered load in messages/node/5000 cycles)
While saturation traffic
with one failed node was 0.32 flits/node/cycle, with 20 failed nodes it dropped to only about 17% of
the original network throughput.
ary 2-cube), 2n - 1 faults is 3. Hence 20 failed nodes is much greater than the limit set by the theorems
proposed in this paper. Figure 16 also shows the latency and throughput of TP as a function
of node failures under varying offered loads. At higher loads and increased number of faults, the
effect of the positive acknowledgments due to the detour construction becomes magnified and
performance begins to drop. This is due to the increased number of searches that the routing
header has to perform before a path is successfully established and the corresponding increase in
the distance from the source node to the destination. The trade-off in this version of TP is the
increased number of detours constructed vs. the performance of messages in fault-free sections of
the network. With larger numbers of faults, the former eventually dominates. In this region purely
conservative protocols appear to remain superior.
In summary, at lower fault rates and below network saturation loads, TP performs better than
the conservative counterpart. We also note that TP protocol used in the experiments was designed
for 3 faults (a 2 dimensional network). A relatively more conservative version could have been
configured.
Figure
17 compares the performance of the aggressive and conservative versions of TP. With only one fault in the
network and low network traffic, both versions realize similar performance. However, with high
network traffic and a larger number of faults, the aggressive TP performs considerably better.
Figure 17. Comparison of aggressive and conservative SS routing behavior (latency in clock cycles versus throughput in flits/cycle/node; 1, 10, and 20 faults)
This is due to the fact that with K > 0, substantial acknowledgment flit traffic can be introduced into the
network, dominating the effect of an increased number of detours.
6.2 Dynamic Faults
When dynamic faults occur, messages may become interrupted. In [16], a special type of control
flit called, kill flit, was introduced to permit distributed recovery. When a message pipeline is
interrupted, PEs that span the failed channel or PE release kill flits on all virtual circuits that were
affected. These kill flits follow the virtual circuits back to the source and the destination of the
messages. These control flits release any reserved buffers and notify the source that the message
was not delivered, and notify the destination to ignore the message currently being received. If we
are also interested in guaranteeing message delivery in the presence of dynamic faults, the complete
path must be held until the last flit is delivered to the destination. A message acknowledgment
sent from the destination traverses the complementary control channel, removes the path,
and flushes the copy of the message at the source. Kill flits require one additional buffer in each
control channel. This recovery approach is described in [16]. Here we are only interested in the
impact on the performance of TP. Figure 18 illustrates the overhead of this recovery and reliable
message delivery mechanism.
The additional message acknowledgment introduces additional control flit traffic into the sys-
tem. Message acknowledgments tend to have a throttling effect on injection of new messages. As
a result, TP routing using the mechanism saturates at lower network loads and delivered messages
have higher latencies. We compare the cases of i) probabilistically inserting f faults dynamically,
with ii) f/2 static faults - this is the average number of dynamic faults that would occur. From the
simulation results shown in Figure 18, we see that at low loads the performance impact of support
for dynamic fault recovery is not very significant. However, as injection rates increase, the additional
traffic generated by the recovery mechanism and the use of message acknowledgments
begins to produce a substantial impact on performance. The point of interest here is that dynamic
fault recovery has a useful range of feasible operating loads for TP protocols. In fact, this range
extends almost to saturation traffic.
6.3 Trace Driven Simulation
The true measure of the performance of an interconnection network is how well it performs
under real communication patterns generated by actual applications. The network is considered to
have failed if the program is prevented from completing due to undeliverable messages. Communication
traces derived from several different application programs: EP (Gaussian Deviates), MM
(Matrix Multiply), and MMP (another Matrix Multiply). These program traces were generated
using the SPASM execution driven simulator [25].
Communication trace driven simulations were performed allowing only randomly placed
physical link failures. Node failures would require the remapping of the processes, with the resulting
remapping affecting performance. No recovery mechanisms were used for recovery of undeliverable
messages. The traces were generated from applications executing on a 16-ary 2-cube.
The simulated network was a 16-ary 2-cube with 8 and 16 virtual channels per physical link. The
aggressive version of TP was used, i.e., no unsafe channels were used. Figure 19 shows three plots
of the probability of completion rates for the three different program traces with differing values
of misrouting (m). A trace is said to have completed when all trace messages have been delivered,
hence the probability of completion is defined as the ratio of the number of traces that were able to
execute to completion over the total number of traces run. If even one message cannot be deliv-
ered, program execution cannot complete. The results show the effect of not having recovery
mechanisms. These simulations were implemented with no re-tries attempted when a message
backtracks to the source or the node containing the first data flit. This is responsible for probabilities
of completion below 1.0 for even a small number of faults. The performance effect of the
recovery mechanism was illustrated in Figure 18. We expect that 2 or 3 re-tries will be sufficient
in practice to maintain completion probabilities of 1.0 for a larger number of faults.
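For reference, the completion probability plotted in Figure 19 is the fraction defined above,
P_{completion} = \frac{\text{number of traces that ran to completion}}{\text{number of traces run}}.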
In some instances, an increased number of misroutes resulted in poorer completion rates.
Figure 18. Comparison of TP with and without tail-acknowledgment flits (latency in clock cycles versus throughput in flits/cycle/node; 1, 10, and 20 faults)
We
believe that this is primarily due to the lack of recovery mechanisms and re-tries. Increased misrouting
causes more network resources to be reserved by a message. This may in turn increase the
probability that other messages will be forced to backtrack due to busy resources. Without re-
tries, completion rates suffer. We again see the importance of implementating relatively simple
heuristics such as a small number of re-tries.
Finally, the larger number of virtual channels offered better performance since it provided an
increase of network resources and hence reduced the probability of backtracking due to busy
links.
Figure 19. Probability of completion versus link failures for the various program traces and numbers of allowed misroutes (curves: m = 3, 4, 5 with 8 virtual channels)
6.4 Summary of Performance
Specifically, the performance evaluation provided the following insights.
. The cost of positive acknowledgments dominates the cost of detour construction, suggesting
the use of low values of K.
. Configurable flow control enables substantial performance improvement over PCS for low to
modest numbers of faults, since the majority of traffic is in the fault-free portions of the
network, realizing close to WS performance.
. For low to modest numbers of faults, the performance cost of the recovery mechanisms is
relatively low.
. At very high fault rates, more conservative protocols must still be used to ensure reliable
message delivery and application program completion.
Conclusions
Routing in the presence of faults demands a greater level of flexibility than is required in fault-free
networks. However, designing routers around the relatively rare occurrence of faults
requires that all message traffic be penalized, even the messages that route through the fault-free
portions of the network. Overhead may arise from setting up a fault-free path prior to data
transmission (PCS), from marking processors and channels as faulty to construct convex fault regions
[4,5], or from increasing the number of virtual channels used for routing messages around the faulty
components [4].
For low to moderate numbers of faults, configurable flow control mechanisms can lead to
deadlock-free fault-tolerant routing protocols whose performance is superior to that of more conservative
routing protocols with comparable reliability. In a network with a large number of faults, TP's
partially optimistic behavior results in a severe performance degradation. With conservative routing
protocols, no network resources are reserved until a path has been set up between the source
and the destination. TP does not require any complex renumbering scheme to provide fault tolerance
[19,20], does not require the construction of convex regions [4,5], does not require additional
virtual channels [4], and the dynamic fault-tolerant version of TP does not rely on time-outs [11]
or padding of messages [22]. It does, however, result in a more complex channel model, which can
affect link speeds.
The router designed to support TP requires only slightly more hardware than a router supporting
PCS [1], making the implementation very feasible. Current efforts are directed at redesigning the PCS
router to support the TP protocols. It is apparent, however, that one of the most important performance
issues is a more efficient mechanism for implementing the positive/negative acknowledgments.
We are currently evaluating an implementation that adds a few control signals to the
physical channel, modifying the physical flow control accordingly (the logical behavior remains
unchanged). By implementing the acknowledgment flits in hardware, we hope to extend the superior
low-load performance of TP to significantly higher numbers of faults.
--R
DISHA: An efficient fully adaptive deadlock recovery scheme.
A comparison of adaptive wormhole routing algorithms.
The reliable router: A reliable and high-performance communication substrate for parallel computers
High performance bidirectional signalling in VLSI systems.
A theory of fault-tolerant routing in wormhole networks
A new theory of deadlock-free adaptive routing in wormhole networks
Scouting: Fully adaptive
Computer Systems Performance Evaluation.
The effects of faults in multiprocessor networks: A trace-driven study
Adaptive routing protocols for hypercube interconnection networks.
The turn model for adaptive routing.
Cray T3D: A new dimension for Cray Research.
Compressionless routing: A framework for fault-tolerant routing
A fault-tolerant communication scheme for hypercube computers
Machine abstractions and locality issues in studying parallel systems.
--TR
--CTR
Dong Xiang, Fault-tolerant routing in hypercubes using partial path set-up, Future Generation Computer Systems, v.22 n.7, p.812-819, August 2006
Dong Xiang , Ai Chen , Jiaguang Sun, Fault-tolerant routing and multicasting in hypercubes using a partial path set-up, Parallel Computing, v.31 n.3+4, p.389-411, March/April 2005 | multiphase routing;multicomputer;fault-tolerant routing;pipelined interconnection network;message flow control;virtual channels;wormhole switching;routing protocol |
299336 | Vexillary Elements in the Hyperoctahedral Group. | In analogy with the symmetric group, we define the vexillary elements in the hyperoctahedral group to be those for which the Stanley function is a single Schur Q-function. We show that the vexillary elements can again be determined by pattern avoidance conditions. These results can be extended to include the root systems of types A, B, C, and D. Finally, we give an algorithm for multiplication of Schur Q-functions with a superfied Schur function and a method for determining the shape of a vexillary signed permutation using jeu de taquin. | Introduction
The vexillary permutations in the symmetric group have interesting connections with
the number of reduced words, the Littlewood-Richardson rule, Stanley symmetric func-
tions, Schubert polynomials and the Schubert calculus. Lascoux and Schützenberger [14]
have shown that vexillary permutations are characterized by the property that they avoid
any subsequence of length 4 with the same relative order as 2143. Macdonald has given
a good overview of vexillary permutations in [16]. In this paper we propose a definition
for vexillary elements in the hyperoctahedral group. We show that the vexillary elements
can again be determined by pattern avoidance conditions.
We will begin by reviewing the history of the Stanley symmetric functions and establishing
our notation. We have included several propositions from the literature that
we will use in the proof of the main theorem. In Section 2 we will define the vexillary
elements in the symmetric group and the hyperoctahedral group. We state and prove
that the vexillary elements are precisely those elements which avoid different patterns
of lengths 3 and 4. Due to the quantity of cases that need to be analyzed we have used
a computer to verify a key lemma in the proof of the main theorem. The definition of
vexillary can be extended to cover the root systems of type A, B, C, and D; in all four
cases the definition is equivalent to avoiding certain patterns. In Section 3 we give an
algorithm for multiplication of Schur Q-functions with a superfied Schur function. In
Section 4 we outline a method for determining the shape of a signed permutation using
jeu de taquin. We conclude with several open problems related to vexillary elements in
the hyperoctahedral group.
Let S_n be the symmetric group whose elements are permutations written in one-line
notation as [w_1, w_2, ..., w_n], generated by the adjacent transpositions σ_i for 1 ≤ i ≤ n−1,
which, acting on the right, interchange the entries in positions i and i+1. Let B_n be the
hyperoctahedral group (or signed permutation group). The elements of B_n are permutations
with a sign attached to every entry. We use the compact notation in which a bar is written
over an entry with a negative sign. B_n is generated by the adjacent transpositions σ_i for
1 ≤ i ≤ n−1, along with σ_0, which acts on the right by changing the sign of the first element.
If w can be written as a product of the generators σ_{a_1} σ_{a_2} ... σ_{a_p} and p is minimal, then
the concatenation of the indices a_1 a_2 ... a_p is a reduced word for w, and p is the length
of w, denoted l(w). Let R(w) be the set of all reduced words for w. The signed (or
unsigned) permutations [w_1, ..., w_n] and [w_1, ..., w_n, n+1] have the same set
of reduced words. For our purposes it will be useful to consider these signed permutations
as the same in the infinite groups S_∞ = ∪_n S_n and B_∞ = ∪_n B_n.
Date: June 24, 1996.
The first author is supported by the National Science Foundation and the University of California
Presidential Postdoctoral Fellowship.
Let s_λ be the Schur function of shape λ and let Q_λ be the Schur Q-function of shape λ.
See [15] for definitions of these symmetric functions.
Definition 1. For w ∈ S_n, define the S_n Stanley symmetric function G_w as the sum, over all
reduced words a = a_1 a_2 ... a_p ∈ R(w) and all sequences b ∈ A(D(a)), of the monomials
x_{b_1} x_{b_2} ... x_{b_p}, where A(D(a)) is the set of all weakly increasing sequences b_1 ≤ b_2 ≤ ... ≤ b_p
such that if b_{k-1} = b_k then we do not have a_{k-1} > a_k (i.e., no descent in the corresponding
reduced word).
For w ∈ B_n, define the C_n Stanley symmetric function F_w by the analogous sum over
a ∈ R(w) and b ∈ A(P(a)), where A(P(a)) is the set of all weakly increasing sequences with
no peak in the corresponding reduced word.
In [20], Stanley showed that G_w is a symmetric function and used it to express the
number of reduced words of a permutation w in terms of f^λ, the number of standard
tableaux of shape λ, namely
|R(w)| = Σ_λ α_w^λ f^λ,   (1.3)
where α_w^λ is the coefficient of s_λ in G_w. Bijective proofs of (1.3) were given independently
by Lascoux and Schützenberger [13] and Edelman and Greene [4]. Reiner and Shimozono
[18] have given a new interpretation of the coefficients α_w^λ in terms of D(w)-peelable
tableaux.
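To make the left-hand side of (1.3) concrete, here is a small brute-force sketch (ours, not the authors')
that enumerates the reduced words R(w) of an unsigned permutation by peeling off right descents;
representing w as a Python tuple in one-line notation is an assumption made only for illustration.

def reduced_words(w):
    """Return the set of reduced words of w, as tuples of indices i (one per s_i)."""
    w = tuple(w)
    if all(w[i] < w[i + 1] for i in range(len(w) - 1)):
        return {()}                       # the identity has only the empty word
    words = set()
    for i in range(len(w) - 1):
        if w[i] > w[i + 1]:               # right descent between positions i+1 and i+2
            shorter = w[:i] + (w[i + 1], w[i]) + w[i + 2:]
            for r in reduced_words(shorter):
                words.add(r + (i + 1,))   # 1-based index of the generator
    return words

# Example: [2, 1, 4, 3] has exactly the two reduced words 1 3 and 3 1.
print(sorted(reduced_words((2, 1, 4, 3))))   # [(1, 3), (3, 1)]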
Stanley also conjectured that there should be an analog of (1.3) for B_n. This conjecture
was proved independently by Haiman [7] and Kraśkiewicz [8] in the following form:
|R(w)| = Σ_λ β_w^λ g^λ,   (1.4)
where g^λ is the number of standard tableaux on the shifted shape λ, and β_w^λ are the
coefficients of Q_λ when F_w is expanded in terms of the Schur Q-functions.
The Stanley symmetric functions can also be defined using the nilCoxeter algebras of S_n
and B_n, respectively (see [5] and [6]). The relationship between Kraśkiewicz's proof of (1.4)
and the B_n Stanley symmetric functions is explored in [11]. See also [2, 10, 23] for other
connections to Stanley symmetric functions. The functions F_w are usually referred to as
the Stanley symmetric functions of type C because they are related to the root systems
of type C. The Weyl groups for the root systems of type B and C are isomorphic, so we
can study the group B_n by studying either root system. We extend the results of the
main theorem to the root systems of type B and D at the end of Section 2.
We will use these symmetric functions to define vexillary elements in S_n and B_n.
The Stanley functions F_w can easily be computed using Proposition 1.1 below, which
is stated in terms of special elements in B_n. There are two types of "transpositions"
in the hyperoctahedral group; they correspond to reflections in the Weyl group of the
root system B_n. Let t_ij be a transposition of the usual type, i.e., it interchanges the
entries in positions i and j, and let s_ij be a transposition of two elements that also
switches their signs; the element s_ii simply changes the sign of the ith element. Let τ_ij
denote a transposition of either type. A signed permutation w is said to have a descent
at r if w_r > w_{r+1}.
Proposition 1.1. [1] The Stanley symmetric functions of type C satisfy the following
recursive formula:
F_w = Σ F_{w t_rs t_ir} + Σ F_{w t_rs s_ir},   (1.5)
where the first sum is over 0 < i < r with l(w t_rs t_ir) = l(w), the second sum is over those i
with l(w t_rs s_ir) = l(w), r is the last descent of w, and s is the largest position such that
w_s < w_r. The recursion terminates when w is strictly increasing, in which case F_w = Q_λ,
where λ is the partition obtained by arranging {|w_i| : w_i < 0} in decreasing order.
For example, repeatedly expanding the right-hand side of (1.5) eventually produces terms indexed
by strictly increasing signed permutations; the signed permutation [-4, -3, 1, 2] is strictly
increasing, so its Stanley function is Q_(4,3).
Note that l(w t_rs) always equals l(w) − 1 in Proposition 1.1, because of the choice of r
and s. If l(w t_rs τ_ir) = l(w), then l(w t_rs τ_ir) = l(w t_rs) + 1. The reflections which increase
the length of w t_rs by exactly 1 are characterized by the following two propositions.
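The length conditions appearing in the recursion can be checked directly. The sketch below is ours;
it uses one standard formula for l(w) consistent with the generators σ_0, ..., σ_{n-1} above (the
number of inversions of w_1 ... w_n plus the sum of |w_i| over the negative entries), and it encodes
signed permutations as tuples of non-zero integers purely for illustration.

def length(w):
    """l(w) for a signed permutation encoded as a tuple of non-zero integers."""
    inv = sum(1 for i in range(len(w)) for j in range(i + 1, len(w)) if w[i] > w[j])
    return inv + sum(-x for x in w if x < 0)

def t(w, i, j):
    """w * t_ij: exchange the entries in positions i and j (1-based)."""
    w = list(w)
    w[i - 1], w[j - 1] = w[j - 1], w[i - 1]
    return tuple(w)

# l([-2, 1]) = 2; exchanging the two entries raises the length by exactly 1.
w = (-2, 1)
print(length(w), length(t(w, 1, 2)))   # 2 3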
Proposition 1.2 ([17]). If w ∈ S_∞ or B_∞ and i < j, then l(w t_ij) = l(w) + 1 if and only
if w_i < w_j
and no k exists such that i < k < j and w_i < w_k < w_j.
Proposition 1.3. [1] If w 2 B1 , and i - j, then l(ws ij only if
and no k exists such that either of the following are true:
2. Main Results
In this section we give the definition of the vexillary elements in S_n and B_n. Then we
present the main theorem. The proof follows after several lemmas.
Definition 2. If w ∈ S_n, then w is vexillary if G_w = s_λ for some partition λ. Similarly,
if w ∈ B_n, then w is vexillary if F_w = Q_λ for some partition λ with distinct
parts.
It follows from the definition of the Stanley symmetric functions and from (1.3) and (1.4) that if w is
vexillary then the number of reduced words for w is the number of standard tableaux of
a single shape (unshifted for w ∈ S_n, shifted for w ∈ B_n).
For S_n, this definition is equivalent to the original definition of vexillary given by
Lascoux and Schützenberger in [14]. They showed that vexillary permutations w are
characterized by the condition that no subsequence w_a w_b w_c w_d with a < b < c < d exists
such that w_b < w_a < w_d < w_c. This property is usually referred to as 2143-avoiding. Lascoux and
Schützenberger also showed that the Schubert polynomial of type A_n indexed by w is a
flagged Schur function if and only if w is a vexillary permutation. One might ask if the
Schubert polynomials of type B, C or D indexed by a vexillary element could be written
in terms of a "flagged Schur Q-function."
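The 2143-avoidance characterization just quoted is easy to test directly for ordinary permutations;
a short sketch of ours, with w given in one-line notation as a tuple:

from itertools import combinations

def is_vexillary_type_A(w):
    """True iff no positions a < b < c < d satisfy w_b < w_a < w_d < w_c."""
    return not any(w[b] < w[a] < w[d] < w[c]
                   for a, b, c, d in combinations(range(len(w)), 4))

# [2, 1, 4, 3] contains the pattern 2143, so it is not vexillary.
print(is_vexillary_type_A((2, 1, 4, 3)), is_vexillary_type_A((1, 3, 2, 4)))   # False True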
Many other properties of permutations can be given in terms of pattern avoidance. For
example, the reduced words of 321-avoiding [3] permutations all have the same content,
and a Schubert variety in SL_n/B is smooth if and only if it is indexed by a permutation
which avoids the patterns 3412 and 4231 [9]. Also, Julian West [24] and Simion
and Schmidt [19] have studied pattern avoidance more generally and given formulas for
computing the number of permutations which avoid combinations of patterns. Recently,
Stembridge [23] has described several properties of signed permutations in terms of pattern
avoidance as well.
We will define pattern avoidance in terms of the following function, which flattens any
subsequence into a signed permutation.
Definition 3. Given any sequence a_1 a_2 ... a_k of distinct non-zero real numbers, define
fl(a_1, a_2, ..., a_k) to be the unique element b_1 b_2 ... b_k ∈ B_k such that
• both a_j and b_j have the same sign for every j;
• for all i, j, we have |b_i| < |b_j| if and only if |a_i| < |a_j|.
For example, fl(-6, 3, -7) = [-2, 1, -3], so a signed permutation containing the subsequence
-6, 3, -7 does not avoid the pattern [-2, 1, -3].
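The flatten map and the induced pattern-avoidance test are easy to express in code. The sketch below
is ours, not the authors' implementation; signed permutations and patterns are encoded as tuples of
non-zero integers (negative entries playing the role of barred ones), and the forbidden patterns of
(2.1) are supplied as a parameter.

from itertools import combinations

def fl(seq):
    """Flatten: same signs, and |b_i| < |b_j| exactly when |a_i| < |a_j|."""
    order = sorted(range(len(seq)), key=lambda i: abs(seq[i]))
    rank = [0] * len(seq)
    for r, i in enumerate(order, start=1):
        rank[i] = r
    return tuple(r if a > 0 else -r for r, a in zip(rank, seq))

def avoids(w, patterns, lengths=(3, 4)):
    """True iff no subsequence of w of the given lengths flattens to a pattern."""
    pats = {tuple(p) for p in patterns}
    return not any(fl(sub) in pats
                   for k in lengths for sub in combinations(w, k))

# fl(-6, 3, -7) = (-2, 1, -3): the magnitudes get ranks 2, 1, 3, keeping the signs.
print(fl((-6, 3, -7)))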
Theorem 1. An element w ∈ B_∞ is vexillary if and only if every subsequence of length
4 in w flattens to a vexillary element in B_4. In particular, w is vexillary if and only if it
avoids the patterns listed in (2.1).
This list of patterns was conjectured in [10]. Due to the large number of non-vexillary
patterns in (2.1) we have chosen to prove the theorem in two steps. First, we have verified
that the theorem holds for B_6; see Lemma 2.1. Second, we show that any counterexample
in B_∞ would imply a counterexample in B_6.
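The exhaustive first step can be organized as in the following sketch (ours). It reuses the avoids()
function from the previous sketch, and is_vexillary is a hypothetical stand-in for an independent
test, for instance one that expands F_w via Proposition 1.1; it is not implemented here.

from itertools import permutations, product

def signed_permutations(n):
    """Yield all 2^n * n! elements of B_n as tuples of non-zero integers."""
    for p in permutations(range(1, n + 1)):
        for signs in product((1, -1), repeat=n):
            yield tuple(s * x for s, x in zip(signs, p))

def verify(n, patterns, is_vexillary):
    """Check that pattern avoidance and vexillarity agree on all of B_n."""
    for w in signed_permutations(n):
        assert avoids(w, patterns) == is_vexillary(w), w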
Lemma 2.1. Let w ∈ B_6. Then w is vexillary if and only if it does not contain any
subsequence of length 3 or 4 which flattens to a pattern in (2.1).
See the appendix for an outline of the code used to verify Lemma 2.1.
Lemma 2.2. Let w be any signed permutation. Suppose w
is a subsequence
of w and let
). Then the following statements hold:
1. If the last decent of w appears in position i r 2 fi then the last descent of
u will be in position r.
2. If in addition, w i s
and i s is the largest index in w such that this is true then
s is the largest index in u such that this is true.
3. If
then
and - jk are transpositions
of the same type.
One can check the facts above follow directly from the definition of the flatten function.
Lemma 2.3. For any v 2 B1 and any 0 there exists
an index k such that Similarly, if
there exists an index k such that either
Proof. If l(vt ir pick k such that v k is the largest value in fv k rg.
Then no j exists such that k
Say l(vs ir r such that v k chose k such that v k
is the largest value in fv k rg. Then no j exists such that k
1. On the other hand, if no such k exists, then choose
k such that v k is the smallest value in fv k ? \Gammav ig. Then no exists such
exists such that \Gammav r
Lemma 2.4. Given any w 2 B1 and any subsequence of w, say w
Similarly, if
Proof. If
so since the flatten map preserves the
relative order of the elements in the subsequence and signs. Therefore, l(vt jk
If
exists such that w i j
. This in
turn implies that no exists such that
and \Gammaw i k
so
since the flatten map preserves the relative order of the elements in the subsequence
and signs. Also, if
Therefore, l(vs jk
exists such that
, and no exists such that \Gammaw i k
. This in turn
implies that no exists such that \Gammav exists such that
Lemma 2.5. Given any w 2 B1 , if w is non-vexillary then w contains a subsequence
of length 4 which flattens to a non-vexillary element in B 4 .
Proof. Since w is non-vexillary then either Fw expands into multiple terms on the first
step of the recurrence in (1.5) or else non-vexillary. Assume
the first step of the recurrence gives
other terms
1g be an order preserving map onto the 4 smallest
distinct numbers in the range. Let w
fore, the recursion implies
Hence, w is not vexillary, and it follows that w contains the non-vexillary subsequence
If on the other hand the first step of the recursion gives rs - ir
and v is not vexillary. Assume, by induction on the number of steps until the recurrence
branches into multiple terms that v contains a non-vexillary subsequence say v a v b v c v d . If
is exactly the same non-vexillary subsequence. So we
can assume the order of the set fa; b; c; d; than or equal to 6. Let
be an order preserving map which sends the numbers 1 through 6 to the 6 smallest distinct
integers in the range. Let w
contains a non-vexillary subsequence, hence v 0 is not vexillary by
Lemma 2.1. We will use the recursion on Fw 0 to show that w 0 is not vexillary in B 6 . From
Lemma 2.2 it follows that
By Lemma 2.3, rs
possibly other terms.
Regardless of whether there are any other terms in the expansion of F_{w'}, w' is not vexillary
since v 0 is not vexillary. Again by Lemma 2.1, this implies w 0 contains a non-vexillary
subsequence of length 4, say w
h . Hence, w contains the non-vexillary subsequence
This proves one direction of Theorem 1.
Lemma 2.6. Given any w 2 B1 , if w contains a subsequence of length 4 which flattens
to a non-vexillary element in B 4 then w is non-vexillary.
Proof. Assume w is vexillary then let w be the sequence of signed permutations
which arise in expanding
using the recurrence
(1.5). This recurrence terminates when the signed permutation w (k) is strictly increasing,
hence w (k) does not contain any of the patterns in (2.1). Replace w by the first w (i) such
that w (i) contains a non-vexillary subsequence and w (i+1) does not, and let
Say w a w b w c w d is a non-vexillary subsequence in w. If
would be exactly the same non-vexillary subsequence. This contradicts our choice of v.
So we can assume that the order of the set fa; b; c; d; than or equal to 6. As
in the proof of Lemma 2.5, let
be an order preserving map onto the smallest 6 distinct numbers in the range. Let
To simplify notation, we also
contains a
non-vexillary subsequence hence w 0 is not vexillary by Lemma 2.1. As in 2.5 one can
show
contains a non-vexillary subsequence and v 0 does not there must be another
term in Fw 0 indexed by a reflection - should
note that it is possible that i must be different types of
transpositions. 1.3 and the definition of the flatten
function, we have l(wt rs - jr rs ) ? 0. By Lemma 2.3 there exists a reflection - kr
such that l(wt rs - kr rs
We must have - kr 6= - ir since -
possibly other terms.
This proves w is not vexillary contrary to our assumption.
This completes the proof of Theorem 1.
The definition of vexillary can be extended to Stanley symmetric functions of type B
and D. These cover the remaining infinite families of root systems. For these cases, we
define vexillary to be the condition that the function is exactly one Schur P-function. The
signed permutations which are B and D vexillary can again be determined by avoiding
certain patterns of length 4.
Theorem 2. An element w ∈ B_∞ is vexillary for type B if and only if every subsequence
of length 4 in w flattens to a vexillary element of type B in B_4; in particular, w is vexillary
if and only if it avoids a corresponding list of patterns. An element w ∈ D_∞ is vexillary
for type D if and only if every subsequence of length 4 avoids a corresponding list of patterns.
Note that the patterns avoided by vexillary elements of type D are not all
type D signed permutations; the list includes some elements with an odd number of
negative signs. The proof of Theorem 2 is very similar to the proof of Theorem 1 given
above. We omit the details in this abstract.
3. A rule for multiplication
Lascoux and Schützenberger noticed that the transition equation for Schubert polynomials
of vexillary permutations can be used to multiply Schur functions [16, p. 62].
L. Manivel asked if the transition equations for Schubert polynomials of types B, C, and
D could lead to a rule for multiplying Schur Q-functions. The answer is "sometimes".
There are only certain shifted shapes λ which can easily be multiplied by an arbitrary
Schur Q-function. Therefore, we have investigated a different problem. In this section
we present an algorithm for multiplication of a Schur Q-function by a superfied Schur function.
Let φ be the homomorphism from the ring of symmetric functions onto the subring
generated by the odd power sums.
The image of a Schur function under this map, φ(s_λ), is called a superfied Schur function.
The superfied Schur functions appear in connection with the Lie superalgebras [21][25].
The Stanley symmetric functions of type A and C which are indexed by (unsigned) permutations
are related via the superfication operator.
Proposition 3.1. [2][11][22] For v ∈ S_n, we have F_v = φ(G_v).
Let v = [v_1, ..., v_m] be any permutation and let w = [w_1, ..., w_n] be any signed permutation.
We denote by 1^n × v the signed permutation [1, 2, ..., n, v_1 + n, ..., v_m + n], and we
let w × v be [w_1, ..., w_n, v_1 + n, ..., v_m + n].
Lemma 3.2. For v ∈ S_∞ and w ∈ B_∞ we have F_{w×v} = F_w F_v.
Proof. From the definition (1.2) of F, when v ∈ S_∞, F_v is equal to F_{1^n×v}, since the reduced
words of 1^n × v are obtained from those of v by adding n to every letter, which does not affect
admissibility. Also, assuming n is large enough, the reduced
words for w × v are all shuffles of a reduced word for w with a reduced word for 1^n × v. One
can check that the admissible monomials in F_{w×v} are exactly the products of admissible
monomials in F_w and F_{1^n×v}, counted with their coefficients.
From Lemma 3.2 and Proposition 3.1 we have the following corollary.
Corollary 3.3. Let w ∈ B_∞ be such that F_w = Q_λ, and let v ∈ S_∞.
Then Q_λ · φ(G_v) = F_{w×v},
and F_{w×v} can be determined by the recursive formula in Proposition 1.1.
We remark that Corollary 3.3 can be used to multiply two Schur Q-functions in the
special case that one of the shapes is equivalent to a rectangle under jeu de taquin. In
this case, if a shifted shape λ is equivalent to a rectangle ρ then Q_λ = φ(s_ρ) [12]. For each
straight shape ρ one can easily choose a permutation v in S_∞ with that shape. However,
for any straight shape other than a rectangle, the expansion of φ(s_λ) will always be a
sum of more than one Q_μ. Since F_v = φ(G_v) holds only if v ∈ S_∞, the algorithm for
multiplying F_w F_v given above will not carry over to arbitrary elements of B_∞.
4. The shape of a signed permutation
Given a vexillary element w, for which straight shape λ does G_w = s_λ, or
for which shifted shape λ does F_w = Q_λ? For vexillary permutations in S_n there are several
ways to determine this shape: the transition equation [16, p. 52], inserting a single
reduced word using the Edelman-Greene correspondence [4], or rearranging the code
in decreasing order [20]. Similarly, for vexillary elements of type C one can find this
shape for the signed permutation by using the recursive formula (1.5) or by using the
Kraśkiewicz insertion [8] or Haiman procedures [7] on a single reduced word. There is
another method for computing the shape of a C_n-vexillary element using jeu de taquin.
We describe this method below.
For any standard Young tableau U of shifted shape and any standard Young tableau
V of straight shape we form a new standard shifted tableau U V by jeu de taquin as
follows:
1. Embed U into the shifted shape
2. Obtain a tableau R by filling the remaining boxes of ffi with starting from
the rightmost column and in each column from bottom to top.
3. Add j-j to each entry of V to obtain S.
4. Append R on the left side of S to obtain T .
5. Delete the box containing 1 0 in T . If the resulting tableau is not shifted, apply jeu
de taquin to fill in the box. Repeat the procedure for the box containing 2 0 and so
on until all the primed numbers are removed.
6. The resulting tableau of shifted shape is denoted U V .
We illustrate the procedure with an example. Let
Then, Steps 1 through 4 will produce the following tableau:
Deleting the boxes and applying jeu de taquin as in Step 5 gives
U
Note that different choices for V will result in different shapes for U V . If V is chosen
to be the the standard tableau with entries in the first row, in the
second row etc., we will say U V has shape - -. So, the result of combining - and -
by jeu de taquin in the example is the shape (4; 2) (2;
There is a canonical decomposition of any signed permutation into the product of a
signed permutation and an unsigned permutation with nice properties. Let w be an
element of B_n, not necessarily vexillary. Rearrange the numbers in w in increasing order
and denote this new signed permutation by u. Let v ∈ S_n be such that w = uv.
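A small sketch (ours) of this canonical decomposition, with signed permutations encoded as tuples of
non-zero integers: u is the increasing rearrangement of the entries of w, and v ∈ S_n records where
each entry of w sits in u, so that w_i = u_{v_i}.

def canonical_decomposition(w):
    """Return (u, v) with u increasing, v an unsigned permutation, and w = uv."""
    u = tuple(sorted(w))                           # entries of w in increasing order
    position = {value: i + 1 for i, value in enumerate(u)}
    v = tuple(position[value] for value in w)      # w_i = u_{v_i}
    return u, v

# Example: w = (-3, 4, -1, 2) gives u = (-3, -1, 2, 4) and v = (1, 4, 2, 3).
print(canonical_decomposition((-3, 4, -1, 2)))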
Definition 4. Given an element w of S_n, the code of w is defined to be the composition
c(w) = (c_1, ..., c_n) with c_i = #{j > i : w_j < w_i}. The shape of w is defined to be the
transpose of the partition given by rearranging the code in decreasing order.
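A short sketch (ours) of Definition 4 for an unsigned permutation in one-line notation; the last step
computes the transpose (conjugate) of the sorted code.

def code(w):
    """c_i = number of positions j > i with w_j < w_i."""
    return [sum(1 for j in range(i + 1, len(w)) if w[j] < w[i]) for i in range(len(w))]

def shape(w):
    """Transpose of the code of w rearranged in decreasing order."""
    parts = sorted((c for c in code(w) if c > 0), reverse=True)
    return [sum(1 for p in parts if p >= k)
            for k in range(1, (parts[0] if parts else 0) + 1)]

# Example: w = (3, 1, 4, 2) has code [2, 0, 1, 0] and shape [2, 1].
print(code((3, 1, 4, 2)), shape((3, 1, 4, 2)))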
It is well known that if λ is the shape of an S_n vexillary permutation v then G_v = s_λ.
Furthermore, for each standard tableau Q of shape λ there exists a unique reduced word
for v with recording tableau Q under the Edelman-Greene correspondence [4]. Also, the
reduced words for u are in bijective correspondence with the shifted standard tableaux of
shape μ.
Recall from Proposition 1.1 that if u is a strictly increasing signed permutation, we
have F_u = Q_μ, where μ is the strictly decreasing sequence given by {|u_i| : u_i < 0}, which
is the same set as {|w_i| : w_i < 0}. Therefore, it is easy to determine the shape of u.
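In code (ours, with the same tuple encoding as above), the shape of a strictly increasing signed
permutation is just the list of absolute values of its negative entries in decreasing order:

def shape_of_increasing(u):
    """Strict partition {|u_i| : u_i < 0}, listed in decreasing order."""
    return sorted((-x for x in u if x < 0), reverse=True)

# Example: u = (-4, -3, 1, 2) is strictly increasing and has shape [4, 3].
print(shape_of_increasing((-4, -3, 1, 2)))   # [4, 3]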
Definition 5. For any w ∈ B_n, let w = uv, where v ∈ S_n and u is
strictly increasing. Let μ be the shape of u, and let λ be the shape of v as an element of
S_n. Define the shape of w, denoted λ(w), to be the shape obtained by combining μ and λ
by the jeu de taquin construction described above.
In the case of vexillary signed permutations, we claim that F_w = Q_{λ(w)}, i.e., λ(w) is the
shape of w. This is true in the case when w has all its numbers in increasing order.
Before we prove the claim, here are some results that we will need. We refer the reader
to the references for their proofs.
Proposition 4.1. [2][11][22] Let w be a signed permutation with no positive numbers.
-. The shape of w is
Proposition 4.2. [11, Theorem 3.24] Let a_1 a_2 ... a_m be a reduced word for some signed
permutation w in B_n. Suppose Q and Q' are the recording tableaux that are obtained by
applying Kraśkiewicz insertion to a_1 a_2 ... a_m and to a_2 ... a_m, respectively. Then Q' can
also be obtained by deleting the box containing the entry 1 in Q, applying jeu de taquin
to turn Q into a shifted tableau, and subtracting 1 from every entry.
Theorem 3. For w ∈ B_n, the shape of w, λ(w), is the shape of some Kraśkiewicz recording
tableau for some reduced word of w. Hence Q_{λ(w)} appears in the expansion of F_w with
a non-zero coefficient.
Proof. Let uv be the canonical decomposition of w. Given
a reduced word a, denote the Kra'skiewicz recording tableau by K(a). Fix a reduced
word b of u with recording tableau U = K(b) under Kra'skiewicz insertion. Let V be
the standard tableau of straight shape with entries filled sequentially in rows from left
to right, top to bottom. Let c 2 R(v) be the reduced word which inserts under the
Edelman-Greene correspondence to V . We will actually show that the reduced word
rise to the Kra'skiewicz recording tableau U V which by definition has
the same shape as -(w).
We prove the claim by induction on the number of positive numbers in w. When there
is no positive number in w, this follows from Proposition 4.1 and there are no jeu de
taquin slides necessary.
Now suppose w has p positive numbers. Let be the
signed permutation that is the same as w except that m is signed. We can write w 0 as a
product is the arrangement of w 0 in increasing order and v is the same as in
the decomposition for w. Let
Note that u
with a be a reduced word for
z, then abc 2 R(w 0 ). Let U 0 be the recording tableau for ab 2 R(u 0 ).
positive numbers, by the induction hypothesis, the recording tableau
of the Kra'skiewicz insertion of abc 2 R(w 0 ) is given by U 0 V .
From Proposition 4.2, the tableau U can be obtained from U 0 by deleting m boxes
labeled applying jeu de taquin to fill them up. Since u and u 0 are strictly
increasing, by Proposition 1.1, we know their shapes explicitly. Note that u 0 was chosen
so that the m boxes which are vacated from the shape for u 0 in the jeu de taquin process
form a vertical strip in the (n \Gamma p)th column (since w 2 B n ). Therefore, filling these m
boxes with the next higher primed numbers we obtain the tableau T which appears in
Step 4 when computing -(w). Therefore, since U 0 V gives K(abc) and evacuation of a
from U 0 gives U then U V must give K(bc). Therefore, U V is the recording tableau
for bc 2 R(w).
From this proof one can also derive the following corollaries.
Corollary 4.3. For any vexillary element w ∈ B_n, we have F_w = Q_{λ(w)},
where λ(w) is the shape of w as in Definition 5.
Corollary 4.4. Given any w ∈ B_n, let w = uv be the canonical
decomposition of w. For any a ∈ R(u) and any b ∈ R(v) with recording tableaux U and V,
respectively, U V is the recording tableau for the reduced word ab ∈ R(w).
5. Open Problems
The vexillary permutations in S_n have many interesting properties. We would like to
explore the possibility that these properties have analogs for the vexillary elements in B_n.
1. Is there a relationship between smooth Schubert varieties in SO(2n + 1)/B and
vexillary elements? In particular, does smooth imply vexillary, as in the case of S_n?
2. Is there a way to define flagged Schur Q-functions so that the Schubert polynomial
of type B or C indexed by w is a flagged Schur Q-function if and only if w is
vexillary?
3. Are there other possible ways to define vexillary elements in B_n so that any of the
above questions can be answered?
Appendix
Below is a portion of the LISP code used to verify Theorem 1 for B_6. The calculation
was done on a Sparc 1 by running (grind-patterns 6 'c).
;; The global list of non-vexillary patterns of lengths 3 and 4 from (2.1);
;; the individual pattern entries are not reproduced here.
(setf *avoid-patterns*
      (list
        ;; ... the patterns from (2.1) go here ...
        ))

;; For every element of B_n, report an error unless avoiding all patterns of
;; lengths 3 and 4 agrees with the independent vexillarity test vex-p.
;; (vex-p, all-perm-tester, flatten-seq and all-subsequences-tester are
;; defined elsewhere in the authors' code.)
(defun grind-patterns (n type)
  (flet ((helper (perm)
           (if (not (eq (and (avoid-subsequences perm 3)
                             (avoid-subsequences perm 4))
                        (vex-p perm type)))
               (format t "ERROR::NEW PATTERN: ~a ~%" perm)
               (format t "."))))
    (all-perm-tester n #'helper)))

;; Returns T when no subsequence of the given size flattens to a member of
;; *avoid-patterns*, and NIL otherwise.
(defun avoid-subsequences (the-list size)
  (let ((results t))
    (catch 'foo
      (flet ((helper (tail)
               (when (member (flatten-seq tail) *avoid-patterns*
                             :test #'equal)
                 (setf results nil)
                 (throw 'foo nil))))
        (all-subsequences-tester (reverse the-list) size #'helper nil))
      (throw 'foo t))
    results))
--R
Transition Equations for Isotropic Flag Manifolds
Schubert polynomials for the classical groups
Some Combinatorial Properties of Schubert Polynomials
Schubert Polynomials
Schubert Polynomials and the NilCoxeter Algebra
Dual equivalence with applications
Criterion for smoothness of Schubert varieties in SL(n)
Bn Stanley
Structure de hopf de l'anneau de cohomologie et de l'anneau de grothendieck d'une variete de drapeaux
Oxford University Press
The geometry of flag manifolds
Algebraic Combin.
of Combinatorics
On the number of reduced decompositions of elements of Coxeter groups
personal communication
Permutations with forbidden sequences
A theory of shifted Young tableaux
--TR
--CTR
Bridget Eileen Tenner, On expected factors in reduced decompositions in type B, European Journal of Combinatorics, v.28 n.4, p.1144-1151, May, 2007
S. Egge , Toufik Mansour, 132-avoiding two-stack sortable permutations, Fibonacci numbers, and Pell numbers, Discrete Applied Mathematics, v.143 n.1-3, p.72-83, September 2004 | stanley symmetric function;reduced word;hyperoctahedral group;vexillary |