Innovative Software Algorithms & Tools

ABS10 Title of Talk:
Online Modeling of Accelerators and Beamlines at Fermilab
Email of the Corresponding Author:
mccrory@fnal.gov
Oral
Name(s) and Affiliation(s) of Author(s):
ELLIOTT MCCRORY, LEO MICHELOTTI, JEAN-FRANCOIS OSTIGUY (FERMILAB)
We have implemented access to beam physics models of the Fermilab accelerators and beamlines through the Fermilab control system. The models run on Unix workstations, communicating with legacy controls software via a front end redirection mechanism (the open access server), a relational database and a simple text-based protocol over TCP/IP. The clients and the server are implemented in object-oriented C++. We discuss limitations of our approach and the difficulties that arise from it. Some of the obstacles may be overcome by introducing a new layer of abstraction. To maintain compatibility with the next generation of accelerator control software currently under development at the laboratory, this layer would be implemented in Java. We discuss the implications of that choice.
ABS09 Title of Talk:
EMS: a Framework for Data Acquisition and Analysis
Email of the Corresponding Author:
nogiec@fnal.gov
Oral
Name(s) and Affiliation(s) of Author(s):
JERZY NOGIEC, JIM SIM, KELLEY TROMBLY-FREYTAG, DANA WALBRIDGE, Fermi National Accelerator Laboratory
EMS is a universal Java framework for building data analysis and test systems. The objective of the EMS project is to replace a multitude of different existing systems with a single expandable system, capable of accommodating various test and analysis scenarios and varying algorithms.

The EMS framework is based on component technology, graphical assembly of systems, introspection and flexibility to accommodate various data processing and data acquisition (hardware-oriented) components.

Core system components, common to many application domains, have been identified and designed together with the domain specific components for the magnetic measurements of accelerator magnets.

The EMS employs various modern technologies and the result is a highly portable, graphically configurable, and potentially distributed system, with the capability of parallel signal data processing, parameterized test scripting, and run-time reconfiguration.
ABS13 Title of Talk:
Stable Algorithm for Extraction of Asymmetries from the Data on Polarized Lepton-Nucleon Scattering
Email of the Corresponding Author:
gagunash@sunhe.jinr.ru
Oral
Name(s) and Affiliation(s) of Author(s):
GAGUNASHVILI NIKOLAI, Laboratory of High Energies, Joint Institute for Nuclear Research, Dubna, Russia
A new algorithm for extraction of asymmetries from polarized lepton-nucleon scattering data is proposed. The algorithm is stable against variations of the set-up acceptance and/or luminosity monitor acceptance. A statistical test for checking the data quality is also proposed.
ABS06 Title of Talk:
Search for New Physics in e mu X Data at D0 Using Sherlock: A Quasi-Model-Independent Search Strategy for New Physics
Email of the Corresponding Author:
knuteson@fnal.gov
Combined
Name(s) and Affiliation(s) of Author(s):
BRUCE KNUTESON, UC Berkeley, DAVE TOBACK, University of Maryland
We present a quasi-model-independent search for the physics responsible for electroweak symmetry breaking. We define final states to be studied, and construct a rule that identifies a set of relevant variables for any particular final state. A new algorithm ("Sherlock") searches for regions of excess in those variables and quantifies the significance of any detected excess. After demonstrating the sensitivity of the method, we apply it to the semi-inclusive channel e mu X collected in 108 pb^-1 of ppbar collisions at sqrt(s) = 1.8 TeV at the D0 experiment during 1992-1996 at the Fermilab Tevatron. We find no evidence of new high p_T physics in this sample.
ABS05 Title of Talk:
Search for New High p_T Physics in ppbar Collisions at sqrt(s) = 1.8 TeV
Email of the Corresponding Author:
knuteson@fnal.gov
Combined
Name(s) and Affiliation(s) of Author(s):
BRUCE KNUTESON, UC Berkeley, DAVE TOBACK, University of Maryland
We apply a quasi-model-independent search algorithm ("Sherlock") to search for new high p_T physics in ppbar collisions at sqrt(s) = 1.8 TeV collected by the D0 experiment during 1992-1996 at the Fermilab Tevatron.
We systematically analyze many exclusive final states, and demonstrate sensitivity to a variety of models predicting new phenomena at the electroweak scale. Results of this search will be presented.
ABS15 Title of Talk:
Simultaneous Tracking and Vertexing with Elastic Templates
Email of the Corresponding Author:
haas@fnal.gov
Oral
Name(s) and Affiliation(s) of Author(s):
ANDREW HAAS, University of Washington
The Elastic Templates algorithm uses simulated mean-field annealing and gradient descent to find near-optimal solutions of problems which are both combinatorial and continuous. For instance, in high-energy physics experiments it has been applied to the tracking problem, which is to minimize the distances between track templates and the hits belonging to each track. I will explain recent attempts to extend and improve this approach for reconstructing simulated data from the D0 experiment. Vertex templates have been added, and the algorithm simultaneously minimizes the distances between the track templates and the vertex assigned to each track.
A weight for each template is used in the algorithm, which represents the certainty that the template is really needed. Error matrices for the hits are used properly. Also, dramatic performance increases have been achieved by modifying the mean-field algorithm to take advantage of STL vectors.
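As an illustration of the mean-field annealing idea described above, here is a toy one-dimensional sketch: hits are softly assigned to templates with temperature-dependent weights, templates move to the weighted mean of their hits, and the temperature is lowered. The geometry, data and annealing schedule are invented; this is not the D0 code.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    // Illustrative mean-field annealing step for "elastic templates" in one
    // dimension: hits are softly assigned to templates with Boltzmann weights,
    // templates move to the weighted mean of their hits, and the temperature T
    // is lowered so the soft assignment gradually becomes a hard one.
    int main() {
        std::vector<double> hits      = {0.1, 0.2, 0.15, 2.9, 3.1, 3.0};
        std::vector<double> templates = {1.0, 2.0};            // initial guesses

        for (double T = 1.0; T > 1e-3; T *= 0.9) {             // annealing schedule
            std::vector<double> num(templates.size(), 0.0), den(templates.size(), 0.0);
            for (double h : hits) {
                // soft (mean-field) assignment probabilities for this hit
                std::vector<double> w(templates.size());
                double norm = 0.0;
                for (size_t a = 0; a < templates.size(); ++a) {
                    double d = h - templates[a];
                    w[a] = std::exp(-d * d / T);
                    norm += w[a];
                }
                for (size_t a = 0; a < templates.size(); ++a) {
                    num[a] += (w[a] / norm) * h;
                    den[a] += (w[a] / norm);
                }
            }
            for (size_t a = 0; a < templates.size(); ++a)      // move templates
                if (den[a] > 0) templates[a] = num[a] / den[a];
        }
        std::printf("templates: %.3f %.3f\n", templates[0], templates[1]);
        return 0;
    }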
ABS08 Title of Talk:
Worldwide Distributed Analysis and Data Grids for Next-Generation Physics Experiments
Email of the Corresponding Author:
newman@hep.caltech.edu
Combined
Name(s) and Affiliation(s) of Author(s):
HARVEY NEWMAN, Caltech
The major physics experiments of the next twenty years will break new ground in our understanding of the fundamental interactions, structures and symmetries that govern the nature of matter and spacetime. Realizing the scientific wealth of these experiments presents new problems in data access, processing and distribution, and collaboration across national and international networks, on a scale unprecedented in the history of science.
The challenges include:
- The extraction of small or subtle new physics signals from large and potentially overwhelming backgrounds,
- Providing rapid access to event samples and subsets drawn from massive data stores, rising from 100s of Terabytes in 2000 to Petabytes by 2005, to 100 Petabytes by 2010,
- Providing secure, efficient and transparent access to heterogeneous worldwide-distributed computing and data handling resources, across an ensemble of networks of varying capability and reliability,
- Tracking the state and usage patterns at each site and across sites, in order to make rapid turnaround as well as efficient utilization of global resources possible
- Providing the collaborative infrastructure that will make it possible for physicists in all world regions to contribute effectively to the analysis and the physics results, including from their home institutions.
In my talk I will provide a perspective on the key computing, networking and software issues, and the ongoing R&D aimed at building a worldwide-distributed system to meet these diverse challenges. Over the last year this concept has evolved into that of a data-intensive, hierarchical "Data Grid" of national centers linked to the principal center at the experimental site, and to regional and local centers. I will summarize the role of recent projects on distributed systems and "Grids" in the US and Europe. I will touch on the synergy between these developments and work in other fields, and briefly discuss the potential importance for scientific research and industry in the coming years.
ABS36 Title of Talk:
High Performance visual display for HENP detectors
Email of the Corresponding Author:
mcguigan@bnl.gov
Poster
Name(s) and Affiliation(s) of Author(s):
MICHAEL MCGUIGAN, Gordon Smith, John Spiletic ({mcguigan,smith3,spiletic}@bnl.gov), Information Technology Division, Brookhaven National Lab, Upton, NY 11973
A high end visual display for High Energy Nuclear Physics (HENP) detectors is necessary because of the sheer size and complexity of the detector. For BNL this display will be of special interest because of STAR and ATLAS. To load, rotate, query, and debug simulation code with a modern detector simply takes too long even on a powerful workstation. To visualize the HENP detectors with maximal performance we have developed software with the following characteristics. We develop a visual display of HENP detectors on the BNL multiprocessor visualization server at multiple levels of detail. We work with a general and generic detector framework consistent with ROOT, GAUDI etc., to avoid conflicting with the many graphics development groups associated with specific detectors like STAR and ATLAS. We exploit advanced OpenGL features such as transparency and polarized stereoscopy. We enable collaborative viewing of detectors and events by directly running the analysis in the BNL stereoscopic theatre. We construct enhanced interactive controls, including the ability to slice, search and mark areas of the detector. We incorporate the ability to make a high quality still image of a view of the detector and the ability to generate animations and fly-throughs of the detector and output these to MPEG or VRML models. We develop data compression hardware and software so that remote interactive visualization will be possible among dispersed collaborators. We obtain real time visual display for events accumulated during simulations.
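For reference, the two OpenGL features named above (transparency and stereoscopic viewing) reduce to standard calls such as the following. This GLUT-based fragment is generic and is not the BNL display code; the window title and the drawing placeholders are invented.

    #include <GL/glut.h>

    // Generic fragment showing the two OpenGL features mentioned above.
    void display() {
        // Transparency: blend incoming fragments with the frame buffer.
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

        // Quad-buffered stereo: render the scene twice, once per eye.
        glDrawBuffer(GL_BACK_LEFT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw detector geometry from the left-eye viewpoint ...

        glDrawBuffer(GL_BACK_RIGHT);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ... draw detector geometry from the right-eye viewpoint ...

        glutSwapBuffers();
    }

    int main(int argc, char** argv) {
        glutInit(&argc, argv);
        // GLUT_STEREO requests a quad-buffered visual (needs supporting hardware).
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGBA | GLUT_DEPTH | GLUT_STEREO);
        glutCreateWindow("stereo/transparency sketch");
        glutDisplayFunc(display);
        glutMainLoop();
        return 0;
    }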
ABS37 Title of Talk:
The FreeHEP and HEP Libraries for Java
Email of the Corresponding Author:
tonyj@slac.stanford.edu
Oral
Name(s) and Affiliation(s) of Author(s):
MARK DONSZELMANN - CERN, CHARLES LOOMIS - UC Santa Cruz, GARY BOWER, TONY JOHNSON and JOSEPH PERL - SLAC
Java is becoming an increasingly dominant language in many fields of scientific computing. In high-energy and nuclear physics Java is catching on more slowly, but is already in use by many experiments. The FreeHEP and HEP libraries are an attempt to reduce unnecessary duplication of effort by making common functionality available to the entire community.

The FreeHEP library started as a means of pulling common functionality from the JAS and WIRED projects into a common base library. It has now expanded beyond that immediate goal and includes:

- A vector graphics package for generating encapsulated PostScript, Scalable Vector Graphics and other graphics formats
- General GUI utilities which are not present in the base Swing library
- Jaco - a package to simplify Java access to C++ objects
- Classes for building large applications, such as a command dispatcher and an XML-based menu builder

While the FreeHEP library has been developed within HEP, it contains code which would be of general applicability outside of the field. By contrast the HEP library contains code which is more specific to high energy and nuclear physics, and can be thought of as a Java analog of the CLHEP library. Currently the HEP library consists of packages dealing with:
- 3- and 4-vectors and associated utilities
- Jet finding and event shapes
- Particle properties
- A diagnostic event generator
- An implementation of the AIDA histogram interface

Both libraries are being actively developed following an Open-Source model, using CVS for distributed code management. Developers interested in using or contributing to either library are encouraged to contact the authors.
ABS38 Title of Talk:
Graduate Student in Physics
Email of the Corresponding Author:
ernane@lcs.poli.usp.br
Oral
Name(s) and Affiliation(s) of Author(s):
ERNANE J. X. COSTA, EUVALDO F. CABRAL JR., LCS- Escola Politécnica - USP
A significant number of research groups are actively investigating the development of so-called "Brain-Computer Interface (BCI)" systems. There are various methods described for good determination and classification of the EEG signals which offer many exciting possibilities for the control of peripheral devices via computer analysis. Most effort has been concentrated in the analysis of changes in the frequency content and has been carried out using complexity measures of EEG signals. This work presents a low cost Brain-Computer Interface using short-time fractal dimension and wavelet transforms for signals recorded from a single channel EEG using only three electrodes, a single isolation amplifier and a 166 MHz PC. Wavelet transforms were used to decompose the EEG signal into six scaling components (D1, D2, ..., D6). For each decomposition the fractal dimension parameter was computed. The fractal dimension parameter can be computed by means of several methods; the most popular are the correlation dimension and the box counting methods. This approach is based on covering the curve at different scales. A basic element of size e is selected. By increasing e and computing the corresponding area of the cover, area(e), a number of pairs (e, area(e)) is obtained. A straight line is then fitted, using the least squares method, to the graph of log[area(e)/e^2] versus log(1/e). The approximate estimate of the fractal dimension is the slope of this line. The fractal dimension of each decomposition constituted the selected features of the EEG signals. The classification is then made by a three layer artificial neural network (ANN) using the fractal dimensions as input parameters.
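The slope estimate described above can be made concrete with a short sketch: given measured pairs (e, area(e)), fit log[area(e)/e^2] against log(1/e) by least squares and read off the slope. The data in this C++ sketch are synthetic; it is not the authors' EEG pipeline.

    #include <cmath>
    #include <cstdio>
    #include <utility>
    #include <vector>

    // Least-squares slope of log[area(e)/e^2] versus log(1/e); the slope is the
    // cover-based estimate of the fractal dimension described in the abstract.
    double fractalDimension(const std::vector<std::pair<double, double>>& cover) {
        double sx = 0, sy = 0, sxx = 0, sxy = 0;
        const double n = cover.size();
        for (const auto& p : cover) {
            const double e = p.first, area = p.second;
            const double x = std::log(1.0 / e);
            const double y = std::log(area / (e * e));
            sx += x; sy += y; sxx += x * x; sxy += x * y;
        }
        return (n * sxy - sx * sy) / (n * sxx - sx * sx);   // fitted slope
    }

    int main() {
        // Synthetic (e, area(e)) pairs; a real application would measure the
        // cover area of each wavelet component D1..D6 at several scales e.
        std::vector<std::pair<double, double>> cover = {
            {0.10, 0.030}, {0.05, 0.016}, {0.025, 0.0085}, {0.0125, 0.0045}};
        std::printf("estimated fractal dimension: %.2f\n", fractalDimension(cover));
        return 0;
    }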
ABS43 Title of Talk:
Faster tracking in hadron collider experiments
Email of the Corresponding Author:
n.konstantinidis@cern.ch
Oral
Name(s) and Affiliation(s) of Author(s):
NIKOS KONSTANTINIDIS, U.C. Santa Cruz, HANS DREVERMANN, CERN
Strategies for improving the speed and performance of pattern recognition in the tracking detectors of hadron collider experiments are presented. They are based on fast algorithms which aim at preselecting the hits from the underlying physics event, while filtering out hits from noise and pile up. Quantitative results in terms of timing and efficiency are presented in the context of the ATLAS experiment at the LHC; an extrapolation to other hadron collider conditions, such as the Tevatron, is also discussed.
ABS18 Title of Talk:
A Software Tool for the Online Analysis and Monitoring of the HADES Spectrometer
Email of the Corresponding Author:
vassili@lns.infn.it
Oral
Name(s) and Affiliation(s) of Author(s):
PAOLO FINOCCHIARO, DMITRY VASILIEV, Istituto Nazionale di Fisica Nucleare, Laboratori Nazionali del Sud, Catania, Italy for the HADES collaboration
A program for the online analysis and monitoring of the High Acceptance DiElectron Spectrometer (HADES) is described. HADES is to be commissioned at GSI (Darmstadt, Germany) using the highest SIS energies and beam intensities. Part of the HADES spectrometer is already in operation, allowing intensive hardware and software tests to be carried out. We present an effective software tool which, being part of the general HADES analysis framework, provides an easy way to perform direct monitoring of any particular HADES subdetector and also preliminary online analysis by means of data correlations. This tool allows histograms, and the conditions under which they must be filled, to be defined at run time. The user can either make use of a set of predefined parameters for histogramming or create his own, also at run time. Conditions can be mathematical expressions, graphical cuts or combinations of both types. There are two levels of conditions, which can be applied to the hits in a particular subdetector (for monitoring purposes) or to the event as a whole, for matching the information from different detectors and making all possible types of data correlations. The program has a powerful and ergonomic graphical user interface (GUI) which makes it easy to create and save all histogram and condition definitions, run the event loop and visualize the data. The whole package is a library of approximately 40 classes written in C++ and integrated into the ROOT framework. Several constituents of the program, mainly related to data visualization (for example making slices of two-dimensional histograms), are designed in such a way that they can be used independently in other applications.
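The run-time histogram-plus-condition idea can be illustrated with plain ROOT classes (TH1F and TFormula). The sketch below is only an illustration of the concept and is not the actual HADES library API; the histogram name, the cut string and the fake hit data are invented.

    // Minimal sketch of a histogram whose fill condition is defined at run time,
    // in the spirit of the tool described above (plain ROOT classes, not the
    // actual HADES monitoring library).
    #include "TFormula.h"
    #include "TH1F.h"
    #include "TRandom.h"

    int main() {
        TH1F hist("hEnergy", "Hit energy;E;counts", 100, 0.0, 10.0);
        // The condition string could come from the GUI at run time.
        TFormula cut("cut", "x>2 && x<8");

        TRandom rng;
        for (int i = 0; i < 10000; ++i) {
            double e = rng.Exp(3.0);               // fake subdetector hit energy
            if (cut.Eval(e) != 0.0) hist.Fill(e);  // fill only if condition holds
        }
        hist.Print();
        return 0;
    }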
ABS19 Title of Talk:
Tailorable Software Architectures in Accelerator Physics Research
Email of the Corresponding Author:
igor@will.or.jp
Oral
Name(s) and Affiliation(s) of Author(s):
IGOR MEJUEV, PFU Limited, 658-1 Tsuruma, Machida, Tokyo 194-8510, Japan, Akira Kumagai and Eiichi Kadokura, High Energy Accelerator Research Organization (KEK), Tsukuba, Japan
A tailorable software system can continue its evolution after deployment in order to adapt to particular work situations and the diverse needs of its users. End-user tailorability has been extensively researched in applied computer science from HCI and software engineering perspectives. Tailorability allows coping with flexibility requirements, decreasing the maintenance cost of software products and actively involving users in the process of software development and maintenance. In general, evolving, dynamic or diverse software requirements constitute the need for implementing end-user tailorability in computer systems. In accelerator physics research the factor of dynamic requirements is especially important, due to relatively frequent software and hardware modifications resulting in correspondingly high upgrade and maintenance costs. In this work we introduce the results of a feasibility study on implementing end-user tailorability in software for accelerator physics, focusing mainly on distributed monitoring and data analysis applications. The software prototypes used in this work are based on a generic tailoring platform (VEDICI), which allows decoupling of tailoring interfaces and runtime components. A VEDICI application is represented as a nested hierarchy of compositional markup specifications with the possibility to associate an individual tailoring component with each specification. This approach allows integrating multiple tailoring interfaces within an application instance. While representing a reusable application-independent framework, VEDICI can potentially be applied for tailoring of arbitrary compositional Web-based applications. Keywords: Tailorability, Monitoring, Data Analysis, Web-based Systems
ABS20 Title of Talk:
Event Bookkeeping for CLEO-3
Email of the Corresponding Author:
jjt@uiuc.edu
Oral
Name(s) and Affiliation(s) of Author(s):
JON J THALER, University of Illinois at Urbana-Champaign
Most aspects of particle physics data analysis share a common bookkeeping task: maintaining relationships between members of two sets of objects. For example, the tracks created by pattern recognition algorithms must be correlated with the detector hits that contribute to them. Conversely, it is important to be able to determine which tracks each hit contributes to. Similar bidirectional correlations must be established and maintained in vertex reconstruction and for event kinematics. It would simplify the software environment if this bookkeeping were managed by a single sharable package. The bookkeeping would only have to be written and debugged once, and code writers would be freed to concentrate on the algorithms. To this end, the CLEO collaboration has written software, called "Lattice," which is designed to perform the bookkeeping task described above. Lattice allows the user to add and remove connections between objects (e.g., hits on tracks). It allows data to be associated with each connection (e.g., hit contributions to track chi-squared). An object can be a member of several Lattices simultaneously. Lattice is designed to have minimal impact on existing code. It does not require any special inheritance by data classes that will be used by Lattice. The only behavioral requirement is that data objects implement an "identifier()" function, used by Lattice to distinguish objects. The Lattice interface is designed to look like STL, to allow the use of existing coding conventions. We will report on CLEO's experience with this software.
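To illustrate the bookkeeping concept (not CLEO's actual Lattice interface; the class name Connections and its methods are invented for this sketch), a toy bidirectional track-hit map in C++ might look like this:

    #include <cstdio>
    #include <map>
    #include <set>

    // Toy bidirectional bookkeeping between two sets of objects (e.g. tracks and
    // hits), illustrating the concept described above.  The real Lattice package
    // additionally stores per-connection data and mimics the STL interface.
    class Connections {
    public:
        void connect(int track, int hit) {
            hitsOfTrack_[track].insert(hit);
            tracksOfHit_[hit].insert(track);
        }
        void disconnect(int track, int hit) {
            hitsOfTrack_[track].erase(hit);
            tracksOfHit_[hit].erase(track);
        }
        const std::set<int>& hitsOf(int track) { return hitsOfTrack_[track]; }
        const std::set<int>& tracksOf(int hit) { return tracksOfHit_[hit]; }
    private:
        std::map<int, std::set<int>> hitsOfTrack_;  // track id -> hit ids
        std::map<int, std::set<int>> tracksOfHit_;  // hit id   -> track ids
    };

    int main() {
        Connections lattice;
        lattice.connect(1, 10); lattice.connect(1, 11); lattice.connect(2, 11);
        std::printf("hit 11 is shared by %zu tracks\n", lattice.tracksOf(11).size());
        return 0;
    }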
ABS22 Title of Talk:
Effects of Limited Resources in 3D Real-Time Simulation of an Extended ECHO Complex Adaptive System Model
Email of the Corresponding Author:
dominiak@webfootgames.com
Oral
Name(s) and Affiliation(s) of Author(s):
DANA DOMINIAK, Illinois Institute of Technology, FRANK RINALDO, Illinois Institute of Technology, MARTHA EVANS, Illinois Institute of Technology
John Holland proposed an evolutionary model of adaptive agents called 'ECHO'. ECHO is a first step toward mathematical theory in the field of complex adaptive systems. The existing ECHO model has been used by researchers in numerous disciplines, both to model and to explain complex system behaviors. This paper describes the effects of limited resources in a 3D simulation of an extended Holland ECHO model. The simulation shows adaptive agents moving about the ECHO landscape and interacting with other agents in real-time. A genetic algorithm is used to breed the adaptive agents. The virtual environment contains limited resources in the form of symbols. Agents develop elaborate relationships to utilize these resources through both competition and cooperation. By observing the emergence of complexity in real-time inside a virtual 3D world, researchers have a better tool by which to identify and explain complex adaptive system behavior. In addition to implementing Holland's basic ECHO model, several additions have been made including expanded resource metabolism and resource hierarchies. It is hoped that such new constructs will help to generate more realistic systems while maintaining a minimal set of underlying rules. This real-time 3D ECHO visualization allows an observer to quickly identify interesting and complex agent behaviors and resource interactions. Such a tool is a good first step in the young field of complex adaptive systems theory, a field involved in the identification, prediction, and control of emergent complex phenomena.
ABS58 Title of Talk:
Using CORBA for remote participation in large physics experiments
Email of the Corresponding Author:
B.U.Niderost@phys.uu.nl
Oral
Name(s) and Affiliation(s) of Author(s):
B.U. NIDERÖST1, A.A. GERRITSEN1, W. LOURENS1, A. TAAL1, G. KEMMERLING2, M. KORTEN2, W. KOOIJMAN3, A.A.M. OOMENS3, F. WIJNOLTZ3
As computer hardware and software evolve rapidly and the performance of the Internet increases, remote participation in physics experiments becomes feasible. Scientists still have to travel to research institutes to install and calibrate measurement equipment, but the need to be present at the experiment site during actual measurements is decreasing. To show the possibilities of remote participation, we built a demonstrator for experiments at the Textor-94 tokamak, located at the Institut für Plasmaphysik (IPP) of the Forschungszentrum Jülich in Germany. Textor-94 is the main experimental facility of the three plasma physics institutes, located in the Netherlands, Belgium, and Germany, that collaborate in the Trilateral Euregio Cluster (TEC). In addition to video conferencing, participating scientists from external institutes can use the demonstrator to browse a measurement database, which contains a large part of the existing as well as newly measured data. Data that is measured by other participating institutions and stored in legacy formats is also available in the same database. Furthermore, scientists can monitor the current status of the tokamak, and there is a tool that allows them to remotely control measurement equipment. The architecture that underlies these tools is client/server based. Our demonstration clients run in Java-enabled web-browsers (for example Internet Explorer 4.0+ or Netscape Navigator 4.07+). They are available from a secured website, and can be evaluated by interested users from anywhere in the world. The servers are C++ programs, running on a variety of computer platforms. The clients communicate with the servers via CORBA IDL interfaces. End users who are connected to the Internet can access the servers directly from within their own computer (analysis) programs. Thanks to CORBA, users can write these programs in a large variety of programming languages, and on many computer platforms. CORBA IDL also makes co-development fairly easy: several institutes developed their own clients and servers, with little communication between the institutes during the development process. Still, the clients and servers worked correctly together after only a few minor adaptations. In this paper, we will present our experience with CORBA, the advantages mentioned above, and others, and we will cover some performance related issues. We will also present ideas to improve the performance by using CORBA's new component/POA-based architecture.
ABS57 Title of Talk:
Vertex reconstruction before tracking in magnetic field
Email of the Corresponding Author:
yuyatsu@fnal.gov
Oral
Name(s) and Affiliation(s) of Author(s):
YURIY A YATSUNENKO, Joint Institute for Nuclear Research
The "Integral mathematics model of the track pattern" allows a computer to make a global overview of the whole hit array, as a human does before reconstructing trajectories. It gives the position of the primary intensive vertex, and also allows the hits of multi-vertex events to be separated (i.e. each vertex gets its own set of hits). The whole hit array can be subdivided into "regions of interest" (within some limits of transverse momentum, for instance) and the reconstruction of trajectories in each region can be done under the more reliable conditions of smaller multiplicity with respect to the total multiplicity. The recognition and analysis of jets is also possible before the tracking. The ideas have been successfully tested on D0 Monte Carlo hit data. All these possibilities simplify the subsequent tracking and could be a basis for the future development of high-level triggers.
ABS56 Title of Talk:
More performance and implementation of an OO track reconstruction model in different OO frameworks
Email of the Corresponding Author:
SIJIN@axuw01.cern.ch
Oral
Name(s) and Affiliation(s) of Author(s):
IRWIN GAINES, Fermi National Accelerator Laboratory, SIJIN QIAN, Brookhaven National Laboratory
This is an update of the report on an Object Oriented (OO) track reconstruction model which was presented at the previous AIHENP'99 in Crete, Greece. The OO model for the Kalman filtering method has been designed for high energy physics experiments at high luminosity hadron colliders. The model has been coded in the C++ programming language and has been successfully implemented in a few different OO computing environments of the CMS and ATLAS experiments at the future Large Hadron Collider (LHC) at CERN.
We shall report: (1) more performance results; (2) the implementation of the OO model in the new OO software framework "Athena" and the upgrade of the OO model itself.
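As a reminder of the underlying method (not the authors' C++ model, which handles full track states and covariance matrices), a single Kalman filter update for a one-dimensional state reads:

    #include <cstdio>
    #include <initializer_list>

    // One-dimensional Kalman filter update step, shown only to recall the method
    // underlying the track reconstruction model discussed above.
    struct State { double x; double P; };   // estimate and its variance

    State kalmanUpdate(State s, double measurement, double R /* meas. variance */) {
        double K = s.P / (s.P + R);              // Kalman gain
        s.x = s.x + K * (measurement - s.x);     // updated estimate
        s.P = (1.0 - K) * s.P;                   // updated variance
        return s;
    }

    int main() {
        State s{0.0, 1.0};
        for (double m : {0.9, 1.1, 1.05, 0.95})  // fake measurements
            s = kalmanUpdate(s, m, 0.04);
        std::printf("x = %.3f, P = %.4f\n", s.x, s.P);
        return 0;
    }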
ABS48 Title of Talk:
Reflections on a decade of object-oriented programming in accelerator physics
Email of the Corresponding Author:
michelotti@fnal.gov
Workinggroup
Name(s) and Affiliation(s) of Author(s):
LEO MICHELOTTI, Fermi National Accelerator Laboratory
In 1989, I began object-oriented programming by writing MXYZPTLK, a library of C++ classes for implementing automatic differentiation and differential algebra. This was followed quickly by BEAMLINE, a library of classes, built on top of MXYZPTLK, for modeling accelerators. At that time, C++ was a language both simple and attractive to someone looking for a "natural" way to use operator overloading. The most critical design decisions made for these libraries' first versions were (a) choosing between singly and doubly linked lists as data containers and (b) deciding whether to use inheritance or encapsulation to bind them to the classes. The maturing of C++ during the last decade forced more decisions and multiple revisions. New challenges came to the fore -- data handling, persistence, memory management, greater polymorphism -- as the developing language provided new tools -- RTTI, namespaces, templates, containers, iterators -- and programmers published increasingly sophisticated strategies for dealing with the issues. The language itself slowly became more stable, until an ANSI C++ Standard finally emerged. Good compilers and debugging tools came into existence for all platforms. The bad news was that, due to increasing complexity of the language, they were needed more than ever. The language, regrettably, is no longer simple; it has become more powerful but less approachable with the passage of time. We will recap a view of C++ programming in accelerator physics during the past decade, as reflected in my work and that of others.
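A toy illustration of the operator-overloading idea behind a library like MXYZPTLK: first-order, single-variable forward-mode automatic differentiation. The real library handles arbitrary order and many variables; the Dual type and names below are invented for this sketch and are not the library's interface.

    #include <cmath>
    #include <cstdio>

    // Toy first-order forward-mode automatic differentiation via operator
    // overloading: each value carries its derivative along with it.
    struct Dual {
        double val;   // function value
        double der;   // derivative with respect to the independent variable
    };

    Dual operator+(Dual a, Dual b) { return {a.val + b.val, a.der + b.der}; }
    Dual operator*(Dual a, Dual b) { return {a.val * b.val, a.der * b.val + a.val * b.der}; }
    Dual sin(Dual a)               { return {std::sin(a.val), std::cos(a.val) * a.der}; }

    int main() {
        Dual x{0.5, 1.0};               // independent variable: dx/dx = 1
        Dual f = x * x + sin(x);        // f(x) = x^2 + sin(x)
        std::printf("f(0.5) = %.4f, f'(0.5) = %.4f\n", f.val, f.der);
        return 0;
    }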
ABS45 Title of Talk:
KID - KLOE Integrated Dataflow
Email of the Corresponding Author:
Igor.Sfiligoi@lnf.infn.it
Oral
Name(s) and Affiliation(s) of Author(s):
IGOR SFILIGOI, Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati
KLOE is the major experiment at the INFN DAFNE Phi-factory in Frascati, Italy. Its main goal is the measurement of CP violation to an accuracy of 10^-4, and it is also capable of investigating a whole range of other physics. KLOE will acquire ~10^11 events per year at full luminosity (5x10^32 cm^-2 s^-1) at a throughput of 50 MB/s. Data will be taken for several years to come, accumulating hundreds of terabytes of raw data, acquired during hundreds of thousands of runs. The estimated total data size, including reconstructed and Monte Carlo data, will be of the order of 1 PB. Data are stored as files; millions are expected to be created and managed by the end of the experiment. The files get created on local disks, moved to tape for long term storage and recalled back as needed to (other) disk areas for analysis purposes. A Relational Database Management System (RDBMS) is used for the bookkeeping of these files. It contains both information about file content (run number, number of events, size, etc.) and current location (that is, on which tape and/or disk area a file can be found). The whole file movement system is fully automated; the files get archived to tape as soon as they are eligible (according to the policy in use at any point in time), while the recall procedure is user driven. There are several mechanisms the users can use to obtain the needed data, from simple command line tools to integrated program libraries. All of them are query based to allow maximum flexibility and ease of use. In the talk, the whole system will be presented, with particular emphasis on the user interface.
ABS67 Title of Talk:
ROOT OO model to render multi-level 3-D geometrical objects via an OpenGL layer
Email of the Corresponding Author:
fine@bnl.gov
Poster
Name(s) and Affiliation(s) of Author(s):
VALERIE FINE, Brookhaven National Laboratory and Joint Institute for Nuclear Research, RENE BRUN, FONS RADEMAKERS, CERN
A 3-D OO ROOT model allows a user to create and render rather complicated 3-D objects with the various 3-D viewer classes. At present the ROOT user can use the TPad, X3D and OpenGL layers to draw ROOT 3D objects like TNode, TShape, TVolume, TVolumeView, TPolyLine3D, TPolyMarker3D etc. These ROOT classes allow the creation of hierarchical 3D objects. Such an organization allows creating an effective OpenGL model to render the original object down to a limited number of user-defined geometry levels. By default the system renders three levels of the hierarchy. This allows very complicated objects (for example, the ATLAS detector OO model contains about 30'000'000 nodes) to be drawn and manipulated in a reasonable time on a simple PC.
This paper presents the OO model, base classes and 3D viewer classes (TVirtualGL, TKernelGL) used to render ROOT 3D objects via OpenGL on various hardware/software platforms: Unix, Windows NT, Windows 9x.
The current OpenGL ROOT viewer employs the OpenGL display list technique and provides:
1. Artificial and natural lighting,
2. Stereo view,
3. Two kinds of projection: orthographic and perspective,
4. Rotations around 5 axes,
5. Zooming,
6. Clipping,
7. Wireframe, solid, and semi-transparent views, etc.
It is used at STAR to build "custom event displays" with user-defined "Detector" and "Event" geometry objects.
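A minimal ROOT macro showing the kind of TShape/TNode hierarchy these viewers consume. The geometry, material and function name (simpleDetector) are invented for illustration, and the way the OpenGL view is selected varies with the ROOT version; this is not the STAR or ATLAS geometry code.

    // Minimal ROOT macro building a two-level TShape/TNode hierarchy;
    // generic example, not the STAR/ATLAS geometries.
    #include "TGeometry.h"
    #include "TMaterial.h"
    #include "TBRIK.h"
    #include "TNode.h"

    void simpleDetector() {
        new TGeometry("simple", "two-level example geometry");
        new TMaterial("vac", "vacuum", 0, 0, 0);

        TBRIK* hall   = new TBRIK("hall",   "outer box", "vac", 100, 100, 100);
        TBRIK* barrel = new TBRIK("barrel", "inner box", "vac", 20, 20, 60);

        TNode* top = new TNode("top", "top node", hall);
        top->cd();                     // subsequent nodes become children of "top"
        new TNode("det", "detector node", barrel, 0, 0, 0);

        top->Draw();                   // the 3-D view can then be switched to the
                                       // OpenGL viewer (canvas menu or "ogl" option,
                                       // depending on the ROOT version)
    }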
ABS68 Title of Talk:
A Component Based Architecture to Support Scientific Workflow Management.
Email of the Corresponding Author:
bakern@ecid.cig.mot.com
Oral
Name(s) and Affiliation(s) of Author(s):
NIGEL BAKER, PETER BROOKS, University of the West of England, ZOLT KOVACS, CERN Geneva
CRISTAL is a distributed scientific workflow system used in the manufacturing and production phases of HEP experiment construction at CERN. The CRISTAL project has studied the use of a description driven approach, using meta-modelling techniques, to manage the evolving needs of a large physics community. The next generation CRISTAL vision is to build an architecture based on co-operating distributed production managers. The overall production model, plan and production policies are described in a central repository. These production requirements and constraints are pushed out to the production managers using publish-subscribe mechanisms. The distributed production managers have knowledge and are able to build certain components, including system elements capable of performing computations and writing data based on previous data collected by the system. The elements are user defined, and the algorithms are stored in so-called "User Code". This paper discusses the next stage in the design and behaviour of these elements, concentrating on the specification, development, testing, deployment and versioning of these components. The first part of the paper will discuss the concepts of components and component architectures. A following section explains how components are used and versioned within the new system. The final part of the paper will describe how the workflow concept can help solve the problems of scheduling and deployment of the different versions of User Code components.
ABS69 Title of Talk:
Simulating Distributed Systems
Email of the Corresponding Author:
Iosif.Legrand@cern.ch
Oral
Name(s) and Affiliation(s) of Author(s):
HARVEY B. NEWMAN, IOSIF C. LEGRAND, California Institute of Technology
The aim of this paper is to describe the simulation framework developed within the "Models Of Networked Analysis at Regional Centers" (MONARC) project as a design and optimisation tool for large scale distributed systems. The goals are to provide a realistic simulation of distributed computing systems, customized for specific physics data processing and to offer a flexible and dynamic environment to evaluate the performance of a range of possible data processing architectures.  
An Object Oriented design, which allows an easy and direct mapping of the logical components into the simulation program and provides the interaction mechanism, offers the most flexible, extensible solution for modelling such large-scale systems. This design approach also copes with systems that may scale and change dynamically.  
A process-oriented approach for discrete event simulation has been adopted because it is well suited to describe various activities running concurrently, as well as the stochastic arrival patterns typical of this class of simulations. Threaded objects or "Active Objects" provide a natural way to map the specific behaviour of distributed data processing (and the required flows of data across the networks) into the simulation program. The program allows realistic modelling of complex data access patterns by multiple concurrent users in large scale computing systems in a wide range of possible architectures.  
This simulation program is based on Java2(TM) technology because of the support for the necessary methods and techniques needed to develop an efficient and flexible distributed process-oriented simulation. This includes a convenient set of interactive graphical presentation and analysis tools, which are essential for the development and effective use of the simulation system. 
Validation tests of the simulation tool with queuing theory and realistic client-server measurements are presented. A detailed simulation of the CMS High Level Trigger (HLT) production farm is also presented.
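For orientation only: the heart of any discrete-event simulation is a time-ordered queue of pending events. The C++ sketch below illustrates that idea; it is not MONARC code (which is process-oriented and written in Java), and the job and latency numbers are invented.

    #include <cstdio>
    #include <functional>
    #include <queue>
    #include <vector>

    // Core of a discrete-event simulation: a time-ordered queue of pending events.
    struct Event {
        double time;
        std::function<void()> action;
        bool operator>(const Event& other) const { return time > other.time; }
    };

    int main() {
        std::priority_queue<Event, std::vector<Event>, std::greater<Event>> queue;
        double now = 0.0;

        // A "job arrival" that schedules its own completion 5 time units later.
        queue.push({1.0, [&] {
            std::printf("t=%.1f: job arrives\n", now);
            queue.push({now + 5.0, [&] { std::printf("t=%.1f: job done\n", now); }});
        }});

        while (!queue.empty()) {          // advance simulated time event by event
            Event e = queue.top();
            queue.pop();
            now = e.time;
            e.action();
        }
        return 0;
    }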
ABS70 Title of Talk:
A Beamline Matching Application based on Open Source Software
Email of the Corresponding Author:
ostiguy@fnal.gov
Oral
Name(s) and Affiliation(s) of Author(s):
Jean-Francois Ostiguy, Fermi National Accelerator Laboratory
An interactive Beamline Matching application has been coded using internally developed beamline and automatic differentiation class libraries.
Various freely available components were used. In particular, the user interface is based on FLTK, a C++ toolkit distributed under the terms of the GNU Public License (GPL).
The result is an application that compiles without modifications under both X-Windows and WIN32 and offers the same look and feel under both operating environments. In this paper, we discuss some of the practical issues that were confronted and the choices that were made. In particular, we discuss object-based event propagation mechanisms, multithreading, language mixing and persistence.
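A generic FLTK fragment illustrating the callback style of event propagation mentioned above; the window layout, labels and the matchPressed handler are invented, and this is not the matching application's code.

    #include <FL/Fl.H>
    #include <FL/Fl_Window.H>
    #include <FL/Fl_Button.H>
    #include <cstdio>

    // A widget forwards its events to application code through a registered
    // callback, the event-propagation style referred to above.
    static void matchPressed(Fl_Widget* /*w*/, void* /*userData*/) {
        std::printf("match button pressed - would launch the optimizer here\n");
    }

    int main(int argc, char** argv) {
        Fl_Window window(300, 100, "matching sketch");
        Fl_Button button(100, 35, 100, 30, "Match");
        button.callback(matchPressed);   // register the handler
        window.end();
        window.show(argc, argv);
        return Fl::run();
    }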
ABS71 Title of Talk:
Singular Value Decomposition (SVD) to simplify feature recognition analysis in very large collections of images
Email of the Corresponding Author:
guillon@sourceworks.com
Oral
Name(s) and Affiliation(s) of Author(s):
FRANCIS GUILLON,DANIEL J.C MURRAY, SourceWorks Consulting Inc and PHILIP DESAUTELS ,Ereo Inc.
We present and discuss the techniques used in applying the singular value decomposition (SVD) to resolve such difficult problems as object and feature extraction from images in the context where the number of images, and of the related data needed to characterize them, becomes very large.
We will present preliminary results indicating the potential of such methods in analyzing data pertinent to large image collections where image complexity is high and where detailed, exact and realistic physical models of the complexities are yet to be written. The SVD can be seen as an empirical alternative to such physical modeling.
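The rank-k truncation that underlies this use of the SVD can be sketched in a few lines, here with the Eigen library and a random matrix standing in for the image data (the dimensions and rank are invented; this is not the authors' system).

    #include <Eigen/Dense>
    #include <iostream>

    // Rank-k approximation of a data matrix via the SVD: keep only the k largest
    // singular values/vectors.  In the application above each column would be a
    // (vectorized) image or image descriptor; here the matrix is random.
    int main() {
        const int rows = 8, cols = 6, k = 2;
        Eigen::MatrixXd A = Eigen::MatrixXd::Random(rows, cols);

        Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
        Eigen::MatrixXd Ak = svd.matrixU().leftCols(k)
                           * svd.singularValues().head(k).asDiagonal()
                           * svd.matrixV().leftCols(k).transpose();

        std::cout << "rank-" << k << " approximation error: "
                  << (A - Ak).norm() / A.norm() << std::endl;
        return 0;
    }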
ABS51
Title of Talk:
Theoretical physicist's personal software engineering in the post-Java world
Email of the Corresponding Author:
ftkachov@ms2.inr.ac.ru
Oral
Name(s) and Affiliation(s) of Author(s):
FYODOR TKACHOV, Institute for Nuclear Research of RAS, Moscow
With the general tendency towards precision measurements and sophisticated theoretical calculations and data processing in HEP, it is essential that design of algorithms and software engineering be tightly integrated into the theoretical physicist's thinking. This imposes severe requirements on the software tools (languages and application development framework) such as simplicity, expressive power, efficient compilation and suitability for numerical calculations, integral support of object- and component-oriented programming methodologies, suitability for symbolic calculations, and evolvability of software. It is possible to satisfy all these requirements. The example mostly discussed in this talk is Component Pascal (of the Pascal/Modula-2/Oberon-2 pedigree) and the RAD tool BlackBox Component Builder. For scientific computing, CP compares very favourably with Java (especially with regard to simplicity and suitability for numerical work). Its C-world analogue C# (recently proposed by Microsoft) is briefly discussed. The use of such software engineering tools results in a huge productivity increase and opens up radically new prospects.
ABS55 Title of Talk:
Evolutionary and Genetic Algorithms in Computer Vision
Email of the Corresponding Author:
dlli@nlpr.ia.ac.cn
Oral
Name(s) and Affiliation(s) of Author(s):
DALONG LI, National Laboratory of Pattern Recognition (NLPR) Institute of Automation, Chinese Academy of Sciences
A tracking system based on Eight-Neighborhood Searching (ENS) has been developed to track moving targets in complex-background visible-band or infrared image sequences. The ENS is used for target localization. The tracking system can be controlled according to the tracking state parameter, which is evaluated from the target region features. The target feature used in the algorithm is the boundary of the object. Both global and local features are computed based on the boundary. A method based on motion estimation is also compared with ENS. Our system has been tested on several visible-band image sequences, which include cars and ships. Although the camera moves freely in the sequences, accurate tracking results were obtained. The motion of the camera makes it impossible to estimate the motion of the object. Processing speed on a Pentium II 400 MHz PC was approximately 30 frames/second. The tracking system works very well on a 400-frame sequence of moving cars, in which the template is automatically updated every 20 frames.
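A minimal sketch of one eight-neighborhood search step: from the current position, score the eight surrounding cells and move to the best one. The score() function below is an invented stand-in for the boundary-feature match used by the authors; this is not their tracker.

    #include <cstdio>

    // One step of an eight-neighborhood search: from the current target position,
    // examine the 8 surrounding grid cells and move to the best-scoring one.
    // score() is a placeholder; here it simply prefers positions close to (5, 7).
    static double score(int x, int y) {
        int dx = x - 5, dy = y - 7;
        return -(dx * dx + dy * dy);
    }

    int main() {
        int x = 0, y = 0;                           // current target position
        for (int step = 0; step < 20; ++step) {
            int bestX = x, bestY = y;
            double best = score(x, y);
            for (int dx = -1; dx <= 1; ++dx)        // scan the 8 neighbours
                for (int dy = -1; dy <= 1; ++dy) {
                    if (dx == 0 && dy == 0) continue;
                    if (score(x + dx, y + dy) > best) {
                        best = score(x + dx, y + dy);
                        bestX = x + dx; bestY = y + dy;
                    }
                }
            if (bestX == x && bestY == y) break;    // local optimum reached
            x = bestX; y = bestY;
        }
        std::printf("target localized at (%d, %d)\n", x, y);
        return 0;
    }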

ORAL=20,   POSTER=3,   SPECIAL=4,   TOTAL=27
Last updated by MH on August 22, 2000