
Research Seminar, Fermi National Accelerator Laboratory November 2005


Presentation Transcript


  1. Signal and Image Processing at LAPI / POLITEHNICA, 1990 - 2005. Vasile Buzuloiu. Research Seminar, Fermi National Accelerator Laboratory, November 2005.

  2. LAPI stands for "LABORATORUL DE ANALIZA SI PRELUCRAREA IMAGINILOR" (Image Processing and Analysis Laboratory), part of the Faculty of Electronics, Telecommunications and Information Technology of the Universitatea Politehnica Bucuresti (UPB).

  3. The expertise we have at LAPI in uni- and multidimensional signal processing and statistical analysis, as well as in software and algorithm development, has allowed us a welcome participation in the TDAQ second-level triggering for the future ATLAS experiment at the LHC at CERN.

  4. LAPI pre-history (1975 - 1985)
• involvement in building image analysis systems from scratch (including video ADC/DAC, memories, etc.)
• applications of our systems: remote-sensing map analysis, geological microscopic image analysis, dental X-ray image analysis, image compression methods
• an award of the Romanian Academy crowned the activity and also stopped it

  5. LAPI history (1990 - 2005): old age (1990 - 2000)
Involvement at CERN in R&D activity for the future detectors of the LHC in the framework of the LAA project (led by Prof. A. Zichichi), more specifically in the group of Dr. R.K. Bock and W. Krischer. Also collaboration with Dr. A. Marchioro on first-level triggering.

  6. LAPI history (1990 - 2005): old age (1990 - 2000) (cont.) Some results:
• the ELOISATRON international workshop (in Erice) dedicated to "Image Processing for Future High Energy Physics Detectors" (Nov. 1991); V. Buzuloiu was director of the workshop and editor of the proceedings
• a fast and precise peak finder for first-level triggering (published in the proceedings of CHEP 1992, Annecy)
• 4 participants from UPB over various periods between 1993 and 2000

  7. LAPI history (1990 - 2005): new age (2000 - 2005)
The LAPI team starts cooperating with the group of Prof. Bob Dobinson and B. Martin, working on:
• high-speed networking (10 Gb/s Ethernet) and long-haul intercontinental connections
• quality degradation in networks
• TDAQ for the second-level triggering of ATLAS

  8. LAPI history (1990 - 2005): new age (2000 - 2005) (cont.)
Research and teaching at home and elsewhere:
• cooperation with CERN on the same subjects
• research on various themes of image processing and analysis: filtering, enhancement, image databases with content indexing, color and multi-component image processing, head tracking, eye tracking, digital photography tools, medical image analysis, etc.

  9. LAPI history (1990 - 2005): new age (2000 - 2005)
Research and teaching at home and elsewhere (cont.):
• R&D in cooperation with foreign universities (Ireland, France, Germany, Italy); exchange of professors and students
• R&D in cooperation with foreign companies, e.g. for digital photography tools and eye tracking
• a yearly intensive international postgraduate school on multidimensional signal processing and analysis

  10. The ATLAS TDAQ System
[Diagram: the ATLAS detector is read out at ~40 MHz (~60 TB/s, event size ~1.5 MB); the Level 1 Trigger and detector buffers reduce this to ~100 kHz (~150 GB/s) into the ROBs; the ATLAS DataCollection network feeds the L2PUs and the SFIs at ~3 kHz (~4.5 GB/s); the Event Filter (EF) farm writes ~200 Hz (300 MB/s) to mass storage through the SFOs over the Back End network.]
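To make the cascade of rates concrete, here is a minimal sketch that reproduces the bandwidth figures on the slide from the quoted accept rates and the ~1.5 MB event size (approximate numbers from the slide, not exact specifications):

    # Sanity check of the ATLAS TDAQ rate cascade quoted on the slide:
    # each stage's bandwidth is simply (accept rate) x (event size).
    EVENT_SIZE_MB = 1.5  # approximate ATLAS event size in megabytes

    stages = {
        "Detector readout": 40e6,   # ~40 MHz bunch-crossing rate
        "Level 1 accept":   100e3,  # ~100 kHz into the ROBs
        "Level 2 accept":   3e3,    # ~3 kHz into event building (SFIs)
        "Event Filter out": 200,    # ~200 Hz to mass storage
    }

    for name, rate_hz in stages.items():
        bandwidth_mb_s = rate_hz * EVENT_SIZE_MB
        print(f"{name:18s} {rate_hz:>12,.0f} Hz -> {bandwidth_mb_s:,.0f} MB/s")

Running this reproduces the slide's figures: 60 TB/s at readout, 150 GB/s after Level 1, 4.5 GB/s after Level 2, and 300 MB/s to storage.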

  11. ATLAS DataCollection network

  12. DataCollection components

  13. No packet loss!
• The main data flow is based on a request-response mechanism.
• Message loss implies time-outs and retries at the application level, which carry a heavy performance penalty.
• TCP guarantees no message loss even if packet loss occurs in the network, but:
  • TCP does not scale to a large number of connections (~600);
  • TCP is CPU intensive;
  • TCP increases the packet rate in the network.
• UDP is therefore the preferred protocol for the main dataflow (a retry sketch follows below). Packet loss in the network then translates directly into message loss, therefore:
  • special care should be taken to minimize packet loss;
  • multicast/broadcast traffic must be avoided, as its rate is not guaranteed on all switches.
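As an illustration of the request-response pattern with application-level time-outs and retries, here is a minimal Python sketch; the function name and framing are illustrative placeholders, not the actual ATLAS dataflow code:

    # Minimal sketch of request-response over UDP with the timeout-and-retry
    # handled at the application level, as described on the slide.
    import socket

    def request_fragment(sock, addr, request, retries=3, timeout_s=0.05):
        """Send a request datagram and wait for the response, retrying on loss."""
        sock.settimeout(timeout_s)
        for attempt in range(retries):
            sock.sendto(request, addr)
            try:
                response, _ = sock.recvfrom(65536)
                return response
            except socket.timeout:
                # A lost request or response costs a full timeout before the
                # retry, which is why packet loss is so expensive here.
                continue
        raise RuntimeError(f"no response after {retries} attempts")

The sketch makes the performance argument visible: every dropped packet stalls the requester for a full timeout period, so the network itself must be engineered not to drop packets.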

  14. Understanding congestion
• Paper calculation based on queuing theory: safe average link utilization as a function of buffer size (an illustrative version follows below).
• Measurements on switches from several manufacturers with various traffic patterns: throughput / packet loss; latency; multicast/broadcast handling. Measurements performed on a 10-20% size system.
• Computer modeling of the system: model validation by cross-checking against measurements; model predictions for the full-size system.
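As an illustration of the "safe utilization versus buffer size" calculation, here is a minimal sketch assuming a simple M/M/1 queue, where the probability that the occupancy reaches B packets is rho**B; the actual paper calculation may well use a more refined model:

    # Illustrative "safe utilization vs. buffer size" estimate, assuming a
    # simple M/M/1 queue with P(occupancy >= B) = rho**B. Not necessarily
    # the model used in the actual paper calculation.
    def safe_utilization(buffer_pkts, target_overflow=1e-9):
        """Highest average link load keeping overflow probability below target."""
        return target_overflow ** (1.0 / buffer_pkts)

    for b in (16, 64, 256, 1024):
        print(f"buffer = {b:5d} packets -> safe load = {safe_utilization(b):.3f}")

Even this toy model shows the qualitative trade-off: small switch buffers force the average link load to stay low, while deep buffers allow utilization close to 1.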

  15. Contribution to the ATLAS TDAQ system
• The issues related to the use of Ethernet as the network technology have been analyzed and tested on several Ethernet switching devices.
• Gigabit Ethernet traffic generators have been programmed to emulate the output of the level-one trigger.
• The large number of these devices allowed us to optimize not only the network topology but also the traffic patterns and traffic-shaping techniques inside the system (a shaping sketch follows below).
• Thus the performance of the DataFlow has been improved in several steps, and the scalability of the system has been demonstrated up to one tenth of its final size.
• The experience acquired in running the DataFlow system enabled us to derive the requirements for the networking equipment needed in the final system.
• The Gigabit Ethernet traffic generators are currently used to evaluate the performance of sample devices from the major manufacturers.
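Traffic shaping is commonly implemented as token-bucket pacing; the sketch below illustrates that general idea only, under assumed placeholder parameters, and is not the actual DataFlow implementation:

    # Minimal token-bucket pacer, a common traffic-shaping technique.
    # Rates and burst sizes are placeholders, not tuned ATLAS values.
    import time

    class TokenBucket:
        def __init__(self, rate_bytes_s, burst_bytes):
            self.rate = rate_bytes_s
            self.capacity = burst_bytes
            self.tokens = burst_bytes
            self.last = time.monotonic()

        def wait_for(self, nbytes):
            """Block until nbytes may be sent without exceeding the shaped rate."""
            while True:
                now = time.monotonic()
                self.tokens = min(self.capacity,
                                  self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= nbytes:
                    self.tokens -= nbytes
                    return
                time.sleep((nbytes - self.tokens) / self.rate)

Pacing senders this way smooths out bursts that would otherwise overflow the shallow buffers of the concentrating switches.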

  16. The GETB platform
• ATLAS TDAQ DataFlow network: a large high-speed Layer 2 network (~700 nodes); central switches: ~250 ports, chassis-based; concentrating switches: 24 - 48 ports, pizza-box form factor.
• Packet loss or excessive latency leads to a performance drop.
• Specific performance requirements for DataFlow switches: devices need to be evaluated with realistic test scenarios (see the "Switch Features" document).
• Testing equipment on the market is not fully adequate: cost per channel is high; not enough flexibility in defining traffic patterns; Layer 4 - 7 functionality is not essential for ATLAS.

  17. GETB Platform - Hardware
[Board diagram: Altera Stratix EP1S25 FPGA with configuration flash, GPS clock input, 2 x 64 Mb SDRAM, 2 x 512 Kb SRAM, 2 Gigabit Ethernet ports and a 3.3 V PCI interface.]
• Logic utilization: approx. 85 - 90%; a single FPGA controls 2 Ethernet ports.
• Multiple projects use the GETB (GETB Tester, Network Emulator, ROS Emulator) over a common firmware / control infrastructure.
• FPGA firmware: 90% Handel-C, 10% VHDL; commercial IP cores (IP = Intellectual Property): Gigabit Ethernet MAC, PCI controller.

  18. GETB Platform - Control
[Diagram: GETB servers with GPS clock distribution, connected by Gigabit Ethernet links to the Device Under Test (DUT); a GETB client (control PC) drives the servers and controls the DUT.]
• Distributed system: 15 servers hosting 65 cards, entirely based on Python.
• Server (runs on Linux): configures the cards; monitors status / collects statistics; handles remote client requests; accesses the cards using IO-RCC (a DataFlow package).
• Client (runs on any platform): talks to the servers using XML-RPC (sketched below); runs user-defined scripts; displays statistics in a GUI; manages cards from multiple servers.
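The client-server control scheme can be illustrated with Python's standard XML-RPC support; the server URLs and the method names (configure_card, get_statistics) below are hypothetical placeholders, not the real GETB API:

    # Sketch of a control client driving GETB card servers over XML-RPC.
    # Hostnames and remote method names are hypothetical placeholders.
    import xmlrpc.client

    servers = [xmlrpc.client.ServerProxy(f"http://getb-srv{i:02d}:8000")
               for i in range(1, 16)]  # 15 servers hosting the cards

    for srv in servers:
        # Push a traffic configuration to card 0 on each server.
        srv.configure_card(0, {"frame_size": 1500, "rate_mbps": 1000})

    # Poll counters from every server, e.g. to feed a statistics GUI.
    stats = [srv.get_statistics(0) for srv in servers]

XML-RPC keeps the client platform-independent, matching the slide's claim that only the servers are tied to Linux.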

  19. GETB platform - summary
• The GETB platform provides a flexible environment for the design and development of Gigabit Ethernet applications.
• A tester able to evaluate switches for the DataFlow network has been created: 128 ports running at line speed.
• Client-server traffic emulation provides a way to test devices under realistic ATLAS conditions, evaluating each device in a worst-case scenario.
• We have a comprehensive set of test procedures to check that devices meet our requirements.

  20. Quality degradation in networks
• Measure the quality degradation introduced by the building blocks of any network (i.e. switches, routers): emergent properties of traffic differentiation and scheduling mechanisms; service fairness for traffic flows of the same class/priority.
• Study the relationship between quality degradation in networks and application-level, user-perceived quality: for Internet telephony (VoIP) applications; for bulk data transfers (FTP or TCP/IP).

  21. Network Emulation
• A high-performance network emulator is currently being built by two LAPI researchers, in collaboration with Predictable Network Solutions, Ltd.: capable of reproducing realistic network conditions (correlated loss and delay, naturally induced delay variation); implemented on a custom-designed FPGA-based platform.
• Emulate the quality degradation likely to appear in large networks (see the correlated-loss sketch below): emulate the presence of background traffic as vacations of the service facility; use a "degradation algebra" to aggregate basic models (of queues and wires) into a single model.
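One standard way to produce correlated (bursty) loss, as opposed to independent drops, is a two-state Gilbert-Elliott model; the sketch below is only illustrative, with placeholder transition probabilities, and is not the emulator's actual algorithm:

    # Two-state (Gilbert-Elliott style) loss model: losses cluster while the
    # chain sits in the "bad" state, giving correlated rather than uniform
    # drops. All probabilities below are illustrative, not measured values.
    import random

    def gilbert_losses(n, p_good_to_bad=0.01, p_bad_to_good=0.3, p_loss_bad=0.7):
        state_bad = False
        drops = []
        for _ in range(n):
            if not state_bad:
                state_bad = random.random() < p_good_to_bad
            else:
                state_bad = random.random() >= p_bad_to_good
            drops.append(state_bad and random.random() < p_loss_bad)
        return drops

    losses = gilbert_losses(100_000)
    print(f"loss rate: {sum(losses) / len(losses):.4f}")  # bursty, not uniform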

  22. Long-distance networking
• Transmission of native 10 Gigabit Ethernet over the installed base of long-distance telecommunication networks, through the ESTA EU project.
• First transatlantic connection based on the 10 GE WAN PHY technology: Oct. 2003, Geneva-Ottawa, live demo at the ITU Telecom World Conference.
• Studies for the use of remote computing farms, in real time, in the ATLAS TDAQ system. Proof of concept in 2004, on a testbed located in Switzerland, Canada, the United Kingdom, Denmark and Poland: the ATLAS online applications support a remote-farm scenario; the network connections can assure the bandwidth for this particular application.

  23. High-speed signal integrity
• Studies of signal integrity over high-speed backplanes for telecommunication equipment (the ESTA EU project).
• Simulating the behavior of different materials and connectors through mathematical models.
• Developed the software to control and operate a backplane tester device.
• Measurements on real hardware and reconciliation with the simulation models.
