Center for Advanced Sensors David J. Russomanno, Ph.D. Professor and Chair Department of Electrical and Computer Engineering Herff College of Engineering The University of Memphis Knowledge Fusion Workshop December 1, 2005 Annapolis, MD
Center for Advanced Sensors (CAS)
[Diagram: CAS partners — NVL (NVESD), ONR, VU, UAH]
Outline • CAS Background • CAS Projects Overview • Imaging Sensors Research • Knowledge Engineering Research
Center for Advanced Sensors • Established in April 2005 via a cooperative agreement between ARL/ARO and The University of Memphis • Enhanced by over 25 years of collaboration between The University of Memphis and the U.S. Army’s NVESD, Redstone Technical Test Center (RTTC), and ARL • PIs reside in Electrical and Computer Engineering
Electrical & Computer Engineering • 120 EE and 70 CpE undergraduate students • 38% African-American • 50 MS and 15 PhD graduate students • increasing efforts to recruit US citizens into graduate programs • 11 Faculty, 1 Post-Doc, 3 Adjuncts • 8 research-oriented faculty • 3 teaching- and service-oriented faculty • ongoing search for tenure-track Assistant/Associate Professor • Research Focus Areas: • imaging sensors and electronic devices • ARL/ARO, NVESD, ONR, EOIR, ERC • biomedical imaging and devices • Whitaker Foundation, American Heart Association • intelligent information systems • NSF • Anticipate $2M in FY 05-06 research expenditures
Center for Advanced Sensors Projects at The U of M • Performance Modeling with Image Processing Enhancements – Halford, Robinson • Establish perception laboratory and provide support for models • Establish methodology for incorporating image processing techniques into existing performance models • Modify/adapt existing sensor models to account for image fusion • Model performance enhancement by anomaly detection ATRs • Performance Modeling of Advanced Architecture Systems – Griffin • Model short-to-mid range transmission in the THz • Model antenna coupling • Model optics and interaction with focal plane coupling system • Knowledge Engineering – Russomanno • Sensor ontology development • Ontology-driven algorithms • Application Development
Center for Advanced Sensors Projects at VU and UAH • VU – Bio-Optics of Vision, IR Display, Bio-Optic Sensor Electrodes – Bonds, Davidson • Research natural (living organism) visual sensory representation as performed by neural assemblies, recording isolated neural activity across a network of cells • Determine the coding of visual signals by cell populations and achieve a working view of the neural code to achieve superior electronic approaches to rapid AI night vision image signal processing • Engages multiple disciplines, including (1) visual neuroscience, (2) imaging, processing and detection, (3) automated decision making/classification, and (4) statistical analysis • UAH – Wavefront Sensors – Reardon • Existing wavefront sensors tend to be computationally intensive • Desirable to reduce or eliminate computational steps • Research utilizes a purely optical means of decomposing the wavefront into known forms
Imaging Sensor Modeling • Visible – Sunlight (Digital Video) • SWIR – Laser Illuminated • Infrared – “Thermal Glow” (FLIR) • Submillimeter – “Molecular Glow” (THz) • Millimeter & Longer – Obstruction (RADAR)
Role of Models
[Diagram: model development, model validation, and model applications across the spectrum from basic research through applied research, development, single device development, and production & fielding. Inputs include field test data, component technology models, perception testing, HITL experiments, and theory/literature. Applications include sensor concepts, design trades, technology push, test & evaluation, training, war gaming, acquisition decisions, and product improvements. Sensor performance models: NV-Therm, Acquire, Search, I2, TV, Laser.]
CAS Perception Laboratory • LightSpace 1024Z 3-dimensional monitor • Indigo Merlin LWIR and MWIR cameras • Large set of military visible, LW, and MW images • Synthetic image generation
Effects of band-limited noise and blur on minimum contrast for ID • Perception tests performed include: • TOD (Triangle Orientation Discrimination) • ID of military vehicles • Assessment of perception as a function of range for imaging sensors in various environments
Urban Operations and AT/FP • Provide the sensor design and analysis models for the military user, system designer, and the war gamer in the urban environment [TV, I2, SWIR, MWIR, and LWIR] • Two main areas: search and target ID
[Images: day and night MWIR and LWIR imagery]
Phenomenology & Radiometry • Objective: Provide state-of-the-art radiometric field measurements to the NVESD research community and other government agencies
Phenomenology • Extensively used in sensor modeling and development • Not significantly exploited once sensor is deployed • Incorporate aspects of phenomenology into sensor ontologies to enhance confidence in acquired sensor data? • Source of Background/Context Knowledge
Third Generation FLIR Modeling • Objective: To provide sensor designers, analysts, and war gamers with a physics-based, multi-spectral modeling capability in support of the next generation of FLIR sensors
[Images: Band 1, Band 2, fused image]
Knowledge Engineering Activity
[Diagram: constellations of heterogeneous sensors — airborne, weather, surveillance, chemical detectors, biological detectors, sea state — linked via network services and Sensor Web Enablement to a vast set of users and applications]
• Distributed self-describing sensors and related services • Link sensors to network and network-centric services • Common encodings, information models, and metadata for sensors and observations • Access observation data for value-added processing and decision support applications • Users on workstations, web browsers, and mobile devices
Ontologies (Prior Work in Implementing Plausible Inference* on the Semantic Web) We implemented, using Semantic Web infrastructure, a symbolic algorithm that reads/parses RDF and determines when the composition of Ri and Rj is nonempty and whether it is a subset of Ri, a subset of Rj, or neither. *see [Huhns, 89], [Russomanno, 03], [Russomanno, 06]
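The composition test described above can be sketched set-theoretically: treat each relation as a set of (domain, range) pairs, compose them, and classify the result as empty, a subset of Ri, a subset of Rj, or neither. This is a minimal illustrative sketch, not the CAS implementation; the example relation and names are hypothetical, and the real algorithm operates over parsed RDF rather than in-memory sets.

```python
def compose(ri, rj):
    """Relational composition: (a, c) is in the result iff some b
    exists with (a, b) in ri and (b, c) in rj."""
    return {(a, c) for (a, b) in ri for (b2, c) in rj if b == b2}

def classify(ri, rj):
    """Classify Ri o Rj as empty, a subset of Ri, a subset of Rj, or neither."""
    comp = compose(ri, rj)
    if not comp:
        return "empty"
    if comp <= ri:
        return "subset of Ri"
    if comp <= rj:
        return "subset of Rj"
    return "neither"

# Hypothetical causation relation: composing it with itself yields a
# pair ("spark", "smoke") that is not in the original relation.
causes = {("spark", "fire"), ("fire", "smoke")}
result = classify(causes, causes)
```

A transitively closed relation, by contrast, composes into a subset of itself, which is the kind of distinction the plausible-inference work exploits.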
Requires Richer Relation Semantics
<!-- Definition of causedBy property -->
<rdf:Property rdf:about="http://www.ee.memphis.edu/ksl/uofM_eece#causedBy">
  <rdfs:comment>causedBy(Event1, Event2) means Event1 is caused by Event2. It is the property used for object-object event causation. Note that the value "no" has been asserted for temporal because the range element precedes the domain element.</rdfs:comment>
  <rdfs:domain rdf:resource="http://www.ee.memphis.edu/ksl/uofM_eece#Event"/>
  <rdfs:range rdf:resource="http://www.ee.memphis.edu/ksl/uofM_eece#Event"/>
  <uofM_eece:composable>yes</uofM_eece:composable>
  <uofM_eece:connected>yes/no</uofM_eece:connected>
  <uofM_eece:functional>yes</uofM_eece:functional>
  <uofM_eece:homeomerous>no</uofM_eece:homeomerous>
  <uofM_eece:intangible>n/a</uofM_eece:intangible>
  <uofM_eece:intrinsic>yes/no</uofM_eece:intrinsic>
  <uofM_eece:near>n/a</uofM_eece:near>
  <uofM_eece:separable>yes</uofM_eece:separable>
  <uofM_eece:structural>n/a</uofM_eece:structural>
  <uofM_eece:temporal>no</uofM_eece:temporal>
</rdf:Property>
CAS Knowledge Engineering • Activity: Build an ontology-based, knowledge repository of representative sensors • Logical data model analysis and ontology design • Population of an ontology with sensor instances • Construction of Ubiquitous Sensing Prototype Test Bed • Objective: Tap in-house sensor expertise to develop knowledge models
Ubiquitous Sensing Environment Connecting to infrastructure; discovering, extracting, collecting, and sharing data in a dynamic and heterogeneous environment (CAS – Russomanno et al.): 1. How do we expose sensor data, metadata, capabilities, and services within a network-centric environment? ● Raw byte streams? Semantic Web? 2. How do we discover and process data from multiple sensor sources for various tasks? ● Need explicit declarations and search strategies 3. How do we utilize sensors and other info sources for automated reasoning? ● Need shared understanding of semantics 4. How do we distribute tasks and processing across the network? ● Must handle vast numbers and varieties of data sources, adaptive networks and applications
Ontologies needed across the spectrum from remote sensing to in-situ sensors
U of M’s OntoSensor* • Work in progress … attempting to take advantage of local expertise in “traditional” physics-based sensor models and phenomenology • OntoSensor defines a set of concepts, taxonomies and relations common to many sensors • Seeks to leverage Semantic Web infrastructure: OWL, OWL-S, Rules, etc. • Create sensor profiles that commit to OntoSensor with attributes, properties, and services • Deploy aspects of OntoSensor in applications ASAP • Incrementally develop “deeper” knowledge models as familiarity with physics-based sensor models evolves * see [Russomanno, 05a] [Russomanno, 05b]
OntoSensor Leverages Existing Work • Sensor Model Language (SensorML) • Open Geospatial Consortium Initiative • U of M is a (non-voting) member of the OGC • Sensor ontologists should be aware of this effort • Developed from a Software Engineering rather than a Knowledge Engineering perspective • Not an ontology (UML models with XML realization only) • No formal semantics • Loose generic model • May provide a good organizational framework within which an ontology can be created • OGC revisions are ongoing (watching from the sidelines … incorporating aspects of SensorML into OntoSensor as appropriate)
SensorML Conceptual Models* Possible SensorML Applications Include: •Coincident search for relevant data (i.e., data discovery) • On-demand processing of data products • Dynamic on-demand fusion of disparate sensor data • Pre-mission planning • Onboard applications • Autonomous operation/target recognition • SensorWeb communications of location and targets • Direct transmission of data and processing information to remote sites • Determination of sensor footprint with time • Intelligent retrieval and co-registration of sensor data • Visual fusion of disparate sensor data *SensorML material from [OGC 04-019, 2004]
Identifier & Classifier
Identifier includes a type definition (e.g., shortName, longName, serialNumber, noradID, missionID) and a codeSpace, which takes a URI. The codeSpace identifies the authority source (typically an online dictionary or registry) for the value.
Classifier provides a means of attaching several classification tags to the description. For instance, a single sensor might be classified as “remote observing”, “infrared detector”, “airborne”, “civilian”, and “atmosphere observing”. Such classifications could assist in sensor discovery.
SensorML Capabilities
The hasCapabilities property provides information that might be useful for sensor discovery, and includes properties for supportedApplication, performanceProperties, and taskableProperty.
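The discovery role of capability metadata can be sketched as a simple filter over sensor descriptions. The sensor records, field names, and values below are hypothetical stand-ins for the properties named above, not actual OntoSensor or SensorML encodings.

```python
# Hypothetical in-memory sensor descriptions, loosely mirroring the
# supportedApplication and taskableProperty capability fields.
sensors = [
    {"id": "mote-17", "supportedApplication": ["calibration", "intrusion"], "taskable": False},
    {"id": "flir-a",  "supportedApplication": ["surveillance"],             "taskable": True},
]

def discover(sensors, application, taskable=None):
    """Return IDs of sensors supporting an application, optionally
    restricted to taskable ones."""
    hits = [s for s in sensors if application in s["supportedApplication"]]
    if taskable is not None:
        hits = [s for s in hits if s["taskable"] == taskable]
    return [s["id"] for s in hits]
```

A real repository would run such queries against the populated ontology rather than Python lists, but the selection logic is the same.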
SensorML Interface
Sensors might support the IEEE-P1451 interface for low-level data transfer, or perhaps a specific Web service interface for tasking the sensor or retrieving data.
LocationModel
Each component, platform, sensor, and sample has its own local CRS (specified by the hasCRS property), which must ultimately be related to some geodetic CRS (e.g., latitude, longitude, altitude). The process by which this occurs depends on the LocationModel used.
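The chain from a local CRS to a geodetic CRS can be illustrated as two composed transforms: sensor-local coordinates into the platform frame, then platform offsets into latitude/longitude. This is a toy sketch under strong assumptions (2-D, flat-earth degree conversion, hypothetical yaw/offset/origin values); a real LocationModel would specify the actual transform sequence.

```python
import math

def local_to_platform(x, y, yaw_deg=90.0, dx=2.0, dy=0.0):
    """Rotate-and-translate sensor-local (x, y) meters into the
    platform frame (hypothetical mounting yaw and offsets)."""
    th = math.radians(yaw_deg)
    return (dx + x * math.cos(th) - y * math.sin(th),
            dy + x * math.sin(th) + y * math.cos(th))

def platform_to_geodetic(east_m, north_m, lat0=35.12, lon0=-89.94):
    """Flat-earth approximation: meters east/north of a hypothetical
    platform origin converted to degrees of latitude/longitude."""
    dlat = north_m / 111_320.0
    dlon = east_m / (111_320.0 * math.cos(math.radians(lat0)))
    return (lat0 + dlat, lon0 + dlon)

# Compose the chain for a point 10 m along the sensor's boresight.
lat, lon = platform_to_geodetic(*local_to_platform(10.0, 0.0))
```

The key design point is that each hop in the chain is independent, so swapping in a different platform (or a rigorous geodetic transform) changes one link without touching the others.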
SensorML Parameters
Parameters are used within various SensorML classes as values for properties of type dataComponent, as well as physicalPropertyTypes. Example properties include latitude, speed, stepAngle, wavelength, or timeStamp.
SensorML Response Model
ResponseModel is a particular type of ProcessModel that describes the sensor’s response to some phenomenon: the process by which a sensor measures a Phenomenon and converts that observation to an output Product. ResponseModel provides information regarding the sensor’s sensitivity to a phenomenon and the quality of its measurements.
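One way to picture what a ResponseModel encodes is a simple linear response with clipping and quantization: sensitivity and offset map the phenomenon to an output value, and the ADC resolution bounds measurement quality. All the numbers below (sensitivity, offset, bit depth, full scale) are hypothetical, and real response models are generally richer than this linear sketch.

```python
def respond(phenomenon_value, sensitivity=0.5, offset=0.1, bits=10, full_scale=5.0):
    """Toy linear sensor response: scale the phenomenon, clip to the
    sensor's range, then quantize to an integer ADC count."""
    v = offset + sensitivity * phenomenon_value   # linear response
    v = max(0.0, min(full_scale, v))              # clip to sensor range
    levels = 2 ** bits - 1                        # e.g. 1023 for a 10-bit ADC
    return round(v / full_scale * levels)
```

Even this toy form makes the model's use clear: given the declared sensitivity and range, a consumer can invert counts back to estimated phenomenon values and bound the quantization error.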
Logical Data/Knowledge Model
[Diagram: logical data/knowledge model built on the IEEE Suggested Upper Merged Ontology (SUMO)]
U of M’s OntoSensor
OntoSensor effort includes Agent Shell Development • Objectives • Locate sensors via intelligent search • Query specs/capabilities of a sensor and/or network • Task sensors via abstract service specifications • Goal: Achieve these objectives at the agent level without the agent having to deal with proprietary software designed for each individual sensor • Important for adaptive fusion algorithms without a priori knowledge of specific sensors
Summary of Accomplishments through 2nd Quarter • Logical Data Model Analysis: Analyzed the 1.0.0 (beta) specification of the Sensor Model Language (SensorML) for In-situ and Remote Sensors published by the Open GIS Consortium (OGC). • Logical Data Model Design: Started implementation of a prototype sensor ontology (OntoSensor) using the SensorML specification (in part), the IEEE Suggested Upper Merged Ontology (SUMO), International Organization for Standardization 19115, and constructs of the Web Ontology Language (OWL). • Sensor Repository Population: Skeletal data/knowledge about several sensors and motes has been instantiated. • Implementation: Coded initial version of the Semantic Web Expert System Shell (SWEXSYS) capable of querying OWL knowledge bases. Includes Dempster-Shafer, voting fusion, and other elementary fusion algorithms. • Prototype Construction: Started design and configuration of ubiquitous sensing prototype test bed.
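The Dempster-Shafer combination mentioned above can be illustrated with a toy two-sensor example: each sensor assigns belief mass to "target present" or to the whole frame (ignorance), and Dempster's rule fuses the two mass functions. This is a generic sketch of the rule, not the SWEXSYS code, and the sensor names and mass values are invented for illustration.

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions keyed by
    frozensets of hypotheses; conflicting (disjoint) products are
    discarded and the remainder renormalized."""
    raw, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            raw[inter] = raw.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    norm = 1.0 - conflict
    return {k: v / norm for k, v in raw.items()}

T = frozenset({"target"})
TH = T | frozenset({"no_target"})       # the whole frame (ignorance)
m_ir = {T: 0.6, TH: 0.4}                # hypothetical IR camera evidence
m_acoustic = {T: 0.5, TH: 0.5}          # hypothetical acoustic evidence
fused = combine(m_ir, m_acoustic)
```

The fused mass on "target" exceeds either sensor's alone, which is exactly the "increase confidence via incrementally acquired evidence" behavior the prototype aims for.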
Prototype Construction (CAS-Memphis) • Sensor nodes are built using low-cost magnetic, passive infrared, light, temperature, humidity, barometric pressure, and acoustic sensors provided by Crossbow Inc. and others • Wireless software provided by Crossbow’s mesh networking protocol • Mote packets are stored in a PostgreSQL database for subsequent extraction to Semantic Web data repositories • MWIR and LWIR cameras from FLIR, Inc. • SWEXSYS can read sensor repositories in the OWL knowledge base Possible SWEXSYS Applications: • Thermal camera calibration using wireless MEMS sensors (in situ) in the field of view for ground truth • Mapping pixels to GPS locations using wireless MEMS sensors (in situ) in the field of view for ground truth • Ontology-driven algorithms to specify sensor states based on information from base stations, the perceived environment, and the overall objective of the agent • Increased confidence in target identification via incrementally acquired evidence and/or data/knowledge fusion
[Diagram: test bed radio links at 433 MHz, 916 MHz, and 2.4 GHz]
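The pixel-to-GPS application can be sketched with the simplest possible calibration: two in-situ motes at known GPS positions, visible at known pixel coordinates, fix a scale and offset along each image axis. This toy assumes an axis-aligned, distortion-free view, and all coordinates below are invented; a fielded system would fit a full homography from more ground-truth points.

```python
def fit_axis(pix1, geo1, pix2, geo2):
    """Return a pixel -> geodetic mapping along one axis from two
    ground-truth points (hypothetical mote sightings)."""
    scale = (geo2 - geo1) / (pix2 - pix1)
    return lambda pix: geo1 + (pix - pix1) * scale

# Two hypothetical motes seen at image rows 100 and 500, with known latitudes.
pixel_to_lat = fit_axis(100, 35.10, 500, 35.14)
# Two hypothetical motes seen at image columns 50 and 450, with known longitudes.
pixel_to_lon = fit_axis(50, -89.95, 450, -89.91)

lat = pixel_to_lat(300)   # latitude estimate for a target at row 300
lon = pixel_to_lon(250)   # longitude estimate for a target at column 250
```

The same two-point trick works for the thermal-calibration application: replace geodetic coordinates with mote-reported temperatures and pixel coordinates with raw camera counts.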
Typical specs included in sensor repository that commits to OntoSensor
Knowledge Engineering 3rd Quarter Plans • Add more detail about sensors to ontology (emphasize low-cost sensors that comprise the sensor networks first, followed by more sophisticated imaging sensors) • Increase number of sensor instances/repositories that commit to OntoSensor in a network-centric environment • Explore capturing sensor services into the ontology using OWL-S
Application Development (ASAP) • Support ongoing ARL application development • Persistent Threat Detection System (PTDS) sensor markup using OntoSensor • MWIR, color video, B/W video, acoustic sensor, etc., suite on a single platform • PTDS interfaces to Counter Rocket and Mortar (CRAM) system • Project: Implement PTDS metadata spigot and ad-hoc query support using OntoSensor concepts to support future interoperability and possible fusion