Smart Receivers: the GPS Dancer project IGS Workshop 2010
Part 1 Introduction IGS Workshop 2010
Some issues in GNSS geodesy • The number of permanent GNSS sites keeps growing • More than 10,000 sites in national & regional grids • Less than 5% of reference sites have ITRF coordinates • Users cannot access the ITRF with their own receiver • Practical value of the reference frame increases if every surveyor can determine accurate ITRF coordinates • LEO data (among others) not available to IGS at short latency • political issue: giving away costly data is difficult to justify towards sponsoring agencies IGS Workshop 2010
Centralized analysis v. distributed process • IGS was created as a distributed process • Good idea: overlapping AC solutions build a larger reference frame than any single solution • Some statistical robustness is gained • How distributed is a process based on Analysis Centres? • For 10,000 receivers IGS would need ~300 AC • GNSS analysis can be distributed to a finer level • Analysis elements should become so simple that they may be added in arbitrary amounts, at minimum cost (…zero) • Political problems of LEO data access disappear if operators can process their own data but still contribute to global products IGS Workshop 2010
De-centralizing GNSS analysis (figure: the existing network of data centres (DC), analysis centres (AC), product and combination centres (PC, CC), and the receivers themselves) • Local computers of permanent GPS sites form a scalable grid computer on the internet • Combined processing capacity is orders of magnitude larger than that of all IGS AC together … at zero cost • The only missing element is the right software to do this • The GPS Dancer project develops software for distributed processing of GPS data via the internet IGS Workshop 2010
Overview • Introduction to the Dancer project (… done) • Mathematics of a distributed estimation process • Square dance internet exchange algorithm • Processing capacity on today’s internet • Dancer project status • Discussion IGS Workshop 2010
Part 2 Dancer theory IGS Workshop 2010
Basic processing characteristics Dancer performs a conventional, iterated batch least squares process • undifferenced ionosphere-free code and phase observables • observation sample rate: 30 seconds • solution arc: most recent 24 or 48 hours • solution update interval: 30 or 60 minutes Dancer is implemented as a scalable peer-to-peer internet process • Estimation process is split into identical tasks per receiver • Tasks can be collocated with the receivers but don't have to be collocated • Every receiver pulls its own weight in the solution process • Receivers work together to estimate orbits, satellite clocks, ERP There is no limit to the number of receivers that can join the solution IGS Workshop 2010
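For reference, a minimal sketch of the ionosphere-free combinations assumed above (standard textbook form; the slide itself does not spell out the formulas), with $f_1, f_2$ the two carrier frequencies, $P_i$ the code and $L_i$ the carrier-phase observables:

$$P_{IF} = \frac{f_1^2\,P_1 - f_2^2\,P_2}{f_1^2 - f_2^2}, \qquad L_{IF} = \frac{f_1^2\,L_1 - f_2^2\,L_2}{f_1^2 - f_2^2}$$

These combinations remove the first-order ionospheric delay, which scales with $1/f^2$.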
Overview of estimated parameters (*) assuming 30 active GPS satellites, 3 manoeuvres, and 80 passes per receiver IGS Workshop 2010
Local parameters are pre-eliminated (figure: block structure of the normal equations, with one global parameter block and local parameter blocks j = 1 … N) IGS Workshop 2010
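A minimal sketch of the pre-elimination step behind this slide, assuming the usual partitioning into global parameters $x_g$ and local parameters $x_{l,j}$ of receiver $j$ (the notation is illustrative, not taken from the slide):

$$\begin{pmatrix} N_{gg}^{(j)} & N_{gl}^{(j)} \\ N_{lg}^{(j)} & N_{ll}^{(j)} \end{pmatrix}\begin{pmatrix} x_g \\ x_{l,j} \end{pmatrix} = \begin{pmatrix} b_g^{(j)} \\ b_l^{(j)} \end{pmatrix} \;\Rightarrow\; \bar N^{(j)} = N_{gg}^{(j)} - N_{gl}^{(j)}\bigl(N_{ll}^{(j)}\bigr)^{-1}N_{lg}^{(j)}, \quad \bar b^{(j)} = b_g^{(j)} - N_{gl}^{(j)}\bigl(N_{ll}^{(j)}\bigr)^{-1} b_l^{(j)}$$

Once the global vector $x_g$ is available, the local parameters follow from the back-substitution $x_{l,j} = \bigl(N_{ll}^{(j)}\bigr)^{-1}\bigl(b_l^{(j)} - N_{lg}^{(j)} x_g\bigr)$, which is the pre-elimination equation referred to in the summary slide.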
Choice of algorithm for global solution • Five solution methods were compared • Two types of combination solutions (CS1 & CS2) • Conjugate Gradient Method (CGM) • Full Matrix Reconstruction method (FMR) • Real-Time Kinematic filter (RTF) • Ranked by local CPU load, internet bandwidth, demonstrated accuracy, solution latency, scalability IGS Workshop 2010
Distributed solution of the global normal equation (1) • Problem: solve the global parameter vector x from the normal equation N x = b • The global normal matrix N can not be accumulated explicitly • Only two requirements on the solution process: • Consistency: all receivers use the same global solution vector • Convergence: reduced observation residuals in the next iteration • Selected method: combination solution • Difference from a proper LSQ solution at an Analysis Centre: • Proper least squares: average at the level of the normal equations • Combination solution: average at the level of the solution IGS Workshop 2010
Distributed solution of the global normal equation (2) • To compare CS with AC solution we use the Jacobi notation • Proper least-squares normal equation (AC) • Dancer combination solution (D) • Three problems • Local diagonals contain about 80% zeroes • Solutions divide by different diagonal elements • Off-diagonal terms involve different vectors IGS Workshop 2010
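The two equations shown on this slide are not reproduced in the text, so the following is only one plausible way to write the comparison in Jacobi notation (the splitting $\bar N^{(j)} = D_j + R_j$ into diagonal and off-diagonal parts, and the symbols used, are assumptions consistent with the bullets, not copied from the slide):

Proper least squares (AC), accumulate first and then solve the summed system:
$$x^{k+1} = \Bigl(\sum_j D_j\Bigr)^{-1}\Bigl(\sum_j \bar b^{(j)} - \sum_j R_j\,x^{k}\Bigr)$$

Dancer combination solution (D), solve per receiver and then average the solutions:
$$x^{k+1} = \frac{1}{n}\sum_j D_j^{-1}\bigl(\bar b^{(j)} - R_j\,x^{k}\bigr)$$

Read this way, the three problems listed above appear directly: a single receiver's $D_j$ has many zero entries (problem 1), each receiver divides by its own $D_j$ instead of the accumulated diagonal (problem 2), and the off-diagonal products can multiply slightly different solution vectors on different receivers (problem 3), which is tolerable because the outer LSQ iteration only needs first-order accuracy, as discussed a few slides below.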
Problem 1: local singularities • Receivers can only solve a subset of the global parameters • for every global parameter we must count the number of receivers in the network that contribute to its global average • Each receiver will fix a (different) subset of satellite clocks • these clocks are still estimated by many other receivers • updated values are obtained via the combination solution IGS Workshop 2010
Problem 2: different diagonals • Replace the local diagonals by the global average • The average diagonal can be found with the same mechanism that is already needed for computing the combination solution itself IGS Workshop 2010
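A one-line sketch of this replacement, continuing the assumed notation from above: every receiver substitutes the network-average diagonal

$$\bar D = \operatorname{diag}\Bigl(\tfrac{1}{n_i}\sum_j (D_j)_{ii}\Bigr), \qquad n_i = \text{number of receivers contributing to parameter } i,$$

for its own $D_j$, so that all local solutions divide by the same non-singular diagonal; computing that average is just another square dance exchange.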
Problem 3: off-diagonal terms IGS Workshop 2010
LSQ iteration only requires first-order accuracy (figure: convergence from the a priori vector to the converged solution vector) IGS Workshop 2010
Summary • Each Dancer process computes its own normal equation contribution • Parameter observability is counted by a square dancing process • The average diagonal is computed by a square dancing process • Each Dancer process solves its stabilized local equation system • The average solution is computed by a square dancing process • Local parameters are solved from the pre-elimination equation • The entire process is iterated (x5) IGS Workshop 2010
Part 3 Square dance algorithm IGS Workshop 2010
Problem description • Target: compute the average (or sum) over N vectors • N may be large (order 10,000 … 100,000) • the vectors are located at different computers on the internet • each of these computers must end up with the global average • Boundary conditions: • There are no central elements in the network • Computers may go off-line unexpectedly at any moment • Tight synchronization is impossible due to varying connection speeds and hardware performance IGS Workshop 2010
Solution: iterated pair-wise exchange • In a network of N computers, N/2 pairs can be formed simultaneously • The two computers in each pair exchange their vectors • Both computers compute the same sum of the two vectors • From N initial vectors in the network, only N/2 different vectors remain • The pair-wise exchange process is repeated with new pairs • Each further exchange cycle cuts the number of different vectors in half • After log2 N cycles all computers have found the global sum vector IGS Workshop 2010
How to form pairs without central supervision • All Dancer processes have a unique node number j = 0 ... N-1 • JXTA network software allows contacting other nodes "by number" • Toggling one bit in the binary representation of the node numbers makes the two computers of a pair find each other • Successive square dance cycles toggle successive bits • New pairs are always "orthogonal" to previous pairs • no common contributions ever exist in two exchanged vectors • Example for node 5 = 0101: toggling bit 3 pairs it with 13 = 1101, bit 2 with 1 = 0001, bit 1 with 7 = 0111, and bit 0 with 4 = 0100 IGS Workshop 2010
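A minimal Java sketch of this pairing rule and the resulting exchange loop (class, method and interface names are hypothetical; the Network interface merely stands in for the JXTA sockets mentioned above):

    public final class SquareDance {

        /** Partner of a base node in a given exchange cycle: toggle one bit of its number. */
        static int partnerOf(int nodeNumber, int cycle) {
            return nodeNumber ^ (1 << cycle);            // e.g. cycle 3: 5 (0101) <-> 13 (1101)
        }

        /** One full square dance over the base nodes: after log2(baseNodes) cycles
         *  every node holds the global sum of all local vectors. */
        static double[] exchange(int nodeNumber, int baseNodes, double[] localVector, Network net) {
            double[] sum = localVector.clone();
            int cycles = Integer.numberOfTrailingZeros(baseNodes); // baseNodes is a power of two
            for (int cycle = 0; cycle < cycles; cycle++) {
                int partner = partnerOf(nodeNumber, cycle);
                double[] received = net.swap(partner, sum);        // send our sum, receive theirs
                for (int i = 0; i < sum.length; i++) {
                    sum[i] += received[i];                         // both ends now hold the same partial sum
                }
            }
            return sum;                                            // global sum over all base nodes
        }

        /** Stand-in for the peer-to-peer transport (JXTA in the real project). */
        interface Network {
            double[] swap(int partnerNode, double[] outgoing);
        }
    }

Dividing the resulting sum by the per-parameter observability count then gives the global average each receiver needs.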
How to cope with arbitrary network sizes • Square dancing only works if the network size N is an integer power of 2 • The network separates into base nodes and fold nodes: N = 10,000 = 2^13 + 1808, i.e. 8192 base nodes (… requiring 13 cycles) and 1808 fold nodes • Fold nodes upload their vector to a base node before the first cycle • The target base node is found by toggling the MSB from 1 to 0 • Receiving base nodes compute the sum with their own vector • Only the base nodes perform the square dance exchange process • Fold nodes just wait until this process is ready • Fold nodes download the global sum vector after the final cycle IGS Workshop 2010
How to cope with anomalous base nodes • Fold nodes could replace them … but only if there are any • Instead, base nodes split into 50% core nodes and 50% spare nodes: N = 10,000 = 2^13 + 1808, i.e. 1808 fold nodes and 8192 base nodes (4096 core nodes + 4096 spare nodes) • Fold nodes upload their vector to a base node (… core or spare) • Spare nodes then upload their vector to a core node • The core nodes now perform a complete square dance process • Spare nodes or fold nodes replace core nodes in case of anomalies • Spare nodes download the global sum from their core node • Fold nodes download the global sum from their base node • The process now looks like this: (figure: exchange topology with fold, spare and core nodes) IGS Workshop 2010
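A sketch of how the fold / spare / core roles could follow from the node number j and the network size N alone, as the next slide suggests (only the fold-node rule of toggling the MSB is stated on the slides; the spare-node rules below are assumptions):

    enum Role { CORE, SPARE, FOLD }

    final class NodeRoles {

        /** Number of base nodes: the largest power of two not exceeding N. */
        static int baseNodes(int n) {
            return Integer.highestOneBit(n);                 // N = 10,000 -> 8,192 base nodes
        }

        static Role roleOf(int j, int n) {
            int base = baseNodes(n);
            if (j >= base) return Role.FOLD;                 // e.g. the remaining 1,808 nodes fold
            return (j < base / 2) ? Role.CORE : Role.SPARE;  // 4,096 core + 4,096 spare (assumed split)
        }

        /** Fold node: toggle the most significant bit from 1 to 0 to find its base node. */
        static int baseNodeForFoldNode(int j, int n) {
            return j - baseNodes(n);
        }

        /** Spare node: one possible (assumed) rule pairs it with the core node base/2 below it. */
        static int coreNodeForSpareNode(int j, int n) {
            return j - baseNodes(n) / 2;
        }
    }

A spare or fold node that detects an unresponsive core node can then take over that core node's number, which is all the square dance needs in order to continue.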
Complexity at system level is irrelevant At the level of a single node j: • the number of connections is always small • Quasi-permanent sockets • all required steps follow from the node number j and the network size N • standard contingency plans allow for complete autonomy • unresponsive core nodes are replaced by spare nodes • the network optimizes itself! IGS Workshop 2010
Part 4 Dancer performance IGS Workshop 2010
Square dance data budget • Number of exchanged vectors (… core nodes) • square dance cycles: 1 + log2(N) • 1 count of observability per process: 1 integer vector • 5 iterations x 1 diagonal vector: 5 double vectors • 5 iterations x 1 solution vector: 5 double vectors • Number of double vectors in total: m = 10.5 * (1 + log2 N) • Size of each vector: 8-byte double precision IGS Workshop 2010
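A worked example under explicit assumptions (the number of global parameters is not given on this slide; 2,000 is only a placeholder): with 8,192 core nodes the cycle count is $1 + \log_2 8192 = 14$, so $m = 10.5 \times 14 = 147$ vectors per solution update; at roughly 2,000 global parameters of 8 bytes each (about 16 kB per vector) that amounts to about $147 \times 16\,\text{kB} \approx 2.3$ MB of one-way traffic per core node per update, the kind of number shown in the next figure.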
(Figure: one-way core node data volume in MB as a function of network size N, arc length and epoch rate.) IGS Workshop 2010
(Figure: required core node internet bandwidth in Mbit/s as a function of data volume in MB and solution update rate.) IGS Workshop 2010
Today’s internet (1) … fastest parts Source: Akamai.com – 4th quarter 2009 IGS Workshop 2010
Today’s internet (2) …slowest parts Source: Akamai.com – 4th quarter 2009 IGS Workshop 2010
Slow internet: use a proxy (example: a receiver in Vanuatu) • 1. Upload RINEX files to the proxy @ 64 kbps • 2. Run the Dancer process at the proxy @ 5 Mbps • 3. Download products @ 64 kbps IGS Workshop 2010
LEO receivers • LEO receivers have very different local parameters • but exactly the same global parameters • LEO models can be developed by LEO operators IGS Workshop 2010
… the web dancer • User receivers do not contribute to the global solution • Poorly balanced geographical distribution • Poor monumentation • Short data sets uninteresting for ITRF • … but they download a converged global parameter vector • The Dancer process is identical, but with fixed global parameters • Spare nodes act as "product centre" for the parameter vectors • Multiple overlapping solution arcs (Dancer solution 1 … Dancer solution n) can reduce the noise in the user data IGS Workshop 2010
Part 5 Project status IGS Workshop 2010
Project background • Initial motivation for decentralized processing came from IGS LEO … • data availability at short latency remained a problem • the workload for including LEO is problematic for IGS AC • it is much better to let the LEO missions analyze their own data • … but decentralized processing can solve many other issues • Network densification, models and standards, file formats, ITRF access … • The Dancer concept dates from 2006 … but who would implement it? • ESA? not a good idea: IPR / distribution is an issue • Commercial enterprise? not a good idea: not free, no say in upgrades • IGS? not a good idea: Dancer is not limited to GNSS • Dancer was finally implemented as a project of IAG WG 1.1.1 • Budget of zero: all contributions are voluntary • Limited technical knowledge from the IGS / IAG side • Excellent support from the computing community (SUN, JXTA team, Berkeley, …) • Beta version expected around September/October 2010 IGS Workshop 2010
Implementation status 06 / 2010 IGS Workshop 2010
Project context • Dancer: data types GNSS & VLBI; short arc (48 hrs); short latency (30 min); key technology: JXTA POD module; goal: large-scale, direct access to the ITRF • Dart (Dancer-RTK): data types GNSS only; very short arc (real time); zero latency; key technology: BURST; goal: real-time access to an accurate ITRF • Digger (… reprocessing): data types all; long arc (15 years); long latency (1 week); key technology: BOINC; goal: consistency among techniques IGS Workshop 2010
Conclusions • GPS and the internet are natural partners • Receivers form a distributed system, the internet connects them all • (Some) other data types may be added to Dancer, notably VLBI • Dancer approach solves many current issues in GNSS • Network densification: there is no limit to the number of receivers • Anonymous participation: privacy of (LEO) operators is ensured • Direct access to ITRF by end-users • Models and standards guaranteed to be identical for all receivers • Interface issues (RINEX, ORBEX, …) no longer very important • Direct alignment to a UTC clock (USNO, BIPM) is possible • Beta release foreseen for October 2010 • Future extensions should include VLBI, Galileo • Long-arc / reprocessing version (Digger) prototyped • RTK layer (Dart) would be possible, nothing implemented yet IGS Workshop 2010
Questions / discussion items • How to introduce the Dancer system operationally • A global network of a few hundred receivers is needed • Some ITRF sites should be involved for validation • Are there requirements that may have been overlooked? • How to maintain and extend Dancer in the future • Volunteers give great value for money, but a more stable (global) non-profit organisation would be desirable • IPR / ownership of the software is a difficult issue • Do we want "real" smart receivers from the manufacturers? • new piece of equipment: box with receiver + PC + JAVA runtime engine + Dancer • a separate receiver + local PC could do the same job • post-processing / reprocessing must still be possible IGS Workshop 2010