Directed Diffusion • Data centric • A node requests data by sending an interest for named data • Data matching the interest is drawn toward that node • Intermediate nodes can cache or transform data • Attribute-based naming • Data aggregation • Interest propagation, data aggregation, and data propagation are all determined by localized interactions • Trades off some energy efficiency for increased robustness
Directed Diffusion • Consists of four elements: interests, data messages, gradients, and reinforcements • Interest: a query or interrogation that specifies what a user wants • Data: collected or processed information • Gradient: direction state created in each node that receives an interest • The gradient direction is toward the neighboring node from which the interest was received • Events start flowing toward the originators of interests along multiple gradient paths
Naming • Task descriptions are named by a list of attribute-value pairs that describe the task • e.g.: type=wheeled vehicle (detect vehicle location); interval=20ms (send events every 20 ms); duration=10s (for the next 10 s); rect=[-100,100,200,400] (from sensors within this rectangle) • Interests and Gradients • An interest is usually injected into the network at the sink • For each active task, the sink periodically broadcasts an interest message to each of its neighbors • The initial interest contains the specified rect and duration attributes but a larger interval attribute • This exploratory interest tries to determine whether there are any sensor nodes that detect a wheeled vehicle
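A small Python sketch may help make the attribute-value naming and matching concrete. The dictionary layout, the reading of rect as [x1, x2, y1, y2], and the helper names are assumptions for illustration, not the SCADDS attribute API:

```python
# Minimal sketch of attribute-value naming, assuming interests and data
# events are plain dictionaries and that rect means [x1, x2, y1, y2]
# (hypothetical layout; the real implementation uses its own attribute classes).

def in_rect(x, y, rect):
    x1, x2, y1, y2 = rect
    return x1 <= x <= x2 and y1 <= y <= y2

def matches(interest, event):
    # An event matches an interest when the type agrees and the event's
    # location falls inside the interest's rect attribute.
    return (event["type"] == interest["type"]
            and in_rect(event["x"], event["y"], interest["rect"]))

# Exploratory interest built from the slide's example task description.
interest = {
    "type": "wheeled vehicle",          # detect vehicle location
    "interval": 0.020,                  # send events every 20 ms
    "duration": 10.0,                   # for the next 10 s
    "rect": [-100, 100, 200, 400],      # from sensors within this rectangle
}

event = {"type": "wheeled vehicle", "x": 50, "y": 300}
print(matches(interest, event))         # -> True
```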
Interests • Soft state, periodically refreshed by the sink • The sink resends the same interest with a monotonically increasing timestamp attribute, because interests are not reliably transmitted through the network • A higher refresh rate increases robustness to lost interests at the cost of higher overhead • Every node has an interest cache that stores each distinct interest • Interest entries do not contain information about the sink, only about the immediately previous hop • Two interests with overlapping rect attributes may be aggregated into a single interest entry • e.g.: type=wheeled vehicle; interval=1s; rect=[-100,200,200,400]; timestamp=01:20:40; expiresAt=01:30:40
Interests • When a node receives an interest, it checks whether the interest already exists in its cache • If there is no matching entry, the node creates one (with a gradient and a data rate) • If the interest exists but has no gradient toward the sender, the node adds a gradient and updates the timestamp and duration fields • If the interest exists and already has a gradient, the node just updates the timestamp and duration • When a gradient expires, it is removed from the interest entry
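The cache-update rules above can be sketched as follows. This is an illustrative sketch only: the dictionary structures, field names, and rate computation are assumptions, not the actual diffusion implementation.

```python
import time

# Interest cache: keyed by (type, rect); each entry keeps the latest timestamp
# plus one gradient (data rate + expiry) per previous-hop neighbor.
interest_cache = {}

def on_interest(interest, prev_hop):
    key = (interest["type"], tuple(interest["rect"]))
    entry = interest_cache.get(key)
    if entry is None:
        # No matching interest: create an entry with a gradient (rate + direction).
        interest_cache[key] = {
            "timestamp": interest["timestamp"],
            "gradients": {prev_hop: {"rate": 1.0 / interest["interval"],
                                     "expires_at": interest["expires_at"]}},
        }
        return
    gradient = entry["gradients"].get(prev_hop)
    if gradient is None:
        # Interest exists but no gradient toward this neighbor: add one.
        entry["gradients"][prev_hop] = {"rate": 1.0 / interest["interval"],
                                        "expires_at": interest["expires_at"]}
    else:
        # Interest and gradient both exist: just refresh rate and lifetime.
        gradient["rate"] = 1.0 / interest["interval"]
        gradient["expires_at"] = interest["expires_at"]
    # In all cases the timestamp and duration fields are updated.
    entry["timestamp"] = interest["timestamp"]

def expire_gradients(now=None):
    # When a gradient expires it is removed from its interest entry;
    # entries left without any gradients are dropped.
    now = time.time() if now is None else now
    for key in list(interest_cache):
        gradients = interest_cache[key]["gradients"]
        for hop in [h for h, g in gradients.items() if g["expires_at"] <= now]:
            del gradients[hop]
        if not gradients:
            del interest_cache[key]
```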
Interests (diffusion) • After receiving an interest, a node may decide to resend it to a subset of its neighbors • To those neighbors, the interest appears to originate from the sending node, even though it may have come from a distant sink (local interaction) • Not all received interests are resent • If a node has recently resent a matching interest, it may suppress the received one
Gradient Establishment • Every pair of neighboring nodes establishes gradients toward each other • These two-way gradients mean a node can receive one copy of every low-data-rate (exploratory) event from each of its neighbors • Reinforcement is the solution to this redundancy • A gradient specifies both a data rate and a direction in which to send events
Data Propagation • When a sensor node receives a data message, it searches its interest cache for a matching interest entry • If there is a match, it checks its data cache (which keeps track of recently seen data items) • An advantage of the data cache is loop prevention • By examining the data cache, the incoming data rate can also be determined • If the message already exists in the data cache, it is silently dropped • If not, it is added to the data cache and resent to the neighbors • To resend a received data message, the node examines the gradient list in the matching entry • If all gradients have a data rate greater than or equal to the rate of incoming events (meaning more interest), the data is resent to the neighbors • If some gradients have lower data rates, the node may down-convert to the appropriate rate for those gradients • If there is no matching interest entry, the data message is silently dropped
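A rough sketch of this forwarding rule, reusing the hypothetical interest_cache and in_rect() from the earlier sketches; send() is a stand-in for the radio and counter-based sub-sampling stands in for down-conversion (an assumption, not the paper's exact mechanism):

```python
data_cache = set()   # recently seen (type, x, y, timestamp) tuples

def send(neighbor, message):
    print("send to", neighbor, message)       # placeholder for transmission

def find_matching_entry(event):
    # Hypothetical lookup: an interest entry matches if type and rect match.
    for (itype, rect), entry in interest_cache.items():
        if itype == event["type"] and in_rect(event["x"], event["y"], list(rect)):
            return entry
    return None

def on_data(event, incoming_rate):
    entry = find_matching_entry(event)
    if entry is None:
        return                                # no matching interest: drop silently
    key = (event["type"], event["x"], event["y"], event["timestamp"])
    if key in data_cache:
        return                                # already seen: drop (loop prevention)
    data_cache.add(key)
    for neighbor, gradient in entry["gradients"].items():
        if gradient["rate"] >= incoming_rate:
            send(neighbor, event)             # enough interest: forward every event
        else:
            # Lower-rate gradient: down-convert by forwarding only a fraction
            # of the incoming events (simple counter-based sub-sampling).
            gradient["skip"] = gradient.get("skip", 0) + 1
            if gradient["skip"] >= round(incoming_rate / gradient["rate"]):
                gradient["skip"] = 0
                send(neighbor, event)
```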
Reinforcement for Path Establishment • The sink periodically diffuses an interest for a low-rate event (exploratory events) • Once a source detects a matching target, it sends exploratory events toward the sink (along multiple paths) • After the sink starts receiving these, it reinforces one particular neighbor in order to draw down real (higher data rate) events
Positive Reinforcement • Local rule: select an empirically low-delay path • Reinforce any neighbor from which the node receives a previously unseen event • To reinforce that neighbor, the sink resends the original interest message with a smaller interval (a higher data rate) • e.g.: type=wheeled vehicle; interval=10ms; rect=[-100,200,200,400]; timestamp=01:22:35; expiresAt=01:30:40 • When the neighboring node receives this interest, it notices that it already has a gradient toward the sender and that the interval is smaller • If this new data rate is also higher than that of any existing gradient (the outflow from this node has increased), the node must reinforce at least one of its own neighbors in turn • Neighbors that are already sending data at the higher rate do not need to be reinforced
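Sketched in the same hypothetical Python style as the earlier fragments (seen_events, send(), and the dictionary layout are assumptions), the reinforcement step is simply the original interest resent with a smaller interval:

```python
import time

def send(neighbor, message):
    print("reinforce", neighbor, message)     # placeholder for transmission

def reinforce(neighbor, original_interest, new_interval=0.010):
    # Positive reinforcement: resend the original interest with a smaller
    # interval (i.e., a higher requested data rate) and a fresh timestamp.
    reinforced = dict(original_interest)
    reinforced["interval"] = new_interval     # e.g. 10 ms instead of 1 s
    reinforced["timestamp"] = time.time()
    send(neighbor, reinforced)

seen_events = set()

def on_exploratory_event(event, from_neighbor, interest):
    # Local rule: reinforce the neighbor that delivered a previously unseen
    # event, which empirically selects a low-delay path.
    key = (event["type"], event["timestamp"])
    if key not in seen_events:
        seen_events.add(key)
        reinforce(from_neighbor, interest)
```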
Local Repair for Failed Paths • Intermediate nodes on a previously reinforced path can also apply the reinforcement rules (useful for failed or degraded paths) • C detects degradation • by noticing that the event reporting rate from its upstream neighbor (toward the source) is now lower, or • by realizing that other neighbors have been transmitting previously unseen location estimates • and then applies the reinforcement rules • Problem: wasted resources • One way to avoid this is to interpolate location estimates from the events
Negative Reinforcement • If the sink reinforces A but then receives a new event from B, it will reinforce the path through B • If the path through B is better, the sink negatively reinforces the path through A • Two mechanisms • Time out all data gradients in the network unless they are explicitly reinforced: the sink periodically reinforces B and stops reinforcing A • Explicitly degrade the path through A by sending a negative reinforcement (an interest with a lower data rate): when A receives this, it degrades its gradient toward the sink • Cost: decreased resource utilization • Which neighbor should be negatively reinforced? • The one from which no new events have been received within a window of N events or time T
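The window-of-N-events rule for picking the neighbor to negatively reinforce can be sketched as below; the data structures and the send() placeholder are assumptions in the same style as the previous fragments:

```python
from collections import deque

N = 10                                   # window size: last N new events
recent_senders = deque(maxlen=N)         # neighbor that delivered each new event

def send(neighbor, message):
    print("negatively reinforce", neighbor, message)   # transmission placeholder

def record_new_event(from_neighbor):
    recent_senders.append(from_neighbor)

def neighbors_to_negatively_reinforce(gradient_neighbors):
    # Negatively reinforce neighbors from which no new events have been
    # received within the window of the last N events.
    active = set(recent_senders)
    return [n for n in gradient_neighbors if n not in active]

def negative_reinforce(neighbor, interest, exploratory_interval=1.0):
    # A negative reinforcement is just an interest with a lower data rate
    # (larger interval); the receiver degrades its gradient toward the sender.
    degraded = dict(interest)
    degraded["interval"] = exploratory_interval
    send(neighbor, degraded)
```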
Self Organization • Nodes start with zero knowledge of the topology • Each node knows only its own identity • The base station is directly connected to the host PC • The base station periodically broadcasts its identity and the fact that it is connected to the PC • Devices at one-hop distance receive this information and use it to update their routing information • They then rebroadcast a new routing update announcing that there is a path to the sink through them • To prevent cycles, time is divided into eras and route updates are broadcast once per era
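A toy sketch of the once-per-era rebroadcast rule; the node ID, message fields, and broadcast() placeholder are assumptions, not the actual mote implementation:

```python
my_id = 7                    # this node's own identity (assumed known)
last_era = -1                # last era in which this node rebroadcast an update
next_hop_to_sink = None      # neighbor through which the sink can be reached

def broadcast(message):
    print("broadcast", message)              # placeholder for the radio broadcast

def on_route_update(update, from_node):
    global last_era, next_hop_to_sink
    next_hop_to_sink = from_node              # update routing information
    if update["era"] > last_era:
        # Rebroadcast at most once per era, which prevents routing cycles.
        last_era = update["era"]
        broadcast({"era": update["era"], "via": my_id, "path_to_sink": True})
```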
Tiny Diffusion • Tiny Diffusion Application Programmer's Interface (API) • Tiny Diffusion is based on the concept of data-centric or subject-based routing, as is the SCADDS data diffusion implementation • It provides an interface to access sensor data by naming attributes