
Data-Centric Trust in Ephemeral Ad Hoc Networks

This paper proposes a data-centric trust establishment approach, where trust is attributed to data rather than the nodes reporting them. It introduces a general framework for dynamic trustworthiness in ad hoc networks.


Presentation Transcript


  1. On Data-Centric Trust Establishment in Ephemeral Ad Hoc Networks. Maxim Raya, Panagiotis Papadimitratos, Virgil D. Gligor, Jean-Pierre Hubaux. Presented by Jia Guo

  2. INTRODUCTION • Existing trust notions are entity-centric and slow to change. • In all traditional notions of trust, data trust was based exclusively on a priori trust relations established with the network entities producing the data. • Moreover, any new data trust relationship that needed to be established required only trust in the entities/nodes that produced the data. • Furthermore, traditional trust relations generally evolved slowly: once established, they lasted a long time and changed only after fairly lengthy operations.

  3. However, several emerging mobile networking systems are heavily data-centric in their functionality and operate in ephemeral environments. In such scenarios, it is more useful to establish trust in data rather than in the nodes reporting them. • For example, in vehicular networks, node identities are largely irrelevant; rather, safety warnings and traffic information updates, along with their time freshness and location relevance, are valuable. At the same time, interactions with data reporters do not rely on any prior association, and encounters are often short-lived, especially due to high mobility. • In such scenarios, the trust level associated with data is not the same as that of the node that generated the data. • We propose data-centric trust establishment: data trustworthiness should be attributed primarily to data per se, rather than being merely a reflection of the trust attributed to data-reporting entities.

  4. GENERAL FRAMEWORK • A. Preliminaries • B. Default Trustworthiness • C. Event- or Task-Specific Trustworthiness • D. Dynamic Trustworthiness Factors • E. Location and Time • F. Scheme Overview

  5. A. Preliminaries • We consider systems with an authority responsible for assigning identities and credentials to all system entities, which we denote as nodes. All legitimate nodes are equipped with credentials (e.g., certified public keys) that the authority can revoke. • Specific to the system and applications, we define a set of mutually exclusive basic events; composite events are unions or intersections of multiple basic events. • We consider V, the set of nodes vk, classified according to a system-specific set of node types, and we define a function that returns the type of node vk. • Reports are statements by nodes on events, including the related time and geographic coordinates where applicable. For simplicity, we consider reports on basic events, as the extension to composite events is straightforward.
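
As a minimal illustration of these preliminaries, the sketch below models nodes, basic events, and reports as plain data structures. All names and node types here (NodeType, Node, Report, the example enum members) are assumptions for illustration, not notation taken from the paper.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Tuple


class NodeType(Enum):
    """System-specific node types (hypothetical examples)."""
    PRIVATE_VEHICLE = 1
    PUBLIC_VEHICLE = 2
    ROADSIDE_UNIT = 3
    AUTHORITY = 4


@dataclass(frozen=True)
class Node:
    node_id: str
    node_type: NodeType
    revoked: bool = False          # credentials revoked by the authority


@dataclass(frozen=True)
class Report:
    """A statement by a node about a basic event, with time and location."""
    reporter: Node
    event: str                     # identifier of a basic event
    location: Optional[Tuple[float, float]] = None
    timestamp: Optional[float] = None
```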

  6. B. Default Trustworthiness • The default trustworthiness of a node vk of type µn is a real value that depends on the attributes of that node type. For all node types, there exists a trustworthiness ranking 0 < µ1 < µ2 < … < µN < 1. • For example, some nodes are better protected from attacks, more closely monitored and frequently reinforced, and, overall, more adequately equipped, e.g., with reliable components. As they are less likely to exhibit faulty behavior, they are considered more trustworthy.

  7. C. Event- or Task-Specific Trustworthiness • Consider the set of all relevant system tasks. For two nodes v1 and v2, with given types and default trustworthiness rankings, it is possible that v1 is more trustworthy than v2 with respect to a particular task. • We define the event-specific trustworthiness function f with two arguments: the type of the reporting node vk and the event or task. f does not differentiate among two or more nodes of the same type, and if no specific event or task is given, f reduces to the default trustworthiness. A minimal sketch follows this slide.
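
Continuing the hypothetical sketch started under Preliminaries (it reuses NodeType from there), one way such a lookup could be written is shown below. The concrete trustworthiness values and the "icy_road" task name are invented purely for illustration.

```python
# Default trustworthiness per node type: 0 < mu_1 < ... < mu_N < 1 (values invented)
DEFAULT_TRUST = {
    NodeType.PRIVATE_VEHICLE: 0.5,
    NodeType.PUBLIC_VEHICLE: 0.7,
    NodeType.ROADSIDE_UNIT: 0.8,
    NodeType.AUTHORITY: 0.9,
}

# Event/task-specific overrides: (node type, task) -> trustworthiness.
EVENT_SPECIFIC_TRUST = {
    (NodeType.PUBLIC_VEHICLE, "icy_road"): 0.9,   # e.g., buses equipped with ice sensors
}


def f(node_type, task=None):
    """Event-specific trustworthiness; falls back to the default ranking."""
    if task is not None and (node_type, task) in EVENT_SPECIFIC_TRUST:
        return EVENT_SPECIFIC_TRUST[(node_type, task)]
    return DEFAULT_TRUST[node_type]
```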

  8. D. Dynamic Trustworthiness Factors • The ability to dynamically update trustworthiness can be valuable, especially for capturing the intricacies of a mobile ad hoc networking environment. • For example, nodes can become faulty or be compromised by attackers and hence need to be revoked. In addition, the location and time of report generation change fast and are important in assigning trustworthiness values to events. • To capture this, we first define a security status function s with values in [0, 1]: s(vk) = 0 implies that node vk is revoked, and s(vk) = 1 implies that the node is legitimate. Intermediate values can be used by the system designer to denote different trustworthiness levels, if applicable. • Second, we define a set of dynamic trust metric functions indexed by a selector l indicating different node attributes (e.g., location) that change dynamically. That is, for each attribute a different metric is defined; it takes the relevant report and node attributes as inputs and returns a real value in [0, 1]. A sketch of both functions follows this slide.
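
Continuing the same hypothetical sketch, a security status function and one dynamic metric (proximity to the reported event location) could look as follows. The exponential decay and the 300 m scale are arbitrary assumptions, not values from the paper.

```python
import math


def security_status(node: Node) -> float:
    """s(vk): 0 if revoked, 1 if legitimate (intermediate values are possible)."""
    return 0.0 if node.revoked else 1.0


def proximity_metric(report_location, event_location, scale_m: float = 300.0) -> float:
    """Dynamic metric for the 'location' attribute: closer reporters score higher."""
    dx = report_location[0] - event_location[0]
    dy = report_location[1] - event_location[1]
    distance = math.hypot(dx, dy)
    return math.exp(-distance / scale_m)   # value in (0, 1]
```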

  9. E. Location and Time • Proximity can increase the trustworthiness of a report: • The closer the reporter is to the location of an event, the more likely it is to have accurate information on the event. • Similarly, the more recent and the closer to the event occurrence time a report is generated, the more likely it is to reflect the system state.

  10. F. Scheme Overview • We compute the trustworthiness of a report, generated by node vk and providing supporting evidence for an event, by using both (i) static or slowly evolving information on trustworthiness, captured by the default values and the event-specific trust f, and (ii) dynamically changing information, captured by the security status s and, more so, by the dynamic metrics. We combine these as arguments to a function that returns values in the [0, 1] interval; if vk reports no evidence for an event, the corresponding value is zero. These values are calculated locally for each report received from another node and are called the weights (or trust levels) of the reports. Fig. 1 illustrates our scheme; a sketch of one possible combination follows.
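
One plausible way to combine these ingredients into a per-report weight, again only a sketch built on the earlier hypothetical helpers and assuming a multiplicative combination (the paper leaves the combining function generic):

```python
def report_weight(report: Report, event_location, task=None) -> float:
    """Trust level (weight) of a single report, a value in [0, 1]."""
    node = report.reporter
    static_part = f(node.node_type, task)           # default / event-specific trust
    dynamic_part = security_status(node)            # 0 if revoked
    if report.location is not None:
        dynamic_part *= proximity_metric(report.location, event_location)
    return static_part * dynamic_part
```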

  11. EVIDENCE EVALUATION • The literature on trust in ad hoc networks proposes several approaches for trust establishment. In this work, we propose a new technique and compare it to four other existing techniques. These techniques are described below. • To mathematically model our approach, assume a node A has to decide among several basic events, based on K pieces of evidence (reports from K distinct nodes). • Let di denote the combined trust level computed by evaluating the evidence corresponding to the i-th event. The Decision Logic module outputs the event that has the highest combined trust level, i.e., maxi(di).
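
The decision step itself is just an argmax over the combined trust levels di; a minimal sketch (event names and values are made up):

```python
def decide(combined_trust: dict) -> str:
    """Return the event with the highest combined trust level, i.e., max_i(d_i)."""
    return max(combined_trust, key=combined_trust.get)


# Example: combined trust for two conflicting events
print(decide({"accident_at_LB": 3.4, "no_accident_at_LB": 1.1}))  # accident_at_LB
```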

  12. A. Basic techniques • 1) Majority Voting: In this technique, the majority wins. The combined trust level corresponding to an event is defined by a simple vote count (see the reconstruction below). • 2) Most Trusted Report: The Most Trusted Report (MTR) decision logic outputs a trust level equal to the maximum value of the trust levels assigned to reports about the event; the point of using MTR is to show the effect of isolated high trust values (in data or entities) on the system. The combined trust level corresponding to an event is defined accordingly (see below).
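
The two expressions on this slide were images that did not survive transcription. A plausible reconstruction, writing wk for the trust level (weight) of report k out of K reports and using an indicator that marks reports supporting event ei:

```latex
% Majority voting: every supporting report counts as one vote
d_i = \sum_{k=1}^{K} \mathbf{1}\{\text{report } k \text{ supports } e_i\}

% Most Trusted Report: only the single highest-trust supporting report counts
d_i = \max_{1 \le k \le K} \, w_k \cdot \mathbf{1}\{\text{report } k \text{ supports } e_i\}
```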

  13. B. Weighted Voting • Weighted Voting (WV) sums up all the votes supporting an event, with each vote weighted by the corresponding trust level, to output the combined trust level (see the reconstruction below). • Decisions on composite events are harder to make using the above three techniques, since they do not provide formalisms for handling unions and intersections of events. In contrast, the next two techniques provide such formalisms.
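
Likewise, the weighted-voting expression can be reconstructed in the same hedged notation:

```latex
% Weighted voting: sum of the trust levels of the supporting reports
d_i = \sum_{k=1}^{K} w_k \cdot \mathbf{1}\{\text{report } k \text{ supports } e_i\}
```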

  14. C. Bayesian Inference
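
The body of this slide was not transcribed. As a hedged reminder of the standard machinery Bayesian inference relies on (not the paper's exact formulation), the posterior probability of an event combines its prior probability with the likelihood of the received reports, assuming conditionally independent reports:

```latex
P(e_i \mid r_1,\dots,r_K)
  = \frac{P(e_i)\,\prod_{k=1}^{K} P(r_k \mid e_i)}
         {\sum_{j} P(e_j)\,\prod_{k=1}^{K} P(r_k \mid e_j)},
\qquad d_i = P(e_i \mid r_1,\dots,r_K)
```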

  15. D. Dempster-Shafer Theory • The lack of knowledge about an event is not necessarily a refutation of the event. In addition, if there are two conflicting events, uncertainty about one of them can be considered as supporting evidence for the other. • The major difference between BI and DST is that DST is more suitable for cases with uncertain or no information. • For example, if a node A confirms the presence of an event with probability p, in BI it refutes the existence of the event with probability 1 - p. • In DST, probability is replaced by an uncertainty interval bounded by belief and plausibility. • Belief is the lower bound of this interval and represents supporting evidence. • Plausibility is the upper bound of the interval and represents non-refuting evidence. Hence, in this example, node A has degree of belief p in the event and degree of belief 0 in its absence.
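
A toy sketch (not the paper's notation) of belief and plausibility on a two-event frame of discernment {accident, no accident}: the mass a report does not commit to either event is assigned to the whole frame Θ, representing uncertainty, which mirrors the p vs. 1 - p example above.

```python
# Frame of discernment: hypotheses are frozensets of basic events
ACCIDENT = frozenset({"accident"})
NO_ACCIDENT = frozenset({"no_accident"})
THETA = ACCIDENT | NO_ACCIDENT

# Basic belief assignment of one report with trust level p = 0.7 confirming the accident
m = {ACCIDENT: 0.7, THETA: 0.3}


def belief(m, hypothesis):
    """Sum of the masses of all subsets of the hypothesis (supporting evidence)."""
    return sum(v for a, v in m.items() if a <= hypothesis)


def plausibility(m, hypothesis):
    """Sum of the masses of all sets intersecting the hypothesis (non-refuting evidence)."""
    return sum(v for a, v in m.items() if a & hypothesis)


# Degree of belief p in the event, 0 in its absence, as in the slide's example
print("accident:    bel =", belief(m, ACCIDENT), " pl =", plausibility(m, ACCIDENT))
print("no accident: bel =", belief(m, NO_ACCIDENT), " pl =", plausibility(m, NO_ACCIDENT))
```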

  16. The combined trust level corresponding to an event is the belief corresponding to it, • where pieces of evidence are combined using Dempster’s rule of combination, • and, using trust levels as weights of reports, the basic belief assignment that confirms the event is equal to the trust level (see the reconstruction below).
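
The combination rule and basic belief assignment referenced here were also images in the original slide; a hedged reconstruction in standard Dempster-Shafer notation, with wk the trust level of report k and Θ the frame of discernment:

```latex
% Basic belief assignment of report k confirming event e_i
m_k(\{e_i\}) = w_k, \qquad m_k(\Theta) = 1 - w_k

% Dempster's rule of combination for two pieces of evidence
(m_1 \oplus m_2)(A) =
  \frac{\sum_{B \cap C = A} m_1(B)\, m_2(C)}
       {1 - \sum_{B \cap C = \emptyset} m_1(B)\, m_2(C)},
\qquad d_i = \mathrm{bel}(\{e_i\})
```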

  17. CASE STUDY • A real ephemeral ad hoc network instantiation, namely vehicular networks: • the system and adversary models, • and examples of how the different components of data trust can be practically derived.

  18. A. Secure Vehicular Communications System • Vehicular Ad hoc NETworks (VANET) and Vehicular Communication (VC) systems are being developed to enhance the safety and efficiency of transportation systems, providing, for example, warnings on environmental hazards and traffic and road conditions. • From a networking point of view, the nodes are vehicles and road-side infrastructure units (RSUs), all equipped with on-board processing and wireless modules, thus enabling multi-hop communication in general.

  19. Authorities are public agencies or corporations with administrative powers, e.g., city or state transportation authorities entrusted with the management of node identities and credentials. • A subset of the infrastructure nodes serves as a gateway to and from the authorities.

  20. Each node vk is equipped with a pair of private/public cryptographic keys Prk/Puk and a certificate issued by an authority X as CertX{Puk}. • Nodes are equipped with a clock and a positioning system that allow them to include time and location information in any outgoing report. • Unicast and multicast communication is possible; however, local broadcast (single hop) and geocast (flooding to a given geographic area) are predominantly used.
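
As a small illustration of what such a signed, timestamped, and geostamped report could look like, the sketch below signs a report with an Ed25519 key using the `cryptography` package. The report fields and values are assumptions, and certificate issuance and verification by the authority X are omitted.

```python
import json
import time

from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Key pair Prk/Puk of node vk (in the real system, Puk would be certified by
# an authority X as CertX{Puk}; certificate handling is omitted in this sketch).
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# A safety report carrying time and location information, as described on the slide.
report = {
    "event": "accident_at_LB",
    "location": (46.52, 6.57),          # hypothetical coordinates
    "timestamp": time.time(),
    "reporter_type": "PRIVATE_VEHICLE",
}
payload = json.dumps(report, sort_keys=True).encode()

signature = private_key.sign(payload)

# A receiver verifies the signature before feeding the report to the trust logic;
# verify() raises InvalidSignature if the message was modified.
public_key.verify(signature, payload)
print("signature valid")
```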

  21. Vehicle-specific information (e.g., velocity, coordinates) is transmitted frequently and periodically in the form of safety messages. Reports on in-vehicle or network events are included in these messages. • Safety and other messages, generated by vehicles and RSUs, can result in an abundant influx of information about events. • Our approach, based exclusively on local processing, adds no communication overhead and very little computation overhead to a secure VC system, where the actual overhead is due to frequent broadcasting and asymmetric cryptography and is inherent in VANETs.

  22. B. Adversary Model • Nodes either comply with the implemented protocols (i.e., they are correct) or deviate from the protocol definition intentionally (attackers) or unintentionally (faulty nodes). Attacks can be mounted by either internal adversaries (equipped with credentials and cryptographic keys) or external ones. • Adversaries can replay any message, jam communications, and modify messages (albeit in a manner detectable thanks to digital signatures). They can inject faulty data and reports, or control the inputs to otherwise benign nodes and induce them to generate faulty reports. • We assume that at most a small fraction of the nodes are adversaries and, consequently, that the fraction of the network area affected by them is bounded.

  23. C. Framework Instantiation • To illustrate our instantiation, we consider an example scenario: a highway accident in which vehicle B is involved. Now consider a vehicle A, several communication hops away from the accident location. A receives safety messages indicating that there is an accident on its route and has to decide whether to trust this information. In this case, we consider the event: “There is an accident at location LB”. The granularity of the event location should be properly defined to avoid having reports on several seemingly different events when, actually, all these reports refer to the same event with slightly different locations. Now assume that one or more attackers generate safety messages supporting the null event: “There is no accident at location LB”. If there are several events, the data trust is computed for each of them.

  24. PERFORMANCE EVALUATION • Recall that a vehicle computes the combined trust in an event based on the reports it receives from distinct vehicles. We compare four decision logics, MTR, WV, BI, and DST, against the basic majority voting scheme. • We study the effects of several general but representative parameters, namely the percentage of false reports, prior knowledge, uncertainty, and evolution in time.

  25. We use a Beta distribution, with its mean equal to an average trust level (defined for each scenario), to assign the trust levels to the reports received by a vehicle A. • We simulate scenarios with 10 or 50 valid reports (i.e., reports sent by vehicles on the communication path between the accident location and A). This means that A includes all these reports in its decision process; each report confirms one of the two events (accident or no accident). Table I lists the parameters used in the following simulation scenarios.
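
A small sketch of how such trust levels could be drawn, assuming the Beta distribution is parameterized by its mean with a fixed concentration (the transcript does not give the paper's exact parameterization, and the example means below are invented):

```python
import numpy as np

rng = np.random.default_rng(0)


def sample_trust_levels(mean: float, n: int, concentration: float = 10.0):
    """Draw n report trust levels from a Beta distribution with the given mean."""
    a = mean * concentration
    b = (1.0 - mean) * concentration
    return rng.beta(a, b, size=n)


honest = sample_trust_levels(mean=0.8, n=50)   # trust levels of correct reports
false_ = sample_trust_levels(mean=0.5, n=50)   # trust levels of false reports
print(honest.mean(), false_.mean())
```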

  26. The results show that: • First, trust decisions based on MTR are the most sensitive to different parameters since the MTR is not corroborated by other vehicles in this case. • Second, under realistic conditions, the other three decision logics outperform both majority voting and MTR. • Third, there is no clear winner among these decision logics as each performs best in certain scenarios.

  27. A. Effect of Data Trust • To see the effect of data trust on the resilience of the decision logic, we compare the different decision logics to majority voting. • Collusion in this case means that all attackers report the same false information. In addition, the trust distributions of the reports generated by honest nodes and by attackers follow Beta distributions with different means.

  28. MTR is barely resilient to small percentages of attackers, yet highly resilient (on average) to high percentages of attackers. This can be explained by the fact that MTR relies on the trust value of only one report, which can differ significantly from the average trust value. • The other three decision logics are more resilient to attacks than majority voting when correct reports are more trustworthy than false ones (a realistic situation). • BI is the most resilient of the three. When false reports are more trustworthy than correct ones, the situation is reversed and weighted voting becomes the most resilient technique. There are two curves for BI, each corresponding to a different prior probability; these plots are discussed next.

  29. B. Effect of Prior Knowledge • One of the properties of BI is that it uses a prior probability to compute the posterior probability of an event. The prior probability represents the amount of knowledge about the event prior to the reception of new evidence; in our example, this is the probability of the presence of an accident.

  30. Figs. 2(a) and 2(b) show, to some extent, that the availability of prior knowledge increases the resilience of BI to false data attacks. • In Fig. 2(c), there are fewer reports (only 10, compared to 50 in the previous two scenarios) and we can clearly see the benefit of prior knowledge. The reason for this increase in resilience when the number of reports decreases is that large numbers of reports damp the effect of the prior probability in the calculation of the posterior probability.

  31. C. Effect of Uncertainty • BI does not take uncertainty into account, whereas DST does. • To simulate the effect of uncertainty (Fig. 2(d)) on the decision logics, we use low mean data trust levels for both false and correct reports; the exact values are listed in Table I (such values can result from low values of the security status s, e.g., due to the discovery of a virus in the network). In this case, DST is indeed the most resilient of the decision logics.

  32. D. Evolution in Time • In ephemeral networks, it is important to evaluate data trust rapidly in order to permit an application logic to use the resulting values. Hence, a decision logic should be able to output the final result as fast as possible, based on the freshly received reports. • In this section, we are only interested in the decision delay as reports arrive. • Simulation: Our scenario is a 2 km-long highway with 3 lanes in each direction. There are 300 vehicles moving at speeds between 90 km/h and 150 km/h; the average distance between two vehicles on the same lane is 40 m. Vehicles periodically broadcast safety messages every 300 ms within a radius of 300 m (single hop); the broadcast start times are uniformly distributed between 0 and approximately 2 seconds.

  33. In our simulations, we study the reception of reports at a vehicle A positioned in the middle of the scenario on the 90 km/h lane. We assume that an event (e.g., “ice on the road”) is generated by honest vehicles between coordinates 1300 m and 1400 m (the icy section). The attackers report the opposite event (“no ice on the road”). As A moves towards the icy section, it receives reports from vehicles that pass inside this section. Only the last report from each vehicle is considered; this allows A to update its decision as vehicles enter the icy section and change their reports.

  34. In Fig. 2(e), the percentage of false reports received in each time step is drawn from a Binomial distribution with probability 0.5 (i.e., the mean percentage of false reports is 50%); this figure shows the stability of the decision logics when the percentage of false reports varies. • In Fig. 2(f), the total percentage of false reports is also 50%, but all false reports are received at the beginning of the simulation time, to gauge the speed of convergence of the decision logics. • We can see that the speed of convergence of all four decision logics depends on the number of received reports and hence on the scenario parameters.

  35. E. Discussion • If the uncertainty in the network is low, BI is the most resilient to false reports. • To avoid the case of a few highly trustworthy false reports (Fig. 2(b)), the decision of BI should be compared with that of another logic, such as DST or WV, and the most conservative value (i.e., the one that yields the lowest probability of attack success) should be taken. • The availability of prior knowledge can further improve the resilience of BI. • If the uncertainty in the network is high, DST performs consistently better than the other methods (MTR does not always yield better results).

  36. CONCLUSION • In this work, we developed the notion of data trust. • We also addressed ephemeral networks, which are very demanding in terms of processing speed. • We instantiated our general framework by applying it to vehicular networks, which are both highly data-centric and ephemeral. • We evaluated data reports with corresponding trust levels using several decision logics, namely weighted voting, Bayesian inference, and Dempster-Shafer Theory. • Bayesian inference and Dempster-Shafer Theory are the most promising approaches to evidence evaluation, each one performing best in specific scenarios. More specifically, Bayesian inference performs best when prior knowledge about events is available, whereas Dempster-Shafer Theory properly handles high uncertainty about events. • In addition, the local processing approach based on either of the above techniques converges to a stable correct value, which satisfies the stringent requirements of a life-critical vehicular network.
