EPFL, Spring 2017 3 Industrial Communication Networks Automation Overview
3 Industrial Communication Networks
3.1 Field bus principles
3.2 Field bus operation
3.3 Standard field busses
3.4 Industrial wireless communication
Networks in Automation Hierarchy
[Diagram: automation hierarchy]
- Supervision level: Control Bus — engineering and operator stations
- Control level: Fieldbus — programmable controllers, micro-PLCs, direct I/O
- Field level: Sensor-Actuator Bus — transducers / actuators
This course deals with the control and field levels of the hierarchy.
What is a field bus?
A data network interconnecting an automation system, characterized by:
- many small data items (process variables) with bounded delay (1 ms .. 1 s)
- transmission of non-real-time traffic for commissioning and diagnostics
- harsh environment (temperature, vibrations, EM disturbances, water, salt, ...)
- robust and easy installation by skilled people
- high integrity (no undetected errors) and high availability (redundant layout)
- clock synchronization (milliseconds to microseconds)
- low attachment costs (€5 .. €50 per node)
- moderate data rates (50 kbit/s .. 5 Mbit/s), large distance range (10 m .. 4 km)
Expectations
- reduce cabling
- increased modularity of the plant (each object comes with its own computer)
- easy fault location and maintenance
- simplified commissioning (mise en service, IBS = Inbetriebsetzung)
- simplified extension and retrofit
- off-the-shelf standard products to build "Lego" control systems
The original idea: save wiring
[Diagram: before — dumb devices wired individually through cable trays and a marshalling bar to the PLC's I/O; after — smart devices attached to a field bus connected to the PLC's COM port]
But: the number of end-points remains the same! And energy must still be supplied to the smart devices.
Marshalling (Rangierschiene, barre de rangement)
The marshalling bar is the interface between the PLC people and the instrumentation people. The fieldbus replaces the marshalling bar, or rather moves it piecewise towards the process (intelligent concentrators / wiring).
Different classes of field busses
One bus type cannot serve all applications and all device types efficiently...
[Chart: frame size (bytes, 10 .. 10,000) versus poll time (milliseconds, 10 .. 10,000); source: ABB]
- Sensor Bus: simple devices; low cost; bus-powered; short messages (bits); fixed configuration; not intrinsically safe; twisted pair; max distance 500 m
- Low Speed Fieldbus: process instruments, valves; medium cost; bus-powered (2-wire); messages: values, status; intrinsically safe; twisted pair (reuse of 4-20 mA wiring); max distance 1200 m
- High Speed Fieldbus: PLC, DCS, remote I/O, motors; medium cost; not bus-powered; messages: values, status; not intrinsically safe; shielded twisted pair; max distance 800 m
- Data Networks: workstations, robots, PCs; higher cost; not bus-powered; long messages (e-mail, files); not intrinsically safe; coax cable, fiber; max distance miles
Fieldbus Application: locomotives and drives
[Diagram: Train Bus (power line, radio) connecting the vehicles; Vehicle Bus connecting cockpit, diagnosis, brakes, power electronics, motors and track signals]
- data rate: 1.5 Mbit/s
- delay: 1 ms (16 ms for skid/slip control)
- medium: twisted wire pair, optical fibers (EM disturbances)
- number of stations: up to 255 programmable stations, 4096 simple I/O
- integrity: very high (signaling tasks)
- cost: engineering costs dominate
Fieldbus Application: automobile
[Diagram: redundant board network with monitoring, brake ECUs at the four wheels, and a diagnosis connection]
- 4 electromechanical wheel brakes
- redundant Engine Control Units and board network
- pedal simulator
- fault-tolerant 2-voltage on-board power supply (12 V and 48 V)
- diagnostic system
Networking busses: electricity network control — myriads of protocols
[Diagram: control centers at the high-voltage (HV) level, substations at the medium-voltage (MV) level, Remote Terminal Units (RTUs) down to houses at the low-voltage (LV) level]
- between control centers: SCADA, Inter-Control Center Protocol (ICCP, IEC 870-6)
- control center to RTU: IEC 870-5, DNP 3.0, Conitel, RP 570, Modicon, over serial links (telephone)
- RTU to field: FSK, radio, DLC, cable, fiber, ...
Low-speed, long-distance communication; may use power lines or telephone modems.
Problem: diversity of protocols, data formats, semantics...
Fieldbus over a wide area: example wastewater treatment
Pumps, gates, valves, motors, water level sensors, flow meters, temperature sensors, gas meters (CH4), generators, etc. are spread over an area of several km². Some parts of the plant have to cope with explosive atmospheres.
Engineering a fieldbus: consider data density (example: power plants)
- Acceleration limiter and prime mover: 1 kbit in 5 ms
- Burner control: 2 kbit in 10 ms
- For each 30 m of plant: 200 kbit/s
- Fast controllers require at least 16 Mbit/s over distances of 2 m
Data are transmitted from the periphery or from fast controllers to the higher level; slower links to the control level go through field busses over distances of 1-2 km. The control stations gather data at rates of about 200 kbit/s over distances of 30 m. The control room computers are interconnected by a bus of at least 10 Mbit/s, over distances of several 100 m.
Field bus planning: estimate the data density per unit of length or surface, and the response time and throughput over each link.
3 Industrial Communication Networks
3.1 Field bus principles
3.2 Field bus operation
3.3 Standard field busses
3.4 Industrial wireless communication
Assessment
• What is a field bus?
• Which of these qualities are required: 1 Gbit/s operation? Frequent reconfiguration? Plug and play? Bounded transmission delay? Video streaming?
• How does a field bus support modularity?
• Which advantages are expected from a field bus?
Objective of the field bus
Distribute process variables to all interested parties:
• source identification: requires a naming scheme
• accurate process value and units
• quality indication: {good, bad, substituted}
• time indication: how long ago the value was produced
• (optional description)
[Frame layout: source | value | quality | time | description]
Data format minimum
In principle, the bus could transmit the process variable in clear text (even using XML...). However, this is quite expensive and is only considered when the communication network offers some 100 Mbit/s and a powerful processor is available to parse the message. More compact encodings such as ASN.1 have been used in the past with 10 Mbit/s Ethernet. Field busses are slower (50 kbit/s .. 12 Mbit/s), so more compact encodings are used.
ASN.1: (TLV) type, length, value
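To make the overhead argument concrete, here is a minimal, hypothetical type-length-value codec (a sketch of the TLV idea only, not any specific ASN.1 encoding profile):

```python
def tlv_encode(tag: int, value: bytes) -> bytes:
    """Encode one field as Type-Length-Value (short-form length < 128 only)."""
    assert len(value) < 128, "this sketch supports short-form lengths only"
    return bytes([tag, len(value)]) + value

def tlv_decode(buf: bytes):
    """Yield (tag, value) pairs from a concatenation of TLV fields."""
    i = 0
    while i < len(buf):
        tag, length = buf[i], buf[i + 1]
        yield tag, buf[i + 2:i + 2 + length]
        i += 2 + length

# A 4-byte reading costs 6 bytes on the wire: the 50 % overhead of the
# self-describing format is why slow field busses prefer fixed, compact
# encodings instead.
frame = tlv_encode(0x02, (2150).to_bytes(4, "big"))
```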
Datasets
Field bus devices have a low data rate and always transmit the same variables. It is therefore economical to group the variables of a device in the same frame, as a dataset. A dataset is treated as a whole for communication and access. A variable is identified within a dataset by its offset and its size. Variables may be of different types, and types can be mixed.
[Example dataset: identifier; analog variables (wheel speed, air pressure, line voltage, time stamp) and binary variables (all doors closed, lights on, heat on, air condition on) at bit offsets 0, 16, 32, 48, 64, 66, 70]
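Accessing a variable by (offset, size) can be sketched as plain bit manipulation; the dataset layout below is hypothetical, with bit 0 taken as the most significant bit of the first byte:

```python
def get_var(dataset: bytes, bit_offset: int, bit_size: int) -> int:
    """Extract an unsigned variable identified by its bit offset and size.

    Convention assumed here: bit 0 is the MSB of the first byte
    (network order), matching how offsets grow left to right."""
    as_int = int.from_bytes(dataset, "big")
    total_bits = len(dataset) * 8
    shift = total_bits - bit_offset - bit_size
    return (as_int >> shift) & ((1 << bit_size) - 1)

# Hypothetical 9-byte dataset: a 16-bit wheel speed at offset 0,
# a 16-bit air pressure at offset 16, then padding.
dataset = (0x1234).to_bytes(2, "big") + (0x00FF).to_bytes(2, "big") + bytes(5)
```

A single-bit flag is read the same way, e.g. `get_var(dataset, 31, 1)` for the last bit of the second field.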
Dataset extension and quality check
To allow later extension, room is left in the datasets for additional variables. Since the type of these future data is unknown, unused fields are filled with '1'. To signal that a variable is invalid, the producer overwrites the variable with '0'. Since both an "all 1" and an "all 0" word can be a meaningful combination, each variable can be supervised by a check variable of type ANTIVALENT2:
  00 = network error
  01 = ok
  10 = substituted
  11 = data undefined
[Example: a correct variable carries an arbitrary bit pattern such as 0101110001; an error is signaled as all 0s; an undefined variable as all 1s]
A variable and its check variable are treated indivisibly when reading or writing. The check variable may be located anywhere in the same dataset (var_offset and chk_offset).
Hierarchical or peer-to-peer communication
master/slave (hierarchical): a central master (PLC) serves the slaves (input and output devices); all traffic passes through the master; adding an alternate master is difficult (it must be both master and slave).
[Diagram: central PLC master polling slave input/output devices]
peer-to-peer (distributed): several PLC "masters" may exchange data and share inputs and outputs; this allows redundancy and "distributed intelligence"; devices talk directly to each other. Separate the bus master from the application master!
[Diagram: several PLCs and their applications communicating as peers with the slave devices over the bus]
Broadcasts
Most variables are read by 1 to 3 different devices. Broadcasting messages identified by their source (or contents) increases efficiency.
- Each device is subscribed as source or as sink of some process variables.
- Only one device is the source of a given process variable (otherwise collision).
- The bus refreshes the plant image in the background.
[Diagram: application processors, each holding a local plant image; together the images form a distributed database over the bus]
Replicated traffic memories can be considered as "caches" of the plant state (similar to caches in a multiprocessor system), each representing part of the plant image. Each station snoops the bus and reads the variables it is interested in.
Transmission principle
The previous operation modes made no assumption about how data are transmitted. The actual network can transmit data
• cyclically (time-driven), or
• on demand (event-driven),
• or a combination of both.
Cyclic versus Event-Driven transmission
cyclic: send the value strictly every xx milliseconds (its individual period)
- may miss a peak between samples (Shannon-Nyquist!)
- the value is often unchanged — why transmit?
event-driven: send when the value changes by more than x% of the range (resolution)
- nevertheless transmit: every xx ms as an "I'm alive" sign; when the data is internally updated; upon quality change (failure)
- how much resolution? coarse (bad accuracy) or fine (high transmission frequency)
- limit the update frequency! limit the resolution!
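The event-driven send rule above can be sketched as a small decision function (a sketch only; the deadband, heartbeat and span parameters are illustrative, not from any particular fieldbus):

```python
def should_transmit(new_value, last_sent, last_sent_time, now,
                    deadband_pct, heartbeat_s, span):
    """Event-driven transmission rule from the slide: send when the value
    moved by more than deadband_pct percent of the measurement span, or
    when heartbeat_s seconds elapsed since the last frame ('I'm alive')."""
    if last_sent is None:                       # never sent yet
        return True
    if abs(new_value - last_sent) > deadband_pct / 100.0 * span:
        return True                             # significant change
    if now - last_sent_time >= heartbeat_s:
        return True                             # liveness heartbeat
    return False
```

A coarse deadband loses accuracy; a fine one floods the bus, which is exactly the trade-off the slide warns about.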
Traffic Memory: implementation
Bus and application are decoupled by a shared memory, the Traffic Memory (a content-addressed memory, CAM, also known as communication memory); process variables are directly accessible by the application.
- ports, each holding a dataset
- two pages per port ensure that read and write can occur at the same time (no semaphores!)
- an associative memory decodes the addresses of the subscribed datasets
[Diagram: application processor — Traffic Memory (ports, associative memory) — bus controller — bus]
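The two-page trick can be sketched as follows (a simplified model: in a real traffic memory the page flip is a single hardware store, which is what makes it safe without semaphores):

```python
class Port:
    """Two-page port sketch: the writer fills the inactive page, then
    flips a single index. Readers always see one consistent dataset,
    because the flip of `active` is a single atomic assignment."""

    def __init__(self):
        self.pages = [b"", b""]
        self.active = 0                  # page currently visible to readers

    def write(self, dataset: bytes):
        inactive = 1 - self.active
        self.pages[inactive] = dataset   # fill the hidden page first
        self.active = inactive           # atomic flip publishes it

    def read(self) -> bytes:
        return self.pages[self.active]
```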
Freshness supervision
Applications tolerate an occasional loss of data, but not stale data, which are at best useless and at worst dangerous.
• Data must be checked for being up-to-date, independently of a time-stamp (simple devices do not have time-stamping).
How: a freshness counter for each port in the traffic memory.
• It is reset when the bus or the application writes to that port.
• Otherwise it is incremented regularly, either by the application processor or by the bus controller.
• Applications always read the value of the counter before using the port data and compare it with their tolerance level.
Freshness is evaluated by each reader independently; some readers may be more tolerant than others. Bus error interrupts in case of severe disturbances are not directed to the application, but to the device management.
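A minimal sketch of such a supervised port (the counter ceiling and the reader-side tolerance are illustrative assumptions):

```python
FRESH_MAX = 255          # hypothetical counter ceiling (saturating)

class SupervisedPort:
    """Freshness counter sketch: reset on every write, incremented each
    tick by the bus controller; each reader applies its own tolerance."""

    def __init__(self):
        self.data = None
        self.freshness = FRESH_MAX        # never written -> maximally stale

    def write(self, data):
        self.data = data
        self.freshness = 0                # writing resets the counter

    def tick(self):                       # called periodically
        self.freshness = min(self.freshness + 1, FRESH_MAX)

    def read(self, tolerance):
        """Return (data, fresh): fresh is this reader's own verdict."""
        return self.data, self.freshness <= tolerance
```

Note that two readers with tolerances 1 and 3 can disagree about the same port, exactly as the slide allows.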
Example of Process Variable API (application programming interface)
Simple access of the application to variables in the traffic memory:
ap_put (variable_name, variable_value)
ap_get (variable_name, variable_value, variable_status, variable_freshness)
Optimization: access by clusters (predefined groups of variables):
ap_put_cluster (cluster_name)
ap_get_cluster (cluster_name)
Each cluster is a table containing the names and values of several variables. The clusters can correspond to "segments" in function block programming.
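A toy rendering of this API over a dict-based traffic memory (hypothetical implementation; values are returned as a tuple instead of the slide's out-parameters, and status/freshness are stubbed):

```python
# Hypothetical in-process traffic memory backing the ap_put/ap_get API.
_traffic_memory = {}

def ap_put(variable_name, variable_value):
    _traffic_memory[variable_name] = {
        "value": variable_value, "status": "ok", "freshness": 0}

def ap_get(variable_name):
    """Return (value, status, freshness) for one variable."""
    port = _traffic_memory[variable_name]
    return port["value"], port["status"], port["freshness"]

def ap_put_cluster(cluster):
    """A cluster is a table of names and values, written as one group."""
    for name, value in cluster.items():
        ap_put(name, value)

def ap_get_cluster(names):
    return {name: ap_get(name)[0] for name in names}
```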
Cyclic Data Transmission
Principle: the master polls the device addresses in a fixed sequence (poll list), e.g. 1 2 3 4 5 6, repeated with each variable's individual period.
[Diagram: master with poll list; bus devices (slaves) 1..6 attached to the plant; timeline of consecutive poll rounds]
The duration of each poll is the sum of:
- the transmission time of address and data (bit-rate dependent)
- the round-trip delay (RTD) of the signals, about 10 µs/km (independent of the bit rate)
A read transfer takes 44 µs .. 296 µs.
Round-trip delay of master-slave exchange
[Diagram: the master frame travels through repeaters to the most remote data source; the slave frame travels back to the closest data sink; the next master frame may only start after the exchange completes]
The round-trip delay limits the extension of the bus:
- propagation delay t_pd = 6 µs/km
- repeater delay t_repeat < 3 µs
- plus the reply delay of the source and the turnaround times t_ms, t_sm, t_mm
Cyclic operation characteristics
1. Data are transmitted at fixed intervals, whether they changed or not.
2. The delivery delay (refresh rate) is deterministic and constant.
3. The bus is under the control of a central master (or of a distributed time-triggered algorithm).
4. No explicit error recovery is needed, since a fresh value will be transmitted in the next cycle.
Consequence: the cycle time is limited by the product of the number of data transmitted and the duration of each poll (e.g. 100 µs/variable × 100 variables => 10 ms). To keep the poll time low, only small data items may be transmitted (< 256 bits). The bus capacity must be configured beforehand. Determinism is lost if the cycles are modified at run-time.
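The slide's consequence is a one-line budget calculation, reproduced here so the numbers can be checked:

```python
def cycle_time_ms(n_variables: int, poll_us: float) -> float:
    """Minimum cycle time of a cyclic master: the number of polled data
    items times the duration of one poll (the slide's worked example:
    100 variables at 100 us per poll give a 10 ms cycle)."""
    return n_variables * poll_us / 1000.0
```

Doubling the number of polled variables directly doubles the refresh period, which is why only truly critical variables should be polled cyclically.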
Optimizing Cyclic Operation
Problem: a fixed portion of the bus' time is used => the poll period increases with the number of polled items => the response time slows down.
Solution: introduce sub-cycles for less urgent periodic variables, with periods that are power-of-2 multiples of the base period.
[Diagram: schedule mixing variables of 1 ms (basic), 2 ms and 4 ms periods within successive 1 ms base periods]
Note: poll cycles should not be modified at run-time (non-determinism).
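Building such a schedule can be sketched as follows (a naive sketch: all variables are polled in phase 0 of their period, whereas a real configurator would also stagger phases to balance the load across base periods):

```python
def build_poll_schedule(variables, n_base_periods):
    """Sub-cycle scheduling sketch: each variable has a period expressed
    as a power-of-2 multiple of the base period; a variable with period p
    is polled in every p-th base period."""
    schedule = []
    for cycle in range(n_base_periods):
        slot = [name for name, period in variables if cycle % period == 0]
        schedule.append(slot)
    return schedule

# hypothetical variables: (name, period in base periods, powers of 2)
variables = [("urgent", 1), ("medium", 2), ("slow", 4)]
```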
Cyclic Transmission with Decoupled Application
[Diagram: cyclic application algorithms on devices 1..4, each with source and sink ports in a Traffic Memory; the bus master cyclically polls the port addresses from its periodic list, and the bus controllers move the port data]
The bus traffic and the application cycles are asynchronous to each other. The bus master scans the identifiers at its own pace. The result is deterministic behavior, at the expense of reduced bandwidth and geographical extension.
Example: delay requirement
[Diagram: publisher device — bus — subscriber devices, each running its own application instance]
The worst-case delay for transmitting all time-critical variables is the sum of:
- source application cycle time: 8 ms
- individual period of the variable on the bus: 16 ms
- sink application cycle time: 8 ms
= 32 ms
Event-driven Operation
• Events cause a transmission only when the state changes.
• The bus load is very low on average, but peaks in exceptional situations, since transmissions are correlated by the process (Christmas-tree effect).
[Diagram: intelligent event-reporting stations between the plant's sensors/actuators and the bus]
Detection of an event is an intelligent process:
• Not every change of a variable is an event, even for binary variables.
• Often, a combination of changes builds an event.
• Only the application can decide what is an event, since only the application programmer knows the meaning of the variables.
Bus interface for event-driven operation
• Each transmission on the bus causes an interrupt.
• The bus controller checks the address and stores the data in message queues.
• The driver is responsible for removing messages from the queue memory and preventing overflow.
• A filter decides whether a message can be processed.
[Diagram: application processor (application, filter, driver) above circular message queues, fed by interrupts from the bus controller]
Response of Event-driven operation
[Diagram: request — interrupt — indication — confirm exchange between the caller application, transport software, bus, and called application]
Since events can occur at any time on any device, stations communicate by spontaneous transmission, leading to possible collisions.
- Interruption of the server device at any instant can disrupt a time-critical task.
- Buffering of events can cause unbounded delays.
- Gateways introduce additional uncertainties.
Determinism and Medium Access in Busses
Although the moment an event occurs is not predictable, the bus should transmit the event in a finite time to guarantee the reaction delay. Events are necessarily announced spontaneously. The time required to transmit the event depends on the medium access (arbitration) procedure of the bus. Medium access control methods are either deterministic or not:
- non-deterministic: collision (CSMA/CA)
- deterministic: central master, token-passing (round-robin), binary bisection (collision with winner)
Events and Determinism
Deterministic medium access is necessary to guarantee a bounded delivery time, but it is not sufficient, since event messages are queued in the devices.
[Diagram: event producers and consumers with input and output queues, exchanging data packets and acknowledgements over the bus]
The average delivery time depends on the length of the queues, on the bus traffic, and on the processing time at the destination. Often, the applications influence the event delay much more than the bus does.
Real-time Control = Measurement + Transmission + Processing + Acting
Events Pros and Cons
In an event-driven control system, there is only a transmission or an operation when an event occurs.
Advantages:
- Can treat a large number of events (but not all at the same time)
- Supports a large number of stations
- System is idle under steady-state conditions
- Better use of resources
- Uses write-only transfers, suitable for LANs with long propagation delays
- Suitable for standard (interrupt-driven) operating systems (Unix, Windows)
Drawbacks:
- Requires intelligent stations (event building)
- Needs shared access to resources (arbitration)
- No upper limit to the access time if some component is not deterministic
- Response time is difficult to estimate and requires analysis
- Limited by congestion effects: process-correlated events
- A background cyclic operation is needed to check liveness
Summary: Cyclic vs Event-Driven Operation
decoupled (asynchronous): application processor — traffic memory (buffer) — bus controller
- sending: the application writes data into the memory; the bus controller decides when to transmit
- receiving: the application reads data from the memory
- bus and application are not synchronized
coupled (with interrupts): application processor — event queues — bus controller
- sending: the application inserts data into a queue and triggers transmission; the bus controller fetches the data from the queue
- receiving: the bus controller inserts data into a queue and interrupts the application, which retrieves them
Mixed Data Traffic
Process Data: represent the state of the plant; short and urgent data items (motor current, axle speed, ...) -> periodic transmission of process variables.
Message Data: represent state changes of the plant; infrequent, sometimes long messages reporting events — for users: operator's commands, emergency stops, set points, diagnostics, status; for the system: initialisation, down-loading, ... -> sporadic transmission of process variables and messages.
Since variables are refreshed periodically, no retransmission protocol is needed to recover from a transmission error. Since messages represent state changes, a protocol must recover lost data in case of transmission errors.
[Diagram: each basic period is split into a periodic phase and a sporadic phase; events are served in the sporadic phase]
Mixing Traffic is a configuration issue
Cyclic broadcast of source-addressed variables is the standard solution for process control. Cyclic transmission takes a large share of the bus bandwidth and should be reserved for really critical variables. The decision to declare a variable as cyclic or event-driven can be taken late in a project, but cannot be changed on-the-fly in an operating device. A message transmission scheme must exist alongside the cyclic transmission to carry non-critical variables and long messages such as diagnostics or network management. An industrial communication system should therefore provide both transmission modes.
Real-Time communication stack
The real-time communication model uses two stacks over common media, one for time-critical process variables and one for time-benign messages, plus a management interface:
- layer 7 (Application): present in both stacks
- layers 6, 5 (Presentation, Session): implicit for process variables; Remote Procedure Call for messages
- layer 4 (Transport): none for process variables; connection-oriented for messages
- layer 3 (Network): none for process variables; connectionless for messages
- layer 2" (Logical Link Control): connectionless for process variables; connection-oriented for messages
- layer 2' (Medium Access): common
- layer 1 (Physical): common media
Cyclic or Event-driven Operation For Real-time?
cyclic operation:
- data are transmitted at fixed intervals, whether they changed or not
- deterministic: the delivery time is bounded; the worst case is the normal case
- all resources are pre-allocated (periodic, round-robin)
- object-oriented bus: Fieldbus Foundation, MVB, FIP, ...
event-driven operation:
- data are only transmitted when they change, or upon explicit demand
- non-deterministic: delivery times vary widely; the typical case works most of the time
- best use of resources (aperiodic, demand-driven, sporadic)
- message-oriented bus: Profibus, CAN, LON, ARCnet
Time-stamping and synchronisation
In many applications, e.g. disturbance logging and sequence-of-events recording, the exact sampling time of a variable must be transmitted together with its value.
=> Devices are equipped with a clock recording the creation time of a value (not its transmission time). To reconstruct events coming from several devices, the clocks must be synchronized, taking transmission delays and failures into account.
[Diagram: inputs sampled at t1 .. t4 in several devices, transmitted over the bus together with their time stamps]
Example: Phasor information
Phasor transmission over the European grid: a phase error of 0.01 radian is allowed, corresponding to ±26 µs in a 60 Hz grid or ±31 µs in a 50 Hz grid.
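The slide's figures follow from a one-line conversion: one full period (1/f seconds) corresponds to 2π radians of phase. A small check:

```python
import math

def phase_error_us(phase_rad: float, grid_hz: float) -> float:
    """Convert an allowed phase error (radians) into a time error in
    microseconds: time = phase / (2*pi) * period."""
    return phase_rad / (2 * math.pi) / grid_hz * 1e6

# 0.01 rad -> about 26.5 us at 60 Hz and 31.8 us at 50 Hz,
# matching the slide's rounded +/- 26 us and +/- 31 us figures.
```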
Time distribution
In master-slave busses, the master distributes the time as a bus frame. The slave can compensate for path delays; the time is relative to the master. In demanding systems, time is distributed over separate lines, either as relative time (e.g. PPS = one pulse per second) or as absolute time (IRIG-B), with an accuracy of 1 µs. In data networks, a reference clock (e.g. GPS or atomic clock) distributes the time and a protocol evaluates the path delays to compensate them:
• NTP (Network Time Protocol): an accuracy of about 1 ms is usually achieved.
• PTP (Precision Time Protocol, IEEE 1588): all network devices collaborate to estimate the delays; an accuracy below 1 µs can be achieved without separate cables (but hardware support for time stamping is required).
(Telecom networks typically do not distribute time, they only distribute frequency.)
NTP (Network Time Protocol) principle
[Diagram: the client sends a time request at t1, the server receives it at t2 and responds at t3, the client receives the response at t4; the exchange repeats with t'1 .. t'4]
The delay is measured end-to-end over the network (one calculation).
Problem: asymmetry of the network delays, and long network delays.
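The standard NTP calculation from the four timestamps can be written down directly; note how the symmetry assumption appears: any asymmetry between the two path delays goes straight into the offset error, which is the slide's "problem" line.

```python
def ntp_offset_delay(t1, t2, t3, t4):
    """Standard NTP client calculation.
    t1, t4: client send / receive times (client clock)
    t2, t3: server receive / send times (server clock)
    Returns (offset, round_trip_delay), assuming symmetric path delays."""
    offset = ((t2 - t1) + (t3 - t4)) / 2.0
    delay = (t4 - t1) - (t3 - t2)
    return offset, delay
```

Example: a client 5 s behind the server with a 2 s one-way delay each way gives t1=0, t2=7, t3=7, t4=4, from which the client recovers offset 5 and round-trip delay 4.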
IEEE 1588 principle (PTP, Precision Time Protocol)
[Diagram: grandmaster clock at the root; transparent clocks (TC) forwarding time down to ordinary clocks (OC); Pdelay-request / Pdelay-response exchanged between neighbours]
MC = master clock, TC = transparent clock, OC = ordinary clock
Two calculations: residence time and peer delay.
- All nodes measure the delay to their peer (Pdelay-request / Pdelay-response).
- Transparent clocks correct for their residence time (hardware support).
IEEE 1588 – 1-step clocks
[Diagram: grandmaster clock — 1-step transparent clocks in the bridges — ordinary (slave) clock]
Peer delay calculation: each device exchanges Pdelay_Req (timestamps t1, t2) and Pdelay_Resp (timestamps t3, t4; the response contains t3 - t2) with its neighbour to measure the link delay.
Residence time calculation: each transparent clock measures how long the Sync message resides in it (t5 .. t6); the forwarded Sync contains the accumulated corrections.
The grandmaster sends the time spontaneously. Each device computes the path delay to its neighbour and its residence time, and corrects the time message before forwarding it (hardware support).
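The two calculations named on the slide can be sketched numerically (a simplified model that ignores clock drift during the exchange; the function names are illustrative):

```python
def peer_link_delay(t1, t2, t3, t4):
    """Pdelay mechanism: (t4 - t1) is the requester's round trip and
    (t3 - t2) the responder's turnaround; half the difference is the
    one-way link delay, assuming a symmetric link."""
    return ((t4 - t1) - (t3 - t2)) / 2.0

def corrected_sync_time(origin_time, link_delays, residence_times):
    """A slave's estimate of grandmaster time when a Sync arrives: the
    origin timestamp plus every link delay and every transparent-clock
    residence time accumulated along the path (carried in the
    correction field by 1-step clocks)."""
    return origin_time + sum(link_delays) + sum(residence_times)
```

For instance, with a responder whose clock is 100 units ahead, a 3-unit link and a 4-unit turnaround, the timestamps t1=0, t2=103, t3=107, t4=10 still recover a link delay of 3: the peer's clock offset cancels out of the formula.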
References To probe further • http://www.ines.zhaw.ch/fileadmin/user_upload/engineering/_Institute_und_Zentren/INES/IEEE1588/Dokumente/IEEE_1588_Tutorial_engl_250705.pdf • http://blog.meinbergglobal.com/2013/11/22/ntp-vs-ptp-network-timing-smackdown/ • http://blog.meinbergglobal.com/2013/09/14/ieee-1588-accurate/