Running IOC Core on Windows and Linux

An overview of running EPICS IOC Core on Windows (coupled to LabVIEW through shared memory) and on Linux, covering the benefits, applications, and setup of each.


Presentation Transcript


  1. Running IOC Core on Windows and Linux. Dave Thompson, Wim Blokland, Ernest Williams

  2. Windows • IOC Core plus Channel Access is a more stable combination than an ActiveX server or any user program that uses the CA API poorly. • Using IOC Core allows use of the EPICS database tools and gives better configuration control of the PVs exported to Channel Access. • IOC Core does not seem to add much overhead to the overall application. • The new version supports bi-directional PVs for bo, longout, and ao.

  3. The concept: [Block diagram] A LabVIEW application (wire scanner, BPM, etc.) exchanges data with IOC Core (database, Channel Access) through a shared memory DLL. The LabVIEW side uses shared memory interface VIs: ReadData(), WriteData(), WaitForInterrupt(), SetInterrupt(), GetIndexByName(), and CreateDBEntry(). The IOC side is configured from DB and DBD files and serves the PVs over Channel Access. A C-style view of the DLL interface is sketched below.
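
  The slide names the interface VIs but not their signatures. As a point of reference, a hypothetical C view of the shared memory DLL interface might look like the following; only the six function names come from the slide, while all types, parameters, and the header name are assumptions for illustration, not the actual SNS API.

      /* shm_iface.h (hypothetical): C view of the shared memory DLL.
       * Only the function names come from the slide; signatures and
       * types are assumptions for illustration. */
      #include <stddef.h>
      #include <time.h>

      typedef int shm_index_t;   /* slot index of a PV in shared memory */

      /* Create a PV slot with name, type, and element count. */
      shm_index_t CreateDBEntry(const char *pvName, int dbfType, size_t count);

      /* Look up an existing slot by PV name; negative if not found. */
      shm_index_t GetIndexByName(const char *pvName);

      /* Data moves through a FIFO; a timestamp rides with each request. */
      int WriteData(shm_index_t idx, const void *buf, size_t count,
                    const struct timespec *stamp);
      int ReadData(shm_index_t idx, void *buf, size_t count,
                   struct timespec *stamp);

      /* Request processing on the far side, or block until requested. */
      int SetInterrupt(shm_index_t idx);
      int WaitForInterrupt(shm_index_t idx, double timeoutSeconds);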

  4. Implementation With LabVIEW • LabVIEW and IOC Core link to a shared memory DLL. • Data flows in both directions through the DLL. • The DLL passes data through a FIFO to decouple CPU allocation between the LabVIEW process and the IOC Core process; this reduces data loss. • Shared memory passes both data and “processing” requests. • Time stamps are passed through the shared memory with the processing requests. • LabVIEW starts IOC Core and passes the startup file name on the command line. A usage sketch follows.
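
  To illustrate the data flow just described, here is how a producer might push one reading and its timestamp through the hypothetical interface sketched above. The PV name, the signatures, and the POSIX clock call are all assumptions standing in for whatever the real system uses.

      #include <stdio.h>
      #include <time.h>
      #include "shm_iface.h"   /* the hypothetical header sketched above */

      int main(void)
      {
          double value = 42.0;
          struct timespec stamp;

          /* Bind to an existing slot; the PV name is made up. */
          shm_index_t idx = GetIndexByName("Diag:WS01:Position");
          if (idx < 0) {
              fprintf(stderr, "PV not in shared memory\n");
              return 1;
          }
          clock_gettime(CLOCK_REALTIME, &stamp);  /* timestamp travels with the data */
          if (WriteData(idx, &value, 1, &stamp) != 0)
              return 1;
          SetInterrupt(idx);  /* ask the IOC side to process the record */
          return 0;
      }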

  5. Database Issues With Shared Memory • LabVIEW and the IOC Core database expect to work from the same shared memory layout. • The shared memory device support routines create the database entries in shared memory from parameters in the .db file. • Shared memory PVs have name, type, and size attributes. • Registry entries set the shared memory allocation and ring depths. • The LabVIEW application binds to PVs in the shared memory database and does not attempt to create them. • The EPICS database can be created by LabVIEW (through VIs that interact with the shared memory), by EPICS database tools, or by the SNS RDB database tools. A sketch of such a .db file appears below.
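
  To make the name/type/size attributes concrete, a hypothetical .db fragment might look like this. The device support name "SharedMemory" and the INP/OUT parameter syntax are invented for illustration and will differ from the real SNS support.

      # Each record names its shared memory slot; the device support
      # creates the slot (name, type, size) from these parameters.
      record(ai, "Diag:WS01:Position") {
          field(DTYP, "SharedMemory")
          field(INP,  "@name=Diag:WS01:Position type=DOUBLE size=1")
          field(SCAN, "I/O Intr")
      }
      record(ao, "Diag:WS01:Setpoint") {
          field(DTYP, "SharedMemory")
          field(OUT,  "@name=Diag:WS01:Setpoint type=DOUBLE size=1")
      }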

  6. Linux • At SNS we have a few applications running on R3.14.1 IOC Core on a Linux server. • No device or record support is implemented except the standard “Soft Channel”. • We run some sequencer programs in the IOC process. • It is mostly used as a PV server and a place to put calc records and/or alarms for summaries. • We want the ability to run etherIP and the serial devices. An example soft record is sketched below.
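
  As an example of what lives on such a soft IOC, a summary calc record needs nothing beyond the standard “Soft Channel” support. The PV names here are hypothetical.

      record(calc, "Global:Diag:SummaryStatus") {
          field(SCAN, "1 second")
          field(INPA, "Diag:IOC1:Heartbeat")   # made-up status PVs
          field(INPB, "Diag:IOC2:Heartbeat")
          field(CALC, "A && B")                # 1 only if both are up
          field(LOW,  "0")                     # alarm when either is down
          field(LSV,  "MAJOR")
      }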

  7. Applications On the Linux IOC Core • Timing Scope application • Reads PVs out of the timing system and calculates gate waveforms relative to other waveforms from controls and diagnostics. • Allows the operators to visualize the timing.

  8. Applications • Status PVs from IOCs and server-based processes. • These PVs are driven by periodic tasks that write to the PVs via Channel Access; OPI and archive servers monitor the PVs. • The health of IOCs and archive tasks is collected. • Ping tasks monitor the UP status of both IOCs and servers. • No device support: all updates are made by CA clients, as in the sketch below.
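
  A minimal sketch of one such periodic CA client, built against the Channel Access C library from EPICS base. The PV name and the update period are made up; the real SNS tasks differ.

      /* heartbeat.c: toggle a status PV over Channel Access.
       * Build against EPICS base (libca); the PV name is hypothetical. */
      #include <stdio.h>
      #include <unistd.h>
      #include <cadef.h>

      int main(void)
      {
          chid ch;
          dbr_long_t beat = 0;

          SEVCHK(ca_context_create(ca_disable_preemptive_callback),
                 "ca_context_create");
          SEVCHK(ca_create_channel("Diag:IOC1:Heartbeat", NULL, NULL, 0, &ch),
                 "ca_create_channel");
          SEVCHK(ca_pend_io(5.0), "connect timed out");

          for (;;) {              /* periodic task: the only writer */
              beat = !beat;       /* a stuck value means the task died */
              SEVCHK(ca_put(DBR_LONG, ch, &beat), "ca_put");
              ca_flush_io();
              sleep(10);
          }
      }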

  9. The setup: [Diagram] A CA client application on a Linux box collects many PVs from the accelerator and writes them to the pvServer application (the soft IOC) on a Linux box; a terminal server connects a user to the IOC shell. Example session at the IOC shell:

      epics> dbpr Global:ArchiveEngine1:Status
      ASG:                DESC:               DISA: 0     DISP: 0
      DISV: 1             NAME: Global:ArchiveEngine1:Status
      RBV: 0              RVAL: 0             SEVR: NO_ALARM
      STAT: NO_ALARM      TPRO: 0             VAL: 0
      epics>

  10. Startup and operation of the IOC • We run the IOC as a service from inittab, like a getty. • init keeps it running (respawns it) at all times. • Startup goes in three stages.

  Stage 1: new line in inittab:

      # Epics pvServer service on port ttyS1
      S1:2345:respawn:/etc/epics/su-tty ttyS1

  Stage 2: su-tty redirects I/O and sets the user:

      #!/bin/sh
      (stty sane
      su - thompson -c /home/thompson/pvServer/startPvServer
      ) >/dev/$1 2>/dev/$1 </dev/$1

  11. Stage 3: The startup script: • Sets up the EPICS environment • Sets PATH and LD_LIBRARY_PATH • Starts caRepeater • Starts the application in $TOP • Cleans up lingering threads after an exit • Depends on init to reboot the ‘IOC’

      cd $TOP
      caRepeater >/dev/null 2>&1 </dev/null &
      pvServer iocBoot/iocpvServer/st.cmd
      # Kill all threads of the pvServer & exit
      for i in `/bin/ps h o pid -C pvServer` ; do
          kill -s SIGKILL $i
      done

  12. Problems and Plans • Could not get auto save and restore to work; the problem may be related to mutex locking. • Want to implement the IOC name to RTDL reset code lookup in a sequencer running here (part of the timing system). • Plan to port to 3.14.2 next week.
