In this section we explain basic concepts about the design of control systems that are not always clear, and we recommend some good design practices. We cover many aspects: the control panel, options for redundant architectures, design of a fail-safe system, the different voting logics (1oo2, 2oo2, 2oo3, …), some tips for selecting I/O modules, etc. Let us know if you are interested in a specific topic.
Why is Cpt so important?
Why is Cpt so important? In the process industry, when we calculate the average Probability of Failure on Demand (PFDavg) of a Safety Instrumented Function (SIF), we use equations of the following type. The “Cpt” parameter is one of the most important in these equations, and it often receives little attention. “Cpt” (Proof Test Coverage) relates to the proof test, defined in IEC-61511 as follows: “Periodic test performed to detect dangerous hidden faults in a SIS so that, if necessary, a repair can restore the system to an as new condition or as close as practical to this condition.” These tests must be performed every TI hours to try to detect the dangerous undetected failures (DU), i.e., those not caught by the automatic diagnostics implemented in the SIS. The greater the effectiveness of these tests, the higher the value of Cpt. We frequently find valve manufacturers that calculate the PFDavg with a Cpt value of 100%, which is impossible in reality, especially in the case of final elements. The impact of the Cpt value on the resulting PFDavg, and therefore on the SIL achieved, can be enormous in many cases. Using one of the calculation functions of the SILcet tool, we have prepared some illustrative examples. EXAMPLE 1: Suppose a SIF architecture as shown in the image. The result is as follows: we have calculated 12 values of PFDavg, reducing the initial Cpt of the actuator subsystem (90%) in steps of 2%. The first value corresponds to the starting point (90%), and we observe how the result moves from the SIL-2 zone to the SIL-1 zone as the Cpt value decreases. In a case like this, a difference of 5-10% in the Cpt can completely change the result. EXAMPLE 2: Suppose a SIF architecture similar to the previous one. We use the same components except for the final element, for which in this case we have assumed much lower failure rates: λDU = 700 FITs instead of the 3000 FITs of the previous case.
We also start from Cpt = 90% in the actuator and reduce its value in steps of 2%. In this case the impact of moving Cpt towards more realistic values is not as great, because we have selected a final element with much lower DU failure rates. CONCLUSION: It is very important to correctly define the proof test procedure that will be carried out on the safety functions, as it determines the Cpt value we should use in the PFDavg calculations. In general, we should not accept Cpt values of 100%, as they are not realistic. In the case of final elements, if we do not know the value, it is advisable to be conservative and use values around 65%. Calculating with very high Cpt values that have not been properly analysed can lead to erroneous achieved SIL values.
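The sensitivity illustrated by these examples can be sketched numerically. The following is an illustrative sketch only: it uses a commonly published simplified 1oo1 PFDavg approximation (failures covered by the proof test are found every TI hours, uncovered ones only over the lifetime LT), not the exact SILcet equations, and the failure rate, test interval and lifetime values are assumed.

```python
# Illustrative sketch (not the exact SILcet equations): simplified 1oo1
# PFDavg model where proof-tested failures are found every TI hours and
# failures not covered by the proof test persist over the lifetime LT.
def pfd_avg_1oo1(lambda_du, cpt, ti, lt):
    """lambda_du in failures/hour; cpt as a fraction; ti, lt in hours."""
    return lambda_du * (cpt * ti / 2 + (1 - cpt) * lt / 2)

def sil_band(pfd):
    """Map a PFDavg value to its SIL band (low demand mode)."""
    if pfd < 1e-4: return 4
    if pfd < 1e-3: return 3
    if pfd < 1e-2: return 2
    if pfd < 1e-1: return 1
    return 0

LAMBDA_DU = 700e-9    # 700 FIT, illustrative value
TI, LT = 8760, 87600  # 1-year proof test, 10-year lifetime (assumed)

for cpt in [0.90, 0.80, 0.70]:
    pfd = pfd_avg_1oo1(LAMBDA_DU, cpt, TI, LT)
    print(f"Cpt={cpt:.0%}  PFDavg={pfd:.2e}  SIL {sil_band(pfd)}")
```

With these assumed numbers the result crosses from the SIL-2 band into the SIL-1 band as Cpt drops, which is the same qualitative effect shown in the examples above.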
Dangerous failures in the SIS
Dangerous failures in the SIS. When calculating the probability of failure of a Safety Instrumented Function (SIF), the dangerous failures are the most important, as we can see in the following equations for the 1oo1 and 1oo2 architectures. λDD is the rate of dangerous detected failures, and λDU corresponds to the undetected ones. We can break down dangerous failures into 4 groups: DD failures: the dangerous failures detected by the product diagnostics, i.e., by the diagnostics integrated by the manufacturer of the safety PLC, transmitter, positioner, etc. Valves, being mechanical elements, have no product diagnostics. DU failures convertible to DD: dangerous failures not detected by the automatic diagnostics, but which can be detected if we implement some application diagnostic in the SIF. Typical cases are, for example, detecting that an analog signal is out of range, or the Partial Stroke Test (PST) on safety valves. DU1 failures: dangerous failures not detected by the automatic diagnostics, but which we detect in the manual tests (proof tests) performed on the SIF every TI hours (Test Interval). The greater the effectiveness of these tests, the higher the value of Cpt (proof test coverage), which is a very important parameter in the failure probability formulas. DU2 failures: the dangerous failures that we never detect, neither with the automatic diagnostics nor in the manual tests. These are “hidden” faults that do not show up in tests and can be very diverse in nature. Keep in mind that the tests we must periodically perform on all safety functions are never perfect. We can therefore define the “effectiveness of the tests” as follows: Cpt = DU1 / (DU1 + DU2). In the process industry this value varies between 70 and 95% in the SENSOR and LOGIC SOLVER subsystems, and between 40 and 95% in the ACTUATOR subsystem.
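The Cpt definition above can be checked with a quick calculation; the DU1/DU2 split used here is invented purely for illustration:

```python
# Quick numeric check of the Cpt definition: Cpt = DU1 / (DU1 + DU2).
# The failure-rate split (in FIT) is an invented, illustrative example.
def proof_test_coverage(du1, du2):
    """Share of undetected dangerous failures found by the proof tests."""
    return du1 / (du1 + du2)

du1 = 800  # FIT: DU failures found by the periodic proof test (assumed)
du2 = 200  # FIT: DU failures never found, "hidden" faults (assumed)
print(f"Cpt = {proof_test_coverage(du1, du2):.0%}")  # → Cpt = 80%
```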
Go deeper into these and many other concepts in our RECOMMENDED “online” COURSE on Functional Safety: “Design of SIFs and SIL calculation”
Diagnostics in the SIS
The importance of diagnostics in the SIS. The diagnostics in a Safety Instrumented System (SIS) are of crucial importance because they are the key to reducing the rates of undetected dangerous failures and, therefore, to reducing the probability of failure on demand (PFD / PFH) and increasing the SIL. We can see this in the following 2 equations (used in SILcet) for the 1oo1 and 1oo2 architectures, which are used to calculate the probability of failure and the SIL achieved. The objective of the diagnostics is to detect any internal failure in a component: they monitor the correct operation of the devices involved in a SIF (Safety Instrumented Function). We can classify diagnostics into 2 types: Product diagnostics or self-diagnostics. These come integrated from the factory with the product (sensors, PLC, final elements). In safety PLCs the self-diagnostic coverage is very high, and in certified 4-20 mA transmitters it is high. It is among the final elements of the SIF that, depending on the type, we can find certified products without product diagnostics (such as shut-off valves), because they are usually purely mechanical elements. Application diagnostics. These are additional diagnostics for each specific application. They are not always necessary, as this depends on many factors, but for SIL-2 and SIL-3 levels they can be essential to meet the required SIL. To implement them we need to add some software routines in the PLC and, sometimes, also external wiring to the PLC and some additional hardware components (limit switch, valve positioner, line monitoring resistor, transmitter, DO feedback to DI, etc.). Some practical examples of this type of diagnostic: 4-20 mA transmitter signal diagnostics (out of range, frozen signal, etc.). Diagnostics by comparison (IEC-61508 gives them a lot of credit). Use of the HART protocol to diagnose problems in the transmitters or in their wiring from the PLC to the instrument (earth leakage, etc.).
Diagnostics to detect the failure of the digital outputs of a PLC, since this is not a standard feature in all PLCs. The Partial Stroke Test (PST) in a safety valve. Diagnosis of valve failures by using transmitters, an interesting application that can only be used in certain designs. Other cases such as detection of cable break, etc. These are some of the examples we explain in the course “Design of SIFs and SIL calculation”.
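Two of the application diagnostics mentioned above (out-of-range and frozen 4-20 mA signal) can be sketched in a few lines. This is only an illustration of the idea, not PLC code; the thresholds are typical NAMUR-style values and are assumed here:

```python
# Sketch of two application diagnostics for a 4-20 mA analog input.
# Thresholds are typical NAMUR-style limits, assumed for illustration.
def out_of_range(ma):
    """Flag a 4-20 mA signal outside plausible limits (broken wire, short)."""
    return ma < 3.8 or ma > 20.5

def frozen_signal(samples, tolerance=0.01):
    """Flag a signal that has not moved at all over the sampled window."""
    return max(samples) - min(samples) < tolerance

print(out_of_range(2.5))    # signal below 3.8 mA, e.g. broken wire → True
print(out_of_range(12.0))   # healthy mid-range signal → False
print(frozen_signal([12.00, 12.00, 12.00, 12.00]))  # suspiciously flat → True
```

In a real SIF these checks would run in the safety PLC and convert part of the DU failures of the transmitter loop into DD failures.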
Common cause failures
The importance of common cause failures. When designing a control system, we should pay special attention to common cause failures, that is, to the factors that may cause the simultaneous failure of several components or redundant channels. This is an even more important aspect in the case of safety instrumented systems (SIS), and it is considered in international safety standards such as IEC-61508 (for all industries), IEC-61511 (for the process industry), IEC-62061 (for machinery safety), IEC-61513 (for the nuclear industry) or ISA-84. We can produce an excellent design of the system or of the safety instrumented function, but if we neglect the common cause failures the final result can be really bad. The design engineer is not always aware of the importance of this. Common cause failures are of vital importance in high-availability systems, i.e., systems with redundant architectures (2oo3, 2oo4, 2oo2, 3oo3, etc.), and in safety systems with 1oo2, 1oo3, 2oo3, etc. architectures. Suppose, for example, a PLC with redundant CPUs mounted inside a cabinet with insufficient ventilation, with errors in the design of the grounding system, and tested by personnel with little experience (which greatly increases the possible software errors). Any of these aspects can have a very negative effect on both CPUs simultaneously. How many system integrators actually calculate the heat dissipation inside the control cabinet under the specified temperature and humidity conditions? It is not a complicated calculation, but it serves as an example to draw attention to common cause failures. In the field elements, such as sensors or actuators, there are also many common factors that influence the operation of the system, such as how they are assembled and calibrated, whether they have been correctly specified, or, for example, whether we are using the same multi-core cable and junction boxes for all redundant channels.
To evaluate and quantify common cause failures, IEC-61508 makes several recommendations and introduces the so-called β factor, which should be used in the formulas for calculating failure probabilities. To give a better idea of what we are talking about, we show below some of the formulas used in the SILcet application, where the term corresponding to the common cause failures is shown in red. This term greatly influences the 1oo2, 2oo3 and 2oo4 results, since the first term is usually a power of a very small number (<< 0.1).
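The dominance of the common cause term can be seen with a small numeric sketch. This uses a commonly published simplified low-demand 1oo2 approximation, not necessarily the exact SILcet formulas, and the λDU, TI and β values are assumed for illustration:

```python
# Hedged sketch of a simplified low-demand 1oo2 approximation, to show
# why the beta (common cause) term dominates: the independent term is
# the square of a very small number. Not the exact SILcet formulas.
def pfd_1oo2_terms(lambda_du, ti, beta):
    """Return (independent term, common-cause term) of the 1oo2 PFDavg."""
    independent = ((1 - beta) * lambda_du * ti) ** 2 / 3
    common_cause = beta * lambda_du * ti / 2
    return independent, common_cause

# Illustrative values: 1000 FIT, 1-year test interval, beta = 5%
ind, ccf = pfd_1oo2_terms(lambda_du=1e-6, ti=8760, beta=0.05)
print(f"independent term : {ind:.2e}")
print(f"common-cause term: {ccf:.2e}")  # roughly an order of magnitude larger
```

Even with a modest β of 5%, the common cause term dwarfs the independent one, which is why neglecting it produces unrealistically optimistic SIL results.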
Safety Instrumented Function
What is a Safety Instrumented Function? A Safety Instrumented Function (SIF) is a control loop in a process or machine whose objective is safety. In the following image we see the most common simplified representation of the SIF. The integrity and performance of a safety instrumented function depend on a large number of factors, and they are measured by the so-called Safety Integrity Level (SIL), which is covered by various international standards such as IEC-61508 (for all industries), IEC-61511 (for the process industry), IEC-62061 (for machinery safety), IEC-61513 (for the nuclear industry) or ISA-84. Some of the main factors that influence the performance of the SIF are the following: The technology used: the quality of the components and the manufacturer, the safe and dangerous failure rates, the automatic diagnostic capability of the components, etc. The architecture used: component redundancy, common cause failures, etc. The response time of the components, the time to repair and the restoration time to normal operation. The activities throughout the life cycle of the safety instrumented function, such as periodic tests, documentation of failures and other actions, SIL verifications, etc. In the following image we can see a more detailed representation of the SIF, showing many other elements that make up the safety function. Sensors. It is very important to consider everything the sensor needs to work properly, such as an adequate connection to the process, the correct measurement technology in each case, and other aspects of the design such as wiring and the interface components with the safety PLC. Logic solver. The logic solver can be a PLC, a relay system or an electronic system in general (programmable or not), but it must meet a series of requirements to be used in a safety instrumented function. In this article we talk, for example, about the safety PLC.
The design must take into account hardware, software and firmware, as well as external factors such as cybersecurity. Final elements. In the safety instrumented function, the final elements are usually the weakest link in the chain, for different reasons (they are mechanical elements in direct contact with the process). It is very important to select the construction materials well, as well as all the components, and to execute the on-site mounting correctly. Other elements. There are many other elements and external factors that greatly influence the performance and integrity of the safety instrumented function, such as external temperature, vibrations, electromagnetic interference, dust in suspension (especially if it is corrosive), power supplies, operation and maintenance tasks, etc. All these factors fall into the category we call common cause failures and must be analyzed in detail in order to minimize their impact on the performance of the SIF, i.e., to avoid degradation of the required SIL level. Watch this video about the basics of Functional Safety. Go deeper into these and many other concepts in our RECOMMENDED “online” COURSE on Functional Safety: “Design of SIFs and SIL calculation”
SIL requirements – Systematic Capability, Failure Probability and Architectural Constraints. The designer of the safety instrumented function must verify that the 3 SIL requirements of the IEC-61508 standard are met. Each requirement will meet a certain maximum SIL level; the final SIL of the SIF is the lowest of the three and must be greater than or equal to the required target SIL. 1 - The systematic capability of the element (sensor, PLC, valve, etc.) is reflected in the certification issued by entities such as Exida or TÜV. The rating achieved (SC 1/2/3) depends on the effectiveness of the quality system and other aspects. A category of, for example, SC3 means that the product is certified for applications up to SIL 3. Another option is “prior use”, which must be documented and justified. It is important to be clear that using certified products is not enough to comply with the SIL level; it is necessary to comply with all the SIL requirements. If the systematic capability of any of the elements is unknown, this should be indicated in the verification report. 2 - The second requirement is determined by the average Probability of Failure on Demand of the SIF (PFDavg, for low demand systems) or the Probability of dangerous Failure per Hour of the SIF (PFH, for high demand systems). The calculation is made for the selected architecture in each subsystem (sensor, logic solver and actuator), and the results are summed to obtain the probability of failure of the SIF (Safety Instrumented Function). If the required SIL is not met, it will have to be recalculated with another architecture (1oo2, 2oo3, etc.), with products with lower failure rates, or by improving other factors that affect the calculation (test interval, beta factor for common cause failures, etc.). It is important to use realistic failure rate values, if possible based on historical data from similar applications.
3 - The third SIL requirement, the “Architectural Constraints”, is based on minimum hardware redundancy requirements. Tables of SFF (Safe Failure Fraction) and HFT (Hardware Fault Tolerance) values are used. If the selected architecture does not comply in any of the subsystems (sensor, logic solver, actuator), it will have to be recalculated with a safer architecture (1oo2, 2oo3, etc.). There are 2 options (Route 1H and Route 2H). Route 2H is used if the failure rates are realistic for the specific application; otherwise Route 1H should be used, which was created as a defense against unrealistically low failure rates. In Route 1H there are two HFT tables, according to whether the element is Type A (simple elements such as pressure switches, valves, etc.) or Type B (complex elements such as smart transmitters or PLCs). The three SIL requirements must be met, and the maximum SIL achievable by the SIF is the lowest of the three. This is the methodology we use in the SILcet tool to calculate and verify the SIL.
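The verification rule described above reduces to taking the minimum of the three requirement results and comparing it with the target. A minimal sketch, with purely illustrative values:

```python
# Sketch of the SIL verification rule: the achieved SIL is the minimum
# of the three requirements (systematic capability, failure probability,
# architectural constraints) and must reach the target SIL.
def achieved_sil(sil_systematic, sil_pfd, sil_arch):
    """The SIF can only claim the lowest SIL of the three requirements."""
    return min(sil_systematic, sil_pfd, sil_arch)

target = 2  # required target SIL (illustrative)
sil = achieved_sil(sil_systematic=3, sil_pfd=2, sil_arch=3)
print(f"Achieved SIL {sil} - target {'met' if sil >= target else 'NOT met'}")
```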
Isolators and Terminals (phase 7)
Phase 7: Isolators and Terminal Blocks. In this article we discuss the options we have for isolators, converters, barriers, terminal blocks and other components (index of design phases). 7 - Isolators, field terminals and prefabricated cables. Converters, isolators and barriers. These are used to convert signal ranges, to electrically isolate two circuits, to duplicate signals, to amplify signals, and so on. Normally we use the term “signal converter” when we transform one range into another, for example from 0-10 VDC to 4-20 mA. The term “isolator” is used when the main objective is galvanic separation (electrical isolation) between the input and output circuits. The term “barrier” usually refers to the intrinsic safety (IS) isolators used to wire signals from the control panel to the hazardous area. In the case of redundant signals we can use signal duplicators, mainly with analog signals. There are many types and formats available on the market, and it is rare not to find the solution we are looking for. The important thing from the design point of view is to make the correct selection in each case, since the impact on the total cost can be significant, especially in large systems. For example, if technically acceptable, it is better to use double isolators than single ones, with the consequent saving of cost and space in the cabinet. Field terminal blocks. The possible options are many and it is difficult to give general rules. Usually the discussion focuses on the following: - Screw or spring connection. In most cases the screw connection is accepted, but not always the spring connection; the latter is more suitable when there are vibrations, and it saves wiring time in the control cabinet. - Fused or non-fused terminals. This is a more important point than it seems and should be checked in the technical specification; some end users have clear requirements in this regard.
The reason for using fuses in the terminals is to prevent a short circuit or other problem in one signal from affecting the whole module or panel. On the other hand, we know that introducing fuses means introducing other elements that can fail. Let’s look at some of the options we have: A) Use a fused terminal for each signal individually. B) Use a fused terminal for each group of signals or module, according to a functional criterion (per equipment or service). C) Use I/O modules with built-in electronic protection. D) In the case of digital outputs, protect each group with a circuit breaker; this is sometimes used to protect a group of solenoid valves. We can also combine several of these options depending on the type of signal and the type of module used. - Single or double level terminal blocks. It is more common to use single level terminal blocks; their main advantage appears when wiring, both in the manufacturing workshop and in the field. We use double level blocks when we want to save space, even at the cost of making access to the terminals more difficult. - Cables with or without ferrules. In general, it is technically more convenient to use ferrules on the cables, although numerous installations do without them when using spring terminals. Prefabricated cables for PLCs. This is a good option to consider on many occasions, especially if we want to reduce the cost, space and wiring time of the control cabinet. Finding an advantageous design based on prefabricated cables can be complicated if we have made a functional distribution of I/O signals (based on equipment or field areas), because the same module may then contain signals with different wiring (e.g. 2-wire and 4-wire analog inputs, digital outputs to intermediate relays with live or voltage-free contacts, digital inputs with and without intermediate relays, etc.).
In large systems, especially in the oil & gas industry, marshalling cabinets are located between the junction boxes and the DCS. Inside these cabinets the field cables are connected and cross-wiring is performed to order the signals according to the DCS I/O modules. In this way it is possible to use prefabricated cables to interconnect the marshalling cabinets with the DCS modules. In large and complex projects this approach has many advantages. Link to next phase.
Power Supply (phase 6)
Phase 6: Power Supply and Circuit Breakers. In this article we discuss the design of the 24 VDC power supply and the circuit breakers (index of design phases). 6 - Power Supply and Circuit Breakers. The design of the power distribution of the control panel is an important aspect that can greatly influence the availability of the plant, and it is not always given the necessary attention. Power supply. Let’s look first at the 24 VDC power supply needed in many control cabinets. Our recommendations are as follows:
- Perform a conservative calculation of the power required, applying a coherent simultaneity factor and adding at least a 20% reserve.
- Analyze the voltage range allowed by the 24 VDC consumers in order to correctly design the power supply.
- Analyze the behavior of the power supply throughout the operating temperature range; this information is provided by the manufacturer.
- Analyze the behavior of the power supply in the case of micro-cuts in the input voltage, especially with unstable power networks.
- Use a redundant power supply configuration whenever possible; different technologies and commercial modules exist that allow both power supplies to be used in a balanced way.
- Analyze what type of 24 VDC loads we have and how we are going to distribute and protect the different power lines. The design must take into account whether any of these loads can demand consumption peaks that adversely affect the rest; if so, it is important either to consider this in the power calculation or to use electronic circuit breakers with adjustable current limiting.
- Analyze the efficiency of the power supply, because it matters both for the electrical consumption and for the heat dissipation inside the cabinet.
- Depending on the environment and the application, analyze whether the design needs to include a capacitor bank to ride through micro-cuts in the supply, DC/DC converters to isolate zones from each other, or an uninterruptible power supply (UPS).
Power distribution. The distribution of power and the selection of circuit breakers is generally not complicated. The most important thing is to correctly define the number, levels and protection current of each circuit breaker so that the protection and discrimination of each line are correct. The design must minimize common mode failures as much as possible, i.e., ensure there is no single fault that causes the unintentional trip of two or more breakers. This is especially important when designing a redundant system with redundant power supplies and separate protections on each channel. For example, it is not a good design to use a single circuit breaker to protect both power supplies of a redundant system, or a single breaker to protect all the power to field elements such as solenoid valves. In any case, all this depends on the design criteria and on whether the budget is sufficient. Another aspect to consider in the design of the power distribution is the difference between AC and DC, since it is very common to use the same type of circuit breakers for everything without analyzing whether this is actually correct: the DC tripping curve is different, with a factor of 1.3. Keep in mind that if we want to make a very efficient design, adjustable electronic circuit breakers are an option; in certain applications they may be necessary. Link to next phase.
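The first recommendation above (conservative power calculation with a simultaneity factor and at least 20% reserve) can be sketched as a simple budget. The loads, the simultaneity factor and the reserve are invented values, purely for illustration:

```python
# Rough 24 VDC power-budget sketch: apply a simultaneity factor and
# add at least 20% reserve, as recommended. Load values are invented.
loads_w = {
    "PLC rack": 45.0,
    "I/O modules": 60.0,
    "interface relays": 25.0,
    "solenoid valves": 80.0,
    "isolators": 30.0,
}

simultaneity = 0.8  # not all loads draw at once (assumed factor)
reserve = 0.20      # minimum 20% reserve, per the recommendation

required_w = sum(loads_w.values()) * simultaneity * (1 + reserve)
required_a = required_w / 24.0
print(f"Required: {required_w:.0f} W ({required_a:.1f} A at 24 VDC)")
```

The result would then be rounded up to the next commercial power supply size, also checking its derating over the cabinet temperature range.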
HMI / SCADA (phase 5)
Phase 5: Selection of the HMI. In this article we discuss the selection of the HMI / SCADA (index of design phases). 5 - Selection of the HMI or SCADA. In medium and large systems the operator interface is normally defined in the technical specification. In a PLC system we talk about a SCADA; in a DCS we talk about the operator stations. In small systems with a single PLC there are other options to consider, such as operator terminals or low-cost SCADA packages. Large end users typically specify the brands of the entire control system, including the HMI. In any case, let’s look at some questions we should ask ourselves when selecting the HMI / SCADA, depending on whether we are the end user or the system integrator: End user: i) Do I want it to be compatible with my existing hardware? ii) What degree of scalability should it have? iii) What level of technical assistance do I need? iv) Is it easy to update to new versions, and at what cost? v) Can I carry out the maintenance myself? vi) What is the licensing policy and its cost? vii) Does it have proven references in systems with many variables? viii) Does it have references and libraries in similar applications? ix) What communication options does it have, and how secure are they? x) What other available products are compatible with this SCADA (MES, Historian, Data Analytics, Asset Management, etc.)? xi) Does the company behind the product give me confidence? xii) What is the experience of the system integrator? System integrator: i) Am I already trained in this SCADA? ii) Have I already used it in a similar application? iii) Is there technical assistance in case of need? iv) Does it have references in systems of a similar size? v) Does it have references, libraries and complete functions for this type of application? vi) Are the communication drivers I need available? vii) Do I have confidence in the company behind the product?
Nowadays, with the digital transformation we are living through, there are other aspects that we should take into account when choosing the HMI, such as remote access to the system from web browsers, mobile phones and tablets, or everything related to cybersecurity (encryption, authentication, firewalls, etc.). We also want to reflect on the differences between the PLC and the DCS. Many discussions on this topic forget that PLC + SCADA systems have developed a lot in recent years and have nothing to do with those available 15 or 20 years ago. In any case, especially in large applications, there are still significant differences, mainly related to the number of databases (a single one in the case of the DCS) and the advanced control features offered by some DCS manufacturers that have specialized in specific applications (refining processes, generation plants, etc.). Communication signals. These are the signals exchanged between two controllers, or between a PLC and the DCS, over a communications bus. In this case you must define the physical layer (RS232, RS485, Ethernet, etc.) and the protocol used (Modbus RTU, etc.). For example, in large facilities it is common to have PLCs in the package units that communicate with the DCS. Link to next phase.
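As a side note on the Modbus RTU protocol mentioned above: every RTU frame ends with a CRC-16 (polynomial 0xA001 bit-reflected, initial value 0xFFFF), which is what lets each node reject corrupted frames on the serial bus. A pure-Python sketch of that checksum:

```python
# CRC-16/MODBUS as appended to every Modbus RTU frame:
# reflected polynomial 0xA001, initial value 0xFFFF, no final XOR.
def crc16_modbus(data: bytes) -> int:
    crc = 0xFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            if crc & 1:
                crc = (crc >> 1) ^ 0xA001
            else:
                crc >>= 1
    return crc

# Standard check value for the CRC-16/MODBUS algorithm:
print(hex(crc16_modbus(b"123456789")))  # → 0x4b37
```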
CPU and Architecture (phase 4)
Phase 4: CPU Selection and Control System Architecture. In this article we discuss how to select the CPU and describe the main options for the control system architecture (index of design phases). 4 - CPU selection and design of the Control System Architecture. The selection of the CPU and the PLC / PAC model depends directly on several parameters. Some of the main ones are:
- The number of inputs and outputs, the number of racks, and whether the I/Os are local and/or remote.
- The size of the memory, and the power and speed of program execution (cycle or scan time).
- Whether we need to program complex or regulation algorithms (PIDs, for example).
- Whether the CPU module must incorporate any communications port and/or any special feature.
- Whether we have a redundant CPU configuration, and what type of redundancy is specified.
- Whether it is a fail-safe PLC.
Another aspect to consider is whether we are talking about a traditional PLC or PC-based control (“embedded PC” or “soft PLC”). Control System Architecture. The term control system architecture is very broad and covers many aspects, from the configuration of the PLC (local and remote racks, redundancy, etc.) to the type of network at its different levels. The choice of architecture depends firstly on what is required in the technical specification, and additionally on the design decisions made so far. Here are some cases: - PLC / PAC with local I/O racks and without redundancy. This is the simplest case, from a single chassis with its power supply, CPU and I/O modules, to a configuration with several local expansion chassis. - PLC / PAC with local I/O racks and with redundancy. The redundancy can be of the CPU, the power supply and the I/Os. Normally the term redundancy is associated with increased availability (2oo2, 2oo3, 2oo4 logic), but we can also speak of redundancy to increase safety (1oo2 logic).
When designing a redundant system we must verify that there are no common mode failures, i.e., that no single failure can affect both redundant channels at the same time. If this happens, our system is no longer fully redundant. This is one of the points that requires the most attention and analysis when designing a redundant control system. Moreover, this analysis is not exclusive to the PLC or the control cabinet, but applies to the plant as a whole. Reducing common mode failures to zero is an almost impossible task; in each case the designer should focus on those most likely to occur. - Control system architecture with decentralized controllers and remote I/O. There may be different reasons for decentralizing I/Os: i) saving wiring; ii) redundancy criteria; iii) distribution and segmentation of I/Os; etc. It is also possible to use a fieldbus to communicate the CPU directly with some devices (drives, etc.). In large installations, the decentralization of controllers by production areas is good practice. Nowadays, fiber optic networks give us very high quality communication in almost any type of network. - Basic industrial network. In the simplest configuration we will have a PLC with local I/O and a PC / SCADA or operator terminal connected to the communications port of the CPU. - Industrial network (control level). Manufacturers offer different topologies (star, ring, bus or a combination). Typically, at the control level we connect the controllers, the SCADA or operator stations, and the engineering station. Today Industrial Ethernet is becoming dominant in its different protocol variants (EtherNet/IP, Profinet, EtherCAT, POWERLINK, Modbus-TCP, etc.). One of the aspects to consider (not always defined in the technical specification) is the determinism of the network, to ensure that information is transmitted from one node to another within a specific time.
Classic industrial networks such as ControlNet or Profibus are deterministic, but the Ethernet of the IT world is not. Many of the industrial protocols that use Ethernet have been adapted to be deterministic and are suitable for real time, but there is still no standard for this. There are high expectations for Ethernet TSN, which may soon become a standard for industry. - Industrial network (fieldbus). Used to connect I/O blocks and intelligent devices (sensors, actuators, drives, MCCs, etc.). The most commonly used are Profibus DP, Modbus-RTU, CC-Link, CAN, DeviceNet, etc. - DCS (Distributed Control System). In large installations, Distributed Control Systems (DCS) are used, consisting of at least two controllers with their I/Os, one or two servers, the operator stations and an engineering station. We will have at least one fiber optic network, and sometimes also one or several fieldbuses. In this type of plant, locating the controllers by zones usually provides interesting advantages. - Wireless networks. There are plant wireless networks, normally intended to interconnect devices such as cameras, tablets, mobile workers, asset tracking, etc., and field wireless networks designed to interconnect field sensors and final control elements for process measurement and control. Field wireless networks can be integrated into plant networks via Ethernet cable or radio. The two major wireless standards for the process industry are ISA 100.11a and WirelessHART. It is a solution to consider mainly in modifications and extensions of existing plants, and for the monitoring of non-critical signals. Link to next phase.
Functional Distribution (phase 3)
Phase 3: Grouping and segmentation of I/Os and racks (functional distribution)
In this article we discuss the importance of the functional distribution of I/Os, especially in medium-size and large control systems (index of design phases).
3-Grouping and segmentation of signals and racks – Functional distribution
The way we distribute I/O signals, modules and racks is an essential part of the design of the control system, yet we often do not pay much attention to it. The quality of the final design can be bad or very bad if we have not done a good functional grouping and segmentation of signals, with the focus placed on the field instruments and equipment. It may seem obvious, but it is not, judging by the number of poor designs that can be found in this respect. This functional distribution of signals must always be a design priority, regardless of the size of the system and whether or not it is redundant. Logically, if the system is small the functional distribution will be simple, or there will be no such distribution at all if we have very few signals; in medium and large systems it is very important. On the other hand, distributing the signals is very different in high-availability systems (redundant signals, 2oo3 logic) than in systems without redundancy. The functional distribution of signals must be made in such a way that the failure of an I/O module does not cause a total or partial shutdown of the plant or equipment, and it will depend on the criterion used in each case. The most critical signals are those that produce a total shutdown, and one of the usual practices is to triplicate the instruments and implement 2-out-of-3 (2oo3) voting logic in the controller. In process plants, it is common practice to use 4-20 mA transmitters for these cases. The design must ensure that each channel of the triplicated signal is located in a different module and, if possible, in a different I/O rack.
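The 2oo3 voting just described can be sketched in a few lines of Python (a minimal illustration, not real controller code; the function names are our own). For triplicated 4-20 mA transmitters, a common approach is median selection, since the middle value rejects a single failed channel, whether it fails high or low:

```python
def vote_2oo3(a: bool, b: bool, c: bool) -> bool:
    """Trip when at least 2 of the 3 redundant channels demand a trip."""
    return (a and b) or (a and c) or (b and c)

def median_2oo3(x1: float, x2: float, x3: float) -> float:
    """Median selection for triplicated analog transmitters: the middle
    value rejects one failed (stuck-high or stuck-low) measurement."""
    return sorted((x1, x2, x3))[1]

# One failed channel does not cause a spurious trip, nor does it block a real one:
vote_2oo3(True, True, False)      # True  (2 of 3 demand a trip)
median_2oo3(12.0, 12.1, 3.8)      # 12.0  (the 3.8 mA outlier is rejected)
```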
If all three modules are in the same rack, the rack must have a redundant power supply. At the next level we have redundant signals with 2-out-of-2 (2oo2) logic. These are also critical signals, related to main equipment whose malfunction causes a total or partial shutdown, e.g. limit switches and open/close commands to the main valves. Each channel of the redundant signal must be wired to a different module and, if possible, to a different rack; if both modules are in the same rack, it must have a redundant power supply. At the last level we have simple signals, which we can divide into several categories. On the one hand, there are signals that do not directly cause an equipment shutdown, such as start and stop permissives, lamp outputs, equipment signals that are only used during start-up, etc. An example is the igniter signals from the burners of a boiler. Another category of non-redundant signals are those that affect non-critical equipment or duplicated equipment (primary and secondary). The classic example is a pumping group consisting of two pumps, each of 100% capacity: in normal operation only one of the pumps runs and, in case of failure, the other one starts automatically. Another example is a boiler with several burners, where losing one of the burners is not critical. The functional distribution of I/Os across different modules is also important in all these cases. For example, with the two pumps we must avoid using the same module for both, and with the burners it is advisable to make a distribution such that a module failure affects the smallest possible number of burners. In short, we must avoid so-called common-mode failures. Another aspect to consider is the number of signals per module, i.e. the density, because using high-density modules (32 or 64 I/O) does not necessarily imply that the cost of the PLC will be lower. This is easily understood if we take into account the above points on the functional distribution of I/O (based on the field equipment). What is clear is that each case is different and should be analyzed carefully. Link to next phase.
Selection of I/O modules (phase 2B)
Phase 2: Selection of I/O modules (part B)
In this article (part B) we discuss other design factors in the selection of input/output modules, such as intrinsic safety, electrical isolation, redundant I/Os, etc. (index of design phases).
2B-Hardware selection of input/output modules
Link to part A.
G) Electrical isolation
Normally there are different levels of isolation in input/output modules (channel-to-channel, between groups of channels, between channel and ground, etc.). Standard modules are not isolated channel by channel, but in most cases the isolation between groups of signals is sufficient. It is usually good practice not to mix the common terminal of the modules with ground. Modules with galvanic separation between channels have a high cost, so the need must be analyzed carefully; if we only need to isolate a few signals, we can install external isolators. If the technical specification is very demanding in terms of electrical isolation, we must analyze in more detail than usual what the best technical solution is and at what cost (modules with galvanic isolation, external isolators and/or interposing relays, etc.).
H) Diagnostics
Modules with diagnostic functions for internal faults offer us an interesting plus, but we have to pay for it. When the downtime of the machine or process is costly, it will be worth paying this extra so that downtime can be avoided or greatly reduced. Nowadays many manufacturers already incorporate diagnostics even in their standard modules.
I) SIL certification
If our application requires a safety PLC, we must use fail-safe modules, whose cost is much higher. These modules incorporate many diagnostic functions and have a redundant internal structure with 1oo2D or 2oo3D logic. The same goes for fail-safe interposing relays: they are special, certified relays with a very high cost. If we also have redundant I/Os, the cost of the solution will increase significantly.
J) Extreme conditions
Many manufacturers have special modules designed to work in extreme conditions, for example at high temperature. Their cost is much higher and they should only be used when really required; in many cases the control panel is located in an air-conditioned room. In any case, the operating temperature of the control system as a whole is usually defined in the technical specification. It is a very important piece of data that we must never forget throughout the design process.
K) Hazardous area – Intrinsic safety solutions
Hazardous areas with explosion risk are classified into different categories; the typical case is found in some areas within refineries or gas plants. This means the design of the safety PLC or the DCS must fulfill a series of demanding technical requirements. A very common solution is to use intrinsic safety I/Os, which implies either intermediate barriers or the intrinsic safety modules available from some manufacturers. These applications require a much higher level of training of the design engineer. There are other options for designing an electrical panel inside a classified area, such as pressurized or explosion-proof cabinets. In any case, for PLC-based control cabinets it is frequent to use the intrinsic safety solution, which has some advantages, such as the possibility of placing the control cabinet in a safe area without danger of explosion. In large installations, such as refineries, another solution is to pressurize the control and panel rooms; this requires the use of field cables and instruments suitable for hazardous areas.
L) Switching frequency
Motion control applications require special modules that can work at high speed. For digital signals, the DC ranges allow higher switching speeds and are more suitable for fast applications. Depending on the case we can use standard or special modules.
If we use high-frequency inputs (e.g. capable of reading kHz signals), special care must be taken with electrical noise from nearby power cables.
M) Redundant inputs/outputs
The concept of redundancy is broad and should be analyzed carefully. There are several types of redundant architectures (TMR, QMR, etc.), each with its advantages and disadvantages. On the other hand, we must distinguish between redundancy to increase safety and redundancy to increase availability. The customer specification must define the number of input/output signals of each type, which of them are redundant, and the type of redundancy or voting logic (1oo1 for single signals without redundancy, 1oo2/2oo2 for redundant signals, and 2oo3 for triplicated signals). Although good design practice advises using redundant controllers and redundant power supplies whenever redundant I/Os are used, this is not always possible for cost reasons. That is, we can find designs that use redundant I/Os and 2oo3 logic but a single CPU (very often due to the CPU redundancy limitations of the PLC model used). When designing a redundant system we must take into account the Mean Time Between Failures (MTBF) values provided by the manufacturer. Typically, this time decreases as the module's complexity increases: we will find very high values in input and output modules and much lower ones in CPUs. Redundancy of I/Os (hardware) and/or redundancy of field instruments: another aspect to be analyzed is the redundancy of the instruments and field elements, since the redundancy of I/Os does not always coincide with the redundancy in the field. It is common to have non-redundant instruments or solenoid valves while specifying redundant I/Os. Here we can enter into the discussion of the probability of failure of the PLC hardware versus the probability of failure of the sensor or field device.
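To give the safety-versus-availability discussion a rough numerical feel, the widely used simplified equations for average probability of failure on demand (PFDavg) can be sketched as follows. This is only an illustration under strong assumptions (no diagnostics, no common-cause factor, no repair time), not a real SIL calculation; the figures of 700 FIT and a yearly proof test are example values:

```python
# Simplified PFDavg formulas, ignoring diagnostics and common-cause failures.
# lambda_du: dangerous undetected failure rate in failures/hour
# ti: proof-test interval in hours

def pfd_1oo1(lambda_du: float, ti: float) -> float:
    """Single channel: PFDavg = lambda_du * TI / 2."""
    return lambda_du * ti / 2

def pfd_1oo2(lambda_du: float, ti: float) -> float:
    """Two redundant channels, 1oo2: PFDavg = (lambda_du * TI)**2 / 3."""
    return (lambda_du * ti) ** 2 / 3

# Example: 700 FIT dangerous undetected (700e-9 /h), yearly proof test (8760 h)
lam, ti = 700e-9, 8760.0
single = pfd_1oo1(lam, ti)   # ~3.1e-3: redundancy...
dual = pfd_1oo2(lam, ti)     # ~1.3e-5: ...improves PFDavg by orders of magnitude
```

The point of the sketch is only that, for the same component, adding a redundant channel with 1oo2 voting reduces PFDavg by orders of magnitude, which is why the specification must state which I/Os are redundant and with what logic.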
N) Remote signals
Depending on the distribution of instruments and equipment in the field, it may be interesting to use remote I/O racks next to the field elements; in these cases the usual goal is to save wiring. In critical applications (nuclear, airports, etc.) the objective can be to achieve very high availability by placing the redundant I/Os in racks separated by several hundred meters. In large installations it is a good option to distribute the controllers and I/O by zones, which communicate with each other and with the servers through the control network.
O) Spare I/Os
The percentage of spare signals is not always defined in the specification. A good practice is to keep this percentage between 10 and 20%. Link to next phase.
Selection of I/O modules (phase 2A)
Phase 2: Selection of I/O modules (part A)
In this article we discuss the selection of input/output modules (part A) (index of design phases).
2A-Hardware selection of input/output modules
The selection of the type of modules should be based mainly on the technical requirements, on the one hand, and on the economic cost of the complete solution we are designing, on the other. We must also take into account the list of approved manufacturers (the so-called "vendor list"). Here are some of the points that should be considered in the design:
A) Digital input modules
The usual choice is 24 VDC modules, and there are not many cases that justify using another voltage for the digital inputs. As examples that justify a higher voltage, we can mention: i) cables with a very small cross-section and great length that cause an unacceptable voltage drop (not usual for cables with sections of 1.5 or 2.5 mm2); ii) high electromagnetic noise. Some advantages of using 24 VDC are as follows:
-It is a low voltage, not dangerous if touched.
-It does not produce interference, and we can also mix digital inputs and 4-20 mA analog inputs in the same cable.
-The cost of the module is typically lower, and high-density modules (many inputs per module) are available.
-It is best suited for high-frequency inputs and for explosion-hazardous areas.
-We can connect sensors with transistor outputs (PNP or NPN), such as proximity sensors, Namur sensors, etc.
B) Digital output modules
The most used output voltage is 24 VDC, but we can find technical requirements that advise another value, such as 120 VAC, 240 VAC or 125 VDC. We must also take into account the voltages specified for the field devices (solenoids, lamps, MCCs, etc.) and whether or not there are interposing relays.
The technical specification usually defines the different field voltages but does not address what the voltage of the PLC's digital outputs should be. The fundamental data for defining the type of module are the power requirements of the final element, i.e. the supply voltage and the current consumption in amperes. This information will also tell us whether or not to use interposing relays. In applications with high switching frequencies (e.g. in machine tools), other fundamental data are the maximum output frequency and the possible overvoltages that can be generated when deactivating outputs with inductive loads (contactors, etc.). In what cases is it justified to use high voltages at the digital outputs when there are no interposing relays?
-If the field elements are far from the control cabinet.
-If there may be electromagnetic noise that advises it.
-If there are loads with high power consumption.
-If we are in dirty environments, or ones with suspended dust that affects the electrical contacts.
Another aspect to consider is the type of internal switching component of the module (relay, transistor, triac, etc.). Digital relay outputs have many limitations that need to be properly analyzed, although they may be appropriate if we want to use different voltages in the same module. Some of these limitations are:
-Low switching frequency.
-Possible temperature problems in applications with high-density modules whose outputs remain active for long periods of time.
-Lower MTBF.
C) Interposing relays
The use of interposing relays is a common practice, in most cases due to the need to electrically isolate the control cabinet from field equipment, or for other purely technical reasons such as:
-The field voltage is different.
-High consumption of the final elements.
-Use of the same output to control several field elements.
-An output with one or more voltage-free contacts is needed.
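The basic sizing check described above (compare the final element's supply voltage and current consumption against the output module) can be sketched as a trivial decision helper. All the ratings here are hypothetical example values, not from any particular manufacturer; the 0.5 A per-channel limit is merely a figure chosen for illustration:

```python
def needs_interposing_relay(load_voltage: float, load_current: float,
                            module_voltage: float = 24.0,
                            module_max_current: float = 0.5) -> bool:
    """Rough check: a load can be driven directly only if its supply voltage
    matches the output module and its current stays within the channel rating
    (default values are illustrative, e.g. a 24 VDC / 0.5 A transistor output)."""
    return load_voltage != module_voltage or load_current > module_max_current

needs_interposing_relay(24.0, 0.3)    # False: a small 24 VDC solenoid, direct drive
needs_interposing_relay(230.0, 1.0)   # True: a 230 VAC contactor coil needs a relay
```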
It should also be considered that interposing relays introduce another component that can fail, and therefore affect the availability of the system. It is important to take this into account in the design of redundant digital outputs.
D) Range of the analog signals
The most used range, for both analog inputs and outputs, is 4-20 mA. Many of its advantages derive from having raised the zero point to 4 mA, i.e. having a live zero that allows faults in the current loop, such as cable breakage, to be diagnosed. It also allows the transmission of HART digital data through the same cables, and it can be used for intrinsic safety signals, as it allows the line to be checked. In addition, current signals are generally more immune to electrical noise than voltage signals (0-10 VDC, 1-5 VDC) and can operate over long distances (more than 1 km with a 24 VDC supply). It is not advisable to work with voltage ranges over long distances and with intermediate junction boxes, because the voltage drops make it impractical. If we have to wire the 4-20 mA signal to several devices in the control cabinet, we can do so with a 250 Ω precision resistor that converts the signal to 1-5 VDC, which can then be wired in parallel to several points. Other types of analog inputs are temperature inputs, RTDs or thermocouples, but they should not be used if the distance to the PLC is large, because of the error introduced by the resistance of the cables themselves. It is more advisable to use temperature transmitters, i.e. an RTD or thermocouple with a 4-20 mA converter.
E) Resolution of analog signals
Normally a resolution of 12 bits is sufficient. Adding 1 or 2 bits raises the price of the module significantly and is rarely justified. This is not usually a critical point, but in any case the specification must be met.
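The live-zero diagnosis, the 250 Ω conversion and the 12-bit resolution just discussed can all be illustrated with a few lines of arithmetic. The 3.6 mA fault threshold used below is a typical underrange value chosen for illustration; real thresholds depend on the transmitter and the applicable recommendation:

```python
def ma_to_engineering(ma: float, lo: float, hi: float) -> float:
    """Scale a live-zero 4-20 mA signal to engineering units [lo, hi].
    Below ~3.6 mA (an illustrative threshold) a loop fault such as a
    broken wire can be declared, which is the benefit of the live zero."""
    if ma < 3.6:
        raise ValueError("loop fault: current below live zero")
    return lo + (ma - 4.0) * (hi - lo) / 16.0

# Mid-scale current maps to mid-scale of the range:
ma_to_engineering(12.0, 0.0, 100.0)    # 50.0 (% of span)

# A 250 ohm precision resistor converts 4-20 mA to 1-5 VDC:
v_full_scale = 0.020 * 250             # 5.0 V at 20 mA

# 12-bit resolution over a 0-100% span:
step = 100.0 / (2**12 - 1)             # ~0.024% of span per count
```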
F) Density of the modules
For digital signals the most common densities are 16 and 32 channels, and for analog signals between 4 and 16. The selection of one type or another depends on several factors, and there is no general rule:
-How we are going to group and distribute the signals.
-What the redundancy architecture is, and how many single and redundant signals there are.
-The number of signals of each type, as well as whether there are fail-safe signals, intrinsic safety signals, etc.
It is not always cheaper to use high-density modules (for example, 32 or 64 I/O), since it depends on the design we are making (whether the I/O distribution follows the location of the instruments and equipment in the plant, whether there are redundant I/Os and/or 2-out-of-3 logic, whether there are technical constraints in the construction and wiring of the electrical panel, etc.). Link to part B.
Technical Requirements (phase 1)
Technical Specification – Design Phases of the Control System
This is the first article in a series in which we try to describe the different design phases of a control system. In PLC-based systems, the design engineer must analyze all the technical requirements and produce a design that meets the specifications at the lowest possible cost. In medium and large systems this can be a complex task for which many technicians are not sufficiently prepared. In this first article we deal with the technical requirements of the design.
Index of design phases:
1-Analysis of the technical requirements
2A-Hardware selection of input/output modules (A)
2B-Hardware selection of input/output modules (B)
3-Grouping and segmentation of signals and racks – Functional distribution
4-CPU selection and architecture design
5-Selection of HMI/SCADA
6-Power supplies and circuit breakers
7-Isolators, field terminals and prefabricated cables
8-Design of the control cabinet
1-Analysis of the technical requirements
The first thing to do is analyze the customer's technical specification in detail. Depending on the sector and the size of the application, we may find a simple document, several extensive and demanding technical documents, or any intermediate case. In any case, the design engineer should not be limited to the strictly technical content of the specification, but should also know other project information that can condition the design, such as everything related to delivery times, penalties for non-compliance, etc. The most important points to keep in mind throughout the design process are:
-Approved manufacturers (PLC/DCS, SCADA and other components).
-Redundancy requirements (CPU, I/O, networks, servers, etc.).
-Safety requirements (SIL level, hazardous area, firewalls, etc.).
-Architecture requirements (whether there must be several controllers, the number of I/O of each type and whether they are local and/or decentralized or remote, communication speeds on the different networks or with third parties, etc.).
-Cabinet construction requirements (non-standard mechanical characteristics, front and/or rear access, minimum spare space, entry of field cables with cable glands, shielded cables, use of prefabricated cables, compliance with seismic regulations, use of marshalling cabinets, etc.).
-HMI/SCADA requirements (industrial operating stations, graphics, SOE resolution, historian and alarm management, backup management, access management, number of graphics and monitors, number of communication I/Os, etc.).
-Other requirements (interposing relays on digital I/O, galvanically isolated I/O, network type and communication protocol with third parties, maximum number of process variables per server or CPU, digital signals with line monitoring, FAT and SAT, etc.).
The list above is not exhaustive, but it gives an idea of the number of concepts we must analyze. Some of them can have a very high impact on the cost, such as the redundancy and safety requirements. In the following sections we will delve into many of these technical requirements. Link to the next phase.
PLC vs DCS
PLC vs DCS – Which solution to choose?
When we want to integrate several software applications, use advanced control algorithms and have future expansion plans, the PLC/SCADA solution may not be adequate. The differences between the two solutions have decreased over time, but they are still important, especially in systems with more than 500 I/O and many analog outputs. What is better to automate my plant: a DCS, or a system based on PLCs/PACs plus SCADA? Years ago it was much easier to answer this question than it is now. If we were talking about large continuous processes with analog signals and more than one controller, the answer was DCS; if there were mainly discrete signals, the solution was the PLC. Today the boundary for decision-making is much more diffuse and no longer depends so much on hardware (such as processor power or the ability to handle many analog signals) but rather on software and on other non-technical factors related to the whole life cycle of the plant. PLCs have evolved tremendously and now have processing capabilities similar to DCS controllers; in fact, there are manufacturers of DCSs and PLCs that use the same controllers, such as Siemens with the PCS7 and the S7-400. SCADA packages have also evolved a lot and allow us to develop custom objects and libraries that give us features similar to those of a DCS. The DCS is associated with large facilities where it is very beneficial to distribute the controllers and I/O by plant zones, with several operating stations located in the control room, and where the integration of other applications into a single database is required. All this, together with the cost and the level of specialization that some DCSs have in specific applications (refining, power generation, pharmaceuticals, etc.), is what makes the difference. The complete plant life cycle is another important aspect to consider when deciding between a DCS and PLC/SCADA. If we plan to grow, gradually integrate other applications, optimize the control algorithms and make other changes, then our solution probably points to the DCS, because managing all this with PLC/SCADA and several separate databases can be complicated and expensive.
PLC vs DCS – When is the DCS better?
In what cases is it clear that we should go for the DCS solution? If we meet the following requirements, the solution will probably be the DCS:
-We have many I/O (> 2000) and a high number of analog outputs (> 200).
-We have multiple distributed controllers that communicate with each other.
-We use intelligent transmitters and valves that are parameterized and diagnosed remotely.
-We need to integrate the control system with an MES or with other automation and/or information systems (safety system, electrical control system, ERP, asset management, data analytics, etc.).
-We need a system with a high level of redundancy (CPUs, I/O, bus).
-It is a batch application with many complex recipes.
-We need a multi-operator HMI and advanced alarm management.
-The plant will probably be expanded in less than five years.
-We have other similar plants with a DCS and advanced control algorithms that we have optimized over time.
PLC vs DCS – When is PLC/SCADA better?
In what cases is it clear that we should go for the PLC solution? If we meet the following requirements, the solution will probably be PLC + SCADA:
-We have fewer than 300 I/O and few analog outputs.
-We only need one or two PLCs.
-We do not have special or complex HMI/SCADA requirements; the SCADA alarm management is sufficient.
-It is a plant or package that we do not plan to grow in the next few years.
-We already use this type of PLC and SCADA and do the maintenance ourselves.
-We do not need the system to be redundant.
The decision will not be so clear if we have more than 300 or 400 I/O and do not fulfill all the previous requirements. In that case we will have to analyze our particular situation and evaluate our standards and requirements. In addition, there may be other factors that tip the scale one way or the other, such as the following:
-Who is going to program and configure the system (a local integrator, the manufacturer, ourselves)?
-Who will perform the maintenance of the system?
-Are we familiar with the system, and do we have a significant stock of spare parts?
-Is it a new system? Does it already have proven references in similar applications?
Decision matrix for industrial proposals
Decision matrix for industrial proposals
In the first step of the bidding process we must decide whether or not to bid, which is known as the "Bid/No Bid" decision. In large industrial companies it is usual to use a decision matrix to aid the analysis; it usually consists of questionnaires whose answers are given a score. The resulting decision matrix may look like the image: it crosses two variables, on the vertical axis "the attractiveness of the client or project" and on the horizontal axis "the probability of winning the order". At the beginning of the analysis, it is normal to check whether any of the exclusion criteria is met, such as:
-The project or solution requested by the client is not technically feasible.
-There are legal restrictions, or the country risk is high.
-We do not have sufficient resources available, or the required delivery time is impossible to meet.
-There is some unforeseen commercial or contractual risk (customer insolvency, etc.).
-It is not our "core business".
Questionnaire to evaluate the attractiveness of the client or project:
1-Commercial complexity: legal issues, language, licenses, etc.; guarantee of payment; risk due to our partner and/or subcontractors.
2-Market complexity: political and economic situation of the country; previous experience in the country and/or the market; cost of the local content required by the client.
3-Technical risk complexity: our knowledge of and experience with the specifications; the feasibility of meeting the required deadlines; technical demands, performance parameters, applicable standards, environmental requirements, etc.; the cost of development and engineering if the project is new; third-party risks (partners, subcontractors, etc.).
4-What is the probable EBIT?
5-What is the future potential (after-sales business, similar orders, etc.)?
6-What is the current relationship with the customer?
Questionnaire to evaluate the probability of winning the order:
1-Probability that the project will be carried out. Is there a budget, a location, etc.?
2-Our relationship with the client. What is our success rate with this client? How does the client see us in this project?
3-Competitors. How many are there? Do we have any advantages? Is the specification oriented towards our products or solutions? Do we have more references than our competitors?
Each answer will have a higher or lower score. The total of each questionnaire places us in the decision matrix above and helps us make the decision to bid or decline. It is in offers for turnkey projects or solutions, where there is equipment supply, engineering and commissioning, that such an evaluation is most important.
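The scoring mechanism behind such a matrix can be sketched in a few lines. Everything here is illustrative: the question names, the 1-5 scale, the example scores and the 50% thresholds are our own assumptions, not a prescribed method:

```python
# Hypothetical Bid/No-Bid scoring sketch: each question is scored 1-5 and
# each questionnaire total becomes one axis of the decision matrix.
attractiveness = {
    "commercial_complexity": 4,
    "market_complexity": 3,
    "technical_risk": 4,
    "expected_ebit": 5,
    "future_potential": 4,
    "customer_relationship": 3,
}
win_probability = {
    "project_will_proceed": 4,
    "customer_relationship": 5,
    "competitive_position": 3,
}

def axis_score(answers: dict) -> float:
    """Normalize a questionnaire total to 0-100 for placement on one axis."""
    return 100.0 * sum(answers.values()) / (5 * len(answers))

x = axis_score(win_probability)     # horizontal axis: probability of winning
y = axis_score(attractiveness)      # vertical axis: attractiveness
decision = "bid" if x >= 50 and y >= 50 else "review"
```

In practice the thresholds would come from the company's own procedure, and the exclusion criteria listed above would be checked before any scoring takes place.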
Automating proposals in the industrial sector
We want to offer some reflections on how to automate proposals during the sales process and the benefits we gain, especially in the case of industrial companies. Many companies still have a very manual sales process with a lot of room for improvement. What we want is to automate the repetitive tasks of the proposal process, with the following objectives:
-Reduce the resources needed for the preparation of bids, in terms of time and personnel. If we achieve this we will have more resources for other customer-related tasks, or simply be able to make more offers with the same resources.
-Reduce errors, especially in relation to cost estimation.
-Improve the presentation of the offer. We want to show our interest in winning the order.
-Prepare a cash-flow calculation of the project that is as close as possible to reality.
-Document the offer well, for future revisions or for similar cases.
In summary, what we are looking for is to professionalize the proposal process as a whole. This type of automation is also part of the digital transformation process in which we are all immersed. In the industrial sector the use of MS-Office tools is very common, especially Excel and Word: we have our own spreadsheets, better or worse made, that we reuse from one offer to another. Many of the large multinational companies that sell components have corporate programs for quoting material references. But when we talk about offers that also include services, that is, hours of design, engineering, programming or commissioning, it becomes more important to professionalize the process. Usually, as the weight of services rises, the risks also increase, and that is when automating some phases of the proposal process is most worthwhile.
Phases of the bid generation process
Which phases of the proposal process can be automated? Which ones are worthwhile?
In general, we can say that almost all of them can be automated, although the benefits of doing so are not the same in every phase. The stages of the process can be simplified as follows:
a) Reading and analysis of the bid documents, such as technical specifications, commercial conditions, guarantees and penalties, the required delivery period, etc.
b) "Bid/No bid" decision. It is best to have an internal procedure that allows us to make a quick evaluation of the client/project attractiveness, the probability of success and the risks.
c) Design and pre-engineering phase. Depending on the case (budgetary or firm offer) we will have to perform some design tasks to calculate the costs with the smallest possible error. Some hardware manufacturers have "configurators" for their clients that greatly facilitate some of these tasks (the control architecture configurators of Siemens or Rockwell Automation are good examples).
d) Estimation of material costs and man-hours. The main costs should be updated, even if we have similar prices from past projects. The estimation of hours, always a difficult task, should consider the actual hours used, by profile, in a similar recent project.
e) Preparation of technical documents, such as lists of materials and services, lists of recommended spare parts, data sheets of instruments and equipment, dimensional drawings, etc.
f) Calculation of sale prices and payment terms based on the cash flow of the project. The ideal payment milestones are those that give us a positive cash flow throughout the life of the project. Often this is not possible because the client does not accept them, but we must try to get as close as possible.
g) Creation of the project schedule and the delivery time. In the bid phase, it is normal to prepare a schedule of the main tasks without going into detail (engineering and/or software development, purchasing, manufacturing, testing and commissioning).
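The cash-flow check behind the payment-milestone discussion can be illustrated with a toy calculation. All the monthly figures below are invented example values; the point is only that the cumulative balance reveals how much financing the project would need under a given milestone scheme:

```python
# Hypothetical 5-month project cash flow (thousands of euros):
# costs are outgoings, payments are the client's milestone payments
# (e.g. advance, delivery, acceptance).
from itertools import accumulate

costs    = [-20, -30, -25, -15, -10]
payments = [ 30,   0,  40,   0,  40]

monthly = [p + c for p, c in zip(payments, costs)]
cumulative = list(accumulate(monthly))   # running cash balance per month
worst = min(cumulative)                  # most negative point = financing needed

# With these milestones the balance dips to -20 k at two points, so either
# the milestones should be moved earlier or that financing cost should be
# reflected in the sale price.
```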
h) Preparation of the list of exceptions and deviations from the customer's specifications. It is an important document, especially if it is a firm offer.
i) Writing of the main document of the offer (sometimes separating the technical offer from the commercial one) and the letter to the client, including other standard documents as well (list of references, technical catalogs of the products, project organization chart, etc.).
j) Internal approval phase (with our managers). Sometimes we will have to revise the offer by going back to one of the above points.
k) Offer revisions after the first meetings with the client.
There are phases that cannot be automated, such as the reading and analysis of the specifications (phase a) and the list of deviations (phase h). There are others that we can automate to a certain degree, such as the Bid/No bid decision (creating simple automated forms that help us categorize and evaluate the project), and others that lend themselves well to automation (phases c, d, e, f, g and i). For the automation of phases d, e, f, g and i we have developed the tool "Bid Generator 1.0", which is available on this website. This first version is based only on Excel, with numerous macros in Visual Basic. The most efficient way to automate an offer in the industrial sector is to base its format or structure on a short main document, as a summary, plus many annexes, since it is relatively easy to generate the annexes automatically (creation, formatting and PDF generation). It is also simple to generate the main document and the letter to the customer from pre-configured templates. This first version of "Bid Generator" does all these things using only Excel. In future versions we will add other macros to work with Word and PowerPoint as well.
In many companies MS-Word is used to create the main offer document, which in many cases ends up containing too much information and is not easy or comfortable for the client to handle. A good option is to create a simple Word template with the summary of the offer, the commercial terms and the list of attachments. The customer will appreciate it. In any case, automating proposals requires investing some time in the first offers, which are the ones you will use as models for similar ones later. The time savings in the proposal process will depend on several factors, such as complexity, project size, knowledge of the customer and its specifications, etc. As an example, in an offer for an ESD (Emergency Shut-Down) System for a refinery we can save around 30% of the hours of the proposal process. As we said at the beginning, automating proposals does not only pursue the goal of saving time, but also of reducing errors and improving the presentation and quality of the offer. The more accurately we calculate the costs, the fewer contingencies we need in the offer, and therefore we can reduce the sale price, increasing the chances of success.
Industry 4.0 for beginners
Industry 4.0 for beginners The terms Industry 4.0, Digitalization, Internet of Things and Smart Factory are used to refer to the same thing: the digital transformation of industry, both machines and production processes, and in general of the whole productive economy. This article will try to explain it in simple terms so that non-experts can understand. We have lived through many important technological changes in the last 10 years, such as the development of the internet, smartphones and telecommunications, which have greatly changed the way we relate to each other and do things. Nowadays it feels natural. Now we have to bring all of this into factories and production plants in general, and know how to do it correctly to achieve more efficient and flexible production processes, in line with market demands. Some call it the fourth industrial revolution and others prefer to describe it as an evolution; we have no doubt that we are at the beginning of the fourth industrial revolution. When we look back in 50 years, we will say that this revolution began in the early twenty-first century. The third industrial revolution marked the passage from the electrical factory to the entrance of electronics and computers to automate plants; back then there was no internet and there were no mobile devices. Has this fourth industrial revolution already started? The answer is yes. We are at the beginning, but some industries have already begun to incorporate these technologies. A number of technological advances have created the ideal basis for this revolution: 1-New materials and manufacturing technologies have drastically decreased the size of electronic devices, and the computing power of processors has risen significantly. 2-In a few years the speed and quality of communications have multiplied by 1000. Mobile technology has developed at breakneck speed.
3-The internet appeared in the 90s and has changed everything, from our day-to-day life to business models. Can anyone now imagine a world without the internet? On this basis, software has been greatly developed, functionalities have moved from hardware to software, and there are many standard programs available to small and medium companies. Simulation and integrated engineering software are just a few examples. One of the amazing advances of this digital transformation is what some call the "digital twin". For example, you can create virtual prototypes to avoid costly investments; the time to market is greatly reduced, and so are costs and selling prices. This requires powerful 3D programs where all variables are simulated as if they were real; the test environment is safe, dangerous situations can be simulated and tested, and training is also faster and more effective. The automotive industry is already benefiting from it, raising productivity 2 or 3 times and significantly reducing time to market. Integrated engineering programs are another interesting advance. Traditionally, engineers have used different systems for the design of the plant, causing inconsistencies between the data of different disciplines (process, civil, electrical, I&C, etc.) and incomplete documentation. These new tools work with a single database and allow several teams from different areas to work simultaneously; the result is a significant reduction in costs and engineering hours. There are examples already on the market, such as COMOS or EPLAN. What is "Big Data", or rather "Industrial Big Data"? Due to the great advances in communications, in data processing capacity and in other technologies such as electronics manufacturing, companies now face huge data growth. Traditional processing software is no longer adequate.
Industrial sensors are now intelligent: they capture and process data, and even have some autonomy to make decisions, but that data needs to be transformed into useful information to improve the production and profitability of the plant. "Big Data" has to be converted into "Smart Data". All this requires software applications of another dimension, able not only to analyze large volumes of data but also to process complex algorithms and provide operational intelligence specific to each sector. Many applications able to analyze all this data and present it in a way that is really useful for industry are under constant development. One example is the XHQ software, widely adopted in the oil & gas industry. Industries already have a lot of information, but 90% of it goes unused. A very high percentage of industrial processes do not work optimally. Is talent missing? Universities certainly need to provide more guidance toward this preparation, as is happening, for example, in the faculties of Mathematics due to the high demand for professionals capable of creating the algorithms that convert Big Data into really useful information, into Smart Data. The data is in the cloud, to be shared in order to progressively optimize production processes. Factories and machines are also generators of knowledge that should be analyzed and used. Cybersecurity must protect assets and users against all types of attacks and threats in the cyber environment, and industrial environments must be hardened against these new threats. This is one of the barriers to digitalization: many companies are reluctant to store data in the cloud, but the alternative of not doing so will probably be worse. Cloud computing is a service model that provides resources to process information, offering users standardized services that can be used in a flexible and adaptive way. It enables companies to focus on their core business, reduce costs and get access to more powerful resources.
The fourth industrial revolution has just begun. Companies that do not transform will struggle to survive. This transformation has to be progressive, always looking for answers to the same question: what can we, and do we want to, improve? In summary, manufacturers have to adapt to the new market by reducing costs and delivery times; mass production is becoming more individual and personalized; innovation needs to reach the market much faster; safety and the environment are a priority; and all of this keeping in mind that online selling has changed business models. There is much more competition, and it is now global.
PLC module sizing
PLC module sizing We want to reflect on how to size the PLC modules in terms of card density, i.e., on what the optimal number of channels is; in many cases this matters more than it looks. At first glance we all tend to think that the more channels a module has the better, especially in medium-to-large architectures. In previous posts we have shown some examples: –Optimize PLC architecture: here we compare several architectures of 197 I/Os, using the same PLC model, with 32- and 16-channel cards. –BMS Safety PLC: a real application comparing architectures from different PLC manufacturers and different I/O cards. Surely the range of cases is huge. In any case, the cost of the PLC is very important, and therefore we must always look for the architecture that best suits the project in question (for example, if the customer requests 144 digital inputs (120 + 20% spares), we would have 16 extra spares (that we don't need) when using 32-channel modules, and none with 16-channel modules). The analysis should be complete, i.e., the complete I/O architectures are compared, the costs are calculated, and the distribution of the racks in the control cabinets and the physical space they occupy are considered as well. The first three steps can be done very quickly using "IO_Builder", and for the analysis of physical space we may use "Cabinet Layout". When the architecture is simple and there are no restrictions on distributing the signals this analysis is not important, but it can be essential in more complex architectures. In what cases should we analyze the PLC module sizing? 1-When you have to distribute the I/Os based on the location of the instruments and equipment in the plant (if the PLC/DCS controls different units or sub-units with more or less critical equipment, if there is duplication of equipment or packages, for example primary and back-up pumps, etc.).
2-When there are redundant I/Os (1oo2/2oo2) and/or 2oo3 logic. 3-When there are technical reasons in the wiring of the control cabinet (if there are I/Os at different voltages that should be wired separately, or we have special modules, for example intrinsically safe ones, or standard and "fail safe" modules in the same cabinet, etc.). 4-When we compare the cost of different models of PLCs. 5-When we use different families of I/Os in the same PLC (e.g., in the case of Rockwell PLCs: ControlLogix 1756 and Flex I/O modules). Generally we do not pay much attention to PLC module sizing, but many architectures can be improved in this regard, both technically and economically. In future posts we will show some more examples.
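The 144-digital-input arithmetic above is easy to generalize. The following is a minimal Python sketch (the helper name is ours, not part of IO_Builder) that computes how many modules a given channel density requires and how many unneeded spare channels it leaves:

```python
import math

def module_plan(signals_needed, channels_per_module):
    """Modules required, channels bought and unused spare channels for
    one I/O type (illustrative helper, not part of IO_Builder)."""
    modules = math.ceil(signals_needed / channels_per_module)
    total = modules * channels_per_module
    return modules, total, total - signals_needed

# The 144-digital-input example from the post (120 signals + 20% spares):
print(module_plan(144, 32))  # (5, 160, 16): 16 channels we don't need
print(module_plan(144, 16))  # (9, 144, 0): no excess
```

Multiplying the module counts by list prices turns this directly into the cost comparison the post recommends making at bid time.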
BMS Safety PLC
BMS Safety PLC Let's see an example of I/O distribution in the case of a safety PLC for a Boiler Protection and Burner Management System for a Heat Recovery Steam Generator with post-combustion. The boiler has 8 burner ramps as shown in the image. It is advisable to use a safety PLC and to verify the design and SIL level of the Safety Instrumented Functions with a tool such as SILcet or others. The distribution will be such that the failure of one card does not cause a boiler trip. We group the signals as follows: a) General I/Os with 2oo3 logic: critical signals that trip the boiler, such as high pressure in the main gas line, high level in the steam drum, high pressure at the steam output, etc. Most of them are analog. b) General I/Os with 2oo2 logic: redundant signals related to major equipment whose malfunction causes a total or partial trip, for example the main gas shut-off valves. c) General single I/Os (group "p1" in the picture): signals that do not directly cause a trip, for example start-up permissives, signals of general igniter valves, outputs to lamps, etc. d) Single I/Os for burner ramps 1 to 4 (group "p2"): specific signals of the first 4 burner ramps, such as outputs to close/open the gas shut-off valves of each ramp, flame detectors, etc. The failure of a signal (card fault or failure in the field instrument) will cause the trip of some or all ramps. e) Single I/Os for burner ramps 5 to 8 (group "p3"): the same kind of signals for the last 4 burner ramps, with the same consequences on failure. Let's now see several examples of signal distribution performed with IO_Builder. We have considered 2 cases with cards available from manufacturers of safety PLCs. Cases A and C: with PLC cards of 24 DI, 10 DO and 6 AI (all fail safe).
Cases B and D: with PLC cards of 32 DI, 32 DO and 32 AI (all fail safe). In this case we use high-density modules that are not so common in safety PLCs. The "I/Os excess" figure refers to signals we don't need, as the quantities above already include spares. In model 1 we obtain a similar hardware price in both cases, so the choice is clearly technical. From the standpoint of availability we prefer case A. In the case of model 2 the price is clearly lower in D. This architecture uses a single I/O rack, and therefore must have a redundant power supply. For availability purposes we should analyze the MTBF (Mean Time Between Failures) of the common parts (at least the rack and the bus), and other factors if we go deeper. The choice between one manufacturer and another will depend on the total cost and other factors (depending on whether we are the end user or the system integrator). In any case, this type of analysis is worth doing depending on the complexity of the architecture. Finally, we show a couple of images with the architectures of cases A and C generated by IO_Builder.
Optimize PLC architecture
Optimize the PLC architecture This post explores the impact that the I/O distribution and the density of the modules can have on the cost of the hardware, that is, how we can optimize the PLC architecture according to the requirements of each project. We have used the tool "IO_Builder", which allows us to make different distributions of signals and calculate their cost very quickly. In the examples we have used real hardware prices from a well-known PLC manufacturer. We will see that the cost of the PLC is not always reduced by using modules with a higher number of channels; in fact this is only true when there are only single I/Os (1oo1 logic) and no restrictions on distributing the signals. Let's see this with an example. The picture shows the first example, for a PLC with 197 I/Os including the spares required by the customer. We have compared 3 different architectures, the first two with 32-channel digital modules and the third with 16-channel ones. For the analog modules we have considered 8 channels in all three cases. The second difference is the criterion used to distribute the signals. In the first two cases the channels of the triple (2oo3 logic) and double (redundant I/Os with 1oo2/2oo2 logic) signals must be in separate racks, while in the third case it is sufficient to place them in different modules. To increase availability, only in the third case, we have considered redundant power supply and communication modules. What we want is to ensure that the failure of a module only affects one channel of the triple and double signals. We see that the highest price occurs when we use 32-channel digital modules, because the signal excess is very high (signals we don't need). The prices shown only include the periphery (I/Os, racks, power supplies and communication modules). The price reduction in the third case is significant, and in many cases this architecture is valid (the channels must be located in different modules but they can be in the same rack).
We have made a second example without redundant or 2oo3 signals (only single I/Os). In the first two cases we have distributed the I/Os by plant areas and in different racks, and in the last two there are no restrictions. The result is shown in the image. The main conclusion that can be drawn from the analysis is as follows: the way the I/Os are distributed in the racks and modules of a PLC/DCS clearly impacts the cost of the architecture and, in many cases, significantly. If you are a system integrator we recommend doing this analysis from the bid phase, as these price differences can decide the award in your favour or against you. On the other hand, you can add value to your proposal by offering the customer other options with an I/O distribution that provides more "availability" and easier maintenance of the plant. This type of analysis used to be complicated and time-consuming, but it is now extremely simple with a tool like "IO_Builder", which is designed specifically for this. In the above examples we have always used the same model of PLC. IO_Builder is even more useful when comparing different models of PLCs, from the same manufacturer or from different ones. We will see this in a future post.
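The reason high-density cards lose when separation constraints apply can be shown with a few lines of Python. This is a deliberately simplified sketch with invented numbers (it is not how IO_Builder computes costs): each 2oo3 triple must have its three channels on three different modules, so at least three modules are needed per I/O type no matter how dense they are, and dense cards then sit mostly empty:

```python
import math

def modules_for_triples(n_triples, channels_per_module):
    """Minimum modules and unused channels when every 2oo3 triple must
    place its three channels on three different modules (one module
    group per 'leg' of the vote). Purely illustrative arithmetic."""
    per_leg = math.ceil(n_triples / channels_per_module)
    modules = 3 * per_leg
    excess = modules * channels_per_module - 3 * n_triples
    return modules, excess

# 10 triple (2oo3) signals = 30 channels to place:
print(modules_for_triples(10, 32))  # (3, 66): dense cards waste 66 channels
print(modules_for_triples(10, 16))  # (3, 18): lower density, far less waste
```

With only single 1oo1 signals and no placement restrictions, the excess shrinks to at most one partially filled module per I/O type, which is why density then pays off.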
Select the PLC modules
Select the PLC modules The first step in selecting the PLC modules is to analyze the technical requirements of the application. In most cases this analysis will guide us in selecting the PLC and the I/Os. The next step will be to make different considerations that affect the total cost of the automation solution. In this article we will focus only on the input/output modules and leave the rest of the hardware for another time. 1-Signal range: voltage, current, temperature sensor. If we use different voltages, it must be taken into account in the wiring of the control cabinet. For example, if the digital output voltage is 120 VAC, it is advisable not to mix those wires with 4-20 mA signal wires. On the other hand, low voltage ranges are safer for personnel. 2-Resolution of analog signals: normally a resolution of 12 bits is sufficient (in this article the concept and the error introduced by the A/D converter are explained). Keep in mind that raising the resolution by 1 or 2 bits increases the price of the module significantly. 3-Electrical isolation: there are different levels of isolation (channel to channel, among groups of channels, channel to earth, etc.). Standard modules do not have channel-to-channel isolation, but isolation among groups is sufficient in most applications. Not mixing up the common and the ground is usually a good practice. Modules with channel-to-channel galvanic isolation are expensive, so it is necessary to analyze this aspect carefully. 4-Diagnostics: detection of internal faults in the module is an interesting plus, but you have to pay for it. If the downtime of the machine or process is costly, then it is worth paying this extra to avoid or greatly reduce downtime. 5-SIL certification: if your application requires a safety PLC, we must use the corresponding modules, whose cost is much higher.
These modules incorporate many diagnostic functions and have a redundant internal structure with 1oo2D or 2oo3D logic (in this article we explain what a safety PLC is). 6-Extreme conditions: manufacturers often have special modules to work in extreme conditions, such as high temperature. Their cost is much higher. 7-Classified areas: in hazardous zones, such as certain areas of refineries, areas are classified into different categories. This implies designing the safety PLC or DCS to meet stringent technical requirements. Typically, the inputs/outputs must be intrinsically safe, which implies either using intermediate barriers or using intrinsically safe modules, available only for some PLC models. 8-Operating speed: positioning applications require modules that can work with fast and accurate movements. For digital signals, DC ranges allow higher continuous switching speeds and are more suitable for fast applications. Depending on the case, we can use standard or special modules. 9-Redundant architecture: the concept of redundancy is wide and must be analyzed carefully. There are two concepts here: "Safety" and "Availability". For the first we normally use the 1oo1 (non-redundant signal) and 1oo2 (redundant signal) logics; for the second we use 2oo2 and 2oo3. If we add a "D" (e.g., 1oo2D), it refers to the diagnostics of the signal failure. 10-Remote I/Os: depending on the layout of instruments and equipment on site, it can be interesting to use remote I/O racks close to the field elements. In such cases the usual aim is to save cabling. In critical applications (nuclear, airports, etc.) the target can be very high availability, using redundant I/Os in different racks separated by several hundred meters. 11-Extension of existing PLCs: in these cases we may have a physical space limitation in the cabinet or in the room. This can influence the type of modules used.
12-Density of the modules: it is important in many cases, especially if the distribution of I/Os in the racks and modules is made in an orderly manner and according to certain criteria. Once this analysis is done we will have few doubts about the type of modules to use, and the remaining ones can be resolved with other considerations affecting cost. On the one hand we have the investment cost (the so-called CAPEX) and on the other the operational and maintenance costs (OPEX).
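On point 2 above (resolution), the trade-off can be quantified. A minimal sketch, assuming an ideal converter over a 4-20 mA span (real modules add offset, gain and noise errors on top of this quantization step):

```python
def lsb_ma(bits, span_ma=16.0):
    """Current step represented by one count of an ideal A/D converter
    over a 4-20 mA span (quantization only; illustrative)."""
    return span_ma / (2 ** bits)

print(f"12 bits: {lsb_ma(12) * 1000:.2f} uA per count")  # ~3.91 uA
print(f"14 bits: {lsb_ma(14) * 1000:.2f} uA per count")  # ~0.98 uA
```

At 12 bits the step is about 0.024% of span, already well below the accuracy of most field transmitters, which is why 12 bits is normally sufficient and the extra bits mainly buy a higher module price.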
Input/Output architecture in the control system We will give an overview of the most common input/output architectures in a control system, depending on whether the priority is "Safety" or "Availability". 1-Input architectures. The following image shows the most common configurations. The letter "D" in yellow stands for "Diagnostics" and is usually associated with the safety PLC. The 1oo2 logic (1 out of 2) provides extra safety, so that the failure of one channel does not prevent the shutdown action when necessary. This logic does not prevent spurious trips; even worse, the probability of a false trip is doubled. The 2oo2 logic (2 out of 2) solves the problem of spurious trips, since 2 channels must fail at the same time to initiate a shutdown. The safety of this logic is lower than 1oo2. It is advisable to go to the 2oo2D logic, i.e., to use safety PLCs and field devices with diagnostics. The 2oo3 logic provides the advantages of the previous two and is widely used in industrial processes. It is based on a voting system, so that at least two channels must vote to cause a trip. A correct 2oo3 design is not completed by simply using three signals and installing three elements in the field; a deeper analysis is necessary, and it is mandatory if we are required to design according to SIL-2 or SIL-3 per IEC-61508. For example: -The field transmitters must not be mounted with common elements (isolation valve, etc.). -It is important to calculate correctly the total PFD (Probability of Failure on Demand) of our system according to IEC-61508. Keep in mind that the PFD is directly related to undetected failures, so it is easy to understand that, with 2oo3 logic, the probability that some "hidden failure" occurs is higher than in 1oo2, 2oo2 or 1oo1, since we have 3 items. -Etc. The system design must be done as a whole, including the PLC and all field elements.
Remember that the use of products or components certified for SIL applications does not ensure that the entire system ("SIS", Safety Instrumented System) is certified. 2-Output architectures. The considerations to be made for the outputs are similar to those already mentioned for the inputs. We will assume, for simplicity, that the field element is unique. For the outputs there is another element to consider: the intermediate relays. They may be conventional relays or safety relays. In the case of using a safety PLC, we can wire directly to the field element or use intermediate safety relays (link with example). If our priority is high availability we have several options, shown in the following images. 3-Distribution of signals within the modules. Another aspect to be careful with when designing the control system is how to properly distribute the signals across the input and output modules. It must be done consistently so as not to reduce the safety or availability of the whole. For example: -Double signals (1oo2, 2oo2) or triple ones (2oo3) must be wired to different modules, or even different racks. We have designed an architecture builder that helps a lot to do this and to choose the configuration that best fits the specification. -Duplicated field units that use non-redundant inputs/outputs (1oo1 logic) must be wired to different modules and racks. Sometimes this means we can reduce the number of redundant signals and obtain some cost savings. For example: a boiler with 4 burners and a SIL-3 safety PLC. Signals that cause the boiler trip: 2oo3 logic for inputs and 2oo2 for redundant outputs. Signals that cause a burner trip: non-redundant inputs and outputs of each burner, mounted in a different rack. In this way we would need four racks with the following configuration: Rack 1 with redundant power supply: general I/Os and burner 1. Racks 2, 3 and 4 with non-redundant power supply: I/Os of burners 2, 3 and 4.
The failure of a signal or a rack means the loss of one burner (25% of the steam production).
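The voting schemes discussed above can be written out in a few lines. This is a minimal Python sketch of the trip logic only (function names are ours; a real implementation runs in the PLC's certified safety logic, not in application code like this):

```python
def vote_2oo3(a, b, c):
    """2oo3: trip only when at least two of the three channels demand it."""
    return (a and b) or (a and c) or (b and c)

def vote_1oo2(a, b):
    """1oo2, safety priority: any single channel can initiate the trip."""
    return a or b

def vote_2oo2(a, b):
    """2oo2, availability priority: both channels must agree to trip."""
    return a and b

# A single channel failed "high" causes a false trip in 1oo2,
# but 2oo3 tolerates it while still tripping on a genuine 2-channel demand:
assert vote_1oo2(True, False)
assert not vote_2oo3(True, False, False)
assert vote_2oo3(True, True, False)
```

The assertions make the post's point concrete: 2oo3 tolerates one failed channel in either direction, which is why it combines the safety of 1oo2 with the spurious-trip resistance of 2oo2.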
Relay Redundant System
Relay Redundant System Let's see a simple example of how we can increase the safety of our design using conventional electromechanical relays. Suppose we have a high-pressure steam line with three safety pressure switches, each wired to an intermediate relay. We also make a hardwired 2-out-of-3 logic, so that the circuit is open when at least two switches are detecting high steam pressure. The second contact of each relay is wired to a digital input of the PLC to program a 2oo3 logic like the hardwired one. The output to the valve is considered in our design as a "critical output", so we duplicate the output, each one wired to an external relay, with the contacts wired in an H shape. It is a typical high-availability wiring, tolerant to a digital output failure or a relay failure. The H-shaped output is a good solution in many cases, but sometimes it carries certain risks. If it is a normally energized output, a short circuit can occur without notice. If there is then a short circuit in the second digital output, our system would remain in a high-risk situation. If we do not use a safety PLC, we can at least increase safety by wiring in series the "Redundant Relay System", which is completely independent of the PLC.
Availability versus Safety
Availability versus Safety This is an important point to consider when designing a control system with a programmable logic controller. The analysis must be performed with all the elements involved: the PLC, field instruments, valves, motors, dampers, etc. In addition, it is essential to analyze the process or machine. The design will be very different for a water treatment plant than for an emergency shutdown system in a refinery or for a boiler protection system. Let's review these concepts and design ideas with a simple part of a process: a line with a fluid, a shutoff valve that opens when the PLC output is energized, and a high-pressure sensor which should give the closure order to the valve when the pressure exceeds a value. Depending on the fluid (water, high-pressure steam, gas, nitrogen, etc.) the design criteria will be very different. For safety reasons, the pressure switch contact is closed when the pressure is low (N.O. contact), which is usual in the case of any safety instrument. All elements that can fail are shown in red. Let's see how we could design the control cabinet (PLC and external intermediate relays). If the priority is "Availability", i.e., we want to minimize the probability of closing the shutoff valve due to any failure, we can design as follows: In this way we ensure the following: 1-The valve will not close due to the failure of a PLC input, since we use 2 inputs for the same pressure switch. 2-The valve will not close due to the failure of a PLC output, since we use 2 outputs with contacts in parallel. If the priority is "Safety", i.e., we do not want any dangerous situation to occur due to the failure of some element, even at the cost of closing the valve, we could design as follows: Thus the failure of an input or output produces the immediate closure of the valve. If we want both "Availability and Safety", a possible design would be: With this configuration the valve failure is the weakest point.
If we want "Availability" we can add another valve in parallel; for "Safety" the second valve should be in series. These examples of design with a general-purpose PLC illustrate the advantages of the safety PLC, especially when we have many I/Os and many critical elements that must be designed with the "Priority to Safety" criterion. With the safety PLC we save a lot of external wiring. The safety PLC also has a very clear plus: it meets the safety standards and incorporates diagnostic functions in all internal components, including the CPU, memory, etc.
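The two design priorities above can be contrasted with a toy truth-table model. This is only an illustration of the combinational logic (function names are ours, and a real design works with wired contacts, not software):

```python
def valve_close_availability(in1, in2):
    """Availability-priority design: the duplicated inputs are combined
    so that both must signal high pressure before the valve is closed;
    a single input failure cannot cause a spurious closure."""
    return in1 and in2

def valve_close_safety(in1, in2):
    """Safety-priority design: either input closes the valve; a single
    input failure cannot mask a real high-pressure demand."""
    return in1 or in2

# Real high-pressure demand while one input channel has failed "low":
demand = (True, False)
print(valve_close_availability(*demand))  # False -> dangerous failure
print(valve_close_safety(*demand))        # True  -> valve still closes
```

The same channel failure that the availability design tolerates (no spurious closure on a false "high") becomes a dangerous failure on a real demand, which is exactly the trade-off the post describes.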
What’s a safety PLC?
What is a safety PLC? Let's try to explain it in a simple way for non-experts. The fundamental difference with respect to a general-purpose PLC is summed up in one word: "Diagnostics". In addition, there are differences in terms of internal architecture, software and firmware, and certification for applications where compliance with a certain SIL level is required. The safety PLC incorporates many diagnostic functions to detect any possible internal fault in the hardware or firmware, so that a failure in the PLC does not cause any "unsafe" situation. These diagnostics reduce the rates of dangerous undetected failures and the probabilities of failure used in the SIL calculations. This is the essence, which we will explain a little more. It also meets the design standards for the so-called "Safety Instrumented Systems" (abbreviated SIS) provided by the international standards IEC-61508, IEC-61511 (process industry), IEC-62061 (machinery industry) and others. Keep in mind that the safety PLC is a subsystem of the Safety Instrumented Function (sensor + PLC + actuator), whose design must be carried out so that it meets a certain SIL level. To design, compare and verify the SIL, a tool like SILcet can be used. Let's see with an example what "diagnostics" means. The first figure shows a simplified diagram of a digital output of a general-purpose PLC. If the output transistor is short-circuited we have a dangerous failure, and the valve does not close when ordered by the CPU. What improvements does a safety PLC introduce? We see them in the second figure. To detect a short circuit, it uses a diagnostic routine based on micro-pulses and monitoring of the output status. With this it can at least give an alarm in case of a short circuit. To also act on the output in case of failure, it uses a second transistor in series, interlocked with the monitoring circuit (called "watchdog"), which compares the status of both output transistors.
In this way we get a safe output circuit ("fail safe"), fault-tolerant from the point of view of Safety. To also obtain Availability, redundant architectures are used; in this example, by paralleling two circuits of the same output, as shown in the third figure. There are many diagnostic functions in the safety PLC, in the CPU and memory as well as in the inputs, outputs and communications, and this logically carries an additional cost. It is important to note that the design of a safety system must consider the entire SIS, i.e., the PLC, field devices, electrical supplies, control cabinet design, software, etc. Some designs focus heavily on one part while neglecting others, obtaining in the end a solution with weaknesses that must be corrected. We will see some examples in another post. Statistically there are more failures in sensors and actuators than in the PLC. Finally, international standards classify applications according to their risk level: SIL-1, SIL-2, SIL-3 and SIL-4 (Safety Integrity Level), this being part of the risk analysis to be performed by the SIS designer. In summary, the fundamental differences of a safety PLC with respect to a general-purpose one are: 1-It meets the design standards for safety systems such as IEC-61508, NFPA, FM, etc. 2-It is certified by competent organizations such as TÜV, Exida, etc. 3-It incorporates self-diagnostic routines for all hardware and software to detect any dangerous internal fault. If one occurs, it acts by leading the machine or process to a safe state. Therefore the dangerous undetected failure rates are lower than in a standard PLC. 4-The cost of the safety PLC is higher in the initial investment (CAPEX) but certainly lower over its total life cycle (OPEX). Go deeper into these and many other concepts in our RECOMMENDED online COURSE on Functional Safety: "Design of SIFs and SIL calculation".
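The micro-pulse test and series shutdown transistor described above can be modeled with a toy simulation. Everything here (class and attribute names, the fault injection) is our own illustration of the principle, not any vendor's actual circuit or firmware:

```python
class FailSafeOutput:
    """Toy model of a fail-safe digital output: an output transistor,
    a second transistor in series, and a watchdog-style pulse test
    comparing the commanded state with the read-back state."""

    def __init__(self):
        self.t1_shorted = False   # injectable fault: output transistor stuck on
        self.t2_enabled = True    # series shutdown transistor (normally closed)

    def read_back(self, commanded):
        """Monitored output state: a shorted transistor keeps conducting
        regardless of the command; the series transistor can cut it."""
        return (commanded or self.t1_shorted) and self.t2_enabled

    def pulse_test(self):
        """Briefly command the output off; if it still conducts, the
        transistor is shorted, so open the series transistor (fail safe)."""
        if self.read_back(commanded=False):
            self.t2_enabled = False   # force the safe, de-energized state
            return "dangerous fault detected"
        return "ok"

out = FailSafeOutput()
out.t1_shorted = True             # inject the dangerous short circuit
print(out.pulse_test())           # prints "dangerous fault detected"
print(out.read_back(True))        # False: output held safe despite the fault
```

In a real module the "off" pulse is microseconds long, too short for the actuator to react, which is why the diagnostic can run continuously without disturbing the process.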
Why range 4 – 20 mA
Why range 4-20 mA? For many years the 4-20 mA standard range has been popular because of the many advantages we discuss in this article. By the end you will understand why we use this range. Many of its advantages are a consequence of raising the zero point to 4 mA, giving a "live zero".
1- We can detect a wire break when the signal falls below 4 mA. Values under 4 mA and over 20 mA can be used to detect signal failures. Diagnostics are, no doubt, a great advantage.
2- Two-wire field transmitters, the most common type, do not need a separate power supply, since they are powered through the 4-20 mA loop itself. This is one of the biggest advantages, especially in installations with many instruments. If the zero point were 0 mA we could not power the transmitter below about 3.6 mA.
3- The 4-20 mA range has a 1:5 ratio, the same as the 3-15 psi range very common in pneumatics. This makes conversions easier.
4- The 4-20 mA loop can carry HART digital data over the same wires without interference between the two signals.
5- 4-20 mA can be used for intrinsically safe signals in hazardous areas because it allows line checking.
6- A current signal is generally more immune to electrical noise than a voltage signal (0-10 VDC, 1-5 VDC) and can also work over long distances (more than 1 km with a 24 VDC supply; raising the supply voltage allows even longer distances).
7- Over long distances and with intermediate junction boxes there are voltage drops that make working with voltage ranges impossible, or at least inadvisable.
8- Using a standard multimeter we can easily detect failures or check the 4-20 mA loop. Besides, there is no personal risk if we touch the wire, because the voltage is 24 VDC and the dangerous current threshold for the heart is about 30 mA.
9- If we need to convert the signal to a voltage range, for instance inside the control cabinet in order to wire it to multiple devices, we can do it very easily with a 250 Ω resistor, converting the signal to the 1-5 VDC range.
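Points 1, 2 and 9 can be sketched in a few lines of code. These are illustrative helpers with an assumed linear scaling, not taken from any specific device: scaling a 4-20 mA signal to engineering units, flagging a possible wire break below the live zero, and converting the loop current to 1-5 VDC with a 250 Ω shunt resistor (V = I × R).

```python
# Illustrative helpers (assumed linear scaling, not from any specific device)
# for the advantages listed above.

def ma_to_engineering(ma, lo, hi):
    """Scale a 4-20 mA signal to an engineering range [lo, hi]."""
    if ma < 3.6:                 # below the live zero: open loop / wire break
        raise ValueError("signal failure: possible wire break")
    return lo + (ma - 4.0) * (hi - lo) / 16.0

def ma_to_volts(ma, shunt_ohms=250.0):
    """Convert the loop current to the voltage across a shunt resistor."""
    return ma / 1000.0 * shunt_ohms

print(ma_to_engineering(12.0, 0.0, 100.0))   # mid-scale -> 50.0
print(ma_to_volts(4.0))                      # 1.0 V
print(ma_to_volts(20.0))                     # 5.0 V
```

Note how the 250 Ω shunt maps 4-20 mA exactly onto 1-5 VDC, and how a reading below 3.6 mA is immediately distinguishable from a valid zero, which a 0-based range could never offer.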
Control Cabinet Layout (phase 8)
Phase 8: Design of the Control Cabinet. In this article we discuss some options for the internal layout of the control cabinet (index of design phases). 8- Design of the control cabinet. There are many factors that impact the internal design of a control cabinet and, therefore, the mounting space required. There is no clear rule on what the size of the cable ducts should be, how many I/O racks fit in the panel, or whether we should use terminal blocks of one type or another. Here are some of the points we must consider.
-How many racks can fit in each cabinet? We must consider, in addition to the rack size, the number of modules in each rack and the I/O density. If we use high-density modules (32 or 64 I/Os), the internal cable ducts should be larger. For example, in a panel 2000 mm high, using 16-channel digital modules and 8-channel analog modules, with a maximum of 6 or 7 modules per rack, we can mount up to 6 racks 125 mm high and still have space for two 24 VDC power supplies and, if needed, the CPU rack.
-An option to save space is to use double-level terminal blocks. We gain space, but not as much as we might expect, because the number of wires is doubled, requiring larger ducts. We can also use the sides of the cabinet, but it is advisable to leave this space for future extensions.
-Another aspect to consider is the routing of cables with different voltages. It is usual to have 24 VDC for inputs/outputs and relays, and 125/230 VAC for the field outputs. If so, we should try to minimize parallel runs of cables with different voltages to avoid the risk of electrical noise, especially in the analog signals. This requires more internal space.
-We also need additional space when the design has certain levels of redundancy, especially if we have, for example, a 2-out-of-3 logic. This implies a minimum of three I/O racks, sometimes even in different cabinets depending on the application.
Another typical case is duplicated field elements, such as a main pump and its backup. For a good design we should not use the same I/O module for both pumps.
-Customer specifications often require this kind of good design practice. If the end user is a large industrial company, we may meet demanding design requirements that involve some extra space. In many projects we must separate the electronics from the "marshalling cabinets" (cabinets for the field signal terminals). In PLCdesign we have developed a tool in Excel to help you: Cabinet Layout.
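The rack-fit reasoning above can be sketched numerically. The panel and rack heights are the example values from the text; the per-rack duct allowance and the space reserved for power supplies are assumptions chosen only to illustrate the calculation, not a sizing rule.

```python
# Rough sketch of the rack-fit calculation above. Panel (2000 mm) and rack
# (125 mm) heights come from the text; the 150 mm duct per rack and 300 mm
# reserved for the two 24 VDC power supplies are illustrative assumptions.

def racks_that_fit(panel_mm, rack_mm, duct_mm_per_rack, reserved_mm):
    """Number of racks that fit once duct and reserved space are accounted for."""
    usable = panel_mm - reserved_mm
    return usable // (rack_mm + duct_mm_per_rack)

print(racks_that_fit(2000, 125, 150, 300))   # -> 6, matching the example
```

With these assumptions the result matches the "up to 6 racks" figure in the example; the same arithmetic is what the Cabinet Layout Excel tool automates with real catalogue dimensions.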
Basic concepts of redundant architecture of PLC
Basic concepts of redundant PLC architectures. In this article we will go over the most usual options for configuring a redundant architecture. We will focus on some of the most common ones and will study them more deeply in a future post. Depending on the application, we must determine whether we want a "high availability" system, a "high safety" system, or both. We must also always consider the cost of the architecture. There are different technologies that we will analyze in the future, such as TMR (Triple Modular Redundancy), QMR (Quadruple Modular Redundancy), FMR (Flexible Modular Redundancy), XMR, etc., and different logics: 1oo2, 2oo2, etc.
1- Redundant CPU and non-redundant I/O. This is a very simple architecture, designed using the MTBF criterion (Mean Time Between Failures), because the MTBF is substantially lower for a CPU than for an input/output module. The MTBF is a value given by the manufacturer based on a probabilistic study.
2- Redundant CPU and non-redundant but well-distributed I/O. Here we take a step forward and distribute the inputs and outputs always thinking about the field devices. For example, if we have a unit with two pumps at 50% capacity each, we will not mix the I/O of both pumps in the same modules, so that if a module fails we only lose one of the pumps. Also, if possible, we place each pump's module in a different rack.
3- Redundant CPU and well-distributed mixed I/O. We use a "2 out of 3" (2oo3) logic for the inputs and, optionally, a dual or H-shaped logic for the digital outputs. The 2oo3 logic is a simple voting system. For example, suppose we must design a pump shut-down on high level and we have three level sensors installed at the same point. The pump will only be stopped when at least two sensors detect high level, not when only one does. This avoids unnecessary shut-downs in case of a failure of a field instrument or a PLC input.
It is important to place each of the three inputs in a different module and, if possible, in a different rack. We must decide for which inputs we will use 2oo3 logic and of which type, digital or analog. There is no general optimum criterion because it depends on the application and on the cost we are willing to pay. At a minimum, we should triple the field signals that cause general shut-downs of the process or of the unit.
4- Redundant CPU and redundant I/O and/or 2oo3 logic. The difference from the previous case is the redundant I/O, used for certain groups of I/O where we seek "high availability". Not all manufacturers allow this type of architecture because it is expensive, although some critical applications require it. We will cover this topic of redundant PLC architectures in depth in future articles and explain how to use IO Builder.
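The 2oo3 voting described above reduces to a one-line rule, sketched here for the high-level pump shut-down example: the trip fires only when at least two of the three sensors demand it, so a single failed sensor or PLC input can cause neither a missed trip nor a spurious shut-down on its own.

```python
# Minimal sketch of the 2oo3 voting logic described above.

def vote_2oo3(a, b, c):
    """Return True (trip) when at least 2 of the 3 inputs demand the trip."""
    return sum((a, b, c)) >= 2

print(vote_2oo3(True, False, False))   # False: one sensor alone cannot trip
print(vote_2oo3(True, True, False))    # True: two sensors agree -> trip
```

For analog signals the same vote is applied after comparing each transmitter reading against the trip setpoint, turning the three measurements into three booleans.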
Transmitters of 2 and 4 wires