Mälardalen University, School of Innovation, Design and Engineering, Västerås, Sweden

Thesis for the Degree of Master of Science in Computer Science - Embedded Systems 15.0 credits

PERFORMANCE ANALYSIS OF THE PREEMPTION MECHANISM IN TSN

Lejla Murselović [email protected]

Examiner: Saad Mubeen, Mälardalen University, Västerås, Sweden

Supervisor: Mohammad Ashjaei, Mälardalen University, Västerås, Sweden

June 9, 2020

Abstract

Ethernet-based real-time network communication technologies are nowadays promising for industrial applications. Ethernet offers high bandwidth, scalability and performance compared to the existing real-time networks. Time-Sensitive Networking (TSN) is an enhancement of the existing Ethernet standards and thus offers compatibility, cost efficiency and a simplified infrastructure, like the previous prioritization and bridging standards. TSN is suitable for networks with both time-critical and non-time-critical traffic. The timing requirements of time-critical traffic are undisturbed by less-critical traffic thanks to TSN features such as the Time-Aware Scheduler, a time-triggered scheduling mechanism that guarantees the fulfilment of the temporal requirements of highly time-critical traffic. Features like the Credit-Based Shaper and preemption result in a more efficiently utilized network. This thesis focuses on the effects that the preemption mechanism has on network performance. A simulation-based performance analysis of a single-node, single-egress-port model is conducted for different configuration patterns, including configurations with multiple express traffic classes. The simulation tool used is a custom-developed simulator called TSNS. In a single-egress-port model, the most significant performance indicator is the response time, which is one of the measurements obtained from the TSNS network simulator. The comparison between the results of these different network configurations, using realistic traffic patterns, provides a quantitative evaluation of the network performance when the network is configured in various ways, including multiple preemption scenarios.


Table of Contents

1. Introduction 1
   1.1. Motivation 1
   1.2. Problem Formulation 2
   1.3. Thesis outline 2

2. Background 3
   2.1. Computer Network 3
   2.2. Ethernet 3
        2.2.1. Shared (broadcast) Ethernet 4
        2.2.2. Switched Ethernet 4
        2.2.3. Industrial Ethernet 5
   2.3. Time-Sensitive Networks 6
        2.3.1. IEEE 802.1AS - Timing and Synchronization for Time-Sensitive Applications 6
        2.3.2. IEEE 802.1Q - VLAN Priority Queuing 7
        2.3.3. IEEE 802.1Qav - Forwarding and Queuing Enhancements for Time-Sensitive Streams 10
        2.3.4. IEEE 802.1Qbv - Enhancements for Scheduled Traffic 12
        2.3.5. IEEE 802.3br and 802.1Qbu - Interspersing Express Traffic (IET) and Frame Preemption 14

3. Related Work 18
   3.1. Performance evaluation of real-time Ethernet Standards 18
   3.2. Performance analysis of individual TSN standards 18
   3.3. Evaluation of TSN with Frame Preemption 18
   3.4. Existing network simulators 19
   3.5. Discussion 20

4. Method 20

5. Ethical and Societal Considerations 22

6. Effects of the Preemption Mechanism 23
   6.1. Schedulability 23
   6.2. Guard band size 23
   6.3. Preemptable configuration of scheduled traffic 24
   6.4. Multiple scheduled traffic classes 25
   6.5. Effects on the SR traffic class 25
   6.6. Preemption effects in a multi-SR traffic class configuration 26
   6.7. Multiple ST and SR traffic classes 28
   6.8. Lower priorities as express 29

7. Time-Sensitive Network Simulator TSNS 31
   7.1. Assumptions 31
   7.2. TSNS egress port model design 31
        7.2.1. Queuing 33
        7.2.2. Transmission Selection Algorithm 34
        7.2.3. Gate mechanism 34
        7.2.4. Transmission Selection 35
        7.2.5. Preemption mechanism 35
   7.3. TSNS model 36


8. Evaluation 39
   8.1. Scenario 1: Single-simulation - Offline generated message set - Per-message results 40
        8.1.1. Results 41
        8.1.2. Discussion 42
        8.1.3. Conclusion - Scenario 1 42
   8.2. Scenario 2 - Express ST and multiple express SR traffic 42
        8.2.1. Results 43
        8.2.2. Conclusion - Scenario 2 44
   8.3. Scenario 3 - No ST traffic class 45
        8.3.1. Results 46
        8.3.2. Discussion 47
        8.3.3. Conclusion - Scenario 3 47
   8.4. Scenario 4 - Express ST and various single express SR - express ST and express LP - High number of traffic classes 48
        8.4.1. Results 48
        8.4.2. Discussion 49
        8.4.3. Conclusion - Scenario 4 50

9. Conclusions 51
   9.1. Future work 51

10. Acknowledgments 52

References 55


List of Figures

1  The evolution of Ethernet in the 20th century 3
2  Master-slave architecture for time synchronisation in IEEE Std 802.1AS [1] 7
3  Insertion of IEEE 802.1Q Tag in the Ethernet frame 7
4  Software Architecture of the Bridge with Strict Priority Selection 9
5  Credit-Based Shaper Algorithm operations in different conditions 11
6  Software Architecture of the Bridge with the Credit-Based Shaper Algorithm 11
7  Time-triggered scheduled traffic transmission 12
8  Operation examples of the CBS with TAS 13
9  Software Architecture of the Bridge with Time-Aware Scheduler 14
10 Preemptable MAC (pMAC) and express MAC (eMAC) scheme [2] 15
11 Preemption Mechanism Trace Example 16
12 Hold and Release mechanism enabled vs. disabled 17
13 Research framework [3] 20
14 Preemption configuration causing ST traffic latency 24
15 Credit behaviour of express Class A 26
16 Example trace with no express frames 27
17 Example trace with only Class A express 27
18 Example trace with only Class B express 28
19 Example trace with both Class A and B express 28
20 Response time of LP traffic in express vs. preemptable configuration 30
21 TSNS Switch model 32
22 Defined TSNS structures 33
23 Overall software architecture design of the simulator 33
24 Credit flow diagram 34
25 TS flow diagram 36
26 Preemption flow diagram 37
27 TSNS model design 37
28 Scenario 3 - Bar-graph representation of response time per-class 47
29 Scenario 4 - Plot of results - Average response time per traffic class 49
30 Scenario 4B - Plot of results - Maximum response time per traffic class 50


1. Introduction

In traditional communication networks there is no common time-base concept and, as a result, these networks cannot provide synchronization or precise timing. Reliable delivery of data is considered more important than the time window in which the data is delivered, so no timing constraints are imposed. Such soft requirements are not acceptable for real-time embedded systems, which are specified by strict timing constraints. A set of standards called Time-Sensitive Networking (TSN) has been developed that defines mechanisms taking time constraints into account for data transmission over deterministic Ethernet networks [4].

TSN extends conventional switched Ethernet network protocols with several additional features, such as hard real-time guarantees for time-critical traffic and preemption support. The primary standard is IEEE 802.1Q-2018 and there are several amendments enhancing various mechanisms, such as IEEE 802.1Qav, IEEE 802.1Qbu and IEEE 802.1Qbv. The key components of a TSN network that enable real-time capability in Ethernet-based machine-to-machine communication are time synchronization; scheduling and traffic shaping; and selection of communication paths, path reservation and fault-tolerance [4]. One of the interesting features in the standards is preemption support for Ethernet frames, where a time-critical frame can preempt a low-priority Ethernet frame.

A deterministic Ethernet network is a time-predictable network. A system is time predictable if all of its specified timing requirements are satisfied once the system starts executing [5]. For a system to be called time predictable, it must be possible to show or prove this guarantee using some form of mathematical analysis.

Because of its high bandwidth, compatibility and scalability, Ethernet-based communication is rapidly advancing and is widely considered in different markets. For some markets, such as the industrial market, the biggest drawback of these networks is their lack of timing predictability in data delivery. The continuously increasing amount of data transmitted in distributed embedded systems makes satisfying the timing requirements harder. This is the main reason for introducing TSN.

TSN offers time predictability together with high bandwidth, up to 10 Gbps, in order to provide the throughput necessary for future applications associated with automation, digitalization and Industry 4.0. High bandwidth is extremely important considering the large amount of data being transmitted, as is the temporal guarantee of data delivery in networks that carry safety-critical and control data. Another benefit of TSN is that IEEE is standardizing it in such a way that a variety of applications can use it.

1.1. Motivation

To evaluate the feasibility of an Ethernet-based communication network for future industry applications, it is important to identify specific and comparable network metrics. Moreover, a detailed quantitative analysis allows for optimizations and can be used to propose and rate improvements of the network protocols.

As a relatively new standard added to the TSN family of standards, the preemption mechanism is not well researched in the context of TSN. The existing research focuses on the default preemption configuration with one express traffic class. This thesis aims to broaden the domain of the preemption mechanism and provide insight into the effect that multiple express traffic classes have on the network performance. The effects of having express traffic classes in unconventional configurations are unknown, and understanding them can lead to faster adoption of TSN and better optimisation of Ethernet-based networks. This thesis engages with these corner questions and

1 IEEE 802.1Q: Available from [Online]: https://ieeexplore-ieee-org.ep.bib.mdh.se/document/8403927
2 IEEE 802.1Qav: Available from [Online]: https://ieeexplore-ieee-org.ep.bib.mdh.se/document/8684664
3 IEEE 802.1Qbu: Available from [Online]: https://ieeexplore-ieee-org.ep.bib.mdh.se/document/7553415
4 IEEE 802.1Qbv: Available from [Online]: https://ieeexplore-ieee-org.ep.bib.mdh.se/document/8613095

establishes new hypotheses that contribute to the body of knowledge on the preemption mechanism in TSN.

A simulation-based analysis is suitable and provides the necessary performance estimates. Automotive applications have very strict requirements, usually demanding temporal precision within a few microseconds, so it is necessary to simulate the temporal behaviour with high accuracy.

1.2. Problem Formulation

The IEEE 802.1Qbu standard defines two modes for a traffic class: express and preemptable. Express frames can interrupt the transmission of preemptable frames. Each port of a TSN switch can support up to 8 queues, i.e., up to 8 traffic classes. We focus on the following three traffic class types defined in the IEEE 802.1Q standard. Scheduled Traffic (ST) is a high-priority traffic class that is transmitted according to a time schedule created offline, to ensure no interference from other traffic classes; the traffic is strictly periodic and represents the network control signals. The second type is the Stream Reserved (SR) traffic classes, which have bandwidth reserved based on the Stream Reservation Protocol (SRP); this traffic undergoes credit-based traffic shaping. In practice, the highest-priority SR traffic class is referred to as Class A, followed by the lower-priority Class B, then Class C, and so on. Lastly, the Best-Effort (BE) traffic class is low-priority traffic that is handled without temporal or delivery guarantees; BE traffic can be sporadic, aperiodic or periodic, and it does not undergo traffic shaping.

Each traffic class can thus be configured as real-time traffic that undergoes a credit-based shaper (SR), as scheduled traffic (ST) that bypasses the credit-based shaper, or as a best-effort (BE) traffic class. The TSN standards additionally allow defining a gate mechanism for all traffic classes, which controls traffic transmission by blocking any traffic class in favour of the ST class. On top of these mechanisms, each queue can be defined as express (it can preempt preemptable lower-priority classes) or as preemptable (it cannot preempt any other class but can itself be preempted).
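To make this configuration space concrete, the following sketch models a per-queue configuration and the express/preemptable relation. All names (`QueueConfig`, `Shaper`, `may_preempt`) are illustrative assumptions of this sketch, not code from the TSNS simulator.

```python
from dataclasses import dataclass
from enum import Enum

class Shaper(Enum):
    """How a queue's traffic is regulated (names are illustrative)."""
    SCHEDULED = "ST"      # time-triggered, gate-controlled, no credit shaping
    CREDIT_BASED = "SR"   # credit-based shaper (IEEE 802.1Qav)
    BEST_EFFORT = "BE"    # no shaping, no guarantees

@dataclass
class QueueConfig:
    priority: int         # 0 (lowest) .. 7 (highest)
    shaper: Shaper
    express: bool         # True: may preempt; False: preemptable

# One possible egress-port configuration: ST as express, two SR classes
# and best-effort traffic as preemptable.
port = [
    QueueConfig(priority=7, shaper=Shaper.SCHEDULED,    express=True),
    QueueConfig(priority=6, shaper=Shaper.CREDIT_BASED, express=False),  # Class A
    QueueConfig(priority=5, shaper=Shaper.CREDIT_BASED, express=False),  # Class B
    QueueConfig(priority=0, shaper=Shaper.BEST_EFFORT,  express=False),
]

def may_preempt(a: QueueConfig, b: QueueConfig) -> bool:
    """In IEEE 802.1Qbu, a frame from an express queue may preempt an
    in-progress frame from a preemptable queue; express frames are never
    preempted themselves."""
    return a.express and not b.express

assert may_preempt(port[0], port[3])       # ST (express) preempts BE
assert not may_preempt(port[1], port[3])   # Class A is not express here
```

The thesis studies exactly this degree of freedom: which combinations of `express` flags across the queues lead to which response-time behaviour.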

As observed, the configuration can be very complex, given the many configuration parameters. The objective of this thesis is to investigate the effects of configuring multiple traffic classes as express on the performance of all traffic classes, to gain a more in-depth understanding of the TSN standards, and to raise awareness of edge cases and corner cases. This thesis aims at providing an answer to the following research question:

RQ: How is the performance of the time-sensitive network affected by various configurations of traffic classes as express or preemptable?

To answer this question, we investigate various cases with different traffic class configuration patterns, with an emphasis on the preemption configuration where multiple classes are set as express. The cases are evaluated using a simulation tool.

1.3. Thesis outline

In Section 2., we give background on the main concepts and terms this thesis is built on: computer networks, Ethernet and, lastly, TSN, where selected mechanisms are described. In Section 3., we discuss related work on this topic. We then discuss how we deal with the formulated problem and the tools and method used to answer the research question in Section 4. Ethical and societal considerations are covered in Section 5. In Section 6., we propose assumptions and possible answers based on the body of knowledge. To verify the accuracy of this approach and evaluate the assumptions, we develop a network simulator, described in Section 7. Finally, in Section 8., we present and discuss the results of use-cases and scenarios that address the research question and have been run in the simulator. Section 9. summarizes and concludes this thesis.


2. Background

2.1. Computer Network

A computer network is a digital telecommunication network that enables digital information sharing between terminal nodes, which in the case of computer networks are computing devices. These devices can execute arithmetic or logical operations from a given instruction or set of instructions. Network devices, such as routers and switches, are connected either with a physical cable, for example a fiber-optic cable, or by a wireless method in which transmission is carried by electromagnetic waves, for example Wi-Fi. Computer networks may be classified by many criteria, for example the transmission medium used to carry their signals, bandwidth, the network's size, topology, the communication protocols used to organize network traffic, and the traffic control mechanism.

A communication protocol is a set of rules that allow two or more entities of a communications system to transmit data. The protocol defines the rules, syntax, semantics and synchronization of communication and possible error recovery methods. Internet communication protocols are published by the Internet Engineering Task Force (IETF). The IEEE handles wired and wireless networking, and the International Organization for Standardization (ISO) handles other types. The ITU-T handles telecommunication protocols and formats for the public switched telephone network (PSTN).

For the sake of understanding the terminology, the difference between the terms standard and protocol should be stated. A protocol defines a set of rules used by two or more parties to interact between themselves. A standard is a formalized protocol accepted by most of the parties that implement it. Not all protocols are standards; some are proprietary. Not all standards are protocols; some govern other layers than communication. An example of a standard outside networking is the standardization of paper sizes: assigning abstract names to paper sizes, for example "A4", is a protocol, and this protocol was standardized by ISO. In networking, IEEE standardized the wired communication protocol Ethernet (IEEE 802.3) and the wireless communication protocol Wi-Fi (IEEE 802.11). Annex A gives more details on the categorization and enhancement of standards.

2.2. Ethernet

When the IEEE Std 802.3 Ethernet standard was first published in 1985, it described a half-duplex communication system using the carrier sense multiple access with collision detection (CSMA/CD) method for collision handling. It specified how the hardware interacts with the transmission medium, which is controlled by the media access control (MAC) layer. Besides describing the MAC layer, the standard also described the medium attachment unit (MAU), which is responsible for the Ethernet physical medium. The original IEEE Std 802.3 Ethernet standard described operation on a coaxial cable medium, supporting a bus topology [6].

Figure 1: The evolution of Ethernet in the 20th century

IEEE 802.3 is a working group and a collection of Institute of Electrical and Electronics Engineers (IEEE) standards that define the physical layer and media access control (MAC) of wired Ethernet.


CSMA/CD stands for Carrier Sense Multiple Access with Collision Detection. It refers to a set of rules determining how network devices respond when two devices attempt to use a data channel simultaneously, an event called a collision. Ethernet was introduced using the CSMA/CD protocol, but today the protocol is considered obsolete, as the bus topology has been replaced with the star topology over twisted-pair media and hubs have been replaced with switches.
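The collision response of CSMA/CD can be illustrated with the truncated binary exponential backoff it uses. The constants below are those of classic 10/100 Mbps Ethernet; the function name is this sketch's own.

```python
import random

SLOT_TIME_BITS = 512   # slot time for 10/100 Mbps Ethernet, in bit times
MAX_ATTEMPTS = 16      # a frame is dropped after 16 consecutive collisions

def backoff_delay(attempt: int) -> int:
    """Truncated binary exponential backoff used by CSMA/CD.

    After the n-th collision, the station waits k slot times, where k is
    drawn uniformly from 0 .. 2**min(n, 10) - 1, then retries.
    """
    if attempt >= MAX_ATTEMPTS:
        raise RuntimeError("excessive collisions: frame dropped")
    k = random.randrange(2 ** min(attempt, 10))
    return k * SLOT_TIME_BITS

# After the first collision a station waits 0 or 1 slot times;
# the range doubles with every further collision (capped at 2**10).
print([2 ** min(n, 10) for n in range(1, 6)])  # → [2, 4, 8, 16, 32]
```

The randomized, unbounded waiting time is precisely the non-determinism that made shared Ethernet unsuitable for hard real-time traffic, as discussed in Section 2.2.3.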

2.2.1. Shared (broadcast) Ethernet

Shared Ethernet is a network over a star topology using an infrastructure node, or central device, called the hub. The hub is a simple broadcasting device with a shared bus to which all the devices are connected; it can be seen as a "bus in a box". All the devices except the sender receive the sent data, and all members of the hub have to wait until the bus is idle before sending data. The star topology made connecting and disconnecting devices in the network easy, as opposed to the true bus topology, where disconnecting one device meant taking down the entire network. Also, in 1984, the star topology of local area networks showed the potential of simple unshielded twisted-pair cables. Twisted pair supports full-duplex communication, as opposed to its predecessor, the coaxial cable used for the bus network topology.

To understand this advantage, the terms full-duplex and half-duplex are first explained. A duplex communication system is a point-to-point system composed of two or more connected parties or devices that can communicate with one another in both directions, as opposed to a simplex system. The two types of duplex communication systems are full-duplex (FDX) and half-duplex (HDX).

• In a full-duplex system, both parties can communicate with each other simultaneously. An example of a full-duplex device is a telephone; the parties at both ends of a call can speak and be heard by the other party simultaneously.

• In a half-duplex system, both parties can communicate with each other, but not simultaneously; the communication is one direction at a time. An example of a half-duplex device is a walkie-talkie.

The hub, as a multiport repeater, works by repeating transmissions received on one of its ports to all other ports. It can also detect an idle or non-idle line, as well as sense a collision. A hub cannot further analyze or manage any of the traffic that passes through it [7]. A hub has no memory to store data and can handle only one transmission at a time; therefore, hubs can only run in half-duplex mode. Due to the resulting large collision domain, more sophisticated devices were needed as infrastructure nodes.

2.2.2. Switched Ethernet

The first Ethernet switch was introduced by Kalpana in 1990 [8]. The switch is a more sophisticated device than a hub. It is also an infrastructure node in a star network topology, but the switch can analyze and store transmissions, and hence it can support full-duplex communication. The switch does not broadcast a transmission to all ports; instead, it forwards the transmission only to the destination device, and it has a dedicated data port for each end device. In modern Ethernet, the devices of the network usually do not share a bus or a simple repeater hub but are instead connected through a switch. In this topology, collisions are only possible if a station and the switch attempt to communicate with each other at the same time, and such collisions are limited to that link; the possibility of collisions is thus reduced to two devices, the switch and the end device. Furthermore, the 10BASE-T standard introduced a full-duplex mode of operation, which later became common and eventually standard. The links are twisted-pair cables that provide separate, dedicated, one-way channels for transmitting and receiving signals, so they do not share a collision domain. Therefore, modern Ethernet networks are completely collision-free and do not need CSMA/CD. CSMA/CD is still supported for backward compatibility and half-duplex connections. The IEEE 802.3 standard, which defines all Ethernet variants, for historical reasons still bore the title "Carrier sense multiple access with collision detection (CSMA/CD) access method and physical layer specifications" until 802.3-2008, which uses the new name "IEEE Standard for Ethernet".


Link \ Node   Hub                       Switch
Coaxial       shared collision domain   link collision domain
Twisted       shared collision domain   individual collision domain

Table 1: The collision domain for different types of nodes and links

Today, the term Ethernet refers to a whole family of closely related protocols characterized by their raw data rates (10 Mbps, 100 Mbps, 1 Gbps or 10 Gbps) and the physical medium on which they operate. Ethernet now runs on a wide variety of physical media. Among the most common are: coaxial cable (thick or thin), many types of copper cable called twisted pair, and several types of fiber-optic cables using a variety of signaling methods and light wavelengths. It has outlasted Token Ring, ATM, FDDI, and other competing LAN technologies. It has extended from the LAN into the wide-area and wireless realms. Through technologies like switching and QoS controls, Ethernet has become the foundation technology for most networking communications today.

2.2.3. Industrial Ethernet

As opposed to office network systems, for which standard Ethernet is sufficient, industrial or real-time networks impose stricter requirements, including determinism in response time, critical real-time requirements, equipment trustworthiness and resistance to hostile environments [9]. Timing behaviour in Industrial Ethernet is an essential aspect, as it governs communication in real-time processes where priority and determinism are imperative characteristics. In the last three decades, different communication networks have been developed for the industrial environment to satisfy the real-time factor. These networks are called fieldbuses; some examples are Profibus, WorldFIP, Foundation Fieldbus, Controller Area Network (CAN) and DeviceNet.

At the beginning of the 21st century, Ethernet, despite being an interesting technology for industrial automation due to its high performance and low cost, did not support the requirements of industrial applications [10]. Around 2005, research was done on Ethernet as an alternative communication system in the real-time industrial automation field. Ethernet was only studied as an alternative, as it had a critical downside compared to other industrial communication networks: its non-determinism. This problem was directly related to the non-deterministic control mechanism of the Carrier Sense Multiple Access with Collision Detection (CSMA/CD) protocol [11].

Today, Ethernet is a leading communication technology for industrial applications. As today's Ethernet is collision-free, there is no need for non-deterministic control mechanisms. Compared with other existing industrial networks, it offers high bandwidth and a significant reduction in cabling cost. The base standard of Time-Sensitive Networking, IEEE 802.1Q, introduced real-time support for Ethernet. It offers other benefits as well, such as bounded latency and zero jitter for time-constrained traffic classes and the capability to handle multiple traffic classes on the same channel, allowing its dominance as a communication system in the automation industry [12].

5 Profibus: Available from [Online]: https://profibus.com.ar/
6 WorldFIP: Available from [Online]: http://people.cs.pitt.edu/~mhanna/Master/ch2.pdf
7 Fieldbus: Available from [Online]: http://www.fieldbus.org/
8 CAN: Available from [Online]: https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=788104
9 DeviceNet: Available from [Online]: https://en.wikipedia.org/wiki/DeviceNet


2.3. Time-Sensitive Networks

In need of time-synchronized, low-latency transmission of time-critical data for real-time systems, the IEEE 802.1 Working Group started work on standards that were first meant for Audio-Video Bridging (AVB)10. Later on, this working group evolved into the Time-Sensitive Networking Task Group. Simply said, Time-Sensitive Networking (TSN) is the IEEE 802.1-defined standard technology that provides deterministic communication over standard Ethernet. Deterministic communication is important to many industries, for example aerospace, automotive, manufacturing, transportation and utilities. TSN resides at layer 2 of the OSI model, the "Data Link Layer", on top of standard Ethernet as defined by IEEE Std 802.3. Time-Sensitive Networking means that deterministic Ethernet is becoming standardized; today it counts 11 IEEE standards grouped under the IEEE 802.1 Working Group. TSN defines a large variety of scheduler and shaper solutions: time synchronization provides a mechanism for all network elements to have a common time base, while queuing and shaping provide mechanisms for network elements to schedule and prioritize traffic. Highlighted technologies of TSN:

• 802.1AS Timing and Synchronization
  – Distributed clock
  – generalized Precision Time Protocol (gPTP)

• 802.1Q VLAN priority queuing
  – 8 queues based on the VLAN priority tag
  – Improves reliability, but latency and buffering variations still occur on multi-hop networks

• 802.1Qav Credit-Based Shaper
  – Removes bursts in traffic to generate a constant bit rate
  – Avoids frame loss

• 802.1Qbv Time-Aware Scheduler
  – Improves upon VLAN priority queuing by applying time slots
  – With proper configuration, can bound jitter and latency in the network

• 802.1Qbu and 802.3br Frame Preemption and Interspersing Express Traffic
  – Preemptable traffic and express traffic

2.3.1. IEEE 802.1AS - Timing and Synchronization for Time-Sensitive Applications

Time synchronisation in TSN is done by distributing time from a centralised time source throughout the network using the master-slave model. The synchronisation is based on the IEEE 1588 Precision Time Protocol, which utilizes Ethernet frames to distribute all information required for time synchronization. IEEE 802.1AS is a profile of IEEE 1588 that narrows its protocols and mechanisms down to the necessary ones for home networks, industrial automation and automotive industry networks. It also extends IEEE 1588 to support time synchronisation over Wi-Fi (IEEE 802.11) besides Ethernet (IEEE 802.3), and enables microsecond precision for environments with tight timing requirements, as industrial applications have.

Figure 2 illustrates the generalized Precision Time Protocol (gPTP), which is responsible for distributing timing information from the Grandmaster to all end points. Every local clock has to be synchronised with the Grandmaster clock. The synchronisation branches from the Grandmaster and its clock master port to the downstream clock slave ports in the bridges. Each bridge corrects for the delay and propagates the timing information on all downstream ports, eventually reaching the

10AVB: Available from [Online]:https://ieeexplore-ieee-org.ep.bib.mdh.se/document/7451138


Figure 2: Master-slave architecture for time synchronisation in IEEE Std 802.1AS [1]

802.1AS end points. In the process of synchronisation, every bridge performs its own synchronization by calculating the link latency and the frame residence time.

The frame residence time is the time required for queuing, processing and transmission from the master to the slave ports within each bridge. The link latency is the propagation delay between two adjacent bridges. The time synchronization accuracy depends mainly on the accuracy of the residence time and link delay measurements. 802.1AS uses the ratio between the local clock and Grandmaster clock oscillator frequencies to calculate the synchronized time, and the ratio between the local and the neighbouring clock master oscillator frequencies to calculate the propagation delay.
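The link-delay measurement described above can be sketched as follows, under the assumption of a symmetric link. The four timestamps come from the gPTP peer-delay exchange (Pdelay_Req/Pdelay_Resp); the function names are illustrative, and the full 802.1AS neighborRateRatio handling is simplified to a single scaling factor.

```python
def mean_link_delay(t1, t2, t3, t4, rate_ratio=1.0):
    """Propagation delay of one link from the four pdelay timestamps.

    t1: initiator sends Pdelay_Req      (initiator clock)
    t2: responder receives Pdelay_Req   (responder clock)
    t3: responder sends Pdelay_Resp     (responder clock)
    t4: initiator receives Pdelay_Resp  (initiator clock)

    rate_ratio scales the initiator's interval to the responder's time
    base (simplified stand-in for the 802.1AS neighborRateRatio).
    The responder's clock offset cancels out because only the
    difference t3 - t2 is used.
    """
    return ((t4 - t1) * rate_ratio - (t3 - t2)) / 2

def corrected_time(sync_time, residence_time, link_delay):
    """Time a bridge propagates downstream: the timestamp carried in
    the sync message plus the delays accumulated in this hop."""
    return sync_time + residence_time + link_delay

# Example in nanoseconds: the responder's clock runs 1 us ahead of the
# initiator's, turnaround t3 - t2 is 2 us, true cable delay is 50 ns.
d = mean_link_delay(t1=0, t2=1_050, t3=3_050, t4=2_100)
print(d)  # → 50.0
```

Note that `t4 < t3` numerically in the example: the raw timestamps come from two unsynchronized clocks, which is exactly why only the differences are meaningful.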

2.3.2. IEEE 802.1Q - VLAN Priority Queuing

According to IEEE 802.1Q, standard bridging uses a strict-priority transmission selection algorithm between up to eight distinct traffic classes, each with a different priority level. Arbitration between frames of the same traffic class is usually done in FIFO order. These priority levels are defined by the value of the Priority Code Point (PCP) field in the 802.1Q Tag that is inserted into the original Ethernet frame, as shown in Figure 3.

Figure 3: Insertion of IEEE 802.1Q Tag in the Ethernet frame

The IEEE 802.1Q standard introduced the 802.1Q Tag, or 802.1Q Header, which is a 4-byte field inserted between the Source MAC address and the EtherType/length field of the original Ethernet frame. The 802.1Q Header consists of four fields:

1. TPID (Tag Protocol Identifier): this 16-bit field identifies the frame as an IEEE 802.1Q-tagged frame and is set to a constant


value of 0x8100.

2. PCP (Priority Code Point): this 3-bit field describes the frame priority level. The value can range from 0 to 7, hence 8 different priority levels can be defined.

3. CFI (Canonical Format Indicator): if this 1-bit field has the value 1, the MAC address is in non-canonical format; if the value is 0, the MAC address is in canonical format.

4. VID (VLAN Identifier): this 12-bit field identifies the VLAN to which the frame belongs. The value can range from 0 to 4095.

The eight priority levels provided by the 3 PCP bits allow grouping of different traffic types. The eight traffic types recommended by IEEE [13] that can benefit from simple segregation from each other are shown in Table 2:

Traffic type           Acronym  Description
Background             BK       activities that should not impact the use of the network
Best Effort            BE       for default use by unprioritized applications
Excellent Effort       EE       important best-effort type
Critical Applications  CA       has a guaranteed minimum bandwidth
Video                  VI       less than 100 ms delay
Voice                  VO       less than 10 ms delay
Internetwork Control   IC       in large networks comprising separate administrative domains
Network Control        NC       guaranteed delivery requirement

Table 2: Traffic types

Table 3 shows the correspondence between traffic types and priority values, as well as the PCP values. The default PCP value used for transmission by end stations is 0, and the default traffic type is Best Effort. Therefore, the Best Effort traffic type has a lower PCP value than the Background traffic type, although it has a higher priority [13].

Traffic type           Acronym  PCP          Priority
Background             BK       1 (lowest)   0
Best Effort            BE       0 (default)  1
Excellent Effort       EE       2            2
Critical Applications  CA       3            3
Video                  VI       4            4
Voice                  VO       5            5
Internetwork Control   IC       6            6
Network Control        NC       7 (highest)  7

Table 3: Recommended traffic type to priority mappings [13]

As mentioned before, the number of queues with different priorities on one bridge port ranges between one and eight. If there is one queue, all ready messages are queued in the same queue and arbitrated in FIFO order. Up to eight queues can be configured on one bridge port, allowing a separate queue for each of the eight message priority levels determined by the PCP value. The number of queues corresponds to the number of distinct traffic classes. Each traffic class has its own transmission selection algorithm and scheduled timing window. Messages that fall into one of the eight traffic types can be grouped to match the number of traffic classes. This mapping from traffic type to traffic class is the process of queuing frames, shown in Figure 4. IEEE recommends a default priority to traffic class mapping, given in Table 4. The reasoning behind such frame classification is given in Annex I of the IEEE 802.1Q Bridges and Bridged Networks Amendment [13].

            Number of available traffic classes
            1    2    3    4    5    6    7    8
Priority 0  0    0    0    0    0    1    1    1
Priority 1  0    0    0    0    0    0    0    0
Priority 2  0    0    0    1    1    2    2    2
Priority 3  0    0    0    1    1    2    3    3
Priority 4  0    1    1    2    2    3    4    4
Priority 5  0    1    1    2    2    3    4    5
Priority 6  0    1    2    3    3    4    5    6
Priority 7  0    1    2    3    4    5    6    7

Table 4: Recommended priority to traffic class mapping [13]. The elements in the table represent the queue ID to which each priority is mapped, depending on the number of available queues.
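The mapping of Table 4 can be expressed directly as a lookup. The sketch below is illustrative and not code from the thesis; the dictionary simply encodes the table, keyed by the number of available traffic classes, with each list indexed by priority value.

```python
# Recommended priority to traffic class mapping of Table 4, encoded as a
# lookup (illustrative sketch). Key: number of available traffic classes;
# list index: priority value; list element: queue ID (traffic class).
TRAFFIC_CLASS_MAP = {
    1: [0, 0, 0, 0, 0, 0, 0, 0],
    2: [0, 0, 0, 0, 1, 1, 1, 1],
    3: [0, 0, 0, 0, 1, 1, 2, 2],
    4: [0, 0, 1, 1, 2, 2, 3, 3],
    5: [0, 0, 1, 1, 2, 2, 3, 4],
    6: [1, 0, 2, 2, 3, 3, 4, 5],
    7: [1, 0, 2, 3, 4, 4, 5, 6],
    8: [1, 0, 2, 3, 4, 5, 6, 7],
}

def traffic_class(priority: int, n_classes: int) -> int:
    """Queue ID to which `priority` maps when `n_classes` queues exist."""
    return TRAFFIC_CLASS_MAP[n_classes][priority]

print(traffic_class(7, 8))   # 7: highest priority gets its own queue
print(traffic_class(0, 8))   # 1: priority value 0 placed above priority 1
```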

The selection for frame transmission between traffic classes is purely based on queue priority, hence the transmission selection algorithm is called the Strict Priority algorithm. Although prioritisation makes a distinction between more and less time-critical traffic, it does not give time-predictability guarantees, even for the highest-priority traffic. Prioritisation alone is not enough to eliminate lower-priority blocking caused by the transmission of less time-critical frames. If a switch starts to transmit a frame on its port, this transmission cannot be interrupted, not even by highly time-critical traffic. The high-priority frame has to wait in the switch buffer for the ongoing transmission to finish. This phenomenon is called the buffering effect in the switch. The effect is inevitable in standard Ethernet-based networks and causes non-determinism. Such non-deterministic behaviour is not an issue for certain applications used in environments like office infrastructures, where networks are used for file and email transfer; these applications do not depend on timely delivery of Ethernet frames. However, in the automation and automotive industries, Ethernet-based networks are used for safety applications and closed-loop control, so timely delivery is of utmost importance. AVB/TSN extends the standard Ethernet-based network with real-time capability by introducing mechanisms, like time-triggered scheduling, that ensure timely delivery under soft and hard real-time requirements.

Figure 4: Software Architecture of the Bridge with Strict Priority Selection


2.3.3. IEEE 802.1Qav - Forwarding and Queuing Enhancements for Time-Sensitive Streams

IEEE 802.1Qav Forwarding and Queuing Enhancements for Time-Sensitive Streams defines traffic shaping based on credit-based fair queuing. The selection for frame transmission is based on the queue priority as well as on the credit value the queue has. Assigning a Credit-Based Shaper algorithm to a queue places it into either traffic class A (tight delay bound) or traffic class B (loose delay bound). The Credit-Based Shaper transmission selection algorithm is supported in addition to the strict priority algorithm. Credit, measured in bits, is accumulated by queues as they wait for their frames to be transmitted and is spent by queues while their frames are being transmitted. The rate of credit accumulation and release can be adjusted on a queue-by-queue basis to produce a weighted queuing behaviour. The accumulation rate is called idleSlope and applies while the queue is waiting to be served; the release rate is called sendSlope and applies while the queue is being served, meaning its frames are being transmitted. Both idleSlope and sendSlope represent the rate of change of credit, in bits per second. The credit value sets an additional condition for transmission selection on queues that support the Credit-Based Shaper algorithm, as only queues with non-negative credit are eligible for transmission. It removes bursts in traffic and provides fair scheduling for lower-priority queues by limiting the bandwidth fraction allocated to CBS queues. This bandwidth fraction is given as:

bandwidthFraction = idleSlope / portTransmitRate    (1)

where portTransmitRate is the maximum transmission data rate provided by the MAC Service to the port supporting the egress queue. The value of idleSlope for the queue associated with traffic class N is equal to the operIdleSlope(N) parameter11 and can never exceed portTransmitRate.
The operIdleSlope(N) is the actual bandwidth, in bits per second, that is currently reserved for use by the corresponding queue. It is calculated based on the Stream Reservation Protocol and depends on the use of the allocated bandwidth of all queues supporting one port. The idleSlope parameter, in conjunction with the size of the frames being transmitted from the queue and the maximum time delay that a queue can experience before it is able to transmit a queued frame, places an upper bound on the burst size that can be transmitted from queues that use the algorithm [13].

On the other hand, the sendSlope is negative and determined by the idleSlope as given by Equation 2:

sendSlope = idleSlope − portTransmitRate    (2)

Figure 5 illustrates how the credit-based shaper operates under various conditions. The first timing frame (yellow background) illustrates frame transmission without latency, while the other parts illustrate different causes of frame latency: in the middle part (green background) it is due to conflicting frames, and in the last part (red background) it is due to burst regulation. In the first time frame of the graph, a class A message is queued as ready for transmission, and it is transmitted right away, as the bridge port was idle at the moment of the message's arrival. As the message is transmitted, the credit decreases at the sendSlope rate. When the message has been transmitted, as there are no new class A messages and the credit is less than zero, no messages are transmitted. The credit increases at the idleSlope rate, and when it reaches zero it stays at that value, as there are no messages ready for transmission. Then a conflicting frame starts being transmitted on the bridge port. While the conflicting frame is being transmitted, another class A message becomes ready for transmission, and the credit is not negative. But this time there will be latency, as the conflicting frame cannot be preempted. During this period of latency the credit increases at the idleSlope rate. After the conflicting frame has finished its transmission, the class A message can be transmitted, resulting in a decrease of credit at the sendSlope rate. This time a positive credit value remains even after the message transmission has ended, and as there were no other messages ready, the credit is reset to zero.
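The credit behaviour described by Equations 1 and 2 can be captured in a toy model. The sketch below is illustrative only (the constants and function names are my own, not from the thesis simulator TSNS): credit grows at idleSlope while a frame waits, shrinks at sendSlope while a frame transmits.

```python
# Minimal discrete sketch of credit-based shaping (illustrative).

PORT_RATE = 100e6                       # 100 Mbps link
IDLE_SLOPE = 0.5 * PORT_RATE            # assume 50% bandwidth reserved
SEND_SLOPE = IDLE_SLOPE - PORT_RATE     # Equation 2 (always negative)

def transmit(credit_bits: float, frame_bits: int) -> float:
    """Credit after transmitting one frame (credit must be >= 0 to start)."""
    tx_time = frame_bits / PORT_RATE
    return credit_bits + SEND_SLOPE * tx_time

def wait(credit_bits: float, seconds: float) -> float:
    """Credit after waiting (e.g., behind a conflicting frame)."""
    return credit_bits + IDLE_SLOPE * seconds

# A class A frame of 1000 bits sent from credit 0:
c = transmit(0.0, 1000)      # credit drops to about -500 bits
# Waiting 10 us behind a conflicting frame restores the credit:
c = wait(c, 10e-6)           # back to about 0 bits
```

With idleSlope at half the port rate, transmitting costs credit at the same rate that waiting earns it back, which is exactly the weighted-sharing behaviour the shaper is designed to produce.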
In the following time frame we again have a conflicting frame, but this time there is a burst of ready messages that can start transmission once the conflicting frame has finished.

11NOTE - The idleSlope value is equal to operIdleSlope(N) if the queues are not time-aware scheduled. If the enhancements for scheduled traffic are supported, the value of idleSlope is given by Equation 3.

Figure 5: Credit-Based Shaper Algorithm operations in different conditions

The messages will be transmitted in FIFO order, starting with M1, reducing the credit. As long as the credit is zero or positive, messages are eligible for transmission. In the figure, the credit is positive when M2 starts transmission and becomes negative by the time it has been transmitted. As the credit has a negative value, no other messages from the queue can be selected for transmission. The credit will start increasing, as there are ready messages that are not being transmitted, but as long as it is negative the port is free to be utilized by lower-priority queues. If a continuous stream of frames is made available to the shaper algorithm, i.e., there is always one frame queued awaiting transmission when the credit value reaches zero, the shaper will limit the burst transmission, giving a fair chance to lower-priority traffic classes. Figure 6 illustrates the logical location of the Transmission Selection Algorithm in the switch from a software architectural perspective.

Figure 6: Software Architecture of the Bridge with the Credit-Based Shaper Algorithm

Although the Credit-Based Shaper provides fair scheduling for low-priority packets, eliminates frame loss and smooths out traffic by removing traffic bursts, it unfortunately increases the average delay. The recommended mapping of priorities onto traffic classes, and the choice of traffic classes that support particular transmission selection algorithms, is defined in the IEEE amendments. It follows the principle that all traffic classes that support the credit-based shaper algorithm have higher priority than the remaining traffic classes, so that the algorithm operates as intended. This is made possible by regeneration of priorities.


2.3.4. IEEE 802.1Qbv - Enhancements for Scheduled Traffic

The key to providing on-time delivery of TSN frames is 802.1Qbv. This standard defines a means to transmit certain TSN Ethernet frames on a schedule, while allowing non-time-critical TSN Ethernet frames to be transmitted on a best-effort basis around the time-critical TSN frames. Time-critical traffic mostly consists of frames carrying control data in industrial and automotive control applications [14]. As the bandwidth occupied by time-critical traffic is often low, and the cost of providing a dedicated control network can be high, it can be desirable to transmit time-critical traffic together with other classes of traffic in the same network, as long as the timing requirements of the time-critical traffic are met. Thanks to the timing synchronization of all the network elements, end devices and bridges implementing IEEE 802.1Qbv can deliver critical communication very quickly and with no discernible jitter in delivery.

TAS utilizes a gate driver mechanism that opens and closes according to a known and agreed-upon schedule, for each port in a bridge. In particular, the Gate Control List (GCL) represents this schedule with binary values, 1 or 0 for open or closed, for each queue. If the gate is open, frames can be selected for transmission as long as they satisfy the conditions of the transmission selection algorithm associated with the queue. If the gate is closed, the queued frames in the associated queue cannot be selected for transmission. In an implementation that does not support the enhancements for scheduled traffic, all gates are in a permanently open state. The GCL is executed periodically, and this period is called the gating cycle. The gate mechanism ensures that, at specific times, only one traffic class or a set of traffic classes has access to the egress port. However, to ensure that the remaining traffic classes cannot affect the transmission of the scheduled traffic class, it is necessary to stop their transmission sufficiently in advance. Non-scheduled traffic has to finish its transmission before the time slot that is reserved for scheduled traffic. This time slot is called the protected time slot or protected window. It is important that the last unprotected transmission has completed before the protected transmission starts, as preemption is not possible. In the worst case, the last unprotected transmission would start one maximum-sized-frame transmission time before the start of the protected window. This is handled by adding a guard band just before the protected time slot. Frame transmission is not permitted during the guard band, and the egress port is in an idle state.
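The gate mechanism can be sketched as a periodic lookup over the GCL. The entry format and names below are illustrative, not the standard's management objects: each entry holds a duration and an 8-bit gate-state vector, one bit per traffic class.

```python
# Illustrative GCL lookup: (duration_ns, gate_states) entries, where bit i
# of gate_states is 1 if the gate of traffic class i is open.

GCL = [
    (500_000, 0b10000000),   # 500 us: only class 7 (scheduled) open
    (500_000, 0b01111111),   # 500 us: classes 0..6 open
]
CYCLE_NS = sum(d for d, _ in GCL)    # gating cycle length

def gate_open(traffic_class: int, t_ns: int) -> bool:
    """Is the gate of `traffic_class` open at absolute time t_ns?"""
    t = t_ns % CYCLE_NS                  # the GCL repeats every cycle
    for duration, states in GCL:
        if t < duration:
            return bool((states >> traffic_class) & 1)
        t -= duration
    return False

print(gate_open(7, 100_000))    # True  (inside the protected window)
print(gate_open(0, 100_000))    # False (gate closed for class 0)
print(gate_open(0, 600_000))    # True  (unprotected part of the cycle)
```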

Figure 7: Time-triggered scheduled traffic transmission

Figure 7 illustrates an example where a non-scheduled frame arrives during the guard band. Without the guard band, if the frame started transmission it would not finish before the start of the protected time slot, so the scheduled frame would not start transmission as scheduled and could suffer unacceptable latency. With the guard band, the frame does not start transmission until the scheduled traffic has finished transmission. In general, the length of the guard band is the size of the largest non-scheduled traffic frame in the switch [14]. In time terms, this is the time needed to transmit an Ethernet frame of 1542 bytes, and it depends on the network transmission rate. However, the start of the guard band does not need to be fixed if the implementation can determine, from the size of the next queued non-time-critical frames, that there is sufficient time for a frame to be transmitted in its entirety before the start of the protected traffic window. This means being able to calculate frame transmission times at run-time. A more detailed description can be found in Annex Q of the IEEE amendment [14].
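The guard-band length is simply the transmission time of the largest non-scheduled frame, so it can be checked with a one-line calculation (illustrative sketch):

```python
# Guard-band length as the wire time of a frame of `frame_bytes` bytes
# at a link rate of `link_bps` bits per second.

def guard_band_us(frame_bytes: int, link_bps: float) -> float:
    return frame_bytes * 8 / link_bps * 1e6

print(guard_band_us(1542, 100e6))   # 123.36 us at 100 Mbps
print(guard_band_us(1542, 1e9))     # 12.336 us at 1 Gbps
```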

The operation of the credit-based shaper algorithm differs if the switch supports the enhancements for scheduled traffic. Figure 8 illustrates the change of credit when the shaper is combined with time-aware scheduling12.

Figure 8: Operation examples of the CBS with TAS

In switches that do not support the enhancements for scheduled traffic, the credit-based shaper algorithm is implemented such that, while the queue is not being served, negative credit always increases. With the support of enhancements for scheduled traffic, the algorithm behaves differently inside the guard band and the protected window: there, the value of credit stays constant. If a frame is short enough to be transmitted within the guard band, the credit decreases while the frame is being transmitted, even though this happens during the guard band. However, when the transmission is finished, the credit does not follow the usual behaviour (no ready frames: credit reset to zero; credit negative: increase; new ready frames: continue transmission, credit decrease). Instead, the credit value remains constant until the end of the protected window.

12NOTE - Figure 8 illustrates an example with a fixed guard band length.


The idleSlope value has a different definition if the switch is enhanced with time-aware scheduling. It is defined by Equation 3:

idleSlope = operIdleSlope(N) · OperCycleTime / GateOpenTime    (3)

where OperCycleTime is the operational value of the gating cycle, and GateOpenTime is equal to the total amount of time that the gate of queue N is open during the gating cycle [14]. In switches that do not support time-aware scheduling, as mentioned before, the gate is always open, so the total amount of time the gate is open during the gating cycle is precisely the gating cycle, implying that idleSlope = operIdleSlope(N), as defined in Section 2.3.3. The implementation of the Time-Aware Scheduler from a software architecture perspective is illustrated in Figure 9, as the Gating Mechanism.
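A quick worked example of Equation 3 (the values below are invented for illustration): if class N reserves 10 Mbps but its gate is open for only a quarter of the gating cycle, the credit must accumulate four times faster to deliver the reserved bandwidth.

```python
# Worked example of Equation 3 (illustrative values).

oper_idle_slope = 10e6        # operIdleSlope(N): reserved bandwidth, bits/s
cycle_time = 1e-3             # OperCycleTime: gating cycle of 1 ms
gate_open_time = 0.25e-3      # GateOpenTime: gate of queue N open 0.25 ms

idle_slope = oper_idle_slope * cycle_time / gate_open_time
print(idle_slope)             # about 4.0e7 bits/s, i.e. 40 Mbps
```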

Figure 9: Software Architecture of the Bridge with Time-Aware Scheduler

2.3.5. IEEE 802.3br and 802.1Qbu Interspersing Express Traffic (IET) and Frame Preemption

One of the key challenges in future Ethernet-based automotive and industrial networks is high port utilization and low-latency transport of time-critical traffic. Sending Ethernet frames non-preemptively introduces a major source of delay, as in the worst case a time-critical high-priority frame might be blocked by a non-time-critical frame that started transmission just before the time-critical frame. Hence, a time-critical high-priority frame can be delayed by a frame of lower priority. IEEE Std 802.3br and IEEE Std 802.1Qbu introduce Ethernet frame preemption to address this problem. Frame preemption is the suspension of the transmission of a preemptable frame, allowing one or more express frames to be transmitted before resuming transmission of the preemptable frame [15].

The IEEE working groups 802.1 and 802.3 collaborated to specify the frame preemption technology since the technology required both changes in the Ethernet Media Access Control (MAC) layer that is under the IEEE 802.3 Working group, as well as changes in the bridge management protocols that are under the IEEE 802.1 Working group. Hence, frame preemption is described in two different standards documents: IEEE 802.1Qbu13 for the bridge management component and IEEE 802.3br14 for the Ethernet MAC component.

13IEEE 802.1Qbu: Available from [Online]: https://ieeexplore-ieee-org.ep.bib.mdh.se/document/7553415
14IEEE 802.3br: Available from [Online]: https://ieeexplore-ieee-org.ep.bib.mdh.se/document/7592835


The frame preemption mechanism, on the physical level, is specified in the Interspersing Express Traffic (802.3br) standard, while the management and configuration mechanisms for frame preemption are specified in the Frame Preemption (802.1Qbu) standard. To facilitate one-level frame preemption, IEEE 802.3br separates a given bridge egress port into two MAC service interfaces, namely the preemptable MAC (pMAC) service interface and the express MAC (eMAC) service interface, as shown in Figure 10. The MAC Merge Sublayer supports interspersing express traffic with preemptable traffic; this is achieved by attaching an express MAC and a preemptable MAC to a single Reconciliation Sublayer (RS) service.

Figure 10: Preemptable MAC (pMAC) and express MAC (eMAC) scheme [2]

Each Ethernet traffic class is mapped to either the express or the preemptable MAC interface. Frames of express classes cannot be preempted, and preemptable frames cannot preempt, regardless of their priority. In particular, preemptable frames cannot be preempted by other preemptable frames, and express frames cannot preempt other express frames. Express frames may preempt only preemptable frames. Preemption can therefore occur only between an express and a preemptable frame, where the express frame has higher priority than the preemptable frame. When preemption occurs, the transmission of the preempted frame is resumed only after the express frames have been completely transmitted. A preemptable frame can be preempted multiple times. When frames are preempted, they are split into fragments and reassembled in the MAC layer, so that complete frames are passed to the Ethernet physical layer. These frames carry the split payload and have slightly different formats, so that the first, middle and last fragments of a preempted frame can be distinguished. Ports supporting frame preemption transmit frames where the start frame delimiter (SFD) byte is replaced by either:

• SMD-E for express frames,
• SMD-Sx for the start fragment of a preemptable frame, or
• SMD-Cx for a continuation fragment of a preemptable frame.

Continuation fragments also have one byte less of preamble, which is used for the FCnt byte, the fragment counter. For preemptable frame fragments, the 4-byte CRC field at the end of the frame, followed by an IFG, is an MCRC instead of an FCS, with the exception of the last fragment, which is terminated by an FCS, as is an express frame. Another difference in frame format between preemptable frame fragments is that, except for the start fragment, there is no need to transmit the source and destination MAC addresses, the Q-Tag or the Ethernet type, as this is one-level preemption and only one frame can be preempted at a time.
This implies that the payload of a continuation fragment has to be at least 60 bytes, compared to the 42 bytes of the start fragment, to meet the minimum Ethernet frame size of 84 bytes. The minimum frame size requirement imposes a constraint on the preemption mechanism: a preemption cannot happen if the frame or its continuation fragment would be split into two fragments that do not fulfil the minimum frame requirement. This constraint causes a possible express frame latency. The smallest frame that can be preempted has a payload of 102 bytes, as this can be split into 42 bytes and 60 bytes to form a start and a continuation fragment that both fulfil the minimum Ethernet frame size. A frame with a 101-byte payload is the worst-case frame that cannot be preempted, and will cause the maximum express frame latency. The whole frame size, together with the preamble (7), SMD-Sx (1), DA (6), SA (6), Q-Tag (4), ET (2), FCS (4) and IFG (12), is 143 bytes [16].
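The minimum-fragment constraint reduces to a one-line check (illustrative sketch; the constant names are my own): a frame can be split only if both resulting fragments meet the minimum size.

```python
# Minimum-fragment constraint (illustrative). A start fragment needs at
# least 42 payload bytes (42 + DA, SA, Q-Tag, type and CRC = 64 bytes);
# a continuation fragment carries no header fields, so it needs at least
# 60 payload bytes (60 + CRC = 64 bytes).

MIN_START_PAYLOAD = 42
MIN_CONT_PAYLOAD = 60

def can_preempt(payload_bytes: int) -> bool:
    """Can a preemptable frame with this payload be split at all?"""
    return payload_bytes >= MIN_START_PAYLOAD + MIN_CONT_PAYLOAD

print(can_preempt(102))   # True  - splits into 42 + 60 bytes
print(can_preempt(101))   # False - worst-case non-preemptable frame
```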

In a switch that does not support the preemption mechanism, in the worst case a high-priority frame can be delayed by about 123.36 µs per 100 Mbps switch by lower-priority frames. The worst case is when the high-priority frame is blocked by a maximum-size low-priority frame of 1542 bytes, because under non-preemptive frame transmission a frame that is in transmission is guaranteed to finish without interruption. This might be too much for time-critical control applications [16]. With frame preemption, the maximum high-priority frame latency due to lower-priority blocking is reduced to about 10.16 µs per 100 Mbps switch. Frame preemption, however, also causes an additional delay for preempted frames, the preemption overhead, which can potentially have a negative impact on the performance, depending on the circumstances. The preemption overhead is caused by an additional 24 bytes that are sent per preemption. This causes a latency of around 1.92 µs per 100 Mbps switch. In a 10 Gbps switch the delay is lowered from around 1 µs to 0.01 µs. The 123.36 µs in a 100 Mbps switch and the 1.23 µs in a 10 Gbps switch are the upper bounds on the additional delay before a MAC client can send an express frame when the preemption capability is not used. At higher operating speeds this additional delay gets smaller in proportion to the speed, reducing the advantage of the preemption mechanism; the preemption capability is thus most useful at lower operating speeds.
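The figures quoted above follow directly from the frame sizes and link rates; as a quick illustrative check:

```python
# Wire time of `nbytes` bytes at `rate_bps` bits/s, in microseconds.

def tx_time_us(nbytes: int, rate_bps: float) -> float:
    return nbytes * 8 / rate_bps * 1e6

print(tx_time_us(1542, 100e6))  # 123.36 us: max-size frame, no preemption
print(tx_time_us(127, 100e6))   # 10.16 us: max non-preemptable fragment
print(tx_time_us(24, 100e6))    # 1.92 us: 24-byte overhead per preemption
print(tx_time_us(1542, 10e9))   # about 1.23 us: same blocking at 10 Gbps
```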

Figure 11: Preemption Mechanism Trace Example

Meeting time-critical transmission requirements was addressed with the gating mechanism, but the time-scheduling of traffic also introduced guard bands. They cause high latency for non-scheduled traffic and ineffective utilization of the network, as there is a significant length of time during which no frames can be transmitted. A further purpose of these amendments is to reduce the latency caused by guard bands. The guard band is used so that the scheduled traffic is transmitted without interference.

Figure 12: Hold and Release mechanism enabled vs. disabled

If the preemption capability is used, preemptable frames cannot interfere with and block express traffic. As noted above, the worst-case lower-priority blocking that an express frame can suffer is the transmission duration of a maximum-length non-preemptable fragment. To ensure that the scheduled traffic cannot experience this blocking, it is enough to introduce a reduced guard band. This reduced guard band is an explicit 127-byte guard band implemented with the Hold and Release mechanism. This mechanism generates a Hold signal 127 byte-times before the protected window to preempt the lower-priority frame. If, at the moment the Hold signal is sent, the untransmitted part of the frame is 127 bytes or less, preemption will not be possible, as this end fragment is non-preemptable. Still, thanks to the Hold signal, the preemptable frame fragment will finish transmission before the arrival time of the scheduled data. If the untransmitted preemptable part were bigger, it would be preempted and there would be a small port utilization loss. But this loss is smaller than it would be without the preemption mechanism, where the standard guard band is used instead of the reduced one. The Release signal is generated to signal the end of the protected window, so that the preempted frame can continue transmission.
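The Hold decision can be sketched as follows. This is illustrative only; the threshold and names are taken from the description above, not from the standard's state machines.

```python
# Hold decision at the moment the Hold signal arrives, 127 byte-times
# before the protected window (illustrative sketch).

HOLD_ADVANCE_BYTES = 127

def on_hold(remaining_bytes: int) -> str:
    """React to the Hold signal given the untransmitted part of the frame."""
    if remaining_bytes <= HOLD_ADVANCE_BYTES:
        return "drain"     # non-preemptable tail still finishes in time
    return "preempt"       # split now; resume on the Release signal

print(on_hold(100))    # drain
print(on_hold(1000))   # preempt
```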

The result of using scheduling, HOLD/RELEASE and preemption in combination is that the express traffic’s protected window can be completely protected from interference from preemptable traffic, while at the same time reducing the impact of the protected window on the amount of bandwidth that is available to preemptable traffic.


3. Related Work

In this section, besides examining the state of the art of the Frame Preemption standard in Section 3.3., we also examine the other key components of TSN individually in Section 3.2., and as a whole in Section 3.1. The first section also takes the alternatives to TSN into account, in terms of hard real-time networks. In the related work we describe research and extensions relevant to Ethernet TSN and the preemption mechanism; we also provide a brief overview of existing performance analysis approaches for TSN networks and some experimental results relevant to TSN. Most of the works we reference are simulation-based experiments similar to our work.

3.1. Performance evaluation of real-time Ethernet Standards

In the work by Paula Doyel [17], several existing real-time solutions are evaluated. The existing real-time solutions, whose advantages and disadvantages are addressed, include EtherNet/IP, PROFInet and EtherCAT, among others. This work is one of the first research papers dealing with real-time Ethernet, and it served as a base for many subsequent research papers. It also introduced the then-new IEEE 1588 standard that ensures time synchronization. Performance analysis of the IEEE Std. 802.1 Ethernet Audio/Video Bridging standard has been explored in multiple works, one of them by Lim et al. (2012) [18], who analyze the AVB standard based on a simulation approach. Ashjaei et al. (2017) [19] discuss modeling and analysis of an AVB communication network within a model- and component-based software development framework. Similar performance analysis and system modelling of vehicle Ethernet communication has been done by Jiaheng Qiu [20]. Mubeen et al. (2019) [21] presented the first holistic modeling approach for TSN in model- and component-based software development networks. They explicitly modeled the timing requirements imposed by TSN-based communication and, based on these models, presented an end-to-end timing model for TSN-based networks of distributed embedded systems. The work by Bello et al. [22] gives an up-to-date overview of modern technologies for distributed embedded systems, among which TSN is discussed. The existing and future automotive software development solutions are the focus of the work. The work concludes that TSN is key for future automotive feature development, like autonomous driving. This gives the motivation for further investigation and ultimately for this thesis. The work by Nasrallah et al. [2] is an extensive survey on Ultra-Low Latency Networks, including the IEEE TSN and IETF DetNet standards.
They have identified the pitfalls and limitations of the existing standards for networks that require low latency, and the research studies addressing them. This survey can thus serve as a basis for the development of standards enhancements and future Ultra-Low Latency research studies.

3.2. Performance analysis of individual TSN standards

Patti and Bello [12] performed a performance analysis, in 2019, of the IEEE 802.1Q standard, which is one of the base standards of Time-Sensitive Networking. They carried out a quantitative evaluation of the network performance in terms of maximum end-to-end delays and absolute jitter, comparing different network configurations and traffic patterns. The analysis results were obtained using the OMNeT++ framework. The work in [23] focuses on a simulation-based performance analysis of the TSN IEEE 802.1Qbv and IEEE 802.1Qbu standards. Performance is analyzed and measured in terms of end-to-end transmission latency and link utilization, through five scenarios. Other works focus on the effects of combining TAS with credit-based shaping [24], [25]. Schedule synthesis is required, e.g., to implement a TDMA scheme with TAS. In the context of TAS, this schedule is referred to as the Gate Control List (GCL). The calculation of GCLs for stream-based scheduling is challenging because it is, in general, an NP-hard problem, and it is often solved by constraint-based programming or heuristics [26].

3.3. Evaluation of TSN with Frame Preemption

The work by Jia et al. (2013) [27] presented a simulation-based evaluation of the IEEE 802.3br standard. The paper compares IEEE 802.3z Gigabit Ethernet networks with the IEEE 802.1Qbu frame preemption mechanism applied against the existing non-preemptive priority scheduling schemes. The results confirm that using the frame preemption mechanism reduces the end-to-end latency and absolute jitter of time-critical traffic. The same year, an experiment-based evaluation of the frame preemption mechanism was published in Germany. The work by Kim et al. [28] proposes a custom preemption mechanism and implements a controller based on it. The implementation results show that latency and jitter of time-critical traffic are reduced when compared to a non-preemptive Ethernet controller. Thiele and Ernst [16] analysed Ethernet under preemption in 2016. They present a formal worst-case performance analysis of Time-Sensitive Networks with frame preemption. Using a realistic automotive Ethernet setup, they derived worst-case latency bounds under preemption. They also compared the worst-case performance of standard Ethernet and Ethernet TSN under preemption with the worst-case performance analysis results of non-preemptive implementations of these standards. To the best of our knowledge, this is the only work that specifically analyses the TSN preemption standard to derive worst-case performance bounds, making it a very important work for this thesis, as we are analysing the performance of networks with multiple frame preemptions. A recent study [29] (2020) discusses different ways of using TAS and the preemption mechanism. It presents the drawbacks and advantages of stream-based TAS, class-based TAS, and frame preemption. A simulator-based comparison of these mechanisms was conducted using the TSN network simulation tool NeSTiNg, for different scenarios. Moreover, the study introduces calculation formulas for class-based scheduling.

3.4. Existing network simulators

There are existing TSN simulators, such as NeSTiNg15 or CoRE4INET16, that have already been developed. Both of these are simulation models within a larger simulation environment. They were developed in the open-source INET Framework17, which is a model library that supports a wide variety of networks and is especially suitable for researchers and students working with communication networks. It offers different models, link-layer protocols, routing protocols, sensor protocols and many more components. Besides evaluation, it can be used for designing and validating new protocols. The INET Framework generates simulation models for the OMNeT++ Simulation IDE18. The OMNeT++ environment provides the simulation kernel and other libraries for network description, simulation configuration and the recording of simulation results.

CoRE4INET is one of the INET Framework extensions that supports the TSN components. It is an event-based network simulator of real-time Ethernet. It was first introduced in 2011 as TTE4INET [30], as it supported TTEthernet. When more real-time Ethernet protocols were added, the project was renamed from TTE4INET to CoRE4INET to show that it does not only contain time-triggered protocols. The simulator supports:

• Best-Effort Crosstraffic
• IEEE 802.1Q / IEEE P802.1p VLANs and Priorities
• Time-Sensitive Networking (TSN)
• IEEE 802.1 Audio/Video Bridging (AVB)
• TTEthernet (AS6802)
• IP over Realtime-Ethernet

Another simulation model extending the INET Framework to support the TSN components is NeSTiNg. It supports frame tagging, different shaper components and the preemption mechanism. The model was initially developed by a group of students during a curricular project and is continuously extended at the Distributed Systems group of IPVS, University of Stuttgart. David Hellmanns, one of the simulator authors, introduced NeSTiNg at the September 2018 IEEE 802.1 Working Group interim meeting in a presentation [31].
The paper [32] published in 2019 describes the NeSTiNg simulator. Although the performance analysis could have been done using the OMNeT++ environment and

15NeSTiNg Open Source Code: Available from [Online]: https://gitlab.com/ipvs/nesting
16CoRE4INET Open Source Code: Available from [Online]: https://github.com/CoRE-RG/CoRE4INET
17INET Framework: Available from [Online]: https://inet.omnetpp.org
18OMNeT++ Simulation IDE: Available from [Online]: https://omnetpp.org

one of the developed TSN-supporting simulation models, we decided to build one from scratch. This gives us complete freedom to manipulate every aspect of the network and the possibility of writing simple code targeting the specific features we want to evaluate. Nevertheless, the TSN-specific simulation models should be mentioned as an inspiration for the development of some model parts in TSNS.

3.5. Discussion

From investigating the related work, we can conclude that the major gap in the body of knowledge on the preemption mechanism is the analysis of different preemption configurations. Most works that perform a simulation-based analysis are limited to simulators that do not support multiple express traffic classes, either because the network behaviour in such an environment is not known, or because it is not the focus of the work and would have to be investigated separately. That analysis will be done in this thesis, along with the introduction of a custom simulator tool that focuses on various preemption configuration patterns in a TSN network.

4. Method

The most suitable research method is the experimental method, since the purpose of the thesis is to design and implement a simulation tool for TSN and conduct a performance analysis based on it. Computer science, as a relatively modern scientific discipline, broadly investigates new and better solutions using the experimental method, as it is well suited for evaluation. As explained by Amaral [33], an experiment is often divided into two phases: the exploratory phase and the evaluation phase. The exploratory phase refers to the body of knowledge and helps identify the questions that should be asked about the system under evaluation, whereas in the evaluation phase those questions are answered. In the exploratory phase, the important step of the literature review is carried out, as it establishes the knowledge of the research topic, identifies the state of the art in the field, and puts the research in the context of the larger body of work by critically assessing the existing articles, conference proceedings, books, and other publications. The published work is not just summarised but also evaluated, to understand the open research questions in the field and show the relevance of the thesis research. In Figure 13, based on the framework of research from Nunamaker, Chen and Purdin [3], the two phases are shown graphically.

Figure 13: Research framework [3]

Results from experimentation may be used to refine theories and improve systems, as stated by


Nunamaker, Chen and Purdin [3]. Many experiments conducted in the computer science discipline, and research studies based on these experiments, lack validity and relevance because of the careless fashion in which the experiments were conducted and reported, as noted by Amaral [33].

A good experimental research study follows these conventions, as advised by Jose Nelson Amaral [33]:

• Record keeping: Labeling and filing the results in ways that will make it possible to retrieve and check them later, as an experiment has to be "replicable". Annotating, filing, and documenting are essential for the future relevance of an experimental study.

• Experimental setup design: Documenting the findings after the exploratory phase, as well as redefining the research question and experiment setup as greater knowledge is obtained.

• Reporting experimental results: The report should be thorough and transparent. The numerical results presented should also be accompanied by a carefully written discussion of the results. This discussion should provide insight into those results and contribute to the body of knowledge.

The independent and dependent variables will be specified after the exploratory phase. According to Farkas [34], several performance metrics can be tracked to test the performance of the network, such as data transport latency, delay variation and data loss. The performance analysis will be done first by an investigation, followed by simulation of multiple scenarios. The more experiments completed, the stronger the support for the hypothesis. It is also good practice to have several replicate samples for one case for variability estimation.

The steps towards finishing this thesis are:

(1) Investigation of the TSN-related standards and research on the TSN mechanisms
(2) Investigation of the related work on preemption support for Ethernet, and in particular for TSN

(3) Analysis of the available simulation frameworks for TSN
(4) Investigation of the effects of defining multiple express traffic classes on the overall network performance
(5) Use of a network simulator to experiment with multiple cases to showcase the effects of having more express traffic classes

(6) Results analysis and report writing

To understand the state of the art, an extensive study of the relevant TSN standards will be conducted, as well as a review of related works. The results of the literature review will be summarised in the background section, and the existing work on the subject will be mentioned and explained in the related works section. This is the primary step towards developing a TSN-supporting network egress port simulator, along with analysing the existing simulators. For the development of the network simulator, an already developed basic simulator will be used. The basic simulator supports one fixed network configuration where one ST traffic class is express, accompanied by three preemptable traffic classes. The basic simulator supports synchronised time and the FIFO logic of queues, as well as the credit-based shaper for the two middle-priority queues. To upgrade the simulator so it can support any traffic configuration pattern, we will implement the possibility of having different queue numbers that can be assigned any transmission selection algorithm (the option to support the credit-based shaper or not), the automated calculation of the idleSlope, and the gating mechanism of the Time-Aware Scheduler, so that any traffic class can be scheduled; we will also enable any class to be express, taking into account the different behaviour in terms of guard-bands. The basic simulator is written in C, and the upgraded simulator will be as well.
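The per-queue configurability described above could be captured in a small data structure. The following C sketch is only illustrative; the type and field names are our own and are not taken from the TSNS source code.

```c
#include <assert.h>

/* Hypothetical per-queue configuration for the upgraded simulator:
 * each queue gets a transmission selection algorithm, an express or
 * preemptable MAC assignment, an optional TAS gate, and an idleSlope
 * that is only meaningful for credit-based-shaper queues. */
typedef enum { TSA_STRICT_PRIORITY, TSA_CREDIT_BASED } TsAlgorithm;

typedef struct {
    TsAlgorithm tsa;        /* transmission selection algorithm            */
    int         express;    /* 1 = express (eMAC), 0 = preemptable (pMAC)  */
    int         gated;      /* 1 = gated by the Time-Aware Scheduler       */
    double      idle_slope; /* bit/s; used only when tsa == TSA_CREDIT_BASED */
} QueueConfig;
```

For example, the fixed configuration of the basic simulator (one express ST queue, two preemptable CBS queues, one preemptable best-effort queue) would be an array of four such `QueueConfig` entries.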


To calculate the performance of an egress port model in a single-node network, we will calculate the wait time of each traffic frame. As preemption affects the time needed for a traffic frame to transmit, since it can interfere with a transmitting frame, the response time of a frame in the egress port will also be computed. Both terms are defined at the beginning of Sec. 6. These will be the output values of the simulator. To get relevant results, we will suggest possible scenarios that could show the effects of the preemption mechanism. Some scenarios will have a fixed traffic configuration and a fixed traffic stream set to evaluate a specific expected outcome; other scenarios will have a fixed traffic configuration pattern and randomly generated traffic stream sets, in order to evaluate the effects of the configuration itself, regardless of the message set. To eliminate the influence of the traffic stream set, a sufficient number of simulations will be run for one configuration. As networks in industrial environments mostly transmit a large amount of traffic data, and taking into consideration the large number of simulations, the results will not be shown for each traffic stream individually but rather in a compact form. The delay computation will be done for each traffic class with the best-case, worst-case and average value over all the simulations.
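The compact, per-class reporting of best-, worst- and average-case delays could be implemented with a small accumulator like the following. This is a minimal sketch; the names are illustrative and not taken from the TSNS source.

```c
#include <assert.h>

/* Hypothetical per-class delay accumulator: one instance per traffic
 * class, fed one delay value per frame over all simulation runs. */
typedef struct {
    double best, worst, sum;
    long   count;
} DelayStats;

/* Fold one observed delay into the running best/worst/average. */
static void stats_add(DelayStats *s, double delay) {
    if (s->count == 0 || delay < s->best)  s->best  = delay;
    if (s->count == 0 || delay > s->worst) s->worst = delay;
    s->sum += delay;
    s->count++;
}

/* Average-case delay over all recorded frames. */
static double stats_avg(const DelayStats *s) {
    return s->count ? s->sum / (double)s->count : 0.0;
}
```

One `DelayStats` per traffic class (and per metric, e.g. wait time or response time) is enough to produce the compact result tables described above without storing every individual frame.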

5. Ethical and Societal Considerations

This work does not contain any ethical research issues; at the very least, no issues can be found at this moment of the research. The research is done in the hope of bringing new insight into the frame preemption standard of time-sensitive networks, which makes networks real-time capable in a standardized way. It is purely a scientific experiment done in a simulated environment, with no yet-known future impact on economic, political, social or ecologically sustainable development.


6. Effects of the Preemption Mechanism

In this section, we will investigate the body of knowledge and make assumptions on the effects the preemption mechanism has on network performance. Moreover, we will explain some conclusions made based on the a priori information collected from the IEEE standards and related works. From some conclusions we will derive assumptions that will later be considered when configuring the switch in the TSNS simulator, while others will be used to create use cases that will be evaluated with the TSNS simulator. The results of the evaluation can be found in Section 8.

To begin, we will first define the terms in which the network performance will be measured. Wait time is the difference between start time and arrival time, where start time is the time instance at which the frame begins transmission on the egress port, and arrival time is the time instance at which the frame is queued. The wait time is caused by the queuing delay and the forwarding latency. The queuing delay of a frame is caused by the FIFO logic of the queue; it is the time during which a frame waits for the frames in front of it in the queue to transmit. The forwarding latency is caused by the transmission selection algorithm; it is the time during which the traffic class of that frame is waiting to be selected.

The preemption mechanism also affects the response time. Response time is the difference between end time and arrival time, where end time is the time instance at which frame transmission is finished. As preemption is a MAC-layer operation, frame transmission can be interrupted, thus affecting the response time of a frame. Jitter is the variance of these delays, and it shows inconsistencies in frame transmission that are unfavourable for time-sensitive networks used in real-time environments. Jitter in the wait time is called release jitter, and jitter in the response time is called response jitter.
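The metric definitions above can be stated directly as code. The following C sketch is illustrative only; the struct and function names are our own, not identifiers from TSNS, and jitter is computed here as the max-min spread of a delay series.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical per-frame timestamps, in simulator time units. */
typedef struct {
    uint64_t arrival; /* time instance at which the frame is queued        */
    uint64_t start;   /* time transmission begins on the egress port       */
    uint64_t end;     /* time transmission finishes (may be delayed by
                         preemption of this frame)                         */
} FrameTimes;

/* Wait time = start - arrival (queuing delay + forwarding latency). */
static uint64_t wait_time(const FrameTimes *f)     { return f->start - f->arrival; }

/* Response time = end - arrival; under preemption the end time can be
 * pushed back by interrupting express frames. */
static uint64_t response_time(const FrameTimes *f) { return f->end - f->arrival; }

/* Jitter of a delay metric over n frames of one stream (max - min). */
static uint64_t jitter(const uint64_t *delays, int n) {
    uint64_t min = delays[0], max = delays[0];
    for (int i = 1; i < n; i++) {
        if (delays[i] < min) min = delays[i];
        if (delays[i] > max) max = delays[i];
    }
    return max - min;
}
```

Applying `jitter` to a stream's wait times yields its release jitter, and applying it to the response times yields its response jitter.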

6.1. Schedulability

The effects will be measured and analysed on a schedulable set of traffic streams, because otherwise the traffic frames that do not meet their deadlines might not even be selected for transmission in bounded simulation time; the latency for these frames is not a finite measurable number. For the frames to meet their deadlines, a correct bandwidth fraction has to be allocated to each traffic class, so that the sum of the bandwidth fractions does not exceed the total egress port bandwidth (Eq. 4). The bandwidth is allocated to each traffic class depending on the required utilization. For the traffic streams of a class to meet their deadlines, the class utilization must not exceed the bandwidth fraction allocated to it; the utilization is thus a lower bound on that bandwidth fraction (Eq. 5).

\sum_{n=0}^{N-1} bandwidthFraction(n) \le portTransmissionRate \quad (4)

\sum_{s=0}^{S-1} \frac{transTime(s)}{period(s)} \le bandwidthFraction(n) \quad (5)

where N is the total number of traffic classes and n represents each class as n = 0, ..., N-1, and where S is the total number of traffic streams of class n and s represents each stream as s = 0, ..., S-1.
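Equations 4 and 5 can be checked mechanically for a given allocation. The following C sketch does exactly that; the type and function names are hypothetical, and rates are assumed to be normalized to the same unit as the port transmission rate.

```c
#include <assert.h>

/* Hypothetical stream descriptor. */
typedef struct {
    double trans_time; /* time to transmit one frame of the stream */
    double period;     /* stream period, in the same time unit     */
} Stream;

/* Left-hand side of Eq. 5: utilization of one traffic class. */
static double class_utilization(const Stream *s, int count) {
    double u = 0.0;
    for (int i = 0; i < count; i++)
        u += s[i].trans_time / s[i].period;
    return u;
}

/* Eq. 5: each class's utilization must fit in its bandwidth fraction;
 * Eq. 4: the fractions together must not exceed the port rate. */
static int is_schedulable(const double *fraction, const double *util,
                          int n_classes, double port_rate) {
    double total = 0.0;
    for (int i = 0; i < n_classes; i++) {
        if (util[i] > fraction[i]) return 0; /* Eq. 5 violated */
        total += fraction[i];
    }
    return total <= port_rate;               /* Eq. 4 */
}
```

A traffic set accepted by `is_schedulable` satisfies both constraints and is therefore a candidate for the bounded-time simulations described above.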

6.2. Guard band size

In this section, we will discuss some general conclusions about the guard-band that protects time-critical frame transmission from the interference of other traffic streams. The explicit and implicit use of the guard-band is mentioned in the IEEE standards for basic situations; this section presents a broader view of the guard-band for various configurations of traffic classes. The most notable is the transmission selection for a configuration with multiple express traffic classes. The reasoning behind using a guard-band with the length of the maximal frame size of the traffic set, when preemption is not supported in the switch, is explained in section 2.3.4. The value of the guard-band is upper-bounded by the time needed to transmit the maximal Ethernet MTU frame with a 1500-byte payload and lower-bounded by the time needed to transmit the minimal Ethernet frame with a 42-byte

payload. The 42-byte MAC header is added to the payload, so the upper bound is the time needed to transmit 1542 bytes of data (123.36 µs in a 100 Mbps switch) and the lower bound is the time needed to transmit an 84-byte data frame (6.72 µs in a 100 Mbps switch).

When the preemption mechanism was introduced in section 2.3.5., the sufficiency of the hold and release (HR) signals instead of a guard-band was explained. The hold signal creates an explicit 127-byte guard-band, which is the maximum non-preemptable frame fragment size. This HR guard-band is sufficient only if the scheduled traffic is being protected from the interference of a preemptable traffic frame. With the assumption that the highly time-critical traffic streams are scheduled offline in a way that they do not interfere with each other's protected time frames, the guard-bands target the less time-critical traffic streams that are not scheduled. If the non-scheduled traffic streams are express, the HR guard-band is not sufficient. In a scenario where a lower-priority express frame is being transmitted when the hold signal is generated, the lower-priority frame cannot be preempted, as express frames cannot preempt other express frames, as defined by the IEEE standards. The lower-priority frame would continue transmission and could interfere with the ST frame, which in the worst-case scenario would completely block the highly time-critical data transmission.

The HR guard-band exists to ensure zero latency of scheduled traffic, but the standard allows a switch configuration where the HR guard-band is disabled. In that case, the worst-case latency of scheduled traffic is the time needed to transmit a 127-byte frame. But again, this scenario is for the case where the non-scheduled traffic is preemptable. If the non-ST frame is express, the protected window has to be guarded with the standard guard-band, irrespective of the HR signals.
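The guard-band bounds quoted above follow from a single transmission-time formula. A minimal sketch (the function name is ours):

```c
#include <assert.h>

/* Transmission time in microseconds of `bytes` of data on a link of
 * `mbps` megabits per second: t = bytes * 8 / mbps. */
static double trans_time_us(int bytes, double mbps) {
    return (double)bytes * 8.0 / mbps;
}
```

With it, the upper guard-band bound is `trans_time_us(1542, 100.0)` (1500-byte payload plus 42 bytes of overhead, giving 123.36 µs at 100 Mbps) and the lower bound is `trans_time_us(84, 100.0)` (6.72 µs), matching the values in the text.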

6.3. Preemptable configuration of scheduled traffic

In this section, we will explore a specific preemption configuration where the scheduled traffic class is set to be preemptable. The purpose is to show the derivation of a conclusion that is fundamental for scheduled traffic in networks that support the preemption mechanism: to guarantee no wait time and no release jitter, scheduled traffic is always express. Intuitively this is true, as scheduled traffic is highly time-critical and should not endure interruptions, but we will show the reasoning behind this assumption, which will be incorporated when implementing the TSNS simulator.

In a scheduled traffic transmission with both preemptable and express traffic, when the scheduled traffic is express, the immediate forwarding of scheduled traffic is guaranteed by the protected windows and guard-bands. As explained in section 6.2., the implicit standard guard-band blocks the start of transmission of any non-scheduled traffic, while the HR explicit guard-band will preempt any traffic that is being transmitted by sending the Hold signal to the MAC Merge Sublayer, guaranteeing no interference with the scheduled traffic.

To explain why this is the only schedulable preemption configuration for scheduled traffic, we will examine the preemption configuration where ST is preemptable.

Figure 14: Preemption configuration causing ST traffic latency

From Figure 14 we see an exemplary case that shows the unwanted effect of having ST traffic as

preemptable. First, the reduced HR guard-band is not eligible for use if the traffic it is protecting cannot preempt the interfering traffic. The implicit maxFrameSize guard-band is enough only if it can guarantee that any frame that starts transmission before the guard-band will finish transmission before the protected window. When preemption is not supported, once a frame starts transmission it will finish after its transmission time; therefore, it is enough to have a guard-band size equal to the maximal-sized frame. If preemption is supported, this is not necessarily true, as transmission can be interrupted, and once a frame starts transmission, it can take longer than its transmission time to finish. In Figure 14, we can see this with the lowest-priority frame. It started transmission before the guard-band, as the only arrived frame, and as it is preempted, the transmission will not finish before the protected window. Once the express frame finishes, the continuation frame will be transmitted, as it is already in the pMAC.

Even without the lowest-priority traffic, the guard-band cannot protect ST traffic from other express traffic classes. It is important to remember that the implicit guard-band is considered in the transmission selection algorithm, and that express and preemptable traffic classes have separate transmission selectors for separate MAC layers. In Figure 14, the express frame will be selected for transmission even within the guard-band and will interfere with the scheduled frame.

In conclusion, the explanatory case shows us why the HR guard-band cannot be used when scheduled traffic is preemptable, and why even the standard guard-band that is used to guarantee undisturbed ST frame transmission is no longer sufficient if preemption is enabled, unless the scheduled traffic is express.

6.4. Multiple scheduled traffic classes

In a switch that processes multiple scheduled traffic streams, having them mapped to one or more traffic classes does not make any significant difference in performance. In a correctly offline-scheduled set of traffic, there is no queuing delay if different streams are mapped to the same class. A non-zero queuing delay would imply overlapping protected time slots if those same traffic streams were mapped to individual traffic classes. A multi-ST-class switch configuration does not differ from a single-ST-class switch configuration in terms of performance, if correctly scheduled. However, if there is overlapping of ST traffic, in a multi-ST-class configuration the higher-priority ST class has an advantage over a lower-priority ST class, in comparison to a single-ST-class configuration, where an ST frame has the advantage of being selected if it is ahead in the queue.

6.5. Effects on the SR traffic class

To analyse the effects of preemption on the traffic classes that support the credit-based shaper, we will first analyse an exemplary case without scheduled traffic, but with an express SR traffic class and the default best-effort preemptable traffic class. The preemption mechanism gives an advantage to the express traffic class over the preemptable class. As the SR class has a higher priority than the BE class, it can preempt the transmission of BE frames as soon as it is selected for transmission. This is the fundamental effect of preemption. An SR traffic class is eligible for transmission as soon as its credit is non-negative (without the gating mechanism). If it is also express, it will start transmitting as soon as it is selected; therefore, the credit will not continue increasing. If we ignore the preemption overhead, the credit value of SR traffic will never be positive. With the preemption overhead, the hiCredit value can be calculated as:

time_{CP} = \frac{16 \cdot 8 \, bits}{networkSpeed} \quad (6)

where time_{CP} is the time slot in which the credit is positive, that is, the time needed to transmit the 16 bytes of preemption overhead (the other 8 bytes do not affect the express frames) in a switch of bandwidth networkSpeed.

hiCredit = time_{CP} \cdot operIdleSlope(SR) \quad (7)
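Equations 6 and 7 translate directly into code. The following sketch uses our own function names; speeds and slopes are in bit/s and times in seconds.

```c
#include <assert.h>

/* Eq. 6: the credit of an express SR class can stay positive only for
 * the time needed to transmit the 16 bytes of preemption overhead. */
static double time_cp(double network_speed) {
    return 16.0 * 8.0 / network_speed;
}

/* Eq. 7: the resulting hiCredit (in bits) accumulated during that slot
 * at the operative idleSlope of the SR class. */
static double hi_credit(double network_speed, double oper_idle_slope) {
    return time_cp(network_speed) * oper_idle_slope;
}
```

For example, in a 100 Mbps switch `time_cp` is 1.28 µs, so an SR class with an operIdleSlope of 50 Mbps accumulates a hiCredit of 64 bits.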


This is true for any case where an SR traffic class is the highest-priority traffic class as well as the only express class for one egress port.

Figure 15: Credit behaviour of express Class A

In the given preemption configuration, lower-priority traffic will suffer from increased response time, as shown in Figure 15, where RT is the response time of the low-priority frame when preemption is not supported, and RTp is the response time when preemption is supported.

6.6. Preemption effects in a multi-SR traffic class configuration

The switch configuration allows multiple traffic classes that support the credit-based shaper algorithm. Having multiple SR traffic classes configured as express does not change the port utilization, but it gives an advantage to the SR traffic over the preemptable traffic in terms of latency. Below we will analyse exemplary cases with a traffic class set consisting of a higher-priority SR class called Class A, a lower-priority SR class called Class B, and a best-effort traffic class BE.

(1) First (Fig. 16), both Class A and Class B are preemptable, as well as BE. The lower-priority blocking for Class A can be caused by the transmission of both Class B and Class BE; in the worst case, that is the maximum length of the lower-priority frames. The lower-priority blocking of Class B can be caused by the transmission of the BE class, and in the worst case that is the length of the maximum-size BE frame. Class B also suffers from higher-priority blocking from Class A, and that blocking time is upper-bounded by the time needed for the credit of Class A to decrease by the value of hiCredit, which depends on the idleSlope of Class A, plus the length of the maximum frame of Class A if that frame starts transmission when the credit reaches zero. Class BE experiences higher-priority blocking from both Class A and Class B. In the worst case, where the SR classes have traffic bursts, the best-effort traffic class will receive the rest of the bandwidth that is not allocated to the SR classes, which depends on the idleSlope values.

(2) In the same configuration, if only Class A is express (Fig. 17), there will be no lower-priority blocking for it. Its latency can only be caused by the limited reserved stream, characterised by its idleSlope value. On the other hand, Class B and Class BE suffer from longer response times caused by preemption.

(3) If Class B is express, while Class A and Class BE are preemptable (Fig. 18), compared to the second case, the lower-priority blocking of Class A will increase in the worst case, caused by the interfering express frames. Compared to the first case, Class A is unaffected, and the higher-priority blocking of Class B is not changed, but its lower-priority blocking decreases, as its frames can preempt, simultaneously causing larger response times of the preempted BE frames.


Figure 16: Example trace with no express frames

Figure 17: Example trace with only Class A - express

(4) When both Class A and Class B are express (Fig. 19), higher-priority blocking is the same as in the first case. Lower-priority blocking is caused by the other express classes; it is calculated as in the first case, reduced by the blocking value of the preemptable classes. The response time of the SR classes is reduced by the blocking value of the lower-priority preemptable frames.


Figure 18: Example trace with only Class B - express

Figure 19: Example trace with both Class A and B - express

6.7. Multiple ST and SR traffic classes

When the switch does not support the time-aware shaper, or simply does not schedule any traffic, the port utilization will barely change when enabling the preemption mechanism; it is slightly increased because of the preemption overhead. Scheduling traffic lowers the port utilization, as it adds guard-bands. In cases where the switch has only ST classes as express, we can calculate the preemption overhead as shown in [16]. As the less-critical traffic streams cannot preempt each other, there can be only one frame preemption per protected window. In a time interval, the maximum number of preemptions can be found by summing the maximum number of scheduled frames or protected

windows of all scheduled traffic classes in that time interval. If we have non-scheduled express traffic classes, then the preemption overhead for a time interval cannot be calculated in this way. For every non-scheduled traffic class that is express, the implicit maximum-frame-sized guard-band is used when selecting that traffic class for transmission. On the other hand, having a non-scheduled traffic class as express allows it to preempt all lower-priority preemptable frames.

When SR traffic is added besides the express ST traffic class, to analyse the effects of the preemption mechanism on the SR class we focus on the possible lower- and higher-priority blocking. Its higher-priority blocking is caused by guard-bands (the HR guard-band for a preemptable SR class and the standard guard-band for express SR classes) and protected windows where ST traffic is being transmitted. Lower-priority blocking comes from possible lower-priority traffic frames that are already transmitting.
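The bound for the case where only ST classes are express, at most one preemption per protected window summed over all scheduled classes, can be written as a one-line computation. This is a sketch of the counting argument only (names are ours), not a reproduction of the analysis in [16].

```c
#include <assert.h>

/* Upper bound on preemptions in a time interval when only ST classes
 * are express: at most one preemption per protected window, summed
 * over all scheduled traffic classes active in that interval. */
static long max_preemptions(const long *windows_in_interval, int n_st_classes) {
    long total = 0;
    for (int i = 0; i < n_st_classes; i++)
        total += windows_in_interval[i];
    return total;
}
```

Multiplying this bound by the per-preemption overhead (24 bytes in the worst case) gives the worst-case preemption overhead for the interval; as noted above, the bound no longer holds once non-scheduled classes are express.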

Let us analyse a case with an ST class as express and Class A as express. The advantage of having Class A as express is that it cannot be blocked by lower-priority frames. The disadvantage is that its wait time, the time between frame arrival and its transmission, can be delayed by the duration of the standard guard-band of the high-priority ST class. In terms of the worst case, if Class A is express, the higher-priority blocking is the length of the maximum frame of all the non-scheduled traffic frames plus the length of the protected window. In the worst case, if Class A were preemptable, the higher-priority blocking would be caused by the insignificant length of the HR guard-band and the length of the protected window. At the same time, the lower-priority blocking would be the maximum frame size of the non-scheduled traffic frames that have a lower priority than Class A. This makes it less likely that the worst-case lower-priority blocking equals the overall non-scheduled maximum frame; more importantly, the probability that the lower-priority frame is the maximum frame is smaller, whereas the standard guard-band always has the maximum frame size. From here we can also make some assumptions:

(1) The more varied the frame lengths in the message set, the lower the possibility that a preemptable SR class will be lower-blocked by the maximum frame size.

(2) The lower the priority of the SR class, the lower the possibility that the maximum frame size belongs to one of the traffic classes beneath it in terms of priority. This might lead to an even more significant disadvantage of having SR classes as express (Scenario 4).

In conclusion, having traffic classes that support the Credit-Based Shaper as preemptable results in higher performance.

6.8. Lower priorities as express

Now we will consider non-time-critical streams as express: traffic streams that are best-effort or background, supported by the strict priority selection algorithm. Having lower-priority traffic classes as express cancels the advantages that the preemption mechanism brought. The main goals of preemption were to remove lower-priority blocking and to reduce the guard-band, thereby increasing port utilization. A lower-priority express class would be blocked by the standard guard-band, causing large idle port slots and poor utilization. At the same time, the low-priority traffic classes do not benefit from being express, as they cannot preempt higher-priority traffic classes. This effect is illustrated in Figure 20, where we can see the effect the preemption configuration has on the response time of the low-priority traffic class. In the first example the low-priority frame is express, compared to the second example where it is preemptable. In both examples we have an express scheduled traffic class and a middle-priority preemptable frame. The response time of the LP traffic class is notably increased when the LP traffic is express; this value is denoted as RTe in the figure. The response time of the LP traffic class when it is preemptable is denoted RTp. Preemption has this effect on the lower-priority traffic class when there is scheduled traffic in the network; otherwise, there is no difference. From Figure 20, we can also see that the preemption configuration of the LP traffic class does not affect the higher-priority classes.


Figure 20: Response time of LP traffic in express vs. preemptable configuration



7. Time-Sensitive Network Simulator TSNS

TSNS is a time-sensitive network simulator developed as a tool for performance analysis of the TSN preemption mechanism. It simulates a computer network switch with one egress port. The objective was to simulate the switch behaviour, and this behaviour is based on the IEEE 802.1Q amendments. It is important to follow the standards, as the TSN Working Group's main aim is to standardize real-time Ethernet. The environment used for the development of the simulator is Visual Studio 2019 from Microsoft, and it was written in the C language. The design of the simulator is described in the sections below in more detail.

7.1. Assumptions

The TSNS network bridge egress port simulator is built on the following assumptions:

a) The minimum time increment is the time needed to transmit 1 byte of data.
b) Traffic streams are periodic, and evaluation is done only with schedulable traffic sets.
c) Traffic stream periods take values from a range of numbers that are factors of some least common multiple.
d) The rate at which credit is increased for traffic classes that support CBS is calculated so that the reserved bandwidth for that class is lower-bounded by the sum of the utilizations of the messages in that class and upper-bounded by the bandwidth fraction that is not reserved by the other traffic classes.
e) Scheduled traffic is always express when the preemption mechanism is supported (the reasoning for this assumption is explained in Section 6.3.).
f) Scheduled traffic uses the strict priority transmission selection algorithm. Scheduled traffic includes control traffic that is highly time-critical, so it is mapped to a separate queue and is not bandwidth-limited by the credit-based shaper [25].
g) Scheduled traffic is protected at every traffic frame arrival.
h) For the objective of performance analysis, one scheduled traffic class is enough to support all the highly time-critical traffic. If the time-aware scheduler is enabled, all scheduled traffic is mapped to the highest-priority traffic class/queue (the reasoning for this implementation design is explained in Section 6.2.).
i) When the preemption mechanism with Hold and Release enabled is supported, the Hold signal is generated only when a preemptable message is being transmitted; otherwise, the implicit guard-band of maximum Ethernet frame size protects the port so that it is idle for the scheduled time frame (the reasoning for this assumption is explained in Section 6.2.).
j) There are no hardware delays or failures.

7.2. TSNS egress port model design

The TSNS is an Ethernet network switch model. It simulates the functionality of a switch that has one ingress port and one egress port, as shown in Figure 21. In a single-port switch, the functionality comes down to selecting traffic for transmission in the correct order, based on the importance of the traffic and the bandwidth reserved for each traffic class, while taking the timing requirements into account. The switch can be configured by the user; the changeable variables are:

• Number of queues
• Transmission Selection Algorithm for each queue
• Preemption mechanism enabling for each queue

The other user inputs, which can be generated manually or automatically, are:


Figure 21: TSNS Switch model

• A list of Ethernet messages from the ingress port
• The Gate Control List

The egress port bandwidth and the value of one time unit are also customizable. In the code they are represented as the global variables networkSpeed and timeSlot. The number of simulations and the duration of one simulation can be changed through the values defined under the names simNumber and simTime, respectively. To meet the timing requirements, the simulator maintains a common clock in the form of a tick counter, represented in the code by the variable ttime. Three major components of the switch are implemented as structures:

◦ a queue as QueueType
◦ a traffic message as EthrMsgType
◦ a gate control operation as GCOType

Lists of these objects are the QueueList that holds all the queues configured in the switch, the EthMsgList that holds all the messages from the ingress port, and the GCL that holds all the gate control operations. All lists are dynamically allocated in memory. Different priorities are assigned to different traffic types. The priority value is implemented as an EthrMsgType attribute called priority. This variable is of type trafficType, a defined enumerated type that holds eight values (BE = 0, BK = 1, EE = 2, CA = 3, VI = 4, VO = 5, IC = 6, NC = 7).
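Based on the description above, the three structures and the priority enumeration can be sketched as follows. Only the names taken from the text (trafficType, the enum values, priority, messages, ctime, delay, repeat) are from the thesis; the remaining field names and array capacities are illustrative assumptions, not the actual TSNS definitions.

```c
/* Traffic types as defined in the thesis (IEEE 802.1Q traffic type acronyms). */
typedef enum {
    BE = 0, BK = 1, EE = 2, CA = 3, VI = 4, VO = 5, IC = 6, NC = 7
} trafficType;

#define MAX_QUEUES 8
#define MAX_QUEUED_MSGS 64   /* illustrative capacity, not the TSNS value */

/* One Ethernet message (traffic stream); times in ticks. */
typedef struct {
    trafficType priority;    /* attribute named in the text */
    int period;              /* release period in ticks */
    int offset;              /* initial offset in ticks */
    int transmissionTime;    /* transmission time in ticks */
} EthrMsgType;

/* One egress queue and its configuration flags. */
typedef struct {
    int priority;
    int cbs;                 /* 1: credit-based shaper, 0: strict priority */
    int scheduled;           /* 1: ST traffic class */
    int express;             /* 1: express, 0: preemptable */
    double credit;           /* CBS credit, updated every tick */
    EthrMsgType *messages[MAX_QUEUED_MSGS]; /* temporarily queued frames */
    int nQueued;
} QueueType;

/* One gate control operation of the GCL. */
typedef struct {
    int ctime;                   /* tick at which the operation executes */
    int gateState[MAX_QUEUES];   /* 1: gate open, 0: gate closed */
    int delay;                   /* ticks until the next operation */
    int repeat;                  /* 1 on the last entry: restart the cycle */
} GCOType;
```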

In a single-simulation experiment, the user can input the traffic stream set and Gate Control List manually, using the message_generate.c and gcl_generate.c files. These are run by calling the functions messageInit() and makeGCL(). For a multi-simulation experiment, where the results should be independent of the message set, the user assigns the number of messages for each traffic class (BE_nbr, BK_nbr, EE_nbr, CA_nbr, VI_nbr, VO_nbr, IC_nbr, NC_nbr) and sets the message configuration so that different messages are generated automatically per simulation, by calling the function msgGenerate(). The message set is configured by assigning the number of different message utilizations, MaxNumberOfDifferentMessages, and the set of utilizations, as well as the number and set of different initial offset values, MaxNumberOfDifferentOffsets. To automatically generate the GCL, the user sets a priori which traffic class is schedulable by setting the scheduled attribute of the QueueType.


Figure 22: Defined TSNS structures

Figure 23: Overall software architecture design of the simulator

7.2.1. Queuing

Based on the priority value, the simulator uses the queuing mechanism to assign each message to the corresponding queue, following Table 4. The queuing mechanism and its interaction with the rest of the simulator are shown in Figure 23. The messages that are assigned to a queue are temporarily stored in an array called messages, an attribute of the QueueType structure, as shown in Figure 22. In every clock tick, the function readyMessageToQueue(ttime) is called to check whether any message instance from the Ethernet message list is ready for transmission, based on the offset and period of the messages. If a message instance is ready, it is queued. Temporal handling of the messages is done in the MessageTimeHandle(ttime) function.
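The readiness check described above can be sketched as a small predicate: with periodic streams, an instance of a message is released at every tick t with t >= offset and (t - offset) divisible by the period. This is a minimal sketch of the idea under those assumptions, not the actual TSNS readyMessageToQueue() implementation.

```c
/* Returns 1 if a new instance of a periodic message is released at tick t.
 * offset: initial release tick; period: release period in ticks. */
int instance_released(int t, int offset, int period)
{
    if (t < offset)
        return 0;                      /* stream not started yet */
    return (t - offset) % period == 0; /* periodic release points */
}
```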

7.2.2. Transmission Selection Algorithm

The user chooses the transmission selection algorithm for each queue by either setting the cbs attribute for the credit-based shaper algorithm or resetting it for the strict priority algorithm. If the strict priority algorithm alone supports the queue, the first message from that queue is selected for transmission as long as all the messages in the higher-priority queues are not ready or not eligible for transmission. On the other hand, if the credit-based shaper algorithm supports the queue, an additional requirement has to be satisfied for the first message of that queue to be selected: credit is assigned to the queue, and it changes every clock tick according to the rules noted below. The credit-based shaper algorithm still implicitly supports priority-based selection, as a queue is not selected if there is an eligible higher-priority queue, irrespective of the credit value.
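The selection rule just described can be sketched as a scan from the highest-priority queue down: a queue is eligible if its gate is open, it has a queued frame, and (for CBS queues) its credit is non-negative. The field and function names here are illustrative assumptions, not the actual TSNS identifiers.

```c
/* Minimal per-queue state needed for the selection sketch. */
typedef struct {
    int nQueued;    /* number of frames waiting in the queue */
    int gateOpen;   /* 1 if the TAS gate for this queue is open */
    int cbs;        /* 1 if the queue uses the credit-based shaper */
    double credit;  /* CBS credit; only meaningful when cbs == 1 */
} SelQueue;

/* Returns the index of the selected queue (highest priority = highest index),
 * or -1 if no queue is eligible. */
int select_queue(const SelQueue *q, int nQueues)
{
    for (int i = nQueues - 1; i >= 0; i--) {
        if (q[i].nQueued == 0 || !q[i].gateOpen)
            continue;
        if (q[i].cbs && q[i].credit < 0.0)
            continue;  /* CBS queue not eligible with negative credit */
        return i;      /* first eligible queue from the top wins */
    }
    return -1;
}
```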

This simulator has the option of manually setting the values of idleSlope for each SR class, or automatically calculating the value based on that queue's utilization of the egress port, as shown in Equation 8:

    idleSlope(N) = \sum_{m \in N} (C_m / T_m) \cdot networkSpeed \cdot (cycleDuration / gateOpenTimeNonST)    (8)

where C_m is the transmission time of message m, and T_m is the period of message m. cycleDuration is the duration of the GCL, and gateOpenTimeNonST is the cycleDuration minus all the protected windows in one cycle, i.e. the duration per cycle during which the gates of non-scheduled traffic are open.
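Equation 8 translates directly into a small helper: sum the utilization of the class, then scale by the port speed and by the fraction of the GCL cycle left open for non-scheduled traffic. This is a sketch of the calculation only, not the actual TSNS code; transmission times and periods are assumed to be in the same time unit, and networkSpeed in Mbps, so the result is an idleSlope in Mbps.

```c
/* idleSlope per Equation 8: class utilization scaled by port speed and by
 * the share of the cycle available to non-scheduled traffic. */
double idle_slope(const double *C, const double *T, int nMsgs,
                  double networkSpeed, double cycleDuration,
                  double gateOpenTimeNonST)
{
    double utilization = 0.0;
    for (int i = 0; i < nMsgs; i++)
        utilization += C[i] / T[i];   /* C_m / T_m summed over the class */
    return utilization * networkSpeed * (cycleDuration / gateOpenTimeNonST);
}
```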

Figure 24: Credit flow diagram

7.2.3. Gate mechanism

Unlike the windows of the IEEE 802.1Qbv standard, the AVB ST approach provides scheduled windows that consider the period and the length of each transmitted message. Therefore, ST windows are scheduled only when there are ST messages to be transmitted, and they are sized according to the frame length of the specific ST message, thus entailing a more efficient bandwidth utilization [25]. In a switch that supports the Time-Aware Scheduler, each queue has a gate whose state is controlled by a Gate Control List. The length of the list depends on the messages being transmitted through the switch. The GCL is generated offline to guarantee a scheduled behaviour in the network. The schedule is defined during the design of the system, based on a priori information about the arrival time and transmission time of the time-critical data. The GCL is time-aware: as time is synchronised, when a time-critical frame arrives, the GCL changes the gate states accordingly and as scheduled. The GCL is a list of Gate Control Operations (GCO), which are structures consisting of the time component ctime at which the gate operation is executed and a list of gate states for each queue that is immediately assigned to the queues when the operation executes. After delay ticks have passed, the next gate operation executes, so the delay determines how long the current gate state lasts. The last attribute of a GCO is the repeat value, which is true only when the list is at its end and the first gate operation of a new gating cycle is to be executed. The gate operations allow any combination of open/closed states to be defined. The mechanism makes no assumptions about which traffic classes are time-critical and which are not; any such assumptions are left to the designer of the sequence of gate operations, and it is their responsibility to write a correct sequence.
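The delay/repeat semantics above (apply the gate states, hold them for delay ticks, wrap when repeat is set) can be sketched as a per-tick update. Only the field names delay and repeat are from the thesis (ctime is omitted here, since in this sketch the delays alone determine the timing); everything else is an illustrative assumption.

```c
#define N_QUEUES 4

typedef struct {
    int gateState[N_QUEUES]; /* 1: open, 0: closed, applied on execution */
    int delay;               /* ticks the state lasts before the next GCO */
    int repeat;              /* 1 on the last entry: wrap to the start */
} Gco;

typedef struct {
    const Gco *ops;
    int current;             /* index of the GCO currently in effect */
    int remaining;           /* ticks left before the next GCO executes */
    int gate[N_QUEUES];      /* current gate state per queue */
} GclState;

/* Advance the GCL by one tick; executes the next GCO when the delay of the
 * current one has elapsed, wrapping at the end of the gating cycle. */
void gcl_tick(GclState *s)
{
    if (--s->remaining > 0)
        return;
    s->current = s->ops[s->current].repeat ? 0 : s->current + 1;
    for (int q = 0; q < N_QUEUES; q++)
        s->gate[q] = s->ops[s->current].gateState[q];
    s->remaining = s->ops[s->current].delay;
}
```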

7.2.4. Transmission Selection

The transmission selection is the part of the model that is influenced by the gating mechanism, the credit values and the queue states. It is the mechanism that determines which queue is selected for transmission and moved to the MAC layer. If preemption is disabled (preemptEnable = 0), the selected frame is transmitted immediately and without interference. If the preemption mechanism is supported (preemptEnable = 1), there is a transmission selection for each of the pMAC and the eMAC, and once a preemptable frame is selected, undisturbed transmission is not guaranteed, as pMAC frames can be preempted at the MAC layer. The flow diagram of the TS decision making is shown in Figure 25. At each tick the TS mechanism is called, and it starts selection as long as the egress port is idle, meaning the next best eligible frame can be sent to the MAC. The strict priority selection is implemented simply by giving precedence to higher-priority queues, checking their eligibility before checking the rest.

7.2.5. Preemption mechanism

The preemption mechanism is called in every clock tick. Preemption can only occur if the preemption mechanism is enabled (preemptEnable = 1), if there is a frame transmitting in that tick, and if the transmitting frame is preemptable. The behaviour of the mechanism is shown as a flow diagram in Figure 26.
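The three conditions above combine into a simple guard that can be evaluated each tick before an express frame is allowed to interrupt the ongoing transmission. This is a sketch of the check only; the parameter names are illustrative, and the fragmentation itself (which in IEEE 802.1Qbu/802.3br is additionally subject to minimum fragment sizes) is not modelled here.

```c
/* Returns 1 if the ongoing transmission may be preempted this tick:
 * preemption enabled, a frame is on the wire, and that frame is preemptable
 * (i.e. being sent through the pMAC rather than the eMAC). */
int preemption_possible(int preemptEnable, int transmitting, int frameIsExpress)
{
    return preemptEnable && transmitting && !frameIsExpress;
}
```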


Figure 25: TS flow diagram

7.3. TSNS model

In the tsns_config.h header file the user sets the following values:
(1) simNumber, the number of simulations the simulator performs in one run.
(2) simTime, the time in µs needed for one simulation to finish.
(3) HoldAndRelease, which enables the explicit guard-band when set to 1 and disables it when set to 0. It is only taken into account when the preemption mechanism is enabled.
(4) preemptEnable, which enables the preemption mechanism when set to 1 and disables it when set to 0.
(5) MaxNumberOfDifferentMessages, which determines the number of different types of messages, in terms of period and transmission time, that the simulator generates automatically.
(6) MaxNumberOfDifferentOffsets, which determines the number of different initial offsets the generated messages can have.

In the switch_configure.c file the user sets the switch configuration:


Figure 26: Preemption flow diagram

Figure 27: TSNS model design

(1) timeSlot, the value of the time increment in µs; it is a global variable that represents the synchronised clock tick.
(2) networkSpeed, the value of the egress port bandwidth, in Mbps.
(3) NumberOfQueues, together with the number of messages of each traffic type: BE_nbr, BK_nbr, EE_nbr, CA_nbr, VI_nbr, VO_nbr, IC_nbr, NC_nbr.

(4) For each queue, the user sets the values of cbs, which determines the transmission selection algorithm for that queue; scheduled, which determines if the traffic class is scheduled (see footnote 19); and express, which determines if the traffic class is express or preemptable.

Footnote 19: scheduled is not an explicit input value; rather, it is determined by the GCL. Using the GCL as an input is also possible in TSNS, but this thesis is experiment oriented, and an implicit GCL is used to implement multiple-simulation experiments.

To evaluate the switch model performance, the simulator produces:
(1) Messages.txt, which stores the automatically generated messages of the last simulation and the values of their highlighted parameters.
(2) Timeline.txt, which stores the executing message at each time tick in the last simulation.
(3) Results.txt, which stores timing values for each message and each queue:
(a) Minimum/Average/Maximum wait time of all message frames in all simulations.
(b) Minimum/Average/Maximum response time of all message frames in all simulations.
(c) Minimum/Average/Maximum release jitter of all message frames in all simulations.
(d) Minimum/Average/Maximum response jitter of all message frames in all simulations.

8. Evaluation

In this section, we conduct a simulative assessment of various preemption configuration patterns. The scenarios present different use-cases that intend to confirm the assumptions made in Sec. 6. The simulation results are used to evaluate network performance levels. These results are measured for each message frame in the egress port during a run-time of 10 ms, using the TSNS simulator tool from Sec. 7. The measured values refer either to measurements of each message (traffic stream) or of each queue (traffic class). The traffic streams are mapped to traffic classes, so the measurements of each traffic stream contribute to the measurement values of the corresponding traffic class. In the transmission selection process, measurements of:

• the arrival time of each frame
• the start time of each frame
• the end time of each frame

are saved, and the TSNS simulator then calculates, based on these values:

• Minimum/average/maximum wait time for each message
• Minimum/average/maximum response time for each message
• Minimum/average/maximum jitter of the wait time for each message
• Minimum/average/maximum jitter of the response time for each message

We will refer to these values as per-message results. On the other hand, when we write per-queue results, we refer to:

• Minimum/average/maximum wait time obtained from all messages that use the queue
• Minimum/average/maximum response time obtained from all messages that use the queue
• Minimum/average/maximum release jitter obtained from all messages that use the queue
• Minimum/average/maximum response jitter obtained from all messages that use the queue

The average values of one message or queue are the averages of the measurements of each frame of that message, or of the queue the message is mapped to. These average results are heavily dependent on the message set and the temporal relation between the messages in the set. To eliminate this dependency, we conduct the experiment multiple times to obtain the parameters of a distribution that does not depend on the message set. These parameters are the mean value (Average time) and the range of values, bounded below by the Minimum time and above by the Maximum time. All three values are measured for the wait time, response time, release jitter and response jitter.
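From the three saved timestamps, the per-frame metrics follow directly: wait time is start minus arrival, and response time is end minus arrival. A minimal sketch follows; note that defining the jitter of a metric as its max-minus-min spread over all frames of a message is an assumption here, since the thesis does not spell out its exact jitter formula.

```c
/* Per-frame timing metrics derived from the three saved timestamps. */
int wait_time(int arrival, int start)   { return start - arrival; }
int response_time(int arrival, int end) { return end - arrival; }

/* Spread (max - min) of a metric over all frames of one message; assumed
 * here as the jitter definition. Requires n >= 1. */
int jitter(const int *values, int n)
{
    int lo = values[0], hi = values[0];
    for (int i = 1; i < n; i++) {
        if (values[i] < lo) lo = values[i];
        if (values[i] > hi) hi = values[i];
    }
    return hi - lo;
}
```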

A simulation with the same preemption configuration is run a large number of times. The larger the number of simulations, the more valuable the experiment results are, since the switch configuration is left as the only effective input parameter, being the only fixed setting throughout the simulations. The simulated network model has its bandwidth set to 100 Mbps. We choose this bandwidth because the preemption mechanism has a more significant impact on lower-bandwidth networks, as concluded in Sec. 6.2. The traffic stream periods are in the order of a few milliseconds, which is typical of automation applications [35].

In the following sections we present four handcrafted scenarios that are intended to support the assumptions made in Section 6. Scenario 1 has a fixed message set in a network that supports time-aware scheduling and credit-based shaping. It intends to support the assumptions made on the effects of one or more SR classes being set as express. The results are per-message. Scenario 2 has a similar preemption configuration to Scenario 1, comparing a sub-scenario with no express SR classes to a sub-scenario with multiple express SR classes. Both sub-scenarios have an express ST class and a preemptable BE class. However, this scenario presents the results per-queue, with a larger number of messages that are generated at run-time over 1000 simulations. It supports the conclusions from Section 6.7. In Scenario 3 the messages are generated at run-time over 1000 simulations. The switch configuration is fixed and supports credit-based shaping, but not time-aware scheduling. The scenario intends to evaluate the assumptions made in Section 6.6. on the performance of different express SR classes when there is no scheduled traffic. The results are per-queue. Scenario 4 presents a case with a higher number of traffic classes, of which four are SR. The different sub-scenarios present the effects of having each SR class as express, in addition to the express ST. One sub-scenario also evaluates the effects that having low-priority express traffic has on the performance.

8.1. Scenario 1: Single-simulation - Offline generated message set - Per-message results

This scenario aims to represent and analyse the results for individual messages. We limit this use-case to a small number of messages. This is a single-simulation scenario with a specifically chosen message set. In this scenario, we keep the same traffic stream set of seven different messages while we change the switch configuration. The independent variable is the express value for each traffic class. If the express value is set to 1, that traffic class is served by the eMAC; otherwise, the traffic class is preemptable and served by the pMAC. The traffic stream set for the simulation is given in Table 5.

Message  Priority  Period (µs)  Transmission time (µs)  Offset (µs)
M0       7 (NC)    4000         43                      0
M1       7 (NC)    8000         83                      100
M2       5 (VO)    4000         43                      0
M3       5 (VO)    4000         43                      0
M4       3 (CA)    8000         83                      0
M5       3 (CA)    4000         43                      0
M6       1 (BE)    8000         83                      0

Table 5: Scenario 1 - Traffic stream set

This scenario also validates the possibility of running the TSNS simulator with a predefined set of messages and an offline generated GCL. This use-case evaluates the assumption, noted in Section 6.7., that in combination with scheduled traffic the SR traffic will have less latency if it is preemptable.

Simulations number     1
Simulation time (µs)   10000
Enable preemption      1
Hold and release       1
Network speed (Mbps)   100
Time slot (µs)         1

Table 6: Scenario 1 - Switch configuration

The switch configuration is shown in Table 7. The switch has a fixed number of queues, NumberOfQueues = 4. Based on Table 4, M0 and M1 will be mapped to queue Q3, which is the ST traffic class. Messages M2 and M3 will be mapped to queue Q2, which is Class A; messages M4 and M5 will be mapped to Q1, which is Class B; and lastly, M6 will be mapped to queue Q0, which is the BE traffic class.

The Express column represents the preemption configuration pattern for three sub-scenarios. Scenario 1.1 is the default case, where only scheduled traffic is express. This scenario is used as

40 Lejla Murselovi´c Performance Analysis of the Preemption Mechanism in TSN

Queue  Priority  Scheduled  TS Algorithm  Express
                                          1.1  1.2  1.3
Q0     0         0          0             0    0    0
Q1     1         0          1             0    0    1
Q2     2         0          1             0    1    1
Q3     3         1          0             1    1    1

Table 7: Scenario 1 - Queue (switch) configuration

the base for comparison in all evaluation examples. In Scenario 1.2, we change Class A, which is supported by queue Q2, to be express as well. Lastly, in Scenario 1.3, we also change Class B, which is supported by queue Q1, to be express.

8.1.1. Results

The results obtained from the TSNS simulator for the Scenario 1 configurations are represented in Table 8. The rows represent the seven messages defined in Table 5. The columns are grouped into four dependent variables that are the measures of network performance. Every variable has a sub-column for each preemption configuration shown in Table 7.

Traffic   Wait time (µs)     Response time (µs)   Release jitter (µs)   Response jitter (µs)
streams   1.1   1.2   1.3    1.1   1.2   1.3      1.1   1.2   1.3       1.1   1.2   1.3
M0        0     0     0      43    43    43       0     0     0         0     0     0
M1        0     0     0      83    83    83       0     0     0         0     0     0
M2        43    136   136    86    179   179      0     140   140       0     140   140
M3        299   270   276    342   313   319      134   91    99        134   91    99
M4        86    43    226    261   198   309      0     0     0         0     0     0
M5        336   336   336    379   379   379      375   375   375       375   375   375
M6        261   261   43     344   387   149      0     0     0         0     0     0

Table 8: Scenario 1 - Results - Average delays per-message


8.1.2. Discussion

Analysing the results in Table 8, we can conclude the following:

• (M0 and M1 analysis) Changing the preemption configuration of the SR traffic classes does not affect the ST traffic class.
• (M2 and M3, Scenarios 1.1 and 1.2) Setting Class A as express increases the wait and response time of the first-in-queue traffic stream (M2). The other message (M3) is influenced more by the recovery of credit than by the preemption configuration change.
• (M4 and M5, Scenarios 1.1 and 1.2) As the response time of the first-in-queue message of Class A increases, the response time of the first-in-queue message (M4) of Class B decreases. The same goes for the wait time. The reasoning is that if the higher-priority class does not utilize the port, it leaves the port free for the lower-priority class. The second-in-queue message (M5) is again influenced mostly by the stream reservation of Class B.
• (M6, Scenarios 1.1 and 1.2) The best-effort traffic is not affected by changing Class A from preemptable to express.
• (M2 and M3, Scenarios 1.2 and 1.3) Setting Class B from preemptable to express does not affect the higher-priority classes.
• (M4 and M5, Scenarios 1.2 and 1.3) The wait and response time of the first-in-queue message (M4) in Class B increase, as did the latency of Class A in Scenario 1.2. The second-in-queue message (M5) is unaffected.
• (M6, Scenarios 1.2 and 1.3) As the best-effort message (M6) belongs to the first lower-priority class after Class B, it benefits the most from Class B not utilizing the port, and both its wait time and response time decrease.
• The jitter is mostly unaffected in all scenarios.

8.1.3. Conclusion - Scenario 1

The small number of messages made the jitter values robust, as the port was utilized in one period of time with repetitive behaviour in the network. This could be different with more messages. The dependency of the second-in-queue messages, which is mostly dictated by the idleSlope value, could be changed by increasing the idleSlope. The main conclusion is that the response time of classes that support the Credit-Based Shaper increases if they are set as express, which was the assumption stated in Sec. 6.7. At the same time, the response time of the first lower-priority class decreases.

8.2. Scenario 2 - Express ST and multiple express SR traffic

This scenario compares the performance variables for two switch configurations. The switch supports 5 traffic classes (ST, Class A, Class B, Class C, BE). The first configuration has only the ST class as express, and the second configuration has the ST class together with all the SR classes as express. The Minimum, Average and Maximum values per class are measured over 1000 simulations that each last 10 ms. The message set configuration is as follows:

• MaxNumberOfDifferentMessages = 9; the period and size of each traffic stream are uniformly chosen from the set {(500 µs, 62 bytes), (1 ms, 125 bytes), (2 ms, 250 bytes), (4 ms, 500 bytes), (8 ms, 1000 bytes), (12 ms, 1500 bytes), (16 ms, 2000 bytes), (24 ms, 3000 bytes), (48 ms, 6000 bytes)}, so that the load of each stream is equal to 1 Mbps and the utilization is 1%.
• MaxNumberOfDifferentOffsets = 6; the duration of the offset is uniformly chosen from the set {0, 0, 0, 100 µs, 200 µs, 0}.
• idleSlope_A = 15, idleSlope_B = 45 and idleSlope_C = 35.
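The (period, size) pairs above are chosen so that every pair carries the same load, which can be checked quickly with load = size · 8 / period. The snippet below is not part of TSNS; it only verifies the ~1 Mbps per-stream figure (the 62-byte/500 µs pair works out to 0.992 Mbps, the other pairs to exactly 1 Mbps).

```c
/* Load in Mbps of a periodic stream: size_bytes sent every period_us
 * microseconds. bits per microsecond equals Mbps. */
double stream_load_mbps(int size_bytes, double period_us)
{
    return size_bytes * 8.0 / period_us;
}
```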


Simulations number     1000
Simulation time (µs)   10000
Enable preemption      1
Hold and release       1
Network speed (Mbps)   100
Time slot (µs)         1

Table 9: Scenario 2 - Switch configuration

Traffic type  BK  BE  EE  CA  VI  VO  IC  NC
Quantity      0   9   7   0   5   3   9   15

Table 10: Scenario 2 - Traffic stream set configuration

Queue  Priority  Scheduled  TS Algorithm  Express
                                          2.1  2.2
Q0     0         0          0             0    0
Q1     1         0          1             0    1
Q2     2         0          1             0    1
Q3     3         0          1             0    1
Q4     4         1          0             1    1

Table 11: Scenario 2 - Queue (switch) configuration

8.2.1. Results

The results for Minimum, Average and Maximum are presented in three separate tables. Each table is structured like the results table in the previous scenario. All values are in µs.

Queue   Wait time      Res time       Rel jitter     Res jitter
        2.1   2.2      2.1   2.2      2.1   2.2      2.1   2.2
Q0      1046  299      1126  345      0     0        0     0
Q1      567   1300     611   1422     0     0        0     0
Q2      374   941      403   981      0     0        0     0
Q3      100   687      129   715      0     0        0     0
Q4      0     0        43    43       0     0        0     0

Table 12: Scenario 2 - Results - Minimum delays per-class

Queue   Wait time      Res time       Rel jitter     Res jitter
        2.1   2.2      2.1   2.2      2.1   2.2      2.1   2.2
Q0      3128  1329     3275  1513     1276  1710     436   647
Q1      1858  4303     1998  4569     1220  1662     500   721
Q2      1175  2252     1274  2377     1276  1751     530   736
Q3      436   1163     525   1249     1294  1758     596   793
Q4      0     0        43    43       1303  1789     635   826

Table 13: Scenario 2 - Results - Average delays per-class


Queue   Wait time      Res time        Rel jitter     Res jitter
        2.1   2.2      2.1   2.2       2.1   2.2      2.1   2.2
Q0      9148  4766     9170  5920      5712  6321     2859  5184
Q1      6074  9958     6587  14291     6242  6623     2961  5549
Q2      4041  6656     4506  7198      6441  6888     2964  5318
Q3      1688  2680     2014  3027      5687  7718     2964  5238
Q4      0     0        43    43        5352  6659     2709  5197

Table 14: Scenario 2 - Results - Maximum delays per-class

8.2.2. Conclusion - Scenario 2

In a network that supports the preemption mechanism, if only the scheduled traffic is assigned to be express, the whole traffic class set benefits. By assigning the SR classes to be express, only the best-effort class benefits, while the SR classes see an increase in wait and response time (around 241% in the best case, 114% on average, and 75% in the worst case). The best-effort class has a decreased response time in the second configuration (around 69% in the best case, 54% on average, and 35% in the worst case).


8.3. Scenario 3 - No ST traffic class

This scenario is again a multi-simulation case study. It studies the assumptions and remarks from Sec. 6.6., where Time-Aware Shaping is not supported, or simply where there is no scheduled traffic. Scenario 3 has three different sub-scenarios, each with a changed preemption configuration pattern of the three SR classes - Class A, Class B and Class C.

All three sub-scenarios have the same traffic stream set configuration, shown in Table 15. The other parameters of the message set configuration are as follows:

• MaxNumberOfDifferentMessages = 9; the period and size of each traffic stream are uniformly chosen from the set {(500 µs, 62 bytes), (1 ms, 125 bytes), (2 ms, 250 bytes), (4 ms, 500 bytes), (8 ms, 1000 bytes), (12 ms, 1500 bytes), (16 ms, 2000 bytes), (24 ms, 3000 bytes), (48 ms, 6000 bytes)}, so that the load of each stream is equal to 1 Mbps and the utilization is 1%.
• MaxNumberOfDifferentOffsets = 6; the duration of the offset is uniformly chosen from the set {0, 0, 0, 100 µs, 200 µs, 0}.

Scenario 3 has the switch configuration pattern as follows (Tab. 9):

• The portTransmissionRate is equal to 100 Mbps and is assigned to the variable networkSpeed.
• The time increment is 1 µs and is assigned to the variable timeSlot.
• The scenario has four traffic classes, and this value is assigned to NumberOfQueues. Three traffic classes support the Credit-Based Shaper (cbs = 1) and one best-effort class uses the Strict Priority transmission selection algorithm.
• With three SR traffic classes, NumberOfCBS = 3; based on the total traffic class utilization, we calculate the utilization that is assigned to each SR class and choose an idleSlope value in that range, lower-bounded by the utilization of that individual traffic class. The idleSlope values are: idleSlope_A = 35 Mbps, idleSlope_B = 25 Mbps and idleSlope_C = 20 Mbps.
• Preemption with the HR guard-band is allowed: HoldAndRelease = 1 and preemptEnable = 1.

Traffic type  BK  BE  EE  CA  VI  VO  IC  NC
Quantity      3   15  10  5   7   7   6   6

Table 15: Scenario 3 - Traffic stream set configuration

Queue  Priority  Scheduled  TS Algorithm  Express
                                          3.1  3.2  3.3
Q0     0         0          0             0    0    0
Q1     1         0          1             0    0    1
Q2     2         0          1             0    1    0
Q3     3         0          1             1    1    1

Table 16: Scenario 3 - Queue (switch) configuration. TS Algorithm: 1 (CBS), 0 (SP)


8.3.1. Results

All result values are measured with the TSNS simulator tool; the units are µs.

Traffic   Wait time (µs)       Response time        Release jitter    Response jitter
class     3.1   3.2   3.3      3.1   3.2   3.3      3.1  3.2  3.3     3.1  3.2  3.3
Q0        1019  1019  1019     1069  1072  1071     0    0    0       0    0    0
Q1        1863  1863  1753     1988  1993  1842     0    0    0       0    0    0
Q2        478   320   509      508   349   539      0    0    0       0    0    0
Q3        12    28    27       32    45    51       0    0    0       0    0    0

Table 17: Scenario 3 - Results - Minimum delays per traffic class

Traffic   Wait time (µs)       Response time        Release jitter       Response jitter
class     3.1   3.2   3.3      3.1   3.2   3.3      3.1   3.2   3.3      3.1   3.2   3.3
Q0        3868  3867  3868     3983  3999  3997     1653  1640  1656     1725  1713  1735
Q1        4983  4983  4905     5361  5396  5265     1720  1709  1739     1693  1677  1705
Q2        3577  3489  3608     3743  3643  3775     1007  1002  997      1633  1626  1650
Q3        142   165   165      192   215   114      955   956   948      1702  1685  1708

Table 18: Scenario 3 - Results - Average delays per traffic class

Traffic   Wait time (µs)       Response time        Release jitter       Response jitter
class     3.1   3.2   3.3      3.1   3.2   3.3      3.1   3.2   3.3      3.1   3.2   3.3
Q0        8329  8329  8329     8551  8580  8572     7239  7217  7239     8228  8172  8298
Q1        7413  7413  7311     8005  8048  7775     7341  8244  7242     7434  7171  7434
Q2        7284  7240  7284     7559  7483  7576     8094  8094  8094     7389  7157  7540
Q3        592   596   636      710   714   755      8154  8154  8154     8064  8208  8064

Table 19: Scenario 3 - Results - Maximum delays per traffic class


8.3.2. Discussion

Figure 28: Scenario 3 - Bar-graph representation of response time per-class

Figure 28 provides a graphical representation of the results that is easier to interpret than the tables. If we focus on the BE traffic class, we observe that it has the biggest variation from the minimum to the maximum values. The unexpected result is that the BE traffic is not affected by the change of the SR classes from preemptable to express. Another interesting observation is that Class C always has a decreased response time in Scenario 3.3 (yellow bar), that is, when it is set to be express. The same goes for Class B: observing Class B, we see that it has a decreased response time in Scenario 3.2 (red bar). Class A is set to be express in all scenarios, but we notice that the lowest response time for Class A occurs in Scenario 3.1, that is, when it is the only express traffic class.

8.3.3. Conclusion - Scenario 3

It is easy to conclude that the preemption configuration affects SR traffic classes differently than it does in networks with scheduled traffic (Scenario 1 and Scenario 2). Setting SR traffic as express now decreases its response time, which aligns with the conclusions made in Sec. 6.6. Unfortunately, setting a middle-priority SR class as express increases the response time of higher-priority express SR classes. This effect is expected, as the high-priority express ST traffic now suffers higher low-priority blocking because it cannot preempt the middle-priority express SR traffic classes. A result that does not align with the assumptions from Sec. 6.6 is the effect the preemption configuration has on the best-effort traffic: as observed in the results, the best-effort traffic is not affected. This could be due to the credit computation method, or to the message set configuration, which has a fixed set of periods, message lengths and offsets. This can be investigated in future work, with optimised port utilization and CBS calculations.


8.4. Scenario 4 - Express ST with various single express SR classes, and express ST with express LP - High number of traffic classes

Scenario 4 has six different sub-scenarios, each with a different choice of express SR traffic class out of the four SR classes. Scenario 4.1 shows the default configuration that supports the preemption mechanism, with only scheduled traffic being express. Scenarios 4.2, 4.3, 4.4 and 4.5 each add one shaped traffic class as an express class alongside the express ST class: Class A, Class B, Class C and Class D, respectively. Scenario 4.6 shows the effect of having one low-priority non-shaped traffic class as express.

All six sub-scenarios have the same traffic stream set configuration, shown in Table 20. The other parameters of the message set configuration have the following values:

• MaxNumberOfDifferentMessages = 9; the period and size of each traffic stream are uniformly chosen from the set {(500 µs, 62 bytes), (1 ms, 125 bytes), (2 ms, 250 bytes), (4 ms, 500 bytes), (8 ms, 1000 bytes), (12 ms, 1500 bytes), (16 ms, 2000 bytes), (24 ms, 3000 bytes), (48 ms, 6000 bytes)}, so that the load of each stream is approximately 1 Mbps and its utilization is 1%.

• MaxNumberOfDifferentOffsets = 6; the duration of the offset is uniformly chosen from the set {0, 0, 0, 100 µs, 200 µs, 0}.

Scenario 4 has the following switch configuration pattern:

• The portTransmissionRate is equal to 100 Mbps and is assigned to the variable networkSpeed.

• The time increment is 1 µs and is assigned to the variable timeSlot.

• The scenario has seven traffic classes, and this value is assigned to NumberOfQueues. Under the assumptions made in Section 6, only the highest-priority traffic class is scheduled (ST = 1) and uses the Strict Priority transmission selection algorithm; the two lowest-priority traffic classes are best-effort, also using the Strict Priority transmission selection algorithm, while the remaining traffic classes use the Credit-Based Shaper (CBS = 1).

• The scenario has four SR traffic classes, NumberOfCBS = 4. Based on the total traffic-class utilization, we calculate the utilization share assigned to each SR class and choose an idleSlope value in that range, lower-bounded by the utilization of the individual traffic class. The idleSlope values are equal and set to 15 Mbps.

• Preemption with the HR guard band is allowed: HoldAndRelease = 1 and preemptEnable = 1.

Traffic type   BK   BE   EE   CA   VI   VO   IC   NC
Quantity        3   11   10    2    5    7    3    5

Table 20: Scenario 4 - Traffic stream set configuration

8.4.1. Results

The results are intended to show the impact of setting one of the SR classes as express, along with the express ST class. The results should not depend on the message set. We have six different switch configurations, and each is run for 1000 simulations. In each simulation, the results are obtained over a period of 10 ms. The measured results are the response times of each message frame, grouped to show the response times per class. To eliminate the message-set dependency, an average response time per class has been calculated. The results of all six configurations for the seven traffic classes are shown in Table 22.


Queue   PR   ST   CBS   Express
                        4.1   4.2   4.3   4.4   4.5   4.6
Q0      0    0    0     0     0     0     0     0     0
Q1      1    0    0     0     0     0     0     0     1
Q2      2    0    1     0     0     0     0     1     0
Q3      3    0    1     0     0     0     1     0     0
Q4      4    0    1     0     0     1     0     0     0
Q5      5    0    1     0     1     0     0     0     0
Q6      6    1    0     1     1     1     1     1     1

Table 21: Scenario 4 - Queue (switch) configuration

Traffic   Response time (µs)
streams   4.1    4.2    4.3    4.4    4.5    4.6
Q0        4692   4692   4691   4648   4618   3167
Q1        2855   2852   2792   2851   2594   5243
Q2        1548   1544   1253   1532   2383   1562
Q3        1651   1646   697    1839   1703   1661
Q4        557    539    1308   563    610    570
Q5        235    421    300    240    280    248
Q6        45     45     45     45     45     45

Table 22: Scenario 4 - Results - Average response time per traffic class. Average response times over 1000 simulations lasting 10 ms each, HR enabled.

8.4.2. Discussion

Figure 29: Scenario 4 - Plot of results - Average response time per traffic class. The graph shows the average response time for each traffic class under each switch configuration.

For an easier and faster overview of the tabulated results, Figure 29 illustrates them graphically. A consistent effect can be observed in each sub-scenario: when an SR class becomes express, its response time increases, as concluded in the previous scenarios. This is visible in sub-scenarios 4.2 (red bar), 4.3 (yellow bar), 4.4 (purple bar) and 4.5 (green bar), which show the increased response time for Class A, Class B, Class C and Class D, respectively. It can also be observed that in a sub-scenario where an SR class has an increased response time, the first-lower-priority class has a decreased response time. The same behaviour can be seen in the last sub-scenario, 4.6 (light blue bar): the response time of the class that changed from preemptable to express (EE) increases, and that of the first-lower-priority class (BE) decreases. The effect is most influential in this sub-scenario. From the bar graph, we can observe that the effect is less noticeable in Scenario 4.4 and more noticeable in Scenario 4.3.

8.4.3. Conclusion - Scenario 4

We can conclude that setting an SR class as express increases its delays while simultaneously decreasing the response time of the first-lower-priority class, which takes advantage of the now-unoccupied port; before the first-higher-priority SR class became express, it would have occupied the port. This effect is even more noticeable for the low-priority non-shaped class, because this class makes no stream reservation and is not shaped, so its traffic shape is influenced only by the preemption configuration. SR classes still undergo the Credit-Based Shaper, making their traffic flow more robust to change.

If we look at the number of messages in each traffic class, from Table 20 and the results, we see that the mentioned effect was more noticeable in Scenario 4.3, where the preemption configuration of Class B is changed and Class B has the largest number of messages (12 messages). On the other hand, the effect is less noticeable in Scenario 4.4, where the preemption configuration of Class C is changed and Class C has the smallest number of messages (2 messages). We are not deriving any conclusions about how much the response time change varies from class to class, as the change is influenced not only by the class and the preemption configuration (switch configurations), but also by the traffic stream set and the credit computation method. To show this, we measured the results (Fig. 30) of exactly the same switch configuration cases with a different message set, shown in Table 23.

Traffic type   BK   BE   EE   CA   VI   VO   IC   NC
Quantity       14    7    5   15    1    1    8   15

Table 23: Scenario 4B - Traffic stream set configuration

Figure 30: Scenario 4B - Plot of results - Maximum response time per traffic class. The graph shows the maximum response time for each traffic class under each switch configuration.

The same correspondence can be observed between the number of messages in a class and how much its response time changes. Nevertheless, the response time always changes in the same direction, just not by the same amount. The change is influenced by the preemption configuration pattern, which is the focus of this thesis. In future work, the dependence on the message set configuration can be eliminated from the results.

9. Conclusions

In this work we reflect on the effects that a configuration with multiple express traffic classes has on network performance. The starting point of this study is that a description of multi-express network behaviour exists in the literature, without (to our knowledge) any qualitative or quantitative comparison to single-express configurations. Performance analysis has been done on single-express TSN networks, and the benefits the preemption mechanism brings to network performance are known from the start. This thesis is pioneering work to overcome the current limitation of a single express traffic class per bridge.

In Sec. 6 an analysis was made of the effects that the switch configuration and the preemption configuration have on performance. The research question asks how the performance of a time-sensitive network is affected by various configurations of traffic classes as express or preemptable. When the scheduled traffic in a network is set to be express, the benefits are well known and researched: making it express leads to better link utilisation. In a network that does not support time-triggered scheduling, setting the high-priority traffic as express decreases the response time of the high-priority traffic classes at the cost of the low-priority traffic classes.

The literature review, together with the simulative results obtained in this thesis, shows that the effects of multiple express traffic classes differ between networks that support both the Time-Aware Scheduler and the Credit-Based Shapers and networks that support only the Credit-Based Shapers. In networks where time-triggered scheduling is not supported, different preemption configurations do not change the total port utilisation significantly. By setting an SR class as express, its response time is decreased at the cost of the first-lower-priority class. There is no significant effect on the higher-priority classes or on the rest of the lower-priority classes.
In a network that supports the Time-Aware Shaper, setting non-scheduled traffic as express has the contrary effect. If an SR class is set to be express, its response time increases, while the first-lower-priority preemptable class has a reduced response time. Adding more non-scheduled traffic classes to the eMAC will ultimately annul the effect of the preemption mechanism; as the use of the reduced guard band increases, so does the port utilisation.

The simulative results are obtained using a custom-developed egress-port simulator that supports the multi-express traffic class configuration. The simulative results, presented in Sec. 8, support the assumptions on the effects of various preemption configurations of traffic classes. The section presents the outcomes of multiple hand-crafted scenarios, each with different switch and preemption configurations. Each variation has been evaluated by measuring the response and wait time of each frame transmitted in the switch, over 1000 simulation runs. The results also show that the dependency is not purely on the preemption configuration, which adds to the limitations of this thesis.

9.1. Future work

In future work we will have completely randomised and automated run-time generation of the message set, as well as of the idleSlope values and the GCL. The idleSlope computation will be optimised (e.g. using methods proposed in existing work). For the gating mechanism we will use improved GCL synthesis tools based on heuristic algorithms and optimization theory (using Array Theory Encoding). In the simulator implementation, overlapping of scheduled traffic is not resolved; for various simulation durations and different message sets such overlaps could occur. Future work will address this problem.

As future work we will also address a response time analysis for TSN combined with the frame preemption mechanism supporting multiple express classes. The simulation results in Sec. 8 give measured worst-case data, but they are limited to the stochastic behaviour of the 1000 simulated cases with a limited selection of messages; true worst-case analysis results will be computed, not measured. Future work will also extend the study of the effects of various preemption configurations to switches with multiple egress ports, and further to multi-bridge networks, with end-to-end delay analysis.


10. Acknowledgments

I would like to express my gratitude to my supervisor Mohammad Ashjaei for always being involved and for his continuous guidance and advice throughout the thesis. I would like to thank my examiner Saad Mubeen, who followed my work and provided feedback at crucial moments; thank you for the thorough review of this report. Last, but certainly not least, I would like to thank my parents and my brother for their love and support from afar.


References

[1] Wikipedia contributors. (2009) Audio video bridging - figure 3 – clocking hierarchy. [Online]. Available: https://en.wikipedia.org/wiki/File:Clock-Master.pdf

[2] A. Nasrallah, A. S. Thyagaturu, Z. Alharbi, C. Wang, X. Shao, M. Reisslein, and H. ElBakoury, “Ultra-low latency (ULL) networks: The IEEE TSN and IETF DetNet standards and related 5G ULL research,” IEEE Communications Surveys & Tutorials, vol. 21, no. 1, pp. 88–145, 2018.

[3] J. F. Nunamaker Jr., M. Chen, and T. D. M. Purdin, “Systems development in information systems research,” Journal of Management Information Systems, vol. 7, pp. 89–106, 1991.

[4] “IEEE standard for local and metropolitan area network–bridges and bridged networks,” IEEE Std 802.1Q-2018 (Revision of IEEE Std 802.1Q-2014), pp. 1–1993, July 2018.

[5] S. Mubeen, E. Lisova, and A. V. Feljan, “Timing predictability and security in safety-critical industrial cyber-physical systems: A position paper,” Applied Sciences, Special Issue “Emerging Paradigms and Architectures for Industry 4.0 Applications”, vol. 10, no. 3125, pp. 1–17, April 2020.

[6] D. Law, D. Dove, J. D’Ambrosia, M. Hajduczenia, M. Laubach, and S. Carlson, “Evolution of Ethernet standards in the IEEE 802.3 working group,” IEEE Communications Magazine, vol. 51, no. 8, pp. 88–96, August 2013.

[7] B. Hallberg, Networking: A Beginner’s Guide. Osborne, 2002.

[8] R. J. Kohlhepp, “The 10 most important products of the decade—number five: Kalpana EtherSwitch,” Network Computing, p. 117, 2000.

[9] R. de M. Valentim, A. Morais, G. Brandão, and A. Guerreiro, “A performance analysis of the Ethernet nets for applications in real-time: IEEE 802.3 and 802.1Q,” IEEE International Conference on Industrial Informatics (INDIN), 2008.

[10] O. Dolejs, P. Smolik, and Z. Hanzalek, “On the Ethernet use for real-time publish-subscribe based applications,” in IEEE International Workshop on Factory Communication Systems, 2004. Proceedings. IEEE, 2004, pp. 39–44.

[11] F. B. Carreiro, R. Moraes, J. A. Fonseca, and F. Vasques, “Real-time communication in unconstrained shared Ethernet networks: The virtual token-passing approach,” in 2005 IEEE Conference on Emerging Technologies and Factory Automation, vol. 1. IEEE, 2005, 8 pp.

[12] G. Patti and L. L. Bello, “Performance assessment of the IEEE 802.1Q in automotive applications,” in 2019 AEIT International Conference of Electrical and Electronic Technologies for Automotive (AEIT AUTOMOTIVE). IEEE, 2019, pp. 1–6.

[13] “IEEE standard for local and metropolitan area networks–bridges and bridged networks,” IEEE Std 802.1Q-2014 (Revision of IEEE Std 802.1Q-2011), pp. 1–1832, 2014.

[14] “IEEE standard for local and metropolitan area networks – bridges and bridged networks - amendment 25: Enhancements for scheduled traffic,” IEEE Std 802.1Qbv-2015 (Amendment to IEEE Std 802.1Q-2014 as amended by IEEE Std 802.1Qca-2015, IEEE Std 802.1Qcd-2015, and IEEE Std 802.1Q-2014/Cor 1-2015), pp. 1–57, 2016.

[15] “IEEE standard for local and metropolitan area networks – bridges and bridged networks – amendment 26: Frame preemption,” IEEE Std 802.1Qbu-2016 (Amendment to IEEE Std 802.1Q-2014), pp. 1–52, Aug 2016.

[16] D. Thiele and R. Ernst, “Formal worst-case performance analysis of time-sensitive ethernet with frame preemption,” in 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA), 2016, pp. 1–9.


[17] P. Doyle, “Introduction to real-time ethernet ii,” The Extension—A Technical Supplement to Control Network, vol. 5, no. 4, 2004.

[18] H.-T. Lim, D. Herrscher, M. J. Waltl, and F. Chaari, “Performance analysis of the IEEE 802.1 Ethernet audio/video bridging standard,” in SimuTools, 2012, pp. 27–36.

[19] M. Ashjaei, S. Mubeen, J. Lundbäck, M. Gålnander, K. Lundbäck, and T. Nolte, “Modeling and timing analysis of vehicle functions distributed over switched Ethernet,” in IECON 2017 - 43rd Annual Conference of the IEEE Industrial Electronics Society, Oct 2017, pp. 8419–8424.

[20] J. Qiu, “Performance analysis and system modelling of Ethernet-based in-vehicle communication,” Master’s thesis, KTH, School of Information and Communication Technology (ICT), 2017.

[21] S. Mubeen, M. Ashjaei, and M. Sjödin, “Holistic modeling of time sensitive networking in component-based vehicular embedded systems,” in 45th Euromicro Conference on Software Engineering and Advanced Applications (SEAA), 2019, pp. 131–139.

[22] L. L. Bello, R. Mariani, S. Mubeen, and S. Saponara, “Recent advances and trends in on-board embedded and networked automotive systems,” IEEE Transactions on Industrial Informatics, vol. 15, no. 2, pp. 1038–1051, 2019.

[23] H. Suljić and M. Muminović, “Performance study and analysis of time sensitive networking,” Ph.D. dissertation, June 2019.

[24] L. Zhao, P. Pop, and S. Craciunas, “Worst-case latency analysis for IEEE 802.1Qbv time sensitive networks using network calculus,” IEEE Access, vol. PP, pp. 1–1, July 2018.

[25] M. Ashjaei, G. Patti, M. Behnam, T. Nolte, G. Alderisi, and L. Lo Bello, “Schedulability analysis of Ethernet audio video bridging networks with scheduled traffic support,” Real-Time Systems, pp. 1–52, 2017.

[26] R. S. Oliver, S. S. Craciunas, and W. Steiner, “IEEE 802.1Qbv gate control list synthesis using array theory encoding,” in 2018 IEEE Real-Time and Embedded Technology and Applications Symposium (RTAS), 2018.

[27] W.-K. Jia, G.-H. Liu, and Y.-C. Chen, “Performance evaluation of IEEE 802.1Qbu: Experimental and simulation results,” in 38th Annual IEEE Conference on Local Computer Networks. IEEE, 2013, pp. 659–662.

[28] J. Kim, B. Y. Lee, and J. Park, “Preemptive switched Ethernet for real-time process control system,” in 2013 11th IEEE International Conference on Industrial Informatics (INDIN). IEEE, 2013, pp. 171–176.

[29] D. Hellmanns, J. Falk, A. Glavackij, R. Hummen, S. Kehrer, and F. Dürr, “On the performance of stream-based, class-based time-aware shaping and frame preemption in TSN,” in IEEE International Conference on Industrial Technology (ICIT), 2020.

[30] T. Steinbach, H. D. Kenfack, F. Korf, and T. C. Schmidt, “An extension of the OMNeT++ INET framework for simulating real-time Ethernet with high accuracy,” in Proceedings of the 4th International ICST Conference on Simulation Tools and Techniques, 2011, pp. 375–382.

[31] D. Hellmanns, “NeSTiNg: A network simulator for time-sensitive networking,” 2018. [Online]. Available: http://www.ieee802.org/1/files/public/docs2018/new-hellmanns-tsn-simulator-0918-v03.pdf

[32] J. Falk, D. Hellmanns, B. Carabelli, N. Nayak, F. Dürr, S. Kehrer, and K. Rothermel, “NeSTiNg: Simulating IEEE time-sensitive networking (TSN) in OMNeT++,” in Proceedings of the 2019 International Conference on Networked Systems (NetSys), Garching b. München, Germany, Mar. 2019.

[33] J. N. Amaral, “About computing science research methodology,” Citeseer, 2011.


[34] J. Farkas, “Introduction to IEEE 802.1: Focus on the time-sensitive networking task group,” Vancouver, BC, Canada, Mar. 2017.

[35] S. Rinaldi, P. Ferrari, N. M. Ali, and F. Gringoli, “IEC 61850 for micro grid automation over heterogeneous network: Requirements and real case deployment,” in 2015 IEEE 13th International Conference on Industrial Informatics (INDIN). IEEE, 2015, pp. 923–930.

[36] “IEEE standard for local and metropolitan area networks– virtual bridged local area networks amendment 12: Forwarding and queuing enhancements for time-sensitive streams,” IEEE Std 802.1Qav-2009 (Amendment to IEEE Std 802.1Q-2005), pp. 1–72, 2010.

[37] “IEEE standard for Ethernet amendment 5: Specification and management parameters for interspersing express traffic,” IEEE Std 802.3br-2016 (Amendment to IEEE Std 802.3-2015 as amended by IEEE Std 802.3bw-2015, IEEE Std 802.3by-2016, IEEE Std 802.3bq-2016, and IEEE Std 802.3bp-2016), pp. 1–58, Oct 2016.


Appendix A

The documentation of all the IEEE standards may be obtained free of charge from the IEEE website.

AVB Working Group (802.1) standards:

• 802.1AS-2011/Cor 1-2013 - IEEE Standard for Local and metropolitan area networks– Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks– Corrigendum 1: Technical and Editorial Corrections

• 802.1AS-2011/Cor 2-2015 - IEEE Standard for Local and metropolitan area networks– Timing and Synchronization for Time-Sensitive Applications in Bridged Local Area Networks– Corrigendum 2: Technical and Editorial Corrections

TSN Working Group (802.1Q) standards:

• 802.1Q-2018 - IEEE Standard for Local and metropolitan area networks–Bridges and Bridged Networks [4]. This standard was first published as IEEE Std 802.1Q-1998, making use of the concepts and mechanisms of LAN Bridging introduced by IEEE Std 802.1D and defining additional mechanisms that allow the implementation of Virtual Bridged Networks; it is constantly being revised. The cited edition is the 2018 revision, incorporating both the IEEE Std 802.1Qbv-2015 and IEEE Std 802.1Qbu-2016 amendments to the 2014 revision. The amendments to IEEE Std 802.1Q given below provide enhancements to the original 802.1Q standard.

• 802.1Qav-2009 - IEEE Standard for Local and metropolitan area networks– Virtual Bridged Local Area Networks Amendment 12: Forwarding and Queuing Enhancements for Time-Sensitive Streams [36].

• 802.1Qca-2015 - IEEE Standard for Local and metropolitan area networks— Bridges and Bridged Networks - Amendment 24: Path Control and Reservation

• 802.1Qbv-2015 - IEEE Standard for Local and metropolitan area networks – Bridges and Bridged Networks - Amendment 25: Enhancements for Scheduled Traffic [14].

• 802.1Qbu-2016 - IEEE Standard for Local and Metropolitan Area Networks—Bridges and Bridged Networks — Amendment 26: Frame Preemption [15].

Ethernet Working Group (802.3) standards:

• 802.3br-2016 - IEEE Standard for Ethernet Amendment 5: Specification and Management Parameters for Interspersing Express Traffic [37].

Note that the listed standards and their amendments are just a part of a bigger group of standards that is constantly being revised and growing. The listed standards are the ones needed for a better understanding of this thesis. The rest of the amendments introduced to this day can be found in the introductions of the following documents: IEEE Std 802.1Q, IEEE Std 802.3br.


Appendix B

TSNS is a time-sensitive network simulator developed as a tool for performance analysis of the TSN preemption mechanism. TSNS supports the CBS, the TAS and the preemption mechanism for multiple express traffic classes. It was developed in Microsoft Visual Studio 2019, version 16.5.29920.165. The simulator solution is located in a zip folder called TSNS. The solution is launched from the driver file, main.c, located in the Source Files folder together with 10 other source files, each of which implements one of the TSN mechanisms. main.c uses these mechanisms via their header files, located in the Header Files folder. The source files, besides the driver, are:

• switch configuration.c

• message generate.c

• gcl generate.c

• table.c

• queuing.c

• credit based shaper.c

• gating mechanism.c

• transmission preemption.c

• preemption mechanism.c

• write results.c

There are 10 header files with the same names as the source files, plus one header file called tsns config.h, which holds all the global variables, defined constants, structures and types. The overall software design is illustrated in Fig. 23.

To configure the network, the user has to set the values of the following variables (location: tsns config.h):

• MaxNumberOfDifferentMessages

• MaxNumberOfDifferentOffsets

• simNumber

• simTime

• HoldAndRelease

• preemptEnable

There are multiple options for how the rest of the network and the traffic streams can be configured:

(1) Offline scheduling - manually writing the GCL

(2) Run-mode scheduling - setting specific classes as scheduled

(3) Manually writing each traffic stream offline

(4) Manually writing the message set configuration offline with run-time message generation

(5) Manually writing the idleSlope values offline

(6) Run-time idleSlope calculation based on the message set


A) If the user configures the gating mechanism using method (1), the GCL (location: gcl generate.c) has to be written before run-time. In the driver file the GCL is allocated, the function makeGCL() is called to set the GCL values, and checkGCL() is called every clock tick to update the gate states. If method (1) is used, where the GCL is created offline, the user is assumed to know the message set and its timing characteristics, so method (1) is used together with method (3). To write the messages manually (location: message generate.c), the user writes the ID, period, length, priority and initial offset of each individual traffic stream. In the driver, memory is allocated for the list of messages, and calling the function messageInit() sets the values of the messages at run-time.

B) If the user does not configure the message set offline, it is still necessary to determine the number of messages for each class, and the sets of periods, lengths and initial offsets from which the messages will be generated at run-time. To configure these sets, the user sets (location: message generate.c):

(1) BE nbr, BK nbr, EE nbr, CA nbr, VI nbr, VO nbr, IC nbr, NC nbr

(2) periodSet

(3) lengthSet

(4) offsetSet

In this method the GCL cannot be written offline, so the user has to set which traffic class is scheduled (location: switch configure.c), and the simulator will schedule these traffic streams at run-time. In the driver, memory is again allocated for the messages, and calling the function msgGenerate() generates new messages at run-time from the pre-determined sets of messages and numbers of messages.

C) There are two ways of setting the idleSlope values, and both can be used with the offline method (A) and the run-time method (B). To set the values manually, they are written offline (location: credit based shaper.c) and the function idleSendInit() is called from main. The other option is to call the function idleSlopeAut() and let the simulator calculate the slope values.

Lastly, the user sets the traffic class configuration (location: switch configure.c):

(1) timeSlot

(2) networkSpeed

(3) NumberOfQueues

(4) which of the queues support the CBS, and which are express/preemptable

Calling the function configure() from main sets these values at run-time.

In the file write results.c the user has to set the paths of the .txt files where the simulation results will be saved. The simulator tool produces several different types of results, saved in separate text files:

(1) Results

(2) Messages

(3) Timeline

(4) TimeLogMessages

(5) TimeLogQueues
