Performance evaluation of Linux CAN-related system calls

Michal Sojka, Pavel Píša, Zdeněk Hanzálek
Czech Technical University in Prague, Faculty of Electrical Engineering
Technická 2, 121 35 Praha 6, Czech Republic
{sojkam1,pisa,hanzalek}@fel.cvut.cz

Abstract

The Linux kernel contains a full-featured CAN bus networking subsystem. It can be accessed from applications via several different interfaces. This paper compares the performance of those interfaces and tries to answer the question of which interface is most suitable for capturing traffic from a large number of CAN buses. The motivation for this work is the development of various CAN traffic analyzers and intrusion detection systems. Besides traditional UNIX interfaces, we also investigate the applicability of the recently introduced "low-latency sockets" to the Linux CAN subsystem. Although the overhead of Linux in general is quite large, some interfaces offer significantly better performance than others.

1. Introduction

Controller Area Network (CAN) is still the most widespread automotive networking standard today, even in the most recent vehicle designs. Although more modern solutions are available on the market [1,2], CAN represents a reliable, cheap and proven solution, making it the preferred choice in the industry. Even though newer technologies such as FlexRay or Ethernet are used more and more, it seems unlikely that CAN is going to be phased out in the foreseeable future.

The Linux kernel contains a full-featured CAN bus networking subsystem. Together with the can-utils package, it represents a versatile and user-friendly way of working with CAN networks. The Linux CAN subsystem is based on the Linux networking subsystem and shares a lot of code with it. This is a source of significant overhead [3], because the networking subsystem is tuned for high-throughput Ethernet networks with big packets, not for CAN frames of at most eight bytes. For example, one sk_buff structure, which represents one CAN frame in the Linux kernel, has a size of 448 bytes on our system. Nevertheless, due to the user friendliness and widespread availability of Linux, its CAN subsystem is used a lot.

Recently, after information about CAN bus-related attacks on automotive electronic control units was published [4,5], automobile manufacturers started taking cyber security more seriously. Not only have message authentication techniques for the CAN bus been proposed [6,7,8], but various other techniques to increase automobile security are being discussed. One possibility is to use intrusion detection systems (IDS) that analyze CAN bus traffic and try to detect anomalies. Before developing an IDS, one needs a way to efficiently receive CAN frames from the whole car so that the IDS can process them. It is natural to use Linux for prototyping the IDS, and the goal of this paper is to evaluate which Linux system calls have the lowest overhead and are thus suitable for logging a large number of CAN interfaces.

The contribution of this paper is threefold. First, we present a comprehensive performance evaluation of multiple Linux system calls that allow an application to interact with CAN networks. Second, we investigate the applicability of the recently introduced "low-latency sockets" to the Linux CAN subsystem. Third, we compare the overhead of the whole Linux CAN subsystem with the raw hardware performance represented by the lightweight RTEMS executive.

This paper is structured as follows. Section 2 provides a brief introduction to the internals of the Linux networking stack. We describe our methodology for system call evaluation in Section 3, followed by results and discussion in Section 4. The paper is concluded in Section 5.

2. Linux networking stack

The architecture of the Linux networking stack is depicted in Figure 1. We describe the main components of the stack that are necessary for understanding the remainder of this paper.

Figure 1: Linux networking stack architecture. Solid green arrows represent transmit data flow, red ones receive data flow. Dotted arrows represent notifications.

First, we describe the transmission path (green arrows on the left). The application initiates the transmission process by invoking the send() or write() system call on a socket. Both calls end up in the socket layer; in the case of CAN raw sockets, this is the raw_sendmsg() function in can/raw.c. This function allocates an sk_buff structure, copies the CAN frame into it and passes the result to the lower layer. The protocol layer (af_can.c) immediately forwards the received sk_buff to the queuing discipline. If the device driver can accept a CAN frame for transmission, the queuing discipline just passes the frame to the driver, which instructs the CAN controller (hardware) to transmit the frame. Then send() returns. If, on the other hand, the device is busy, the queuing discipline stores the frame and send() returns at that point. Later, when the device is ready, the device driver asks the queuing discipline to supply a frame for transmission.

The reception path is shown with red arrows on the right. When a CAN frame is received by the CAN controller, it signals an interrupt request (IRQ) to the CPU, which is handled by the device driver in the interrupt service routine (ISR). The ISR runs in the so-called hard-IRQ context of the Linux kernel. It checks the reason for the interrupt and, in the case of a newly received frame, asks Linux to schedule a so-called NET_RX soft-IRQ for some time later. In a normal situation, the soft-IRQ is executed just after the ISR exits. It calls the device driver's poll() function, which reads the CAN frames out of the hardware, stores them into sk_buffs and passes them to the protocol layer. The protocol layer clones the received sk_buff in order to deliver it to every socket interested in receiving this frame. Then the cloned sk_buff is passed to the socket layer, which stores the frame in the socket queue. When an application invokes the read() or recv() system call, it receives the frame from the socket queue.
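To make the transmission path concrete, the following minimal user-space sketch opens a CAN raw socket, binds it to one interface and transmits a single frame with write(). It is an illustration only, not code from the paper; the interface name "can0" is an assumption and error handling is kept to a minimum.

    /* Minimal SocketCAN transmit sketch (assumes an existing, up "can0"). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main(void)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);   /* CAN raw socket */
        if (s < 0) { perror("socket"); return 1; }

        struct ifreq ifr;
        strcpy(ifr.ifr_name, "can0");                /* interface name is an assumption */
        if (ioctl(s, SIOCGIFINDEX, &ifr) < 0) { perror("ioctl"); return 1; }

        struct sockaddr_can addr;
        memset(&addr, 0, sizeof(addr));
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        if (bind(s, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            perror("bind"); return 1;
        }

        struct can_frame frame;
        memset(&frame, 0, sizeof(frame));
        frame.can_id  = 0x123;
        frame.can_dlc = 2;
        frame.data[0] = 0xde;
        frame.data[1] = 0xad;

        /* write() enters raw_sendmsg(); the kernel allocates an sk_buff,
         * copies the frame into it and hands it down to the queuing
         * discipline and the device driver as described above. */
        if (write(s, &frame, sizeof(frame)) != sizeof(frame))
            perror("write");

        close(s);
        return 0;
    }

A frame delivered by the reception path can be read from the same kind of socket symmetrically with read(s, &frame, sizeof(frame)), which takes it from the socket queue.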
3. Methodology

We developed two methods for system call performance evaluation. The first serves mainly for comparing system call overhead and does not require any special hardware setup. The second method takes into account all components involved in the communication, such as device drivers, the protocol layer and system calls, but requires a complex hardware setup.

3.1. Virtual CAN interface method

This method can be used for a performance comparison of the read() system call with recvmmsg() and of write() with sendmmsg(). The difference between these system calls is that read() and write() operate on a single message (CAN frame), whereas their mmsg counterparts can operate on multiple messages (hence the name). This method requires the use of the virtual CAN interface (vcan). Therefore, it evaluates only the overhead of the system calls and of the socket and protocol layers. It does not take into account the overhead of device drivers and queuing disciplines.

Figure 2: Virtual CAN method.

The method works as depicted in Figure 2. Two raw CAN sockets are created, one for transmission and one for reception, and both are bound to the same virtual CAN interface. For the reception socket, we set the maximum size of the reception socket queue so that it can hold all frames that we plan to send. This is accomplished by calling setsockopt() with the SO_RCVBUFFORCE option. Then we send the planned number of CAN frames to the transmission socket and measure the time of this operation. Every send operation involves copying the CAN frame from the application to the kernel, creating an sk_buff structure representing the sent frame and storing it in the reception queue of the receiving socket (green arrow in the figure). The latter step is performed by af_can.c when it detects that the socket is bound to the virtual CAN interface. Once all frames are sent, we start reading them out of the reception socket and measure how long the reading takes (red arrow).

The source code of the application implementing this method is available in our repository¹.

¹ https://rtime.felk.cvut.cz/gitweb/can-benchmark.git/blob/HEAD:/recvmmsg/can_recvmmsg.c
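The simplified sketch below shows the core of this method; it is not the actual benchmark from the repository. It enlarges the reception queue with SO_RCVBUFFORCE, sends a batch of frames into a virtual CAN interface and drains them with a single recvmmsg() call. It assumes a vcan0 interface has been created and brought up (ip link add dev vcan0 type vcan; ip link set up vcan0) and that the process has CAP_NET_ADMIN; the frame count and buffer size are arbitrary, and the timing around write() and recvmmsg() is omitted.

    /* Simplified sketch of the virtual CAN interface method (not the
     * authors' benchmark). Assumes a "vcan0" interface and CAP_NET_ADMIN. */
    #define _GNU_SOURCE                   /* for recvmmsg() */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    #define NFRAMES 64                    /* arbitrary number of frames */

    static int open_can(const char *ifname)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr;
        strcpy(ifr.ifr_name, ifname);
        ioctl(s, SIOCGIFINDEX, &ifr);
        struct sockaddr_can addr;
        memset(&addr, 0, sizeof(addr));
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        return s;
    }

    int main(void)
    {
        int tx = open_can("vcan0");
        int rx = open_can("vcan0");

        /* Enlarge the RX socket queue so it can hold all frames we send.
         * The size is a rough guess; SO_RCVBUFFORCE needs CAP_NET_ADMIN. */
        int rcvbuf = NFRAMES * 1024;
        setsockopt(rx, SOL_SOCKET, SO_RCVBUFFORCE, &rcvbuf, sizeof(rcvbuf));

        /* Send the planned number of frames (timed in the real benchmark). */
        struct can_frame frame;
        memset(&frame, 0, sizeof(frame));
        frame.can_id  = 0x123;
        frame.can_dlc = 1;
        for (int i = 0; i < NFRAMES; i++) {
            frame.data[0] = (unsigned char)i;
            write(tx, &frame, sizeof(frame));
        }

        /* Read everything back with one recvmmsg() call (also timed). */
        struct can_frame rxf[NFRAMES];
        struct iovec    iov[NFRAMES];
        struct mmsghdr  msgs[NFRAMES];
        memset(msgs, 0, sizeof(msgs));
        for (int i = 0; i < NFRAMES; i++) {
            iov[i].iov_base            = &rxf[i];
            iov[i].iov_len             = sizeof(rxf[i]);
            msgs[i].msg_hdr.msg_iov    = &iov[i];
            msgs[i].msg_hdr.msg_iovlen = 1;
        }
        int n = recvmmsg(rx, msgs, NFRAMES, 0, NULL);
        printf("received %d frames\n", n);

        close(tx);
        close(rx);
        return 0;
    }

Replacing the single recvmmsg() call with NFRAMES individual read() calls gives the single-message variant that this method compares against.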
3.2. Gateway-based method

This method evaluates the CAN system call performance by creating a Linux-based CAN gateway and measuring how long it takes the gateway to route a frame from one CAN bus to another. We use the same hardware for the gateway but change the software that implements it. This method evaluates the performance of all network stack layers, i.e., the device driver, queuing discipline, protocol layer and sockets.

Test bed. The test bed used for this method is depicted in Figure 3. It consists of a PC with a Kvaser PCI quad-CAN SJA1000-based card and of the CAN gateway running on an MPC5200 (PowerPC) system. The PC and the gateway (GW) are connected via a dedicated Ethernet network ("crossed" cable), which is used only for booting the GW over the network and not when running the experiments. The gateway is implemented either in Linux or in RTEMS.

Figure 3: Testbed configuration.

Figure 4: Definition of latency.

Traffic generator. In our experiments, we generated the traffic by periodically sending CAN frames. The period differed depending on the experiment, but in general it
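As an illustration of the per-frame work the gateway of Section 3.2 has to perform, the sketch below shows a trivial user-space store-and-forward loop between two CAN interfaces. It is only an illustration of the routing step whose latency this method measures; the Linux and RTEMS gateway implementations benchmarked in the paper are separate programs, and the interface names can0 and can1 are assumptions.

    /* Illustrative user-space CAN gateway loop: every frame received on
     * "can0" is retransmitted on "can1". A sketch only, not the gateway
     * implementation evaluated in the paper. */
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    static int open_can(const char *ifname)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
        struct ifreq ifr;
        strcpy(ifr.ifr_name, ifname);
        ioctl(s, SIOCGIFINDEX, &ifr);
        struct sockaddr_can addr;
        memset(&addr, 0, sizeof(addr));
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));
        return s;
    }

    int main(void)
    {
        int in  = open_can("can0");   /* bus the frames arrive on     */
        int out = open_can("can1");   /* bus the frames are routed to */
        struct can_frame frame;

        for (;;) {
            /* read() blocks until the reception path queues a frame ... */
            if (read(in, &frame, sizeof(frame)) != sizeof(frame))
                break;
            /* ... and write() pushes it down the transmit path of the
             * other bus; this receive-plus-send step is what the
             * gateway latency measurement captures. */
            if (write(out, &frame, sizeof(frame)) != sizeof(frame))
                break;
        }
        return 1;
    }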