GENIVI Alliance

GENIVI Document TR00029

GENIVI Lifecycle Domain Overview

Technical Report

Version 1.0

2014-08-14

Sponsored by: GENIVI Alliance

Abstract: This document is an export of the GENIVI Wiki Lifecycle Domain. It provides an overview of the architecture and requirements for the complete Lifecycle Domain.

Keywords: GENIVI, Lifecycle

License: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.


Copyright © 2014, Company ABC, Company XYZ. All rights reserved.

The information within this document is the property of the copyright holders and its use and disclosure are restricted. Elements of GENIVI Alliance specifications may be subject to third party intellectual property rights, including without limitation, patent, copyright or trademark rights (and such third parties may or may not be members of GENIVI Alliance). GENIVI Alliance and the copyright holders are not responsible and shall not be held responsible in any manner for identifying, failing to identify, or for securing proper access to or use of, any or all such third party intellectual property rights.

GENIVI and the GENIVI Logo are trademarks of GENIVI Alliance in the U.S. and/or other countries. Other company, brand and product names referred to in this document may be trademarks that are claimed as the property of their respective owners.

This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. The full license text is available at http://creativecommons.org/licenses/by-sa/4.0

The above notice and this paragraph must be included on all copies of this document that are made.

GENIVI Alliance
2400 Camino Ramon, Suite 375
San Ramon, CA 94583, USA


Revision History

Document revision history

Date | Version | Author | Description
2014-08-14 | 1.0 | David Yates | Initial revision exported from the GENIVI Wiki on the 14th Sept 2014


Contents

1. SysInfraEGLifecycleDef
1.1 SysInfraEGLifecycleClosedItems
1.2 SysInfraEGLifecycleExecPrjctBootManager
1.2.1 Node Startup Controller Description
1.2.2 SysInfraEGLifecycleExecPrjctBootManagerArchitectureOverview
1.2.3 SysInfraEGLifecycleExecPrjctBootManagerDBusInterfaces
1.2.3.1 ReviewBootManagerDBusInterfaces
1.2.4 SysInfraEGLifecycleExecPrjctBootManagerDependencies
1.2.5 SysInfraEGLifecycleExecPrjctBootManagerExcaliburSchedule
1.2.6 SysInfraEGLifecycleExecPrjctBootManagerFunctionalScope
1.2.7 SysInfraEGLifecycleExecPrjctBootManagerInternalArchitecture
1.2.8 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120611
1.2.9 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120618
1.2.10 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120625
1.2.11 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120702
1.2.12 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120709
1.2.13 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120716
1.2.14 SysInfraEGLifecycleExecPrjctBootManagerMinutes20120813
1.2.15 SysInfraEGLifecycleExecPrjctBootManagerRequirements
1.2.16 SysInfraEGLifecycleExecPrjctBootManagerTestScenarios
1.2.17 SysInfraEGLifecycleExecPrjctBootManagerTrackingInJIRA
1.2.18 SysInfraEGLifecycleExecProjectBootManagerLegacyApps
1.2.19 SysInfraEGLifecycleExecProjectBootManagerSystemdInteraction
1.2.20 SysInfraEGMinutes20120529
1.3 SysInfraEGLifecycleExecPrjctNodeResMgmt
1.4 SysInfraEGLifecycleExecPrjctNSM
1.4.1 Node State Manager Description
1.5 SysInfraEGLifecycleSubsysDoc
1.5.1 SysInfraEGLifecycleConcept
1.5.1.1 SysInfraEGLifecycleBootMgmtSOW
1.5.2 SysInfraEGLifecycleConceptBootmanagementSplit
1.5.2.1 SysInfraEGLifecycleConceptBootmanagementShutdown
1.5.2.2 SysInfraEGLifecycleConceptBootmanagementStartup
1.5.3 SysInfraEGLifecycleConceptCgroupsInput
1.5.4 SysInfraEGLifecycleConceptMultiNode
1.5.5 SysInfraEGLifecycleConceptRefinement
1.5.6 SysInfraEGLifecycleConceptSystemBorder
1.5.7 SysInfraEGLifecycleDesignStructDef
1.5.8 SysInfraEGLifecycleDesignWhiteBoxUcRealization
1.5.9 SysInfraEGLifecycleFABlackBoxUcRealization
1.5.10 SysInfraEGLifecycleNSMData
1.5.11 SysInfraEGLifecycleRAUsecases
1.5.12 SysInfraEGLifecycleRequirements
1.5.13 SysInfraEGSystemd
1.5.13.1 SysInfraEGSystemdPOC

SysInfraEGLifecycleDef

Subdomain Lifecycle

- Scope of work item
- Navigation within this subdomain
- Management of work item
- Execution Projects
- Risks, issues and open questions
- Subdomain documentation
- List of all persons contributed to EG SI Lifecycle Wiki
- Overview of Wiki pages
- Presentations for Lifecycle
  - Dublin
  - San Jose
  - Paris
  - Shanghai
  - Barcelona
  - San Diego
- Misc. Presentations

Scope of work item

The scope of this work package is the full development activity for Lifecycle. That means starting with a study of the past/history of this package, specifying the system and software architecture, and doing the design and implementation of the common parts, including integration and test.

Finally, the Lifecycle system architecture specification will be located in the GENIVI compliance architecture model. The software and design specifications can be found in the implementation model section of the GENIVI model.

In addition, the interfaces will be delivered as header files, independent of whether there is a common implementation or not.

The functional scope of Lifecycle:

The current activity will cover:

- Power, node state, boot and resource management, specified in detail.
- Interfaces to/of thermal and supply management. Thermal and supply management themselves will be handled as black boxes only.

Additionally, the architecture takes different deployments into account (single- and multi-node architectures). Please find more details in the concept description and the following pages.

Navigation within this subdomain

At the bottom of each page, linked arrows help you navigate within this subdomain. They provide the recommended reading order (the "red line") through the pages.

Management of work item

If you have remarks or concerns, see a problem, or simply have a question, please enter your item in the first table. If you want to raise an action, assigned or not, please use the second table.

These tables reflect our main status / activities.

Execution Projects

- Node Startup Controller: wiki page of the Node Startup Controller project

- Node State Management: wiki page of the Node State Management project

Risks, issues and open questions

The column options are: Category: Risk, Issue, Question

Columns: ID | Category | Submitter (Name, Company) | Topic | Answer | Replied by (Name, Company)

1 | Question | Torsten Hildebrand, Continental
Topic: To release the compliant lifecycle requirements it is necessary to know who are the mandatory reviewers.
Answer: Alexander Wenzel (BMW), Manfred Bathelt (BMW), Christian Muck (BMW), Fabien Hernandez (PSA)
Replied by: Christian Muck, BMW

2 | Question | David Yates, Continental
Topic: In the requirements the term "Middleware" is used - what does this refer to in the context of GENIVI?
Answer: The term "middleware" can be replaced with "GENIVI". It is also assumed that middleware doesn't provide a GUI.
Replied by: Holger Behrens, Wind River

3 | Question | David Yates, Continental
Topic: What is the GENIVI definition of the terms "Consumer" and "Device" as used in the Requirements?
Answer: Consumers - interpretation depends on the HW partitioning in question. In case there is an automotive/vehicle controller, or some parts of the PSM are running on the vehicle controller (OSEK), while platform state handling is done on both CPUs (OSEK and Linux) by a bunch of SW components - aka. consumed Device - here my interpretation is that for example the PSM part running on the vehicle controller could potentially power-up/down the CPU running Linux (aka. system), or the CD/DVD drive (aka. device), based on platform state.
Replied by: Holger Behrens, Wind River

4 | Information | AMM Paris / David Yates, Continental
Topic: During the presentation on the Node State Manager (AMM, Paris) it was suggested by a number of listeners that we could reduce IPC overhead by having a static shutdown consumer registration process.
Answer: Following the meeting we have looked into whether this would be possible and come to the conclusion that currently this is not possible, as we use the ordering of the registration during start-up to define the ordering for shutdown. Perhaps in the future something can be improved here with a static ordering for core components, but currently we do not believe that the extra complexity of doing this is warranted to save one D-Bus message from each consumer during the start-up process.

5 | Information | AMM Paris / David Yates, Continental
Topic: During conversations with Lennart Poettering during the AMM, he suggested we look at generators to see whether they are of use to us.
Answer: http://www.freedesktop.org/wiki/Software/systemd/Generators - We have looked through the documentation for the Generators and can see that they are very useful for certain topics. However, we currently do not see a use case where they can assist in the current Lifecycle concept. Theoretically we could have used generators to handle the LUC during the start-up process by dynamically altering the target files based upon the LUC persistency information before the systemd dependency graph is created. However, this would have required a similar amount of effort as is needed to do this in the Node Startup Controller as currently defined and would have delayed the starting of the Mandatory components.

Therefore we feel that we will leave the concept as it is for now, but we will consider the use of generators in the future.

Jira item lists

In general all tasks and activities are maintained within Jira.

Tasks list

This list of items gives an overview of the main activities (tasks) within the Lifecycle subdomain, including their status and results. It therefore gives a quick picture of the subdomain status (what is done, what is ongoing, what is next).

Sub-tasks of User Management (18 issues)

Key | Summary | Planned start | Due | Assignee | Status
GT-5 | Lifecycle - System Architecture | | Aug 05, 2011 | David Yates | Reopened
GT-1297 | Lifecycle - Boot Manager - Freeze SW Platform Req. and UC Realizations | | Dec 21, 2011 | David Yates | Resolved
GT-6 | Lifecycle - Node Startup Controller - Component Realization and Compliance | | Jan 25, 2012 | David Yates | Resolved
GT-1299 | Lifecycle - Node State Mgmt - Freeze Concept, SW Platform Req. and UC Realizations | | May 04, 2012 | David Yates | Resolved
GT-2065 | Milestone: Deliver implementation of phase 1 functionality to GENIVI for approval | 26/Jun/12 | Jun 26, 2012 | Jannis Pohlmann | Resolved
GT-2057 | Technical specifications and phase 1 of implementation | 05/Jun/12 | Jun 26, 2012 | Jannis Pohlmann | Resolved
GT-2066 | Incremental development of status reporting, startup cancellation, startup prioritization and launching of applications at runtime | 27/Jun/12 | Jul 17, 2012 | Jannis Pohlmann | Resolved
GT-2068 | Milestone: Deliver a feature-complete implementation of the Boot Manager including phase 2 functionality to GENIVI for approval | 24/Jul/12 | Jul 24, 2012 | Jannis Pohlmann | Resolved
GT-2067 | Improvements of the code and product quality by means of testing, reviewing and cleaning up | 17/Jul/12 | Jul 24, 2012 | Jannis Pohlmann | Resolved
GT-2058 | Phase 2 of implementation and bug fixing/quality improvements | 27/Jun/12 | Jul 24, 2012 | Jannis Pohlmann | Resolved
GT-2069 | Write manual pages and API documentation | 25/Jul/12 | Jul 27, 2012 | Unassigned | Resolved
GT-1300 | Lifecycle - Node State Mgmt - Freeze final Architecture and SOW | | Jul 27, 2012 | David Yates | Resolved
GT-2070 | Make testing the unit startup implementation of the Boot Manager reproducible | 30/Jul/12 | Aug 03, 2012 | Unassigned | Resolved
GT-2071 | Prepare compliance documents required for delivery into the GENIVI BIT team | 06/Aug/12 | Aug 10, 2012 | Unassigned | Resolved
GT-2072 | Packaging the Boot Manager and aiding BIT team with the integration into the GENIVI release | 13/Aug/12 | Aug 20, 2012 | Jannis Pohlmann | Resolved
GT-2059 | Documentation and integration | 25/Jul/12 | Aug 27, 2012 | Unassigned | Resolved
GT-1301 | Lifecycle - Node State Manager - Component Realization and Compliance | | Jan 16, 2013 | David Yates | Resolved
GT-938 | Lifecycle - Boot Manager graphical tooling | | Aug 14, 2013 | David Yates | Open

If you cannot see the Jira issue lists, you are not a member of the Jira Project Tracking group.

Action list

This list gives you all actions that are tracked in Jira for the Lifecycle subdomain. These actions support the current Lifecycle tasks.

Sub-tasks of User Management (5 issues)

Key | Summary | Due | Assignee | Status
GT-552 | Care about getting the cancellation of an ongoing startup procedure into the system architecture guidelines | Feb 02, 2012 | David Yates | In Progress
GT-556 | Check about the policy of the out of memory killer behavior in the kernel | Apr 25, 2013 | David Yates | Open
GT-555 | There should be investigations on how the feature of the kernel works today across multiple CPUs | Apr 30, 2013 | David Yates | Open
GT-554 | There should be a good explanation about how to use exceptional resource management and what are pitfalls in the documentation and guidelines | May 24, 2013 | David Yates | Open
GT-557 | See if BMW can provide some feasibility test cases on how to use cgroups | | Christian Muck | Open

If you cannot see the Jira issue lists, you are not a member of the Jira Project Tracking group.

All closed tasks and actions

All closed items

Communication

Minutes

In general there are no Lifecycle-specific teleconferences, so no separate minutes are produced. The list of all EG-SI minutes can be found here.

Here are some interesting minutes for this topic:

- Kickoff meeting for requirements and documentation collection (SysInfraEGMinutes20110210)
- Kickoff meeting of this work item (SysInfraEGMinutes20110113)
- Presentation of systemd - filesystem mounting (SysInfraEGMinutes20110317)
- Kickoff meeting as F2F for the execution project Bootmanager [nowadays renamed to NodeStartupController] (SysInfraEGMinutes20120529)

Telco

Currently there are no plans for a dedicated, regularly recurring teleconference for the Lifecycle work item. If there is a need, please send an email to me (Torsten Hildebrand) with the item to be discussed.

Released assets

This table will contain all important released documentation, specifications and information.

The column options are: Status:Reviewed, Released, Archived

ID | Name of asset | Purpose of asset | Author (Name, Company) | Release date | Status

1 | 20110322_Mounting_with_systemd.ppt | Explanation of how systemd filesystem parallelization and mounting works | Christian Muck (BMW) | 2011-03-17 | Reviewed

Subdomain documentation

The Subdomain documentation (SysInfraEGLifecycleSubsysDoc) will contain the following:

- A list of former/legacy documentations and specifications
- Lifecycle in system context
- Requirements analysis (the compliance requirements, use cases and scenarios)
  - Vehicle Requirements
  - SW Platform Requirements
- Functional analysis (the black box realization based on use cases and scenarios)
  - Use Cases (from vehicle point of view)
  - Static views
  - Dynamic views
- Design
  - Structure definition
  - Blocks / Components
  - Activity diagrams
  - Public interfaces
- Design analysis (the white box realization based on use cases and scenarios)
  - Process views
- Integration
  - Dependencies
  - Deployment
- Test / Compliance
  - Test criteria
  - Test cases

List of all persons contributed to EG SI Lifecycle Wiki

David Yates 290, Torsten Hildebrand 233, Jannis Pohlmann 148, 122, Christian Muck 100, Ben Brewer 29, Gunnar Andersson 21, Fabien Hernandez 20, Markus Boje 11, Axel Endler 9, Sriram Gopalan 8, Jonathan Maw 5, Jean-Pierre Bogler 3, Manfred Bathelt 3, Erik Boto 2, Holger Behrens 2, Alex Brankov 1

Overview of Wiki pages

The tree representation shows all child pages of this parent page:

- SysInfraEGLifecycleClosedItems
- SysInfraEGLifecycleExecPrjctBootManager
  - Node Startup Controller Description
  - SysInfraEGLifecycleExecPrjctBootManagerArchitectureOverview
  - SysInfraEGLifecycleExecPrjctBootManagerDBusInterfaces
    - ReviewBootManagerDBusInterfaces
  - SysInfraEGLifecycleExecPrjctBootManagerDependencies
  - SysInfraEGLifecycleExecPrjctBootManagerExcaliburSchedule
  - SysInfraEGLifecycleExecPrjctBootManagerFunctionalScope
  - SysInfraEGLifecycleExecPrjctBootManagerInternalArchitecture
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120611
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120618
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120625
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120702
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120709
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120716
  - SysInfraEGLifecycleExecPrjctBootManagerMinutes20120813
  - SysInfraEGLifecycleExecPrjctBootManagerRequirements
  - SysInfraEGLifecycleExecPrjctBootManagerTestScenarios
  - SysInfraEGLifecycleExecPrjctBootManagerTrackingInJIRA
  - SysInfraEGLifecycleExecProjectBootManagerLegacyApps
  - SysInfraEGLifecycleExecProjectBootManagerSystemdInteraction
  - SysInfraEGMinutes20120529
- SysInfraEGLifecycleExecPrjctNodeResMgmt
- SysInfraEGLifecycleExecPrjctNSM
  - Node State Manager Description
- SysInfraEGLifecycleSubsysDoc
  - SysInfraEGLifecycleConcept
    - SysInfraEGLifecycleBootMgmtSOW
  - SysInfraEGLifecycleConceptBootmanagementSplit
    - SysInfraEGLifecycleConceptBootmanagementShutdown
    - SysInfraEGLifecycleConceptBootmanagementStartup
  - SysInfraEGLifecycleConceptCgroupsInput
  - SysInfraEGLifecycleConceptMultiNode
  - SysInfraEGLifecycleConceptRefinement
  - SysInfraEGLifecycleConceptSystemBorder
  - SysInfraEGLifecycleDesignStructDef
  - SysInfraEGLifecycleDesignWhiteBoxUcRealization
  - SysInfraEGLifecycleFABlackBoxUcRealization
  - SysInfraEGLifecycleNSMData
  - SysInfraEGLifecycleRAUsecases
  - SysInfraEGLifecycleRequirements
  - SysInfraEGSystemd
    - SysInfraEGSystemdPOC

Presentations for Lifecycle

This section contains links to all the presentations that have been made for the Lifecycle domain during EG meetings and AMMs.

Dublin

- systemd_lennart.pptx

San Jose

- GENIVI_Lifecycle_NSMData.ppt
- GENIVI_AMM_SanJose_LifecyclePOC.ppt
- Lifecycle Model Walk Through and System Shutdown Concept.pptx

Paris

AMM_Paris_LifecycleNSM.ppt AMM_Paris_LifecycleResourceMgmt.ppt

Shanghai

GENIVI AMM Lifecycle.ppt GENIVI AMM LifecycleStatus.pptx

Barcelona

Recording from the presentation of the GENIVI Resource Management concept


- GENIVI AMM Resource Management.ppt
- GENIVI AMM Lifecycle Working Session.ppt
- HealthMonitoring.doc
- UCR_DynamicResources.doc

San Diego

- GENIVI San Diego Infrastructure Usage.pptx
- Persistency_SWL.doc
- ProfileManagerUsageQuery_SWL.doc
- NodeHealthMonitor_SWL.docx

Misc. Presentations

This section contains some misc. presentations used for presenting proposed changes in components.

- GENIVI Lifecycle LUM Handling.ppt

SysInfraEGLifecycleClosedItems

Subdomain Lifecycle - All closed Jira items / issues

JIRA Issues (64 issues)

Key | Summary | Assignee | Reporter | Resolution | Status | Created | Updated | Due
GT-2757 | Add the Lifecycle component - Node Health Monitor to the Compliance 5.0 (Gemini) as PC | David Yates | David Yates | Fixed | Resolved | Jun 18, 2013 | Jul 29, 2013 | Jun 28, 2013
GT-2740 | Apply source code review checklist to a component | David Yates | Pavel Konopelko | Fixed | Closed | Jun 12, 2013 | Aug 16, 2013 | Jul 01, 2013
GT-2655 | GENIVI wiki pages overhaul: pages owned by David Yates | David Yates | Gianpaolo Macario | Fixed | Resolved | May 22, 2013 | May 22, 2013 | May 31, 2013
GT-2351 | Make PC "Node Resource Manager" included into Compliance-5.0 | David Yates | Gianpaolo Macario | Fixed | Resolved | Dec 20, 2012 | Jul 29, 2013 | Jun 28, 2013
GT-2336 | To check possibility of support for various different packages that could simplify the sw download process for application download | David Yates | David Croft | Fixed | Closed | Dec 17, 2012 | Mar 14, 2013 |
GT-2287 | EG-AUTO - Derive requirements and use cases from Security Threats Model | David Yates | Fabien Hernandez | Fixed | Closed | Nov 06, 2012 | Jun 03, 2013 | Feb 01, 2013
GT-2272 | EG-AUTO - Gathering GENIVI Security requirements of Gemini | David Yates | Fabien Hernandez | Fixed | Closed | Nov 06, 2012 | Jan 27, 2014 | Jun 28, 2013
GT-2258 | Review Security Threats allocated to EG-AUTO | David Yates | Fabien Hernandez | Fixed | Closed | Nov 05, 2012 | Dec 11, 2012 | Nov 30, 2012
GT-2010 | Plan creating tests for components owned by Automotive EG | David Yates | Pavel Konopelko | Fixed | Closed | Jun 01, 2012 | Mar 07, 2013 | Jul 06, 2012
GT-1905 | Adaption required to agreed requirement SWRD_CP_LC_81 | David Yates | David Yates | Fixed | Resolved | Mar 02, 2012 | Feb 11, 2013 | Mar 23, 2012
GT-1630 | Lifecycle - Boot Management - Integrate review comments from F2F into SW platform requirements | David Yates | Markus Boje | Fixed | Closed | Dec 08, 2011 | Dec 08, 2011 | Dec 14, 2011
GT-1625 | Lifecycle - Boot Management - Add a use case how systemd should be used to support SW Update | David Yates | Gunnar Andersson | Fixed | Closed | Dec 08, 2011 | Feb 08, 2013 | Feb 16, 2012
GT-1619 | Lifecycle - NSM - Node State Types Overview review on F2F | David Yates | Gunnar Andersson | Fixed | Resolved | Dec 08, 2011 | Jan 26, 2012 | Dec 22, 2011
GT-1618 | User Management - LUC storage | David Yates | Gunnar Andersson | Fixed | Resolved | Dec 08, 2011 | Mar 07, 2013 | Apr 12, 2012
GT-1524 | Manage liaison with systemd | David Yates | Philippe Robin | Fixed | Resolved | Nov 28, 2011 | Apr 04, 2012 | Dec 13, 2011
GT-1437 | License check | David Yates | Philippe Robin | Fixed | Resolved | Nov 14, 2011 | Mar 07, 2013 |
GT-1436 | Prepare statement of work | David Yates | Philippe Robin | Fixed | Resolved | Nov 14, 2011 | Mar 07, 2013 |
GT-1435 | Evaluate existing Open Source Software projects | David Yates | Philippe Robin | Fixed | Resolved | Nov 14, 2011 | Mar 07, 2013 |
GT-1424 | Get statement of work ratified by EG leads and SysArch Team and nominate technical mentor | David Yates | Philippe Robin | Fixed | Resolved | Nov 14, 2011 | Apr 18, 2012 | Apr 27, 2012
GT-1421 | Prepare statement of work | David Yates | Philippe Robin | Fixed | Resolved | Nov 14, 2011 | Jan 26, 2012 | Dec 21, 2011
GT-1420 | Evaluate existing Open Source Software projects | David Yates | Philippe Robin | Fixed | Resolved | Nov 14, 2011 | Mar 07, 2013 |
GT-1347 | Work the safety critical functionality missing warning into the use case. Put a note that error memory is not in the use case. | David Yates | Markus Boje | Fixed | Closed | Oct 25, 2011 | Nov 17, 2011 | Nov 17, 2011
GT-1346 | Create and describe dedicated steps within the Jira Lifecycle milestones | David Yates | Gunnar Andersson | Fixed | Closed | Oct 25, 2011 | May 10, 2012 | Dec 15, 2011
GT-1343 | Lifecycle - Include AMM discussion topics into workflow | David Yates | Gunnar Andersson | Fixed | Resolved | Oct 25, 2011 | Mar 02, 2012 | Feb 09, 2012
GT-1301 | Lifecycle - Node State Manager - Component Realization and Compliance | David Yates | Gunnar Andersson | Fixed | Resolved | Oct 08, 2011 | Mar 07, 2013 | Jan 16, 2013
GT-1300 | Lifecycle - Node State Mgmt - Freeze final Architecture and SOW | David Yates | Gunnar Andersson | Fixed | Resolved | Oct 08, 2011 | Mar 07, 2013 | Jul 27, 2012
GT-1299 | Lifecycle - Node State Mgmt - Freeze Concept, SW Platform Req. and UC Realizations | David Yates | Gunnar Andersson | Fixed | Resolved | Oct 08, 2011 | Feb 11, 2013 | May 04, 2012
GT-1298 | Lifecycle - Boot Manager - Freeze final Architecture and SOW | David Yates | Gunnar Andersson | Fixed | Closed | Oct 08, 2011 | Jun 20, 2012 | Dec 07, 2011
GT-1297 | Lifecycle - Boot Manager - Freeze SW Platform Req. and UC Realizations | David Yates | Gunnar Andersson | Fixed | Resolved | Oct 08, 2011 | Mar 01, 2012 | Dec 21, 2011
GT-1168 | Communication protocol: review diagnostic requirement with OEMs | David Yates | Patrice Ancel | Fixed | Closed | Sep 01, 2011 | Jul 30, 2012 | Oct 14, 2011
GT-941 | June F2F: Work out how WR lifecycle related source code / information can be fed into the next steps (workflow) | David Yates | Gunnar Andersson | Fixed | Resolved | Jul 11, 2011 | Mar 07, 2013 | Nov 23, 2011
GT-940 | June F2F: Actions from Node State Information review | David Yates | Markus Boje | Fixed | Closed | Jul 11, 2011 | Aug 05, 2011 |
GT-939 | Lifecycle - Boot Concept approved | Torsten Hildebrand | Markus Boje | Fixed | Closed | Jul 11, 2011 | Aug 25, 2011 |
GT-936 | June F2F: Add requirement on escalation strategy for constantly crashing components | David Yates | Markus Boje | Fixed | Closed | Jul 11, 2011 | Aug 04, 2011 |
GT-935 | June F2F: Work out the new BB use cases | David Yates | Markus Boje | Fixed | Closed | Jul 11, 2011 | Aug 05, 2011 |
GT-934 | June F2F: Discuss new requirement "Fast Reboot" | David Yates | Markus Boje | Fixed | Closed | Jul 11, 2011 | Aug 04, 2011 |
GT-920 | Provide Lifecycle content for technical session at AMM Paris (System Infrastructure EG) | David Yates | Gunnar Andersson | Fixed | Resolved | Jul 11, 2011 | Jan 25, 2012 | Jan 13, 2012
GT-915 | Corrections on System Recovery BB Use Cases | David Yates | Markus Boje | Fixed | Closed | Jul 07, 2011 | Aug 05, 2011 |
GT-914 | Lifecycle - Relevant Non-functional requirements in the UML Model integrated | David Yates | Gunnar Andersson | Fixed | Closed | Jul 07, 2011 | Feb 08, 2013 |
GT-568 | Bring the systemd decision to system architecture team for agreement | Torsten Hildebrand | Christian Muck | Fixed | Closed | May 10, 2011 | Nov 17, 2011 | Nov 16, 2011
GT-565 | Enlighten the SW loading use case within the architecture | David Yates | Christian Muck | Fixed | Resolved | May 10, 2011 | Feb 11, 2013 | Oct 11, 2012
GT-553 | How does boot management know when a certain start or shutdown state is reached by systemd | David Yates | Christian Muck | Fixed | Closed | May 10, 2011 | Feb 09, 2012 | Nov 29, 2011
GT-551 | Integrate the cancellation of an ongoing startup procedure, and that the services need to be written Linux compliant and react right to signals, into the concept | David Yates | Christian Muck | Fixed | Closed | May 10, 2011 | Apr 12, 2012 | Nov 23, 2011
GT-541 | Identify legal constraints to put Automotive EG requirements into public | David Yates | Philippe Robin | Fixed | Closed | May 01, 2011 | Jul 23, 2012 | May 10, 2011
GT-496 | How to deal with car configuration data? | David Yates | Philippe Colliot | Fixed | Resolved | Apr 19, 2011 | Mar 07, 2013 | Dec 23, 2012
GT-470 | Import agreed vehicle requirements into GENIVI EA model | David Yates | David Yates | Fixed | Closed | Apr 14, 2011 | Oct 20, 2011 | May 26, 2011
GT-469 | Rough structure definition description, interfaces to out of scope blocks (Thermal & Supply mgmt) | David Yates | David Yates | Fixed | Closed | Apr 14, 2011 | Jun 22, 2011 | Jun 16, 2011
GT-468 | Define critical use cases for system start-up and shutdown | David Yates | David Yates | Fixed | Closed | Apr 14, 2011 | Jun 22, 2011 | Apr 15, 2011
GT-340 | Review component description | David Yates | Philippe Robin | Fixed | Closed | Mar 28, 2011 | Jul 23, 2012 | May 13, 2011
GT-272 | Split lifecycle requirements into vehicle level and SW platform etc | David Yates | Markus Boje | Fixed | Closed | Mar 09, 2011 | Apr 14, 2011 | Mar 17, 2011
GT-270 | Push the lifecycle architecture definition from F2F to System Architects for review | Torsten Hildebrand | Markus Boje | Duplicate | Closed | Mar 09, 2011 | Jul 21, 2011 | Jul 08, 2011
GT-267 | Do the final update on the lifecycle requirements as discussed in F2F meeting | David Yates | Markus Boje | Fixed | Closed | Mar 09, 2011 | Mar 17, 2011 | Mar 17, 2011
GT-266 | Think about if supply management has a direct impact on lifecycle or if it is something different and can be postponed | Torsten Hildebrand | Markus Boje | Fixed | Closed | Mar 09, 2011 | Apr 14, 2011 | Mar 17, 2011
GT-260 | Call for review on genivi-dev list as soon as the table is prepared (see agenda item above) | David Yates | Markus Boje | Fixed | Closed | Mar 09, 2011 | Jun 22, 2011 | May 09, 2011
GT-258 | Write process description on top of the wiki page | Torsten Hildebrand | Markus Boje | Fixed | Closed | Mar 09, 2011 | Apr 14, 2011 | Mar 11, 2011
GT-191 | Communication protocol: Vehicle level requirements and use cases | David Yates | Philippe Robin | Fixed | Closed | Feb 17, 2011 | Jul 23, 2012 | Mar 31, 2011
GT-6 | Lifecycle - Node Startup Controller - Component Realization and Compliance | David Yates | Philippe Robin | Fixed | Resolved | Jan 13, 2011 | Mar 07, 2013 | Jan 25, 2012
GD-137 | Provide Automotive EG roadmap for the next compliance release | David Yates | Jeremiah Foster | Fixed | Closed | Feb 15, 2013 | Mar 06, 2013 | Feb 20, 2013
GD-122 | Provide Automotive EG roadmap for the next compliance release | David Yates | Pavel Konopelko | Fixed | Closed | Aug 08, 2012 | Mar 14, 2013 | Aug 15, 2012
GCR-36 | Wind River Platform for Infotainment 5.0 iMX6 SABRE Lite | David Yates | Jeremiah Foster | Fixed | Closed | Oct 29, 2012 | Nov 14, 2012 |
GCR-24 | Review TCS GENIVI Platform - 0x86 / Compliance 1.0 Registration | David Yates | Pavel Konopelko | Fixed | Closed | Jun 25, 2012 | Jul 18, 2012 | Jul 18, 2012
GBL-213 | Plan for Automotive EG contribution for baseline release E-1.2 | David Yates | Frederic Bourcier | Fixed | Closed | Nov 17, 2012 | Nov 28, 2012 | Nov 30, 2012
GBL-146 | Plan for Auto EG contribution for baseline release E-0.2 | David Yates | Frederic Bourcier | Fixed | Closed | Aug 01, 2012 | Sep 13, 2012 | Aug 10, 2012
GBL-124 | Plan Automotive EG contribution for baseline release E-0.1 | David Yates | Frederic Bourcier | Fixed | Closed | Jun 18, 2012 | Jul 30, 2012 | Jun 28, 2012

If you cannot see the Jira issue lists, you are not a member of the Jira Project Tracking group.

SysInfraEGLifecycleExecPrjctBootManager

Execution Project Node Startup Controller

1. Team
2. Statement of Work
3. Project Schedule
4. Weekly Status Reports
5. Requirements
6. Architecture Overview
7. Documents and Specifications
8. Implementation
9. Baseline Integration Status
10. Telcos
11. Overview of Wiki Pages

1. Team

GENIVI representatives:

Gunnar Andersson (EG-SI Lead) David Yates (GENIVI Project Manager / Mentor, Topic Lead Lifecycle)

Codethink representatives:

Alex Brankov (Codethink Project Manager) Jannis Pohlmann (Technical Lead) Ben Brewer (Backup Technical Lead) Francisco Marchena (Engineer) Jonathan Maw (Engineer)

2. Statement of Work

This execution project aims at transforming the Node Startup Controller (formerly called Boot Manager) from an Abstract Component into a Specific Component by delivering an implementation of the Node Startup Controller to be integrated into the GENIVI Excalibur release.

This wiki page provides an overview of all information generated as part of the project. This includes the project schedule, requirements, an overview of the architecture, pointers to the implementation, documents and technical specifications as well as progress reports in the form of telco minutes.

3. Project Schedule

Schedule and project tracking for Excalibur

4. Weekly Status Reports

This section will include summaries of all weekly progress reports.

cw2412 cw2512 cw2612 cw2712 cw2812 cw2912 cw3312

5. Requirements

- Consolidated Requirements Specification
- Excalibur Dependencies

6. Architecture Overview

- Functional Scope
- Architecture Overview
- Internal Software Architecture
- Interaction with systemd
- Legacy application start/shutdown

7. Documents and Specifications

- Node Startup Controller D-Bus interfaces
- Test Scenarios

8. Implementation

The source code repository for the boot manager is available here: https://git.genivi.org/git/gitweb.cgi?p=node-startup-controller;a=summary

Working with GENIVI Git repositories is described on this wiki page.

9. Baseline Integration Status

This section will summarise the integration of the Node Startup Controller implementation into the GENIVI baseline.

10. Telcos

Weekly telcos have been set up on Mondays, 15:00 - 16:00 CET. This section lists the minutes of these calls.

Telco 2012-06-11 Telco 2012-06-18 Telco 2012-06-25 Telco 2012-07-02 Telco 2012-08-13

11. Overview of Wiki Pages

- Node Startup Controller Description
- SysInfraEGLifecycleExecPrjctBootManagerArchitectureOverview
- SysInfraEGLifecycleExecPrjctBootManagerDBusInterfaces
  - ReviewBootManagerDBusInterfaces
- SysInfraEGLifecycleExecPrjctBootManagerDependencies
- SysInfraEGLifecycleExecPrjctBootManagerExcaliburSchedule
- SysInfraEGLifecycleExecPrjctBootManagerFunctionalScope
- SysInfraEGLifecycleExecPrjctBootManagerInternalArchitecture
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120611
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120618
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120625
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120702
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120709
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120716
- SysInfraEGLifecycleExecPrjctBootManagerMinutes20120813
- SysInfraEGLifecycleExecPrjctBootManagerRequirements
- SysInfraEGLifecycleExecPrjctBootManagerTestScenarios
- SysInfraEGLifecycleExecPrjctBootManagerTrackingInJIRA
- SysInfraEGLifecycleExecProjectBootManagerLegacyApps
- SysInfraEGLifecycleExecProjectBootManagerSystemdInteraction
- SysInfraEGMinutes20120529

Node Startup Controller Description

This is the draft proposal of the public web page text for the NSC.

Node Startup Controller (NSC), part of the Lifecycle subsystem in GENIVI

Introduction

The Node Startup Controller (NSC) was introduced into the lifecycle package for GENIVI in order to handle some startup and shutdown functionality. It essentially "extends" systemd to implement some IVI requirements that are not done by systemd itself because they are not generally applicable for all Linux systems (as determined through discussion with the systemd community). However, similar functionality might be desired in non-automotive systems so we hope this can be useful and/or develop into something shared across domains.

The main areas of responsibility for the NSC are:

Last User Context (LUC) Management

Legacy Application Shutdown

Target Startup Monitoring

Last User Context (LUC) Management

The Last User Context (LUC) Management holds information about what applications the user was using in the last lifecycle in order to allow the same applications to be restored automatically or prioritized over other applications the next time the system boots up. Applications are stored in the LUC in groups whose start order is defined at build time. The choice is fully flexible but for a typical Automotive IVI system, these groups could be:

Foreground application

Audible application

Background applications

The LUC management simply stores the systemd unit name (e.g. navigation.service) for each LUC-managed application and communicates with systemd to start up these applications shortly after boot. Systemd D-Bus interfaces are used instead of simply a stored systemd target file on disk. This is to allow the behavior to be flexible according to system policy and because the behavior might also be dynamic as described in the next section. In the current implementation some configuration is available to control the startup order.
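Since the exact calls are internal to the NSC, the following is only a minimal sketch of the underlying mechanism: asking systemd over its D-Bus interface to start a stored unit, using GLib/GDBus (on which the implementation is based). The unit name is purely illustrative.

    /* Minimal sketch: ask systemd over D-Bus to start a stored LUC unit.
     * Assumes GLib/GDBus; "navigation.service" is only an example.
     * Build: gcc start-unit.c $(pkg-config --cflags --libs gio-2.0)
     */
    #include <gio/gio.h>

    int main (void)
    {
      GError *error = NULL;
      GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SYSTEM, NULL, &error);
      if (bus == NULL)
        {
          g_printerr ("Failed to connect to the system bus: %s\n", error->message);
          g_error_free (error);
          return 1;
        }

      /* StartUnit(unit, mode) returns the object path of the queued job. */
      GVariant *reply = g_dbus_connection_call_sync (bus,
          "org.freedesktop.systemd1",          /* bus name    */
          "/org/freedesktop/systemd1",         /* object path */
          "org.freedesktop.systemd1.Manager",  /* interface   */
          "StartUnit",
          g_variant_new ("(ss)", "navigation.service", "replace"),
          G_VARIANT_TYPE ("(o)"),
          G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);

      if (reply == NULL)
        {
          g_printerr ("StartUnit failed: %s\n", error->message);
          g_error_free (error);
        }
      else
        {
          const gchar *job;
          g_variant_get (reply, "(&o)", &job);
          g_print ("Queued job: %s\n", job);
          g_variant_unref (reply);
        }

      g_object_unref (bus);
      return 0;
    }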

Legacy Application Shutdown

In this context "Legacy applications" means applications that do provide a systemd unit file but are otherwise unaware of, or do not make use of other GENIVI/Lifecycle components. This means that the applications will not actively register themselves as a shutdown consumer in the Node State Manager (NSM). (That is the normal way for the NSM to know what applications are active and can be controlled in order to sleep/shutdown. To solve the "legacy applications" the NSC provides a mechanism to separately register such applications as shutdown consumers.

Whenever the Node State Manager decides to perform a shutdown it will ask the shutdown consumers to shut down in reverse order of their creation. To the NSM it does not matter whether or not the consumers are registered by applications themselves or by the Node Startup Controller (the "legacy applications" case).

There is some other behavior during shutdown which should be mentioned here for completeness, although it is controlled by the Node State Manager (NSM) as opposed to the NSC. Specifically, it is the requirement to implement "cancelled shutdown", which is not typically present in non-IVI Linux systems.

Target Startup Monitoring

The Node Startup Controller is responsible for informing the Node State Manager when a particular boot state has been reached (e.g. focused.target or lazy.target has completed). To achieve this, the Node Startup Controller (NSC) monitors signals from systemd indicating the completion of targets/units and then contacts the NSM through its D-Bus interface.

SysInfraEGLifecycleExecPrjctBootManagerArchitectureOverview

Boot Manager Architecture Overview

The high-level architecture of the boot manager implementation is defined as follows: the functionality described on the Functional Scope page is provided by three different D-Bus services. These services and their interaction with the rest of the system are described in this document.

All D-Bus interfaces are versioned to allow incompatible future interfaces to be developed using the same naming prefix.

The interfaces org.genivi.BootManager.BootManager1 and org.genivi.BootManager.LegacyAppHandler1 are implemented in a single executable for start-up performance reasons.

org.genivi.BootManager.BootManager1

This service implements and provides the following functionality:

- LUC Management:
  - A method to register one or multiple applications with the LUC; each of these applications is associated with one of the three types (audible, background, foreground)
  - A method to deregister one or multiple applications with the LUC; each of the applications is optionally associated with a type for which it should be removed from the LUC

The org.genivi.BootManager.BootManager1 implementation interacts with the rest of the system as follows:

- It is started by systemd.
- It registers itself with systemd's watchdog mechanism.
- It subscribes to the org.freedesktop.systemd1.Manager service to receive JobRemoved signals.
- It registers itself with the com.conti.NodeStateManager.Consumer service as a shutdown consumer at start-up.
- It queries the com.conti.NodeStateManager.LifecycleControl interface for whether or not it should restore the LUC. If required, it restores the LUC applications by starting them in three groups.
- It allows applications and other components to register/deregister applications with the LUC.

org.genivi.BootManager.LegacyAppHandler1

This service implements and provides the following functionality:

- Legacy Application Shutdown:
  - A method to register a new shutdown consumer for an arbitrary unit

The org.genivi.BootManager.LegacyAppHandler1 implementation interacts with the rest of the system as follows:

- It is started by systemd, either because it is implemented in the same executable as org.genivi.BootManager.BootManager1 or because a legacy application has been started and the ExecStartPost command is being executed.
- It registers itself with systemd's watchdog mechanism.
- For each legacy application that is registered with the org.genivi.BootManager.LegacyAppHandler1, it creates a new NSMShutdownConsumer object and registers it with the com.conti.NodeStateManager.Consumer service as a shutdown consumer.

SysInfraEGLifecycleExecPrjctBootManagerDBusInterfaces

Boot Manager D-Bus Interfaces

NOTE: The interfaces on this page have not been finalized yet.

- org.genivi.BootManager1
  - Interface Specification
  - Data Exchange Format
- LegacyAppHandler
  - Interface Specification

org.genivi.BootManager1

Interface Specification

    package BootManager

    <** @description: Type used for systemd unit names, e.g., "app1.service". **>
    typedef UnitName is String

    <** @description: Type for arrays of systemd unit names. **>
    array UnitNameArray of UnitName

    <** @description: Interface for managing the GENIVI LUC (Last User Context)

        The GENIVI BootManager remembers applications that were used in the last
        session of a user (LUC). It uses this LUC data to restore these
        applications on the next start-up.

        The BootManager is a passive component in the sense that it does not
        remember applications on its own; instead, applications need to register
        and deregister themselves proactively.

        Applications can be registered for different LUC types. The allowed
        values for LUC types are: "foreground", "background" and "audible".
        Other values will be disregarded. **>
    interface "org.genivi.BootManager.BootManager1" {

      <** @description: Begins the LUC registration state. The BootManager must
          be set into the LUC registration state before RegisterWithLUC is called.

          @details: org.freedesktop.DBus.GLib.Async == true **>
      method BeginLUCRegistration { }

      <** @description: Registers one or more applications for certain LUC types.

          Applications may be listed multiple times. For LUC types where only a
          single application may be registered at a time, the last application
          in the corresponding list wins.

          The Register() method may be called multiple times. Every invocation
          will add to the current LUC rather than overwriting it.

          Unknown LUC types will be ignored.

          @details: org.freedesktop.DBus.GLib.Async == true **>
      method RegisterWithLUC {
        in {
          <** @description: A mapping of LUC types to unit name arrays. **>
          LUCMap apps
        }
      }

      <** @description: Finishes the LUC registration.

          Writes the LUC back to disk atomically and deletes the previous LUC.

          The BootManager must be set into the LUC registration state before
          this is called.

          @details: org.freedesktop.DBus.GLib.Async == true **>
      method FinishLUCRegistration { }
    }

Data Exchange Format

The apps parameter of the Register() and Deregister() methods is a dictionary with the following structure:

{ "audible": [ "app3.unit" ], "background": [ "app2.unit" ], "foreground": [ "app1.unit", "app2.unit" ] }

where the different LUC types are the keys of the dictionary and the values of the dictionary are lists of application unit names.

The LUC types are encoded as strings to allow them to be easily introspected and extended in the future. The LastUserContext is a D-Bus property and can be accessed and monitored. It uses the same format as the apps parameter.
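To make the exchange format concrete, below is a rough client-side sketch that builds such a dictionary as a D-Bus a{sas} value and passes it to RegisterWithLUC using GLib/GDBus. The well-known bus name and object path of the BootManager service are not specified on this page, so they are placeholders, and the a{sas} signature is an assumption derived from the structure shown above.

    /* Sketch: register two LUC groups with the BootManager over D-Bus.
     * BeginLUCRegistration must have been called first and
     * FinishLUCRegistration afterwards (both omitted here).
     */
    #include <gio/gio.h>

    static gboolean
    register_with_luc (GDBusConnection *bus, GError **error)
    {
      GVariantBuilder luc;
      const gchar *foreground[] = { "app1.unit", "app2.unit", NULL };
      const gchar *audible[]    = { "app3.unit", NULL };

      /* Dictionary of LUC type -> array of unit names (assumed a{sas}). */
      g_variant_builder_init (&luc, G_VARIANT_TYPE ("a{sas}"));
      g_variant_builder_add (&luc, "{s^as}", "foreground", foreground);
      g_variant_builder_add (&luc, "{s^as}", "audible", audible);

      GVariant *reply = g_dbus_connection_call_sync (bus,
          "org.genivi.BootManager1",        /* placeholder bus name    */
          "/org/genivi/BootManager1",       /* placeholder object path */
          "org.genivi.BootManager.BootManager1",
          "RegisterWithLUC",
          g_variant_new ("(a{sas})", &luc),
          NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, error);

      if (reply == NULL)
        return FALSE;
      g_variant_unref (reply);
      return TRUE;
    }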

LegacyAppHandler

Interface Specification

The LegacyAppHandler uses a command-line interface (CLI) to register apps on start-up using the systemd ExecStartPost field.

It is an interface for registering legacy apps with the NSM.

Legacy applications are applications that provide a systemd unit file but are unaware or do not make use of any GENIVI components.

The GENIVI Legacy App Handler registers these apps with the NSM (Node State Manager) as shutdown consumers, so that when the NSM performs a shutdown it can shut down the applications in reverse order of their creation.

The basic interface is:

legacy-app-handler unit-name: An application unit filename. mode: Shutdown mode for which to register. timeout: Time to wait (ms) for the application to shut down (integer).

ReviewBootManagerDBusInterfaces

Review meeting for Boot Manager Interfaces

2011 June 26, 1300-~1440

Participants

Gianpaolo Torsten David Jannis Klaus (until 1400) Gunnar Manfred

Discussion on process.

Should we use Jira items? One for the topic of review and subitems for things that need to be discussed and to document input. (Towards the end of the meeting decided this is not typical for GENIVI - Mail threads and then summarizing the mail thread outcome is what will be used).

For now we'll start with these minutes. Topics:

Is BM duplicating systemd interface?

Is Boot Manager only wrapping systemd, why is BM needed and what does it add?

Abstraction (non-dependence) of systemd was desired. Legacy applications that know systemd could be tempted to use systemd directly. BM interfaces a bit more controlled. Some simplification. systemd interface is a little more complicated.

Manfred: BM interface seems to be synchronous. How about concurrent activity? how will that be handled?

Sriram: All are good reasons, but is BM the right candidate that should fulfill this responsibility? Sriram: The term legacy application is not clear.

BM is an extension of systemd. Start and Stop will have conditions - maybe not always allowed. We cannot add such conditions to systemd itself.

Manfred: we never discussed to try this, did we? Lennart Poettering already visited some Genivi meetings in person, and was very open for changes.

For example during a shutdown procedure, it might not be allowed to perform Start on a unit.

All components normally allowed to use systemd interface. BM interface would be limited to only some client, i.e., systemd direct access is not recommended for the equivalent systemd interfaces (Start,Stop,...). Some systemd functions are still planned to be used directly: Example watchdogs. An application must use the systemd notify interface when completed its init.

Sriram: It is far more confusing to use 2 components that overlap in services. Either there should be a full abstraction of systemd OR systemd should be the interface.

Manfred describes a different way to limit/configure systemd behavior that would achieve similar result. Configure systemd D-Bus interface so that only one specific component can access systemd. This component would implement policies.

Group majority feels this is a design change which we cannot use now.

Sriram: I think wrapping systemd is also a design/architecture change, albeit an easy one. However that may not be the right one in the longer term as it impacts how other components will use the wrapper

However, it would be good to pick up and fully understand Manfred's idea. Manfred would like to understand if there is a plugin to change the behavior. Manfred, please write down in email. What is the requirement?

String types to be defined as type defs (with limited value range).

Cannot control this in D-Bus API. We do not expect all applications to have a common header file? A client library could be created later that helps applications use the correct values. + Compile time checking is good but build dependencies arise from including other components' header files. Will we have common header files?

Sriram: What is the constraint in having common header files across GENIVI? I think it will lead to elegant implementation.

Manfred: can´t we use also FrancaIDL definitions instead of common header files?

+ Typedefs can still be used in interface description (Franca) just to define the value range - this should not necessarily imply the usage of header files with strict types. Define typedef in Franca. Gunnar create JIRA Item for the bigger discussion.

LUCHandler Deregistration function needed?

1. Some believe the list in LUC will be cleared at each new lifecycle anyway, and that Register will be used typically at shutdown to store the most recent state, so no need to deregister.
2. OTOH, if a system designer wants to keep the LUC updated at all times (to handle, for example, unexpected shutdown due to power loss), then continuous Register/Deregister is necessary to keep the LUC updated.
3. We have other requirements that assume that the LUCHandler SHALL clear the LUC at the start of Shutdown. If we keep these, then situation 2 is not possible. This solution will control the persistence of the LUC in an easier way. Versioning of the LUC in Persistence could also be possible (use "old" if power-loss creates a problem).

Manfred: more interesting is the Registration: it needs to be done in the critical path during startup, and there is only ONE and the same component registered in each lifecycle again and again. Registering it via D-Bus means the LUCHandler needs to reside in its own process. Is there an estimate how this could affect performance?

There needs to be a method to notify that LUC has been completely updated. (Typically the implementation might use this to initiate persistent storage, but that is a detail not known to the applications).

The discussion creates new ideas. It seems to need some redesign. Gunnar Create Jira item for this also

Separate mail threads are needed. Most are used to work that way, also in open source projects. Map to Array of strings - too complex type for API? Gunnar, genivi-dev, unless already adequately explained by Jannis' answer.

String values identical with systemd - may change? Gunnar, genivi-dev, unless already adequately explained by Jannis' answer.

Is reporting unit names enough for the List method, or should we give more information? Jannis, genivi-dev

systemd Start modes needed to be exposed in interface? Jannis, genivi-dev

LegacyAppHandler separate component talking to NSM. Why do clients not talk directly to NSM? Gunnar, genivi-dev unless already adequately explained by Jannis' answer.

Enum vs String. Strings are easier to understand for tests and debugging. General question: genivi-dev / architect

Sriram: I personally always vote for stronger type. enum also has an advantage of efficiency and error-free characteristics in implementation

Manfred: fully support Sriram's comment

SysInfraEGLifecycleExecPrjctBootManagerDependencies

Dependencies of the Boot Manager

For Excalibur

Name | Minimum Required Version | Reason

systemd | >= 183 | For control over units; 183 required for monitoring target startup and isolation functionality for the node application mode
Linux | >= 2.6.39 | Required by systemd 183
GLib | >= 2.30 | For object-oriented and event-driven implementation, main loop functionality and GDBus
Automotive DLT | >= 2.2.0 | Mandatory for all GENIVI components
Node State Manager | >= ? (Excalibur version) | For LUC start-up query, setting system state changes and registration of shutdown consumers

SysInfraEGLifecycleExecPrjctBootManagerExcaliburSchedule

Boot Manager Schedule for Excalibur

1. Project Timeframe
2. Work packages
3. Project Tracking in JIRA
4. Milestones in JIRA
5. Tasks in JIRA

1. Project Timeframe

Start Date | Summary description of activities | Status | Target date | Expected Workload

2012-05-23 | Implementation of the Boot Manager | in progress | 2012-08-27 | 2 engineers full-time

2. Work packages

Start Date | Summary description of activities | Status | Target date | Expected Workload

2012-05-23 | Requirements analysis and high-level design | finished | 2012-06-04 | 2 engineers full-time
2012-06-05 | Technical specifications and phase 1 of implementation | finished | 2012-06-26 | 2 engineers full-time
2012-06-27 | Phase 2 of implementation and bug fixing/quality improvements | finished | 2012-07-24 | 2 engineers full-time
2012-07-25 | Documentation and integration | in progress | 2012-08-27 | 1 engineer full-time

3. Project Tracking in JIRA

The main entry point for information about the status of the project in JIRA is the following task: https://collab.genivi.org/issues/browse/GT-2055.

4. Milestones in JIRA

Milestones of the Boot Manager Execution Project (4 issues)

Key | Summary | Planned start | Due | Assignee | Status
GT-2060 | Milestone: Deliver final design and project plan to GENIVI for assessment and approval | 04/Jun/12 | Jun 04, 2012 | Jannis Pohlmann | Closed
GT-2065 | Milestone: Deliver implementation of phase 1 functionality to GENIVI for approval | 26/Jun/12 | Jun 26, 2012 | Jannis Pohlmann | Resolved
GT-2068 | Milestone: Deliver a feature-complete implementation of the Boot Manager including phase 2 functionality to GENIVI for approval | 24/Jul/12 | Jul 24, 2012 | Jannis Pohlmann | Resolved
GT-2073 | Milestone: Package integrated into GENIVI release and handover of final documentation | 21/Aug/12 | Aug 21, 2012 | Jannis Pohlmann | Closed

5. Tasks in JIRA

Tasks of the Boot Manager Execution Project (17 issues)

Key | Summary | Planned start | Due | Assignee | Status
GT-1298 | Lifecycle - Boot Manager - Freeze final Architecture and SOW | | Dec 07, 2011 | David Yates | Closed
GT-1297 | Lifecycle - Boot Manager - Freeze SW Platform Req. and UC Realizations | | Dec 21, 2011 | David Yates | Resolved
GT-6 | Lifecycle - Node Startup Controller - Component Realization and Compliance | | Jan 25, 2012 | David Yates | Resolved
GT-2061 | Liaise with BIT and NSM team, compile a full description of all functional and non-functional requirements of the Boot Manager | 23/May/12 | May 29, 2012 | Jannis Pohlmann | Closed
GT-2062 | Define Boot Manager components and their roles/responsibilities; produce final design document | 01/Jun/12 | Jun 04, 2012 | Jannis Pohlmann | Closed
GT-2056 | Requirements analysis and high-level design | 23/May/12 | Jun 04, 2012 | Gunnar Andersson | Closed
GT-2063 | Specification of service interfaces, model interaction with systemd; design internal software architecture | 05/Jun/12 | Jun 12, 2012 | Jannis Pohlmann | Closed
GT-2064 | Incremental development of LUC management and LUC-based unit startup | 13/Jun/12 | Jun 26, 2012 | Jannis Pohlmann | Closed
GT-2057 | Technical specifications and phase 1 of implementation | 05/Jun/12 | Jun 26, 2012 | Jannis Pohlmann | Resolved
GT-2066 | Incremental development of status reporting, startup cancellation, startup prioritization and launching of applications at runtime | 27/Jun/12 | Jul 17, 2012 | Jannis Pohlmann | Resolved
GT-2067 | Improvements of the code and product quality by means of testing, reviewing and cleaning up | 17/Jul/12 | Jul 24, 2012 | Jannis Pohlmann | Resolved
GT-2058 | Phase 2 of implementation and bug fixing/quality improvements | 27/Jun/12 | Jul 24, 2012 | Jannis Pohlmann | Resolved
GT-2069 | Write manual pages and API documentation | 25/Jul/12 | Jul 27, 2012 | Unassigned | Resolved
GT-2070 | Make testing the unit startup implementation of the Boot Manager reproducible | 30/Jul/12 | Aug 03, 2012 | Unassigned | Resolved
GT-2071 | Prepare compliance documents required for delivery into the GENIVI BIT team | 06/Aug/12 | Aug 10, 2012 | Unassigned | Resolved
GT-2072 | Packaging the Boot Manager and aiding BIT team with the integration into the GENIVI release | 13/Aug/12 | Aug 20, 2012 | Jannis Pohlmann | Resolved
GT-2059 | Documentation and integration | 25/Jul/12 | Aug 27, 2012 | Unassigned | Resolved

SysInfraEGLifecycleExecPrjctBootManagerFunctionalScope

Functional Scope of the Boot Manager

- Overview
- LUC Management
- Dynamic Start-up Management
- Legacy Application Shutdown
- Target Startup Monitoring
- Node Application Mode Activation

Overview

Given the requirements specification, the following functional scope for the boot manager implementation has been identified:

LUC Management

The responsibility of LUC (Last User Context) Management is to manage information about applications the user is running in order to allow the same applications to be restored after a reboot of the system. The LUC supports three different types of applications:

- Audible applications
  - Applications that are current audible sources within the Head Unit (e.g. radio or CD player)
  - The LUC only allows one audible application to be registered at a time
  - Whatever app registers itself with the LUC as an audible app last overwrites any previous audible app
- Foreground applications
  - Applications that are currently in the focus of the HMI (e.g., navigation or web browser)
  - The LUC only allows one foreground application to be registered at a time
  - Whatever app registers itself with the LUC as a foreground app last overwrites any previous foreground app
- Background applications
  - Applications that run in the background but are neither in the foreground nor audible sources
  - The LUC allows multiple applications to be registered as background applications at the same time

A registration and deregistration with the LUC consists of two parameters:

1. A systemd unit filename (e.g. navigation.service),
2. The LUC type, which is one of the following strings: audible, foreground or background.

Audible, foreground and background applications are treated as start-up groups, meaning that e.g. all background apps are started in parallel. The order in which these three groups are started by the LUC Management can be configured at build-time.

In order to reduce the amount of communication with the LUC management, multiple applications can be registered and deregistered with the LUC at once.

Dynamic Start-up Management

One of the main responsibilities of the boot manager is to restore the LUC. In addition to this, the boot manager provides an interface for other components to request that other applications be started as soon as possible. This is important to react to situations where the user presses a button in the HMI in order to bring up a certain application (e.g., navigation) during boot.

User events are monitored by the application event handler. If a user event results in the need to start an application, this can be forwarded to the boot manager using its Runtime Application Management features. No special interface is required for this.

Legacy Application Shutdown

Legacy applications are applications that provide a systemd unit file but are unaware or do not make use of any GENIVI components. This also means that they do not register with the NSM (Node State Manager) as a shutdown consumer, which is a requirement for any application in GENIVI.

To solve this problem the boot manager provides a mechanism to register shutdown consumers for individual legacy applications. This works as follows:

1. The boot manager provides a D-Bus method for registering a shutdown consumer for a given unit filename.
2. The boot manager provides a helper script or binary that takes a unit filename and calls the above D-Bus method to register a shutdown consumer for this unit file.
3. An ExecStartPost command that calls the helper script or binary is added to the unit files of legacy applications.

Whenever the NSM decides to perform a shutdown it will ask the shutdown consumers to shut down in reverse order of their creation. To the NSM it does not matter whether or not the consumers are registered by applications themselves or by the boot manager.
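
A minimal sketch of the helper from step 2 is shown below. The interface name org.genivi.BootManager.LegacyAppHandler1 appears in the internal architecture page; the bus name, object path and the exact Register(s) signature are assumptions made for illustration only.

    /* legacy-app-register-example.c: hedged sketch of the helper binary that a
     * legacy unit file would call via ExecStartPost=. It asks the boot manager's
     * legacy app handler to register a shutdown consumer for the given unit.
     * Assumed: bus name, object path and the Register(s unit_file) signature. */
    #include <gio/gio.h>

    int
    main (int argc, char **argv)
    {
      GError          *error = NULL;
      GDBusConnection *bus;
      GVariant        *reply;

      if (argc != 2)
        {
          g_printerr ("usage: %s <unit file name, e.g. legacy-app.service>\n", argv[0]);
          return 1;
        }

      bus = g_bus_get_sync (G_BUS_TYPE_SYSTEM, NULL, &error);
      if (bus == NULL)
        {
          g_printerr ("failed to connect to the system bus: %s\n", error->message);
          g_error_free (error);
          return 1;
        }

      /* ask the legacy app handler to create a shutdown consumer for this unit
       * and register it with the NSM */
      reply = g_dbus_connection_call_sync (bus,
                                           "org.genivi.BootManager1",
                                           "/org/genivi/BootManager1/LegacyAppHandler",
                                           "org.genivi.BootManager.LegacyAppHandler1",
                                           "Register",
                                           g_variant_new ("(s)", argv[1]),
                                           NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
      if (reply == NULL)
        {
          g_printerr ("failed to register a shutdown consumer for %s: %s\n",
                      argv[1], error->message);
          g_error_free (error);
          g_object_unref (bus);
          return 1;
        }

      g_variant_unref (reply);
      g_object_unref (bus);
      return 0;
    }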

Target Startup Monitoring

The boot manager is responsible for setting certain NSM states when certain targets (e.g. focused.target, lazy.target) have been started within or outside the boot manager. For this, the boot manager needs to monitor systemd for unit start-up events.

As of systemd 183, this is possible through systemd's JobRemoved signal. The boot manager

sets the NSM state to BASE_RUNNING during initialization,
subscribes to systemd in order to receive signals from systemd,
evaluates the received JobRemoved signals by
  1. filtering out the signals that do not belong to target start-up events,
  2. setting the NSM state to LUC_RUNNING when focused.target has been started, FULLY_RUNNING when unfocussed.target has been started, and FULLY_OPERATIONAL when lazy.target has been started.
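
The following is a minimal sketch of this monitoring loop. The JobRemoved signature (u id, o job, s unit, s result) and the Subscribe() call are part of the org.freedesktop.systemd1.Manager interface; pushing the resulting state to the NSM is only indicated by comments because the concrete NSM call is not specified on this page.

    /* target-monitor-example.c: hedged sketch of target start-up monitoring,
     * assuming systemd >= 183. */
    #include <gio/gio.h>

    static void
    on_job_removed (GDBusConnection *bus, const gchar *sender, const gchar *path,
                    const gchar *interface, const gchar *signal,
                    GVariant *parameters, gpointer user_data)
    {
      guint32      id;
      const gchar *job, *unit, *result;

      g_variant_get (parameters, "(u&o&s&s)", &id, &job, &unit, &result);

      /* filter out everything that is not a successful target start-up event */
      if (g_strcmp0 (result, "done") != 0)
        return;

      if (g_strcmp0 (unit, "focused.target") == 0)
        g_print ("set NSM state to LUC_RUNNING\n");        /* call the NSM here */
      else if (g_strcmp0 (unit, "unfocussed.target") == 0)
        g_print ("set NSM state to FULLY_RUNNING\n");
      else if (g_strcmp0 (unit, "lazy.target") == 0)
        g_print ("set NSM state to FULLY_OPERATIONAL\n");
    }

    int
    main (void)
    {
      GMainLoop       *loop = g_main_loop_new (NULL, FALSE);
      GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SYSTEM, NULL, NULL);

      /* during initialization the boot manager would set the NSM state to
       * BASE_RUNNING here (NSM call omitted) */

      /* systemd only emits JobRemoved to subscribed peers */
      g_dbus_connection_call_sync (bus, "org.freedesktop.systemd1",
                                   "/org/freedesktop/systemd1",
                                   "org.freedesktop.systemd1.Manager", "Subscribe",
                                   NULL, NULL, G_DBUS_CALL_FLAGS_NONE, -1, NULL, NULL);

      g_dbus_connection_signal_subscribe (bus, "org.freedesktop.systemd1",
                                          "org.freedesktop.systemd1.Manager",
                                          "JobRemoved", "/org/freedesktop/systemd1",
                                          NULL, G_DBUS_SIGNAL_FLAGS_NONE,
                                          on_job_removed, NULL, NULL);

      g_main_loop_run (loop);
      return 0;
    }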

Node Application Mode Activation

Node application modes (such as regular, transport, loading etc.) can be activated in two different ways:

1. The system is started and already knows about the desired node application mode at boot time.
2. The system is already running and needs to switch from one mode to another.

The boot manager does not need to be aware of the list of supported modes. The two above scenarios are handled as follows:

1. When the system is started, the boot loader will set the final target (e.g. transport.target). All the boot manager has to know is whether or not to restore the LUC when starting up. This information is provided by the NSM via a GetRestoreLUC() query method.
2. The NSM or some other component initiates the switch from one mode to another at run-time. The boot manager provides an Isolate(target) method that can be used to switch to a different mode; the target would e.g. be transport.target or loading.target.
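
On the systemd side, such a run-time mode switch boils down to passing the special "isolate" mode to StartUnit(), which (as noted in the kickoff minutes further below) is available from systemd 183 on. A minimal sketch, with transport.target as an example target:

    /* isolate-example.c: hedged sketch of what an Isolate("transport.target")
     * request could translate to on the systemd side. */
    #include <gio/gio.h>

    int
    main (void)
    {
      GError          *error = NULL;
      GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SYSTEM, NULL, &error);
      GVariant        *reply;
      const gchar     *job;

      if (bus == NULL)
        {
          g_printerr ("failed to connect to the system bus: %s\n", error->message);
          return 1;
        }

      reply = g_dbus_connection_call_sync (bus,
                                           "org.freedesktop.systemd1",
                                           "/org/freedesktop/systemd1",
                                           "org.freedesktop.systemd1.Manager",
                                           "StartUnit",
                                           g_variant_new ("(ss)", "transport.target", "isolate"),
                                           G_VARIANT_TYPE ("(o)"),
                                           G_DBUS_CALL_FLAGS_NONE, -1, NULL, &error);
      if (reply == NULL)
        {
          g_printerr ("isolating transport.target failed: %s\n", error->message);
          return 1;
        }

      /* the returned job object path can be matched against JobRemoved signals
       * to find out when (and whether) the mode switch has completed */
      g_variant_get (reply, "(&o)", &job);
      g_print ("isolation job: %s\n", job);
      g_variant_unref (reply);
      return 0;
    }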

SysInfraEGLifecycleExecPrjctBootManagerInternalArchitecture

Internal Software Architecture of the Boot Manager

This page lists the expected components/classes that will end up being part of the internal software architecture of the boot manager implementation.

Common Components
Boot Manager Components
Legacy App Handler Components

Common Components

The common components are defined as GObject classes in the common/ directory of the boot manager source tree.

WatchdogClient
  instantiated once for each service daemon
  responsible for setting up a timeout to call sd_notify("WATCHDOG=1") at regular intervals
ShutdownConsumer
  generated from the shutdown consumer D-Bus interface specification
  used as a skeleton to implement the Shutdown() method called by the NSM during the shutdown phase
NSMConsumerProxy
  generated from the NSM Consumer D-Bus interface specification
  used to register ShutdownConsumer objects with the NSM
  instantiated once by each service daemon to register itself as a shutdown consumer
  instantiated by the Legacy App Handler implementation once per legacy application
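
As a small illustration of the WatchdogClient idea, the following is a minimal sketch of a periodic GLib timeout that notifies systemd's watchdog. The 60/120 second values are only example figures; a real implementation would derive the period from the unit's WatchdogSec= setting (e.g. via the WATCHDOG_USEC environment variable).

    /* watchdog-client-example.c: hedged sketch of the WatchdogClient concept. */
    #include <glib.h>
    #include <systemd/sd-daemon.h>

    static gboolean
    notify_watchdog (gpointer user_data)
    {
      /* tell systemd that the service is still alive */
      sd_notify (0, "WATCHDOG=1");
      return G_SOURCE_CONTINUE;
    }

    int
    main (void)
    {
      GMainLoop *loop = g_main_loop_new (NULL, FALSE);

      /* notify every 60 seconds, i.e. well within an assumed 120 second timeout */
      g_timeout_add_seconds (60, notify_watchdog, NULL);

      g_main_loop_run (loop);
      return 0;
    }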

Boot Manager Components

NSMLifecycleControlProxy
  generated from the NSM D-Bus interface specification
  used by LUCStarter to ask whether the LUC should be restored or not
SystemdManagerProxy
  generated from the org.freedesktop.systemd1.Manager D-Bus interface specification
  used by BootManagerService and TargetStartupMonitor to talk to the org.freedesktop.systemd1.Manager service
BootManager
  generated from the org.genivi.BootManager.BootManager1 D-Bus interface specification
  used as a skeleton by BootManagerService to implement the org.genivi.BootManager.BootManager1 communication
LUCStarter
  takes an NSMLifecycleControlProxy and a BootManagerService
  asks the NSMLifecycleControlProxy whether or not to restore the LUC
  if the LUC is to be restored, uses the internal BootManagerService API to start the LUC units
  has a build-time-configurable order in which the different LUC type groups are started (e.g. first foreground, then background etc.)
LUCHandlerService
  takes a GDBusConnection and creates a LUCHandler
  also creates a GSettings object to store the last user context
  creates a mutual binding between the "last-user-context" property of the LUCHandler and the "/last-user-context" GSettings key
  connects to the "handle-register" and "handle-deregister" signals of the LUCHandler to implement the server side of the D-Bus methods
  updates the "last-user-context" property of the LUCHandler whenever a change is requested in the "handle-register" and "handle-deregister" callbacks

Common components used:
  one ShutdownConsumer to handle shutdown requests from the NSM
  one NSMConsumerProxy to register the ShutdownConsumer with the NSM
  one WatchdogClient to register with systemd's watchdog mechanism

Legacy App Handler Components

LAHandler
  generated from the org.genivi.BootManager.LegacyAppHandler1 D-Bus interface specification
  used as a skeleton in LAHandlerService to implement the org.genivi.BootManager.LegacyAppHandler1 D-Bus communication

LAHandlerService
  takes a GDBusConnection and creates a LAHandler
  connects to the "handle-register" and "handle-deregister" signals of the LAHandler to implement the registration and deregistration of ShutdownConsumer objects for legacy apps
  implements a "handle-shutdown" signal handler for every ShutdownConsumer created for the legacy apps

Common components used:
  multiple ShutdownConsumer objects to handle shutdown requests from the NSM: one for the service itself, several for the legacy apps
  one NSMConsumerProxy to register the ShutdownConsumer of the service and the ShutdownConsumer objects of legacy apps with the NSM
  one WatchdogClient to register with systemd's watchdog mechanism

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120611

Boot Manager Telco 2012-06-11 (15:00-15:30 CET)

Participants
Minutes
Status Report
Visibility of the Progress
Weekly Telco

Participants

Torsten Hildebrand (GENIVI, Continental)
Alex Brankov, Marc Dunford, Jannis Pohlmann (Codethink)

Minutes

Status Report

Finished assessment of the requirements
Finished design documentation including functional scope and architecture overview
Talked to the BIT team to understand the integration process
We've signed the GENIVI CLA, requested a repository
We've also created JIRA items for the systemd and Linux versions we need
We've clarified that the GLib dependency is ok for now
Now working on technical specifications (control flow, sequence diagrams, D-Bus interfaces)
Have also started working on the implementation of the LUC handler (internal repository for now)

Visibility of the Progress

Torsten: We should start using JIRA items to organize the work and make it more visible to others in GENIVI. This is important for others to be able to add their comments.

Torsten: We should go through a public review process before we start with the actual implementation. So after the technical specifications and everything are finished, we should post the plans on the genivi-dev mailing list so that others can comment.

ACTION: Torsten to look into creating a boot manager component in the GENIVI ProjectTracker project in JIRA.

Weekly Telco

Decision: Weekly telco to be held on Mondays, 15:00-16:00 CET using Continental's Webex. Codethink to send their agenda to Torsten before meetings.

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120618

Boot Manager Telco 2012-06-18 (15:00-16:15 CET)

Participants

Gunnar Andersson (EG-SI Lead, Volvo)
Torsten Hildebrand (Lifecycle Lead, Continental)
David Yates (EG-Auto Arch, Continental)
Jannis Pohlmann (Codethink)

Minutes

Status Report

Design document is finished (static and dynamic views), ready for review.

David & Torsten: Review Codethink Bootmanager design documentation

LUC implementation done. Bootmanager dbus interface implementation is ongoing (glib-binding, XML format)

Status of the git.genivi.org repository

Process ongoing, but started.

Shipping of NSM interfaces

Conti wants to provide the NSM implementation to Codethink under an evaluation license this week (still not agreed inside Continental).

Codethink: Should also check whether they can sign an NDA with Continental so that the code is provided under this NDA.

Final license of NSM still not decided in Continental. Risk: this can have an impact on the milestones of the Bootmanager; in particular, starting the testing outside Codethink would be quite late.

Review of interfaces by genivi-dev/eg-si/SAT

Gunnar: Review this week within EG-SI (Interfaces)
Gunnar: Review next week within EG-SI (Design)

Status of systemd 183 / linux 2.6.39 in Compliance 3.0

Codethink: Create JIRA item for systemd 183 for Compliance 3.0

JIRA item is created, no further activities needed anymore, handled by Gianpaolo Macario. Closed for Codethink.

Other open actions/items:

Gunnar: Creating a boot manager component in the GENIVI ProjectTracker project in JIRA.

Done.

Codethink & David: Interface in UML model and implementation need to be synchronized

Telco Information

Weekly telco to be held on Mondays, 15:00-16:00 CET using Continental's Webex. Codethink to send their agenda to Torsten before meetings. If someone else wants to join this telco, just send a mail to Torsten.

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120625

Boot Manager Telco 2012-06-27 (11:00-11:45 CET)

Participants

Gunnar Andersson (EG-SI Lead, Volvo)
Torsten Hildebrand (Lifecycle Lead, Continental)
Jannis Pohlmann (Codethink)
Alex Brankov (Codethink)
Marc Dunford (Codethink)

Minutes

Status report

Created JIRA items for milestones and task breakdown
Implemented LUC start-up in the boot manager service
Implemented all methods of the org.genivi.BootManager1 interface
Pushed the resulting code to the new git.genivi.org repository
Started the interface review during the EG SI telco last Thursday
Working on target start-up monitoring and node application mode activation
Working on the implementation of the legacy application handler

Phase 1 implementation milestone today

We think we have implemented more than we agreed to deliver (which was LUC handler + LUC start-up)
Missing: registration of shutdown consumers with the NSM
Missing: check with NSM whether LUC is to be restored or not

NSM strategy

We (Codethink) want to deliver all our work packages
Therefore we need to find a way to prove to GENIVI that our implementation works
Proposal: develop and ship an NSM dummy (implementing the bits we need) that can be dropped as soon as the NSM is available to all of GENIVI
Needs GENIVI to agree that this would be an acceptable way to deliver in case the real NSM doesn't make it into Excalibur

Additional Information

Codethink: Additional funding needed because of more GENIVI overhead than expected. Reviews take more time. Additional effort because of the missing NSM.

Other open actions/items:

Codethink & David: Interface in UML model and implementation need to be synchronized

Torsten: NSM will be sent under NDA to Codethink.

Codethink: Provide a presentation of the Bootmanager design for the next EG-SI meeting.

Telco Information

Weekly telco to be held on Mondays, 15:00-16:00 CET using Continental's Webex. Codethink to send their agenda to Torsten before meetings. If someone else wants to join this telco, just send a mail to Torsten.

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120702

0. Introduce Ben to Torsten and Gunnar

1. Review action items from last meeting:
   Jannis to mark the JIRA items for this milestone as completed.
   Send email to discuss ways to have fewer and shorter meetings. Who is doing this in Codethink?
   Jannis to make a presentation of the design and present it to the meeting tomorrow.
   Jannis to ensure that the action items from the previous meeting are reported in the next meeting.

2. Review JIRA tracking items

3. Report on progress/status

4. Discuss issues
   Any progress on the NSM NDA?

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120709

Boot Manager Telco 2012-07-09 (14:00-14:45 CEST)

Participants

Gunnar Andersson (EG-SI Lead, Volvo)
Torsten Hildebrand (Lifecycle Lead, Continental)
Ben Brewer (Codethink)
Alex Brankov (Codethink)

Minutes

Status report

Updated interface changes with David Yates to be discussed tomorrow at the SAT. Began implementing the changes in our code base internally. Mostly done with our dummy NSM model, still need to know about access to code and the NDA.

NSM strategy

We should still get access to Continental's code under NDA if necessary. Still need information on ErrorCodes/Shutdown interface.

Other open actions/items:

Telco Information

Weekly telco to be held on Mondays, 15:00-16:00 CET using Continental's Webex. Codethink to send their agenda to Torsten before meetings. If someone else wants to join this telco, just send a mail to Torsten.

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120716

Boot Manager Telco 2012-07-16 (15:00-16:45 CET)

Participants

Gunnar Andersson (EG-SI Lead, Volvo)
Torsten Hildebrand (Lifecycle Lead, Continental)
Ben Brewer (Codethink)
Alex Brankov (Codethink)
Marc Dunford (Codethink)

Minutes

Status report

NSM dummy implementation mostly complete, paused due to Gothenburg changes.

Reviewed with Torsten:

Discuss the changes to the project requirements from Gothenburg:
CT128-A002: Not required.
CT128-A004: Not required.
CT128-F004: Not required.
CT128-F006: Not required.
CT128-F014: Added.

To discuss with Gunnar:

More work than expected caused by interface changes at Gothenburg F2F. Some work wasted on runtime application handling wrapper which is now removed.

Issues

Discuss milestone slip. Torsten agrees it's likely the F2F added work; we should summarize any extra work and send it to him to be reviewed before bringing it up with Gunnar or the BIT group.
No action items for a few weeks.
Updated JIRA items related to systemd and kernel version.
Renaming the boot manager has the potential to add work, more in terms of documentation than in the code though. David Yates (Continental) will announce any changes on Wednesday (18th).
Agree updates to the project requirements. Agreed.
Still need the NSM code from Conti; when? Torsten will chase this up.

Other open actions/items:

Torsten: NSM will be sent under NDA to Codethink.

GENIVI/Codethink: Naming discussion still open.

Torsten: Discuss the scope/budget re-arrangements for the Gothenburg F2F.

Telco Information

Weekly telco to be held on Mondays, 15:00-16:00 CET using Continental's Webex. Codethink to send their agenda to Torsten before meetings. If someone else wants to join this telco, just send a mail to Torsten.

SysInfraEGLifecycleExecPrjctBootManagerMinutes20120813

Node Startup Controller Telco 2012-08-13 (15:00-: CET)

Participants

Gunnar Andersson (EG-SI Lead, Volvo)
Torsten Hildebrand (Lifecycle Lead, Continental)
Alex Brankov (Codethink)
Jannis Pohlmann (Codethink)
Marc Dunford (Codethink)

Minutes

Review action items from previous meeting

Torsten: NSM will be sent under NDA to Codethink.

Resolved, Codethink and Conti signed an NDA and the NSM code was sent to Codethink.

GENIVI/Codethink: Naming discussion still open.

Resolved by renaming the component to Node Startup Controller.

Torsten: Discuss the scope/budget re-arrangements for the Gothenburg F2F.

Marc raised the issue of extra work that was necessary and requested that we take a look at the budget used so far and compare it against the actual work that was done.

Gunnar mentioned that he would have preferred to reduce scope to reduce the amount of work towards the end of the project. He would like to take a closer look at what happened between the milestones and what the extra work was.

We discussed a few examples of where functionality that had already been implemented was dropped.

Action Item: Codethink to summarise extra work and budget used so far.

Status report

August 1st: Delivered milestone 3
  Testing
  Various quality improvements
  Work on the reference manual
  Documentation for manual testing
August 10th: Submitted Node Startup Controller 1.0.0 for inclusion in E-0.2
  Currently talking to the BIT to make the integration happen

Next steps

Rename the wiki pages? Update their documentation?

SysInfraEGLifecycleExecPrjctBootManagerRequirements

Consolidated Requirements Specification for the Node Startup Controller

1. General Requirements
2. Interface/API Requirements
3. Functional Requirements
4. Testing Requirements
5. Integration Requirements
6. Not Required

The status of all requirements below is tracked using the status column. Possible status values are:

satisfied: a requirement has been implemented (either in code or in process) and approved.
implemented: a requirement has been implemented (either in code or in process) but this has not been approved yet.
-: a requirement has been identified but not implemented yet.

1. General Requirements

CT128-G001 (satisfied): License must be MPL 2.0. Added by Gunnar Andersson on 2012-05-31. Comment: GPL2 might be acceptable as well but we'll assume MPL.

CT128-G002 (satisfied): The source code repository must be hosted on git.genivi.org. Added by Philippe Robin on 2012-05-31.

CT128-G003 (satisfied): GENIVI's JIRA instance must be used for issue tracking and organization of action items. Added by Philippe Robin on 2012-05-31.

CT128-G004 (satisfied): For public communication, the mailing lists [email protected] and [email protected] must be used. Added by Philippe Robin on 2012-05-31.

CT128-G005 (satisfied): Weekly conference calls should be set up to report on the progress. Added by Philippe Robin, Gunnar Andersson on 2012-05-31. Comment: Torsten Hildebrand and perhaps Gunnar will be participating from the GENIVI side.

CT128-G006 (satisfied): Minutes of the weekly conference calls should be uploaded to the GENIVI wiki. Added by Gunnar Andersson on 2012-05-31. Comment: Gunnar wants this for keeping track of the progress; Philippe Robin requested it as well.

CT128-G007 (satisfied): The design documentation and technical specifications should be uploaded to the execution project page in the GENIVI wiki. Added by Torsten Hildebrand on 2012-06-01.

CT128-G008 (satisfied): Codethink must sign the GENIVI Contributor License Agreement and officially announce which developers are authorized to contribute code under this agreement. Added by Jannis Pohlmann on 2012-06-01. Comment: This is required for contributions in general but also a prerequisite to having a git repo set up on git.genivi.org.

2. Interface/API Requirements

CT128-A001 / SW-LICY-003 (satisfied): The definition and structure of Lifecycle APIs must be hardware independent. Added by Mark Hatle (Wind River).

CT128-A003 / SW-LICY-066 (satisfied): An API must be provided for applications to register themselves in the LUC. Added by David Yates (Continental).

CT128-A005 (satisfied): The interfaces must be specified using the Franca IDL. Added by Gianpaolo Macario on 2012-06-21.

CT128-A006 (satisfied): The Franca/D-Bus interfaces must be reviewed and approved by the EG SI. Added by David Yates, Gunnar Andersson on 2012-06-14.

CT128-A007 (satisfied): The Franca/D-Bus interfaces must be reviewed and approved by the SAT. Added by David Yates, Gunnar Andersson on 2012-06-14.

3. Functional Requirements

CT128-F001 / SW-LICY-009 (satisfied): All Lifecycle components must provide monitoring capabilities with DLT for the Test Framework. Added by Mark Hatle (Wind River).

CT128-F002 / SW-LICY-022 (satisfied): The Boot Manager must keep track of the start-up of services/applications. Added by Mark Hatle (Wind River). Comment: This is about setting the state in the NSM. For this we rely on systemd to provide a signal for when a certain target (e.g. lazy.target) has finished. Lennart has stated that the JobRemoved() signal can be used to keep track of target startups. It is available from systemd >= 183 on.

CT128-F003 / SW-LICY-023 (satisfied): The Boot Manager must provide a mechanism to alter the start order of applications based on Last User Mode persistent data. Added by Christian Muck (BMW).

CT128-F005 / SW-LICY-042 (satisfied): The Boot Manager must provide a configurable dynamic shutdown procedure for legacy and 3rd party applications in the system. Added by David Yates (Continental). Comment: For start-up there needs to be a way to ask the boot manager to register itself as a shutdown consumer multiple times, through unit files that execute e.g. a binary that calls a D-Bus method on the already running instance and passes a shutdown-N.target to it. During shutdown the boot manager would be called to shut down multiple times. It would first shut down shutdown-N.target, then shutdown-N-1.target and so on. Better idea: Add an ExecStartPost field to unit files of legacy applications that tells the boot manager to register a new shutdown consumer for this particular legacy app.

CT128-F007 / SW-LICY-057 (satisfied): The Lifecycle Management must support different levels of functionality dependent on active node sessions. NOTE: the active node sessions agreed in the requirement actually refer to the "Node Application Mode", i.e. Transport, SWL etc. Added by David Yates (Continental). Comment: The boot manager must provide an "isolate this target" method for different application modes. It also must ask the NSM whether or not it is supposed to restore the LUC. It does not need to know about the modes itself. Note: With regards to the isolate method see the note above.

CT128-F008 (satisfied): The LUC handler must support audible, background and foreground applications. Added by Jannis Pohlmann on 2012-05-29.

CT128-F009 (satisfied): Multiple applications per LUC category (audible, background, foreground) must be supported. Added by Jannis Pohlmann on 2012-05-29.

CT128-F010 (satisfied): The order in which audible, foreground and background groups are started should be build-time configurable. Added by Torsten Hildebrand on 2012-05-29.

CT128-F011 (satisfied): The method for registering applications with the LUC should allow for multiple applications to be registered at once. Added by David Yates on 2012-05-29.

CT128-F012 (satisfied): The boot manager must register with systemd's watchdog mechanism. Added by Torsten Hildebrand on 2012-05-29.

CT128-F013 (satisfied): The boot manager is responsible for setting the system state in the NSM when targets such as lazy.target have been started. Added by Torsten Hildebrand on 2012-05-29.

CT128-F014 (satisfied): All of the BootManager functionality must reside in the same process. Added by Ben Brewer on 2012-07-13.

4. Testing Requirements

CT128-T001 (satisfied): Developers are expected to run static quality checks of their code to detect leaks etc. Added by Torsten Hildebrand on 2012-05-29.

CT128-T002 (satisfied): Test cases must follow the use cases in the UML model. Added by Torsten Hildebrand on 2012-05-29.

CT128-T003 (satisfied): Test cases should be delivered as example setups (each with unit files and LUC data) along with documentation on how to install and test the setups manually. Added by Jannis Pohlmann on 2012-06-01. Comment: No automated functional tests are provided because testing functionality like this across reboots is very hard and out of scope for this small project; mentioned on the 12/07/2012 telco as part of the completion criteria.

CT128-T004 (satisfied): Two different example setups for LUC startup, each with at least one application, must be provided. Added by Jannis Pohlmann on 2012-06-02. Comment: This is to demonstrate that the boot manager restores the LUC correctly.

CT128-T005 (satisfied): An example setup for starting without LUC must be provided. Added by Jannis Pohlmann on 2012-06-02. Comment: This is to demonstrate that different application modes are supported during start-up.

CT128-T006 (satisfied): An example setup for legacy application shutdown with at least two legacy apps must be provided. Added by Jannis Pohlmann on 2012-06-02. Comment: This is to test that multiple registrations of shutdown consumers with the NSM are working correctly.

5. Integration Requirements

CT128-I001 (satisfied): The API and ABI of the implementation must be frozen on 2012-07-30 (Compliance Freeze). Added by Gunnar Andersson on 2012-05-11.

CT128-I002 (satisfied): The implementation must be feature-complete on 2012-07-30 (Compliance Freeze). Added by Gunnar Andersson on 2012-05-11.

CT128-I003 (satisfied): The boot manager must be successfully integrated into the Excalibur release on 2012-08-27 (Baseline Freeze). Added by Gunnar Andersson on 2012-05-11.

CT128-I004 (satisfied): The boot manager must build against the GENIVI baseline. Added by Jeremiah Foster on 2012-06-01.

CT128-I005 (satisfied): The boot manager must be provided to the BIT team as an RPM or DEB package. Added by Jeremiah Foster on 2012-06-01. Comment: This is to make developers responsible for detecting and fixing any build and missing dependency issues.

CT128-I006 (satisfied): The boot manager must be working when installed in a GENIVI baseline system (e.g. the Yocto based one for QEMU). Added by Jeremiah Foster on 2012-06-01.

CT128-I007 (satisfied): An early release of the boot manager in RPM/DEB form for the E-0.1 release by 2012-07-13 is a nice-to-have. Added by Jeremiah Foster on 2012-06-01.

CT128-I008 (satisfied): The boot manager must adhere to the semantic versioning specification. Added by Gunnar Andersson on 2012-05-11. Comment: For the specification, see http://semver.org/.

CT128-I009 (satisfied): The boot manager must pass the code scanning / license compliance check performed by the License Review Team (LRT). Added by Jannis Pohlmann on 2012-06-02. Comment: For more information, see Code Scanning.

CT128-I010 (satisfied): The boot manager must pass the technical evaluation coordinated by the SAT/EG-SI. Added by Jannis Pohlmann on 2012-06-02. Comment: For more information, see Technical Evaluation.

CT128-I011 (satisfied): The boot manager must pass the quality assessment coordinated by the SAT/EG-SI. Added by Jannis Pohlmann on 2012-06-02. Comment: For more information, see Quality Assessment.

6. Not Required

CT128-R001 / SW-LICY-05: The Boot Manager must ensure that critical applications are prioritized during system start-up to ensure that they are available as early as possible within the system start-up sequence. Removed by David Yates, Torsten Hildebrand (Continental) on 2012-05-29. Comment: This is mandatory.target, and during the F2F meeting in Wetzlar we decided that it belongs to Boot Management but not to the Boot Manager.

CT128-R002: The Boot manager is not responsible for starting systemd user sessions; it will instead be started as root and will start applications under their own user IDs. Removed by Torsten Hildebrand on 2012-05-29. Comment: systemd user sessions are still work in progress upstream; if needed this could be worked on in a separate project / work package.

CT128-A002: The LUC handler must be implemented as a separate binary and D-Bus service. Removed by Ben Brewer on 2012-07-16. Comment: This is because it doesn't really belong into the boot manager itself. It will later be moved to user management and that should be possible without affecting the boot manager too much. At the Gothenburg F2F it was decided that the LUC handler should be part of the same binary.

CT128-A004 / SW-LICY-072: An API must be provided to read the contents of the current LUC. Removed by Ben Brewer on 2012-07-16. Comment: It was decided at the Gothenburg F2F that all components should reside in the same binary, therefore an external interface to read the LUC is no longer needed.

CT128-F004 / SW-LICY-024: The Boot Manager must provide a mechanism for starting and stopping applications during normal run-time. The Boot Manager will use this for shutdown and cancel shutdown scenarios. Additionally this interface will be used by SWL and the System Health Monitor. Removed by Ben Brewer on 2012-07-16. Comment: This covers only applications that provide unit files. We need to provide D-Bus methods to start, stop, kill, restart and perhaps list units. It was decided at the Gothenburg F2F that the BootManager would no longer provide a wrapper for starting/stopping applications during run-time since there is currently little use for such a wrapper.

CT128-F006 / SW-LICY-050: The Boot Manager must be able to dynamically alter the start-up sequence of the Head Unit to reflect operations on user input devices, i.e. if the user presses the "Navi" button during the system start-up then the Navigation SW must be given priority during the start-up sequence. Removed by Ben Brewer on 2012-07-16. Comment: This is the boot manager receiving "start this unit file" calls from the application event handler. The boot manager does not have to monitor anything for user input events itself. Since the run-time start/stop interface is removed and systemd cannot prioritize tasks it is believed that this is no longer relevant.

SysInfraEGLifecycleExecPrjctBootManagerTestScenarios

Boot Manager Test Scenarios

1. LUC Handler
Registration of individual apps
Deregistration of individual apps
Registration of multiple apps
Deregistration of multiple apps

1. LUC Handler

For the LUC Handler implementation, we provide a set of tests that can be performed automatically once the luc-handler daemon has been installed to the system.

After the typical ./configure && make && make install sequence to build and install the boot manager (and therewith the LUC Handler), the following command can be used to run the LUC Handler test suite:

make -C tests/luc-handler check

This will clear the persistent LUC data and will perform the following tests:

Registration of individual apps

1. Register a single background app. Verify that the background list was created and that the app was registered.
2. Register another background app. Verify that both apps are now registered.
3. Register the first background app again. Verify that the two apps are still registered and that the first app is only registered once.
4. Register a foreground app. Verify that the two background apps are still registered, that the foreground list was created and that the foreground app is now registered as well.

Deregistration of individual apps

1. Deregister the second background app. Verify that only the first background app is now registered for background and that the foreground app is still registered.
2. Deregister the first background app. Verify that the background list has disappeared and that the foreground app is still registered.

Registration of multiple apps

1. Register two background apps at once. Verify that the background list was created, that both apps have been registered for background and that the foreground app is still registered as well.
2. Register two more background apps. Verify that all four background apps are now registered and that they are stored in the order they were registered.
3. Register two already registered background apps in a different order. Verify that these apps are moved to the end of the background list in the order in which they were registered now.
4. Register a background and an audible app, i.e. a mix of registrations for an existing and a new LUC type. Verify that the audible list has been created, that the new app has been added to it and that the background app has also been registered.

Deregistration of multiple apps

1. Deregister all registered background apps. Verify that only the audible and foreground apps remain in the LUC.
2. Deregister all apps from the foreground and audible types, i.e. multiple types at once. Verify that the LUC is now empty again.

SysInfraEGLifecycleExecPrjctBootManagerTrackingInJIRA

Boot Manager Execution Project Tracking in JIRA

We are using the BootManager label to link JIRA tasks/items to the execution project. The following tasks/items are being tracked at the moment:

Milestones and Sub-tasks of the Boot Manager Execution Project (4 issues)

GT-2060 (Milestone): Deliver final design and project plan to GENIVI for assessment and approval. Planned start: 04/Jun/12. Due: Jun 04, 2012. Assignee: Jannis Pohlmann. Status: Closed.
GT-2065 (Milestone): Deliver implementation of phase 1 functionality to GENIVI for approval. Planned start: 26/Jun/12. Due: Jun 26, 2012. Assignee: Jannis Pohlmann. Status: Resolved.
GT-2068 (Milestone): Deliver a feature-complete implementation of the Boot Manager including phase 2 functionality to GENIVI for approval. Planned start: 24/Jul/12. Due: Jul 24, 2012. Assignee: Jannis Pohlmann. Status: Resolved.
GT-2073 (Milestone): Package integrated into GENIVI release and handover of final documentation. Planned start: 21/Aug/12. Due: Aug 21, 2012. Assignee: Jannis Pohlmann. Status: Closed.

SysInfraEGLifecycleExecProjectBootManagerLegacyApps

Legacy Application Start/Shutdown

Legacy applications are applications that provide a systemd unit file but are unaware of any GENIVI components, particularly the NSM's shutdown consumer concept. The boot manager therefore provides a helper binary and a D-Bus method to register shutdown consumers for legacy applications. One shutdown consumer is created and registered with the NSM for each legacy application. During shutdown, the NSM will call them to shut down in reverse order. Each shutdown consumer will then call the boot manager to stop the corresponding legacy app unit.

SysInfraEGLifecycleExecProjectBootManagerSystemdInteraction

Interaction with systemd

The boot manager interacts with systemd mostly through the org.freedesktop.systemd1.Manager and org.freedesktop.systemd1.Job interfaces. An org.genivi.BootManager1.Start("app1.unit") method call, for instance, would result in the boot manager calling org.freedesktop.systemd1.Manager.StartUnit("app1.unit", "fail"). From this call, systemd would return a job object, e.g., /org/freedesktop/systemd1/job/123, that implements the org.freedesktop.systemd1.Job interface and can be used to link what is going on in systemd with the Start() call. The org.freedesktop.systemd1.Manager.JobRemoved signal can be used to wait for the result of the job.

All interaction with systemd and boot manager clients is performed asynchronously, meaning that

the boot manager is expected to use asynchronous D-Bus calls for the communication with systemd,
the boot manager is not allowed to block incoming method calls while it is waiting for a systemd job to finish,
the boot manager will delay responses to incoming method calls until the corresponding systemd job has finished; clients are expected to be aware of this and make asynchronous calls to the boot manager.
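
A minimal sketch of this asynchronous pattern is shown below: the StartUnit() call returns immediately and the reply (the job object path) arrives in a callback, so the main loop is never blocked while systemd executes the job. Remembering the job and answering the original client call once the matching JobRemoved signal arrives is only indicated by comments; "app1.service" is just an example unit name.

    /* async-start-example.c: hedged sketch of the asynchronous interaction
     * with systemd described above. */
    #include <gio/gio.h>

    static void
    start_unit_done (GObject *source, GAsyncResult *result, gpointer user_data)
    {
      GError      *error = NULL;
      GVariant    *reply;
      const gchar *job;

      reply = g_dbus_connection_call_finish (G_DBUS_CONNECTION (source), result, &error);
      if (reply == NULL)
        {
          g_printerr ("StartUnit failed: %s\n", error->message);
          g_error_free (error);
          return;
        }

      /* remember the job path here and reply to the pending client call once
       * the JobRemoved signal for this job reports its result */
      g_variant_get (reply, "(&o)", &job);
      g_print ("job created: %s\n", job);
      g_variant_unref (reply);
    }

    int
    main (void)
    {
      GMainLoop       *loop = g_main_loop_new (NULL, FALSE);
      GDBusConnection *bus = g_bus_get_sync (G_BUS_TYPE_SYSTEM, NULL, NULL);

      g_dbus_connection_call (bus,
                              "org.freedesktop.systemd1",
                              "/org/freedesktop/systemd1",
                              "org.freedesktop.systemd1.Manager",
                              "StartUnit",
                              g_variant_new ("(ss)", "app1.service", "fail"),
                              G_VARIANT_TYPE ("(o)"),
                              G_DBUS_CALL_FLAGS_NONE, -1, NULL,
                              start_unit_done, NULL);

      g_main_loop_run (loop);
      return 0;
    }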

LUC and Dynamic Start-up Management

The LUC start-up is a good example of how the interaction with systemd works. Starting the units results in systemd jobs being created and JobRemoved signals being emitted by systemd. Due to the asynchronous nature of the interaction, there is no guarantee for jobs to be finished in the order they were started:

Runtime Application Management

All of the unit start/stop/kill/etc. methods the org.genivi.BootManager1 interface provides result in similar interaction with systemd as explained above for the LUC start-up. All methods result in systemd jobs being executed and their results being reported back to the boot manager. The only difference is the org.genivi.BootManager1.List method, which does not spawn a systemd job.

Target Start-up Monitoring

For monitoring targets starting up, the boot manager subscribes to systemd using the org.freedesktop.systemd1.Manager.Subscribe method. Afterwards, it monitors the org.freedesktop.systemd1.Manager.JobRemoved signal. Whenever the job for an interesting target (e.g. lazy.target) is removed, it checks the type of the job and potentially calls the NSM to apply a new system state.

SysInfraEGMinutes20120529

Minutes of kickoff meeting Execution project "Boot Manager"

F2F in Wetzlar 29.05.2012

Participants:

Marc Dunford (Codethink)
Jannis Pohlmann (Codethink)
Alex Brankov (Codethink)
David Yates (Continental)
Torsten Hildebrand (Continental)

Agenda:

9:00 Introduction

9:30 Idea & Concept presentation incl. answering upcoming questions

AI: David -> NSM dbus interface to be sent to Codethink
AI: David -> Some responsibility rewording inside the overview slide of boot management and resource management

11:00 Requirement walk-through

AI: David -> SW-LICY-025 removed
AI: Jannis -> SW-LICY-042 design needs to be done and reviewed

Draft design has been discussed.

Latest feedback from Lennart:

Jannis: We also discussed that the boot manager is responsible to set the system state in the NSM e.g. when lazy.target has been started (LAZY_RUNNING). We were not sure whether systemd already provided a generic signal for when a unit has been started.

I discussed this with Lennart yesterday and, starting from systemd version 183, there is a signal called "JobRemoved" that the boot manager can connect to. "JobRemoved" is emitted whenever e.g. a unit start job has finished successfully or unsuccessfully. The signal reports the unit name and the result type (e.g. success). We can use this to filter out the targets we are interested in and so on.

When discussing SW-LICY-057 we discussed systemd's isolate feature. We were not sure whether this was part of systemd's D-Bus API.

I talked to Lennart and he told me that it is available as a special mode to be passed to StartUnit(), starting from version 183.

11:30 Design goal in workpackage description

Decision: The NSM takes responsibility for node application mode evaluation and provides an interface with the information about whether to execute the LUC (focus target) or not.

AI: David -> Provide a new extension of the NSM interface

12:00 Walk-Through vehicle level requirement assigned to Boot Management

SW-LICY-066 and SW-LICY-072 were not part of the SoW, but were taken into account in the offer.

12:30 Lunch

13:30 Questions and answers

1. user-id
How to start an application with a specific user ID? The systemd unit file supports some features for this.

AI: David -> check if it satisfies our needs

2. glib
Is the glib dependency acceptable? What would be the effort to have no such dependency?

AI: Jannis -> Do a design proposal to review

3. license
Source code license?

Continental: Recommend GPL2.0, but if Mozilla public license is required it is also fine.
Codethink: Recommend GPL2.0, but if Mozilla public license is required it is also fine.

4. systemd version
Version 27 plus a patch for the target-reached signal, or the latest version which supports this feature.

The points I added to the requirements analysis section above provide strong reasons to require systemd 183 as part of GENIVI Compliance 3.0. Lennart told me that it does work with Linux >= 2.6.38, so we should try to push for that in Compliance 3.0 as well.

Torsten, do you see any potential problems of hardware being incompatible with this version of the kernel?

Torsten: No, we do not see an issue with this kernel version.

5. Unit/Component testing
Static code checks should be done by the developer anyway.
Component testing needs to follow the defined use cases in the EA model. (Note: one test case can cover multiple use cases.)
The full feature set needs to be demonstrated with test cases.

Jannis: Only very basic testing ("Test application (inc. systemd unit files) that can be started via the Boot Manager") was part of the SOW requirements when we submitted our proposal. We're fine with providing example collections of systemd unit files for the different use cases along with documentation on how to run manual tests.

6. Organisation
Where to host the source code project? If nothing is decided by GENIVI, SourceForge will be used.
Where to host the documentation, project status, release notes, bug reports, ...? If nothing is decided by GENIVI, the GENIVI wiki and JIRA will be used.
Is there any design document template? David will ask the SAT team. If nothing is provided, the project will use the GENIVI wiki.

AI: Torsten-> Create an entry page on the wiki for the execution project so that we can create our requirements specification and design documents there.

Jannis: Torsten, this is important to meet the first milestone. Can we get this set up ASAP?

Torsten: This will be done.

7. Integration
What are the preconditions for a component integration with the BIT team? Further discussion is needed on this point.

AI: Jannis-> Ask the BIT what the pre-conditions and process for integration into Excalibur are. Keep Torsten and Gunnar in CC.

1. Status meeting / call
A regular call every week, one hour. All AIs will be listed on the project wiki page. There will be a mailing list for the project. The project status report is part of this regular meeting.

15:30 Walk-through of the architecture model
Explanation of where everything is

AI: David -> Help Jannis to get the right version of the architecture model

16:30 Wrap up

AI: Let Torsten, David and Gunnar know about Codethink's internal [email protected] mail alias.

You now know, so we'll consider it done.

AI: Codethink -> to send Torsten and Gunnar an invitation to Kanban. Should be done; please let us know if there are any problems.
AI: Codethink -> to look into what the Quality Assurance process in GENIVI is.

SysInfraEGLifecycleExecPrjctNodeResMgmt

Execution Project Node Resource Management

1. Team
2. Statement of Work
3. Project Schedule
4. Weekly Status Reports
5. Requirements
6. Architecture Overview
7. Documents and Specifications
8. Implementation
9. Baseline Integration Status
10. Telcos
11. Overview of Wiki Pages
12. Component Specification
13. Test Plan

1. Team

GENIVI representatives:

Gunnar Andersson (EG-SI Lead) David Yates (Topic Lead Lifecycle)

2. Statement of Work

This execution project aims at transforming the logical Node Resource Management into the specific components Node Health Monitor and Node Resource Manager.

This wiki page provides an overview of all information generated as part of the project. This includes the project schedule, requirements, an overview of the architecture, pointers to the implementation, documents and technical specifications as well as progress reports in the form of telco minutes.

3. Project Schedule

Node Health Monitor released as Abstract P2 in Horizon
Node Health Monitor changed to Specific P1 in Intrepid
Node Resource Manager released as Abstract P2 in Intrepid

4. Weekly Status Reports

Weekly status reports can be found within the EG-SI Weekly Meetings

5. Requirements

Requirements Specification
Excalibur Dependencies

6. Architecture Overview

Functional Scope
Architecture Overview
Internal Software Architecture
Resource Management presentations

7. Documents and Specifications

Lifecycle presentations

8. Implementation

The source code repository for the Node Health Monitor is available here:

Node Health Monitor Code

Working with GENIVI Git repositories is described on this wiki page.

9. Baseline Integration Status

10. Telcos

11. Overview of Wiki Pages

The Lifecycle domain is described in a number of Wiki pages and you can navigate through the domain pages using the arrow links on the bottom of the page.

12. Component Specification

Here is the latest version of the Component Specification for the Node Health Monitor.

This is based on the latest version of the EA Model and the content of the Wiki pages.

13. Test Plan

The NHM is delivered with a unit test that can be started via the “make check” command. A coverage analysis with the tool “BullsEye” resulted in a function-coverage of 100 % and a branch-coverage of 84 % for NHM in version 1.3.3.

The unit test should be taken into account in every future source code adaptation. A static code analysis with the tool “Klocwork” and a memory leak check with “valgrind” did not show any findings for the NHM source.

SysInfraEGLifecycleExecPrjctNSM

Execution Project Node State Management

1. Team
2. Statement of Work
3. Project Schedule
4. Weekly Status Reports
5. Requirements
6. Architecture Overview
7. Documents and Specifications
8. Implementation
9. Baseline Integration Status
10. Telcos
11. Overview of Wiki Pages
12. Component Specification
13. Test Plan
14. Yocto BB recipe

1. Team

GENIVI representatives:

Gunnar Andersson (EG-SI Lead) David Yates (Topic Lead Lifecycle)

2. Statement of Work

This execution project aims at transforming the logical Node State Management into the specific components Node State Manager and Node State Machine by delivering an implementation to be integrated into the GENIVI Foton release.

This wiki page provides an overview of all information generated as part of the project. This includes the project schedule, requirements, an overview of the architecture, pointers to the implementation, documents and technical specifications as well as progress reports in the form of telco minutes.

3. Project Schedule

Node State Manager released as Abstract P2 in Excalibur
Node State Manager changed to Specific P1 in Horizon

The interface between the Node State Manager and the Node State Machine has been updated and a number of bug fixes have been integrated.

4. Weekly Status Reports

Weekly status reports can be found within the EG-SI Weekly Meetings

5. Requirements

Requirements Specification
Excalibur Dependencies

6. Architecture Overview

Functional Scope
Architecture Overview
Internal Software Architecture
System shutdown concept
NSM Data

7. Documents and Specifications

Lifecycle presentations

8. Implementation

The source code repository for the Node State Manager is available here:

Node State Manager Code

Working with GENIVI Git repositories is described on this wiki page.

As a part of the review to promote the Node State Manager from "Abstract P2" to "Specific P1", the question arose how the glib usage influences the startup. Therefore, measurements with strace have been made. The result indicates that shared objects are not loaded at once; only the symbols that are accessed are loaded. The usage of large shared objects, in this case glib, does not necessarily slow the startup down.

The output of the strace measurement can be found here.

9. Baseline Integration Status

This section will summarise the integration of the NSM implementation into the GENIVI baseline.

10. Telcos

11. Overview of Wiki Pages

The Lifecycle domain is described in a number of Wiki pages and you can navigate through the domain pages using the arrow links on the bottom of the page.

12. Component Specification

Here is the latest version of the Component Specification for the Node State Manager.

This is based on the latest version of the EA Model and the content of the Wiki pages.

13. Test Plan

The attached document node_state_manager Test Plan.doc describes the test plan for the Node State Manager. This matches the code that is stored within the GIT repository.

14. Yocto BB recipe

A Yocto BB recipe is attached here

Node State Manager Description

This is the draft proposal of the public web page text for NSM.

Node State Manager (NSM), part of the Lifecycle subsystem in GENIVI

Introduction

The node state management is the central function for information regarding the current running state of the embedded system. The Node State Manager (NSM) component provides a common implementation framework for the main state machine of the system. The NSM collates information from multiple sources and uses this to determine the current state(s).

The NSM notifies registered consumers (applications or other platform components) of relevant changes in system state. Node state information can also be requested on-demand via provided D-Bus interfaces.

The node state management also provides shutdown management, so one part of the information which is provided is the shutdown request notification to the consumers.

The node state management is the last/highest level of escalation on the node and will therefore command the reset and supply control logic. It is notified of errors and other status signals from components that are responsible for monitoring system health in different ways.

Internally, node state management is made up of a common generic component, Node State Manager (NSM), and a system-specific state machine (NSMC) that is plugged into the Node State Manager. Through this architecture there can be a standardized solution with stable interfaces towards the applications, which still allows for product-specific behavior through the definition of the specific state machine.

Full Text

The node state management is the central function for information regarding the current running state of the embedded system. The Node State Manager (NSM) component provides a common implementation framework for the main state machine of the system. The NSM collates information from multiple sources and uses this to determine the current state(s).

The NSM notifies registered consumers (applications or other platform components) of relevant changes in system state. Node state information can also be requested on-demand via provided D-Bus interfaces.

In addition to states, the NSM supports the notion of sessions, which are temporary running conditions of the system. Sessions are independent of and orthogonal to the node state. There are a number of predefined session types corresponding to what all IVI systems typically use. However, a key difference between sessions and states is that any application can also dynamically add its own session type. In an automotive IVI system, an example of a typical session is an ongoing phone call. The specific state machine could be implemented such that this affects how the state machine reacts to, for example, a shutdown request while the session is active.

The NSM also provides shutdown management. The boot management functionality (primarily implemented by the Node Startup Controller) only boots the node. Shutdown is achieved by the Node State Manager signalling applications to shut down. Please note however that a key difference in the GENIVI Lifecycle handling compared to typical Linux systems is that components shall normally be designed to simply go into a quiet state. Typically this means to store persistent data and make sure file systems are in a consistent state and so on. An application is however not shut down in the normal sense, i.e. it does not free memory or unload itself unnecessarily. This is to allow for the "cancelled shutdown" feature which allows the system to quickly resume into a running state if this was requested by the user at a time before the system has reached a full shut down. If shutdown completes however, the applications must have entered a state where it is safe to perform a reset or power-off.

This will be done later. Go ahead with the text above

Under the full text I would suggest that we include the majority of the contents of https://collab.genivi.org/wiki/display/genivi/SysInfraEGLifecycleConcept to really understand how the Node State Manager fits in the Lifecycle Concept.

SysInfraEGLifecycleSubsysDoc

Subdomain Lifecycle Documentation

A list of former/legacy documentations and specifications
Lifecycle in system context
Requirements analysis (the compliance requirements, use cases and scenarios)
Functional analysis (based on use cases and scenarios)
Design
Integration
Tests / Compliance

A list of former/legacy documentations and specifications

The following picture shows an overview of lifecycle components.

– Christian Muck - 2011-01-17

1. Platform_State_Manager-Req.pdf: Platform State Management Requirement Specification (ver 0.17). Author: Mark Hatle, Wind River. Status: To-be-checked.
2. PSMPresentation.ppt: Top Level presentation describing the Platform State Manager (03/11/2009). Author: Mark Hatle, Wind River. Status: To-be-checked.
3. DFS-PowerStateManagement.pdf: Overview of Demo (K3) version of the Program State Management (v0.17 - 29/05/2009). Author: Wind River. Status: To-be-checked.
4. GENIVI_Platform_Lifecycle.pdf: GENIVI Platform Lifecycle Specification, Draft Version 0.2 (19/06/2009). Author: Mark Hatle, Wind River. Status: To-be-checked.
5. System_Resource_Management_Requirements_v1.0.doc: The capture of Resource Management requirements for the GENIVI system (Draft Version 1.0 - 23/06/2009). Author: Unknown. Status: To-be-checked.
6. DFS-ControlGroupsHealthMonitor-R2_0.pdf: Overview of the K3 Demo for the System Health Monitor (Version 1.2 - 22/04/2009). Author: Brian Gunn, Wind River. Status: To-be-checked.
7. System_Resource_Management-collated-091709.xls: System Resource Management: Requirements review. Author: Various. Status: To-be-checked.
8. BootInfrastructure: (Fast) Boot Infrastructure: proposed boot process, power states and requirements. Author: Mark Hatle, Wind River. Status: To-be-checked.
9. FastBootMoblin: Fast Boot Infrastructure: Moblin optimizations. Author: Gerard Maloney. Status: To-be-checked.
10. SIBootTimings: Fast Boot Infrastructure: requirements. Author: Manfred Bathelt, BMW. Status: To-be-checked.
11. systemd - system initialization daemon: Systemd: system initialization and session management daemon. Author: Michael Kerrisk, jambit. Status: To-be-checked.
12. GENIVI_1.0_RC_Architecture_Implementation_Overview.7z: Architecture Implementation Overview: GENIVI 1.0. Author: GENIVI. Status: To-be-checked.
13. 100803_GENIVIStakeholdersNeeds_1.52.pdf: Stakeholders Needs Document: Apollo Reference document. Author: GENIVI. Status: To-be-checked.
14. ResourceManager: Resource Manager: Wiki - component description, requirements, implementation. Author: Mark Hatle, Wind River. Status: To-be-checked.
15. SystemHealthMonitor: System Health Monitor: Wiki - component description and requirements. Author: Mark Hatle, Wind River. Status: To-be-checked.
16. PowerStateManager: Power State Manager: Wiki - component description and requirements. Author: Mark Hatle, Wind River. Status: To-be-checked.

Lifecycle in system context

System border (SysInfraEGLifecycleConceptSystemBorder)
General concept (SysInfraEGLifecycleConcept)
Boot management Split (SysInfraEGLifecycleConceptBootmanagementSplit)
Boot Management - Startup concept (SysInfraEGLifecycleConceptBootmanagementStartup)
Boot Management - Shutdown concept (SysInfraEGLifecycleConceptBootmanagementShutdown)
Multi-node concept (SysInfraEGLifecycleConceptMultiNode)
Concept refinement (SysInfraEGLifecycleConceptRefinement)

Requirements analysis (the compliance requirements, use cases and scenarios)

Requirements (SysInfraEGLifecycleRequirements)

Vehicle Requirements

GENIVI Model -> Requirements View -> System Infrastructure -> Lifecycle

SW Platform Requirements

GENIVI Model -> Logical View -> SW Platform Requirements -> System Infrastructure -> Lifecycle

Note: The Lifecycle directory refers to the subdomain Lifecycle and will itself be split into the required subdirectories and hence will contain all requirements. Existing subdirectories at this level referring to Lifecycle will be removed (i.e. System Health Monitor etc.)

Use cases and scenarios (SysInfraEGLifecycleRAUsecases)

GENIVI Model -> Use Case View -> System Infrastructure -> Lifecycle

Functional analysis (based on use cases and scenarios)

Black box realization of use cases (SysInfraEGLifecycleFABlackBoxUcRealization)

Static views

GENIVI Model -> Logical View -> Clusters -> Base Layer -> System Infrastructure -> Lifecycle

Dynamic views

GENIVI Model -> Logical View -> Use Case Realizations -> System Infrastructure -> Lifecycle

Design

Structure definition (SysInfraEGLifecycleDesignStructDef)

Blocks / Components

GENIVI Model -> Logical View -> Logical Components

Activity diagrams

IMPLEMENTATION Model -> System Infrastructure -> Lifecycle -> Sequence Diagrams

Public interfaces

Design analysis (white box realization of use cases) (SysInfraEGLifecycleDesignWhiteBoxUcRealization)

IMPLEMENTATION Model -> System Infrastructure -> Lifecycle -> Use Case Realizations

Process views (SysInfraEGLifecycleDesignProcV)

Integration

Dependencies
Deployment

Tests / Compliance

Test criteria
Test cases

SysInfraEGLifecycleConcept

Subdomain Lifecycle Concept

Level 1 logical view of Lifecycle

The objectives driving this first-level structure definition are:

Complexity reduction
Clear responsibility split
Decentralized organization with fewer dependencies
Offering GENIVI SW platform interfaces/attributes
Extendable with product specifics

The following figure presents the level 1 logical view of the concept for Lifecycle.

Responsibility Description

Supply management

Abstract: The supply management will monitor supply-specific signals/escalations provided/abstracted by the plug-ins, which are organized in a tree structure, and will notify registered consumers of state changes. Each plug-in is a supply commander which will first try to solve issues on its own. Only if an issue is not resolvable will it escalate it to the upper tree node. The root of this tree is the supply management.

It will be up to the registered consumers and the plug-ins to decide what action to take within the node (e.g. logging errors, reducing functionality in the system, ...) if the supply condition is getting worse.

There is one special consumer, the node state management, which is interested in 3 mapped supply platform states (GOOD, POOR, BAD).

If the condition is GOOD, no further action is needed.
If the condition is POOR, the node state management will request a shutdown of the system/ECU, but if the system is in a state like a SW update, the request won't be satisfied (system lifetime protection).
If the condition is BAD, the node state management will force the system/ECU to shut down immediately. This request cannot be overruled (system corruption protection).
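
This escalation rule can be summarized in a small sketch, assuming a simple three-value condition delivered by supply (or, analogously, thermal) management; request_shutdown() and force_shutdown() are placeholders for whatever the node state management does in a concrete product, not a real API.

    /* condition-escalation-example.c: hedged sketch of the GOOD/POOR/BAD rule. */
    #include <glib.h>

    typedef enum { CONDITION_GOOD, CONDITION_POOR, CONDITION_BAD } Condition;

    static void
    request_shutdown (void)
    {
      /* placeholder: the state machine may refuse this request, e.g. during a
       * SW update (system lifetime protection) */
      g_print ("requesting a shutdown of the system/ECU\n");
    }

    static void
    force_shutdown (void)
    {
      /* placeholder: this request cannot be overruled (system corruption protection) */
      g_print ("forcing an immediate shutdown of the system/ECU\n");
    }

    static void
    handle_condition (Condition condition)
    {
      switch (condition)
        {
        case CONDITION_GOOD:
          /* no further action is needed */
          break;
        case CONDITION_POOR:
          request_shutdown ();
          break;
        case CONDITION_BAD:
          force_shutdown ();
          break;
        }
    }

    int
    main (void)
    {
      handle_condition (CONDITION_POOR);
      return 0;
    }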

Configuration:

The supply management will use a configurable state machine. A default state machine will be provided by GENIVI, but this can be replaced/extended at the product development.

Plug-in:

The plug-ins have the role/responsibility to abstract the product-specific HW/network information about supply conditions and will already react on issues where they are able to.

Thermal management

Abstract:

The thermal management will monitor thermal-specific signals/escalations provided/abstracted by the plug-ins, which are organized in a tree structure, and will notify registered consumers of thermal changes. Each plug-in is a thermal commander which will first try to solve issues on its own. Only if the issue is not resolvable will it escalate it to the upper tree node. The root of this tree is the thermal management.

It will be up to the registered consumer and the plug-ins to decide what action to take within the node (i.e. activating fans in the system and/or reducing the volume, ...) if the thermal condition is getting worse.

There is one special consumer, the node state management, which is interested in 3 mapped thermal platform states (GOOD, POOR, BAD).

If the condition is GOOD, no further action is needed.
If the condition is POOR, the node state management will request a shutdown of the system/ECU, but if the system is in a state like SW update, the request won't be satisfied. (System lifetime protection)
If the condition is BAD, the node state management will force the system/ECU to shut down immediately. This request cannot be overruled. (System corruption protection)

Configuration:

The thermal management will use a configurable state machine. A default state machine will be provided by GENIVI, but this can be replaced/extended during product development.

Plug-in:

The plug-ins have the role/responsibility to abstract the product-specific HW/network information about thermal conditions and will already react on issues where they are able to.

[Markus Boje 2011-04-14: Supply and Thermal Management look very similar. The difference seems to be in the plug-in tree. Can this be subsumed by a kind of OuterConditionManager?]
[Torsten Hildebrand 2011-04-27: On concept level we are scoping on different topics; the manifestation could be one implementation running in two instances.]

Power management

Abstract:

The power management will receive power events/messages from product-specific plug-ins which provide the connection to the node/vehicle network; in addition, a plug-in abstracting the wake-up logic will have to be provided by the product development. The power management will filter and map them to GENIVI predefined events/messages, which will be forwarded to the node state management. As the internal state machine of the node state management contains product specifics, raw power events/messages can also be sent to it. Currently there is no direct interface planned at the power management to access internal information by applications. All application-relevant information shall be provided on the node state management.

Configuration:

Product to platform mapping information has to be provided.

Plug-in:

Plug-ins, developed by the product development, will be used to provide all requested information to the power management.

Node state management

Abstract:

The node state management is the central repository for information regarding the states/sessions inside the node. It collates information from multiple sources and uses this to determine the current states (different states exist for different purposes). This information will be delivered to registered consumers or can be requested via the provided interface. All the raw data gathered by the node state management is also available on request to interested consumers.

The node state management also provides the shutdown management, so one part of the information which is provided is the shutdown request notification event/message to the consumers. The boot management only boots the node and has to stop the boot activity in case of a high priority shutdown request, which is initiated by the node state management. That means boot management is just a consumer of the node state management. The node state management is the last/highest level of escalation on the node, therefore it will command the reset and supply control logic. Additionally, if the system is a multi-node system, the node state management includes a slave (realized as a consumer) which knows the specifics of this configuration and knows which events/messages need to be transferred to other node(s).

Configuration:

The node state management will use a configurable state machine. A default state machine will be provided within GENIVI, but this can be replaced/extended by the product development.

Plug-in:

The node state management will use a plug-in to initiate supply off/cycle events/commands in the system, depending on whether the control logic is local or located on another node.

Data

The node state management will access and control data from a number of sources. Further information on this topic can be found here.

Boot management

Abstract:

The boot management contains all the functionality required to start a GENIVI based product including low level drivers, services, automotive applications and third party add-ons. This configuration has to be easily manageable, especially the included dependencies and boot order information. The boot management shall handle boot dependencies as much as possible by itself automatically to reduce the complexity and the integration effort for the development team. The most important goal of boot management is to start up the node as fast as possible to a requested functionality state. To satisfy this goal, parallelization of component start-up is provided to avoid unused CPU power (idle time). Additionally it will include the runtime prioritization of components during start-up based on user driven events (e.g. the last user context information is read to dictate the boot path). In some cases the start-up activity has to be cancelled, for example on a "BAD" thermal condition. This is one of the reasons why the boot management is a consumer of the node state management. Finally the boot management is responsible for setting up the configurable and observable partitions for resource management.

Configuration:

All SW components to be started on the node must provide information regarding their internal dependencies, including the partition base information; this will be used by the boot management during system start-up. For certain components which are used for dictating the boot path, additional configuration is needed.
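As an illustration only (the component and unit names below are invented for the example and are not defined by GENIVI), in a systemd based realization this dependency information could be expressed in each component's unit file:

# example-component.service - illustrative declaration of start-up dependencies
[Unit]
Description=Example GENIVI component
# dependency information evaluated during system start-up
Requires=persistency.service
After=persistency.service dbus.service

[Service]
ExecStart=/usr/bin/example-component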

Plug-in:

Not applicable

Resource management

Abstract:

Resource management contains the functionality to ensure that the node runs in a stable and defined manner. To do this it will monitor and limit different aspects of SW component behavior including system resources (like CPU load and memory) and critical run-time observation.

Resource allocation (CPU and memory) will be configurable on a component basis, and run-time dynamic alterations of the resource allocation will be provided when requested by applications in predefined resource-hungry use cases.

In case of a critical run-time observation failure, the following escalation strategies could be considered to repair the situation:

Restart application
Restart node
Rollback of application to previous version
Rollback of all user updates to baseline state
Deletion of application's persistence data
Deletion of all user persistence data

The exact escalation strategy employed will be configurable and determined by the OEM.

Over and above the described functionality, the resource management stores this situation in the error storage and creates and records a crash dump for future problem analysis.

Configuration:

The required configuration data is defined and provided on the component level.

Plug-in:

The monitoring of critical or important SW components on the node will be done via plug-ins. A plug-in will be provided to monitor one or many applications on the node along with a recovery strategy for the failing application (i.e. restart single application/restart complete system).

[Christian Muck 2011-04-20: Where does the plug-in get the information from if an application fails? Will the plug-in observe the application information from systemd, will systemd notify the plug-in via the "OnFailure" attribute of a unit file, or will it observe the application itself, e.g. /proc entries?]
[Torsten Hildebrand 2011-04-27: The error reporting will differ between the applications, because in a lot of cases they already exist, therefore a specific plug-in is needed. systemd will only report if an application is closed unexpectedly, and the application may offer systemd based error callback routines to get more information. Additionally we will offer a GENIVI example plug-in, which can be used for new development.]
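As a minimal sketch of the "OnFailure" mechanism mentioned in the comment above (the unit names and paths are hypothetical): systemd activates a recovery unit when the monitored service fails, and that unit could in turn inform a monitoring plug-in or trigger the configured escalation strategy.

# myapp.service - hypothetical monitored application
[Unit]
Description=Example monitored application
# activate the recovery unit below if this service fails
OnFailure=myapp-recovery.service

[Service]
ExecStart=/usr/bin/myapp

# myapp-recovery.service - hypothetical recovery hook
[Unit]
Description=Recovery handling for myapp

[Service]
Type=oneshot
# e.g. notify the monitoring plug-in or execute the configured recovery strategy
ExecStart=/usr/bin/myapp-recovery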

[Torsten Hildebrand 2011-10-21: Added the interface for storing the LUC at the Boot Management]

SysInfraEGLifecycleBootMgmtSOW

GENIVI Lifecycle - Boot Management

Statement Of Work

GENIVI Lifecycle - Boot Management Statement Of Work
Scope of Work
Introduction
Organization Of Work
Schedule
Period of performance
Requirements
Design
Shutdown Concept
Use cases
Key activities
Initial list of expected tasks
Solution definition and final design
Solution Development
Coordination of upstream acceptance
Maintenance
Completion Criteria

Scope of Work

Introduction

The purpose of this statement of work is to define the implementation required to fulfill the architecture of the Boot Management package within the Lifecycle sub-domain.

Background

The system architecture for the Lifecycle sub-domain can be found in the GENIVI architecture model and a subset of the architecture is available within the GENIVI wiki.

The functional scope of Lifecycle is:

However, for this statement of work we are only interested in the Boot Manager component that refines the Boot Management package:

The Boot Management contains all the functionality required to start a GENIVI based product including low level drivers, services, automotive applications and third party add-ons. This configuration has to be easily manageable, especially the included dependencies and boot order information. The boot management shall handle boot dependencies as much as possible by itself automatically to reduce the complexity and the integration effort for the development team.

The most important goal of boot management is to start-up the node as fast as possible to a requested functionality state. To satisfy this goal parallelization of component start-up is provided to avoid unused CPU power (idle time).

Additionally it will include the runtime prioritization during start-up of components based on user driven events (e.g. the last user context information is read to dictate the boot path). In some cases a cancel of the start-up activity is needed like a "BAD" thermal condition. This is one of the reasons why the boot management is a consumer of the node state management.

Finally the boot management is responsible to setup resource management configurable and observable partitions.

Organization Of Work

Topic Comment

Maintenance Mode: Upstream. The implementer will deliver and maintain the defined component in a publicly accessible GIT repository for easier inclusion within a GENIVI distribution.

License for the contributed code: GNU General Public License. To increase the likelihood of upstream acceptance of the GENIVI changes, the code contributions should be kept license compatible with the upstream projects.

GENIVI mentor: Continental Automotive. Continental will be responsible for defining the architecture of the components in question.

Relevant upstream Linux projects: systemd, libcgroup (Boot Manager).

Maintainers: Implementer. Once the work has been accepted upstream by the respective projects, the owner is to ensure that it remains in mainline and is properly maintained.

Schedule

The outcome of this SOW is planned to be included in Excalibur. Due to the short timescales and the inherent risks to a GENIVI distribution, synchronization of this work with Discovery is not feasible. NOTE: the planned introduction of systemd into the Discovery release is not affected by this SOW. The usage of systemd within a distribution is still possible without the additional features provided by the Boot Management. The aim is to have a functional solution by the second calendar quarter 2012 (Q2 2012) that can be integrated within the BIT.

Period of performance

Start TBD End TBD

Requirements

The SW requirements to be fulfilled by the Boot Manager can be found in the table below:

Alias: SW-LICY-003
Derived from: -
Area: API
Originator (Name, Company): Mark Hatle, Wind River
Description: Hardware independent APIs
Additional Info: The definition and structure of Lifecycle APIs must be hardware independent
Priority: P1
State: Released
Comment: Christian Muck, BMW - Okay.

Alias: SW-LICY-009
Derived from: VH-LICY-027
Area: Component
Originator (Name, Company): Mark Hatle, Wind River
Description: DLT Monitoring capabilities
Additional Info: All Lifecycle components must provide monitoring capabilities with DLT for the Test Framework
Priority: P1
State: Released
Comment: Christian Muck, BMW - Better understanding: Lifecycle Management must provide monitoring capabilities with DLT for the Test Framework. Original Req.: Lifecycle Management must provide monitoring capabilities for the Test Framework

Alias: SW-LICY-022
Derived from: RQ 1.14.1
Area: Component
Originator (Name, Company): Mark Hatle, Wind River
Description: Track start up of services/applications
Additional Info: The Boot Manager must keep track about start up of services/applications
Priority: P1
State: Released
Comment: David Yates, Continental - Component design and naming not finalized hence reworded. Christian Muck, BMW - Okay. Original Req.: SHM must keep track about start up of services/applications

Alias: SW-LICY-023
Derived from: VH-LICY-023
Area: Component
Originator (Name, Company): Christian Muck, BMW
Description: Last User Mode persistent data
Additional Info: The Boot Manager must provide a mechanism to alter the start order of applications based on Last User Mode persistent data.
Priority: P1
State: Released
Comment: Munich Workshop: Reword for LUM dependent start-up. Original Req.: The Lifecycle Management supports different start priorities for applications.

Alias: SW-LICY-024
Derived from: VH-LICY-026
Area: Component
Originator (Name, Company): Christian Muck, BMW
Description: Start/stop applications during run-time
Additional Info: The Boot Manager must provide a mechanism for starting and stopping applications during normal run-time. The Boot Manager will use this for shutdown and cancel shutdown scenarios. Additionally this interface will be used by SWL and System Health Monitor.
Priority: P1
State: Released
Comment: David Yates, Continental - Reword needed to be clear about point of no return and term shutdown. Christian Muck, BMW - Reword done.

Alias: SW-LICY-025
Derived from: VH-LICY-026
Area: Component
Originator (Name, Company): David Yates, Continental
Description: Critical applications are prioritized
Additional Info: The Boot Manager must ensure that critical applications are prioritized during system start-up to ensure that they are available as early as possible within the system start-up sequence
Priority: P1
State: Released

Alias: SW-LICY-042
Derived from: VH-LICY-004
Area: Component
Originator (Name, Company): David Yates, Continental
Description: Configurable shutdown procedure
Additional Info: The Boot Manager must provide a configurable dynamic shutdown procedure for legacy and 3rd party applications in the system
Priority: P1
State: Released

Alias: SW-LICY-050
Derived from: VH-LICY-011
Area: Component
Originator (Name, Company): David Yates, Continental
Description: User input alters start-up sequence
Additional Info: The Boot Manager must be able to dynamically alter the start-up sequence of the Head Unit to reflect operations on user input devices, i.e. if the user presses the "Navi" button during the system start-up then the Navigation SW must be given priority during the start-up sequence
Priority: P1
State: Released

Alias: SW-LICY-057
Derived from: VH-LICY-005
Area: Component
Originator (Name, Company): David Yates, Continental
Description: Active node sessions
Additional Info: The Lifecycle Management must support different levels of functionality dependent on active node sessions
Priority: P1
State: Released

NOTE: the active node sessions agreed in the requirement actually refers to the "Node Application Mode", i.e. Transport, SWL etc.

Design

As detailed previously, this statement of work covers the implementation required for the Boot Manager. The standard flow for the Boot Manager is as follows:

Register with the Node State Manager as a consumer for the Shutdown State
Register with the Node State Manager as a consumer for the Node Application Mode
Trigger the Node State Manager about the current Lifecycle State
Evaluate the current "Node Application Mode" to determine subsequent actions in the Lifecycle:
Normal & Parking mode: The Boot Manager needs to evaluate the current user, read the Last User Context and trigger systemd with the applications to be given priority in the current lifecycle.
Transport, SWL & Factory Mode: Simply signal systemd about completion, as default SW should be started in these modes.

Additional responsibilities are :

Providing an interface that can be used to trigger application start/stop within systemd
Multiple registration with the NSM to allow for the shutdown of legacy applications

Shutdown Concept

The basic idea is the following:

During the start-up phase the applications (and other components) which are interested in being involved during a shut-down phase will register at the Node State Management.
The order of registration at start-up is the inverted order of execution in the shut-down phase.
Currently we think that we do not need a specific specification of the shut-down behavior.
There will be a protocol defined between the registered consumers and the Node State Management.
There will be specific consumers which are responsible for setting a shutdown target or a list of conflicts to systemd, which enables the de-initialization of (legacy) components via systemd.

Use cases

The Boot Manager must fulfill the use cases defined in the two following linked documents:

White Box UC

Black Box UC

Key activities

Initial list of expected tasks

The following table captures the high-level tasks to be performed as part of this SOW. They are all rather interdependent.

Solution definition and final design

Work Project Name Short description Priority Work Estimation

(in days)

1.01 General Documentation Document project plan 1 5

1.02 General Documentation Document design of Boot Manager 1 15

1.03 General Community adoption Communicate with the maintainers and sub-system maintainers upstream early to engage the dialog and increase the likelihood of adoption. 1 10

1.04 General Mitigate upstream risks Incorporate upstream inputs into the final design and circle back with the contributors upstream 1 10

1.05 General Present SOW design Once finalized, present the SOW design to all relevant GENIVI expert groups to ensure transparency and approval 1 5

1.06 General Sign off Work within GENIVI process to ensure component sign off and integration 10

Sum of effort estimations:

Solution Development

Work Project Name Short description Priority Work Estimation (in days)

2.01 Linux Implementation Implement work package and deliver into GitHub for testing within existing systemd POC 1 80

Sum of effort estimations:

Coordination of upstream acceptance

Work Project Name Short description Priority Work Estimation (in days)

3.01 Linux Final delivery preparation Prepare compliance documents required for acceptance into GENIVI BIT team 1 5

3.02 Linux Delivery support Support the BIT team with the integration of the package 1 5

Sum of effort estimations:

Maintenance

Supplier will take on the responsibility to maintain the solution during the acceptance period. Once upstream, supplier will coordinate maintenance with the upstream maintainers and sub-system maintainers necessary for the respective projects.

Completion Criteria

Deliverable Project Short description IP

4.01 Design document Document that describes the architecture of the module GENIVI Copyright

4.02 Boot Mgr module Actual Linux implementation of Boot Manager in source form GPLv2

4.03 General Test application (inc. systemd unit files) that can be started via the Boot Manager GPLv2

4.04 General Manual pages to reflect implementation Misc.

4.05 General Successful integration of the module inside a GENIVI release GENIVI Compliance

SysInfraEGLifecycleConceptBootmanagementSplit

Subdomain Lifecycle Boot Management- Concept -Split

Table of content:

Introduction
The split of boot management
Why do we have that split?

Introduction

The split of boot management

Why do we have that split?

It is possible to perform a shutdown using systemd; however, it would stop and unload the components in its shutdown concept. This results in a long lead time to make them functional again in case of a cancel shutdown situation. An IVI system must be able to come back fast, without losing any context, if a reboot can be avoided for that cancelled shutdown.

Node State Management will only call registered consumers in the shutdown phase. Those consumers will drive the components into a stable state and ensure that everything has been stored which will be needed for the next start-up.

Note: The components won't be shut down (exceptions exist, like the flash filesystem). Therefore, in addition, the shutdown management concept will include/use the systemd shutdown concept as well where it makes sense, and it supports legacy/adapted components.
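As a sketch of how a legacy/adapted component could be tied into the systemd shutdown concept (the unit name and commands are examples only, not part of the specification): a Conflicts dependency on shutdown.target lets systemd stop the unit during shutdown and run its traditional de-initialization, while the components managed via the Node State Management stay loaded.

# legacy-app.service - hypothetical legacy component de-initialized via systemd
[Unit]
Description=Legacy application shut down by systemd
Conflicts=shutdown.target
Before=shutdown.target

[Service]
ExecStart=/usr/bin/legacy-app
# traditional de-initialization of the legacy component
ExecStop=/usr/bin/legacy-app --shutdown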

SysInfraEGLifecycleConceptBootmanagementShutdown

Subdomain Lifecycle Bootmanagement- Shutdown Concept

This page is under construction.

Table of content:

Introduction
About the concept
Shutdown preparation in start-up phase
Shutdown in execution
Analysis of concept

Introduction

In automotive systems the aim in a shut-down phase is to trigger only the activities that are really needed. That means a lot of standard/traditional shut-down activities are not in focus in an IVI system. If the requested start-up KPI can be achieved with a cold boot, the IVI system will simply be switched off at the end of the shut-down phase. The main activities which are needed in shut-down are the following:

log off the user
write / specify the last user context
unregister at the vehicle network
bring hardware into a safe state (drives, ...)
mute audio
switch off display(s)
store application and system persistence data

About the concept

The basic idea is the following:

During the start-up phase the applications (and other components) which are interested in being involved during a shut-down phase will register at the Node State Management.
The order of registration at start-up is the inverted order of execution in the shut-down phase.
Currently we think that we do not need a specific specification of the shut-down behavior.
There will be a protocol defined between the registered consumers and the Node State Management.
There will be specific consumers which are responsible for setting a shutdown target or a list of conflicts to systemd, which enables the de-initialization of (legacy) components via systemd.

Shutdown preparation in start-up phase

During the start-up phase, components that are interested in being notified about impending shutdowns will register with the Node State Manager.

They must do the registration with the Node State Manager before they notify systemd that they have finished initialization as the Node State Manager will then benefit from the dependency ordering provided by systemd in the startup sequence.

Shutdown in execution

To allow for the shutdown of legacy/3rd party applications that do not want to use the Node State Manager interface, the Boot Manager will be able to register with the Node State Manager as a consumer on behalf of the component in question. This will be achieved by using the ExecStartPost tag in the component's unit file.
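A possible shape of such a unit file entry (a sketch only; the registration helper shown here is hypothetical, the real interface is defined by the Boot Manager design): the ExecStartPost line lets the Boot Manager register the legacy component with the Node State Manager once the component has started.

# legacy-3rdparty.service - hypothetical legacy/3rd party application
[Service]
ExecStart=/usr/bin/legacy-3rdparty
# hypothetical helper: register this unit with the Node State Manager
# so it is handled as a shutdown consumer by the Boot Manager
ExecStartPost=/usr/bin/boot-manager-register --unit=legacy-3rdparty.service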

During the shutdown sequence the Node State Manager will call the registered consumers in the reverse order of registration to preserve any dependencies that may exist.

Analysis of concept

Shutdown activities are triggerable without unloading the components.
Legacy components can be shut down in their traditional way.
Full flexibility on where to integrate systemd based shutdown units.

SysInfraEGLifecycleConceptBootmanagementStartup

Subdomain Lifecycle Boot Management Start-up Concept

Table of content:

Introduction
Rough systemd performance analysis
The concept
Overview of the GENIVI boot phases (target clustering)
Focused.target
BootManager
Focused Application Types
Application Design Constraints
Example flow
Unfocussed.target(s)
Usage
Lazy.target
Usage
An example
Example of Subsystem Start-up Order using systemd + BootManager
Appendix
Special system units

Introduction

To understand the boot management concept based on systemd, it is necessary to understand the principles of systemd. If you already know systemd and what unit files are used for, you can skip the introduction on SysInfraEGSystemd - otherwise please take some minutes to learn more about systemd (by reading or watching the video).

Each unit will have a configuration file that can be used to define critical information about the unit that systemd will use during the start-up process.

For instance here is the configuration file (.service) for crond:

[Unit]
Description=Command Scheduler
After=syslog.target

[Service]
EnvironmentFile=/etc/sysconfig/crond
ExecStart=/usr/sbin/crond -n $CRONDARGS

[Install]
WantedBy=multi-user.target

Of interest for us at this point are the After and the WantedBy keywords. These show us that the crond must be started after syslog.target and is part of the multi-user.target

One type of unit is a target. A target is a collection of sub units that will all be started when that target is activated. They allow for easier grouping of units and control of the system dependencies. Additionally they typically mirror the standard Linux run-levels. The following diagram (Example subtarget deployment on Fedora 15) shows a typical dependency graph for the main targets. The problem with this approach for GENIVI is that we need the graphical front-end earlier and there are components in the Mandatory section that are not needed so early in the start-up. Therefore we need to change the target clustering.

Rough systemd performance analysis

Using Fedora 15 and a Virtual Machine as a baseline we analyzed the system start-up to evaluate how and what systemd was starting. systemd trace output tells us the following : Start-up finished in 2s 124ms (kernel) + 2s 331ms (initrd) + 35s 426ms (userspace) and analysis of the systemd plot shows us more high-level data: which service took how much time to initialize, and what needed to wait for it. Clearly there are a lot of components started that are not needed within an Automotive solution and the ordering of the components is not optimal for us.

By changing the unit configuration files we were able to move components to the start of the dependency graph as we would need to for PDC/RVC and also were able to remove components from the start-up.

Therefore we saw no reason to believe that an Automotive start-up could not be possible using systemd. However we identified that we need to alter the target clustering to meet our requirements and would need to use an embedded libc to reduce the initial overhead.
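For illustration, the kind of unit file change used in this analysis could look like the following (the service name and ordering are examples only): the camera service drops the default dependencies and is ordered before basic.target so that it moves to the very front of the dependency graph, as needed for PDC/RVC.

# rvc.service - illustrative early-start unit for the rear view camera
[Unit]
Description=Rear View Camera
# do not wait for the regular basic system setup
DefaultDependencies=no
Before=basic.target

[Service]
ExecStart=/usr/bin/rvc-daemon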

The concept

Overview of the GENIVI boot phases (target clustering)

Focused.target

BootManager

The focused.target will contain the GENIVI Lifecycle Boot Manager. It will be responsible for the following actions:

reading the LastUserContext (LUC) persistent information
parsing the LUC data to determine which unit(s)* to start in the current phase
requesting systemd to start the required units (based on unit name)
signalling systemd when it has requested all the units that it requires (i.e. when the Boot Mgr has finished initialization)
providing an interface that can be used to update the LUC persistent information
informing the Node State Manager about the status of the current system start-up
Option: triggering the Resource Management to update the cgroup configuration in the system

*)unit: Could be a target unit or service unit(s).
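A minimal sketch of how the Boot Manager could be wired into the focused boot phase (the unit names, paths and the mandatory.target reference are illustrative, not a defined GENIVI interface): Type=notify makes systemd wait until the Boot Manager signals readiness, i.e. until it has requested all the units it requires.

# focused.target - illustrative GENIVI boot phase target
[Unit]
Description=GENIVI focused boot phase
Requires=boot-manager.service
After=mandatory.target

# boot-manager.service - illustrative Boot Manager unit
[Unit]
Description=GENIVI Lifecycle Boot Manager

[Service]
Type=notify
# reads the LUC, requests the recorded units from systemd and only then
# signals readiness, so systemd does not advance to the next boot phase too early
ExecStart=/usr/bin/boot-manager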

Focused Application Types

To determine the focused and unfocussed applications in the system it is proposed to create the following three types of focused applications :

Audible - This specifies the current audible source within the Head Unit (i.e. Radio)
Foreground - This specifies the application that is currently in focus on the HMI (i.e. Browser)
Background - This specifies an application that is running in the background (i.e. Navigation)

It is proposed that an interface will be provided that will allow for applications to be registered to each of the focused types listed above.

This interface can be used by the HMI, applications themselves or some other controlling component to fill the LUC persistent data.

To parse the LUC persistent data optimally it is proposed that the complete unit name (i.e. navigation.target/service) is used on this interface. This will then be used by the Boot Manager on the next start-up to determine what should be started in the focused.target phase.

Application Design Constraints

The application is responsible for delaying the standard systemd initialized signal. It must only signal completion when it is completely up and running (i.e. for Navigation when address input is possible or route planning is complete) and not when initialization is complete.

If the application were to signal systemd when its initialization was complete, then systemd could continue with the unfocussed.target, potentially blocking high priority applications.

If target-units are used to control the focused boot phase: It is proposed that a "target" is created for the main automotive features (i.e. navigation.target, mediaplayer.target etc.) and this unit name is used during the registration. In this way it is easy to include all dependent components for the main application. This would provide a static cgroup configuration for each target with only focused or unfocussed settings available. Alternatively it would be possible to create a "target" file during the integration/build phase for each of the target configurations possible in the system. In this way it would be possible to have a more precise resource management split dependent on the three applications filling the three LUC focused types.

If service-units are used to control the focused boot phase: The registered application must ensure that the registered unit is configured in a way to pull in all components that it is dependent on (i.e. a top level feature unit)
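A sketch of such a top-level feature unit (all names are examples taken from the flow below): the target pulls in everything the feature needs via Requires/Wants, so registering just this unit name in the LUC is sufficient.

# navigation.target - illustrative top-level feature unit registered in the LUC
[Unit]
Description=Navigation feature
Requires=navigation.service
Wants=hmi.service gps.service tuner.service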

Example flow

1. systemd accesses the focused.target.wants directory to find required components
2. systemd starts the BootMgr as part of the focused.target
3. BootMgr reads the focused application string, i.e. navigation.target
4. BootMgr tells systemd to start navigation.target
5. systemd parses navigation.target for dependencies
6. systemd parses hmi.service, gps.service and tuner.service for dependencies
7. systemd starts HMI, GPS and Tuner in parallel
8. HMI, Tuner and GPS applications signal systemd that they are running
9. systemd starts navigation
10. Navigation runs up to the point where the route is calculated and displayed
11. Navigation signals systemd that it is running
12. BootMgr reads the background application string, i.e. nothing
13. BootMgr reads the audible application string, i.e. radio.service
14. BootMgr tells systemd to start radio.service
15. systemd parses radio.service for dependencies
16. systemd starts Radio
17. Radio signals systemd that it is running
18. BootMgr signals systemd that it is finished
19. systemd moves on to unfocussed.target

Note: Targets and services can be used. The policy of what needs to be started first (Background / Audible / Foreground) is OEM specific.

Unfocussed.target(s)

Usage

The unfocussed.target (s) will include all core applications that are not already included within the Mandatory boot phase.

This target will be used to start all applications that the user is/was not currently using. The target itself will include applications that have already been started as part of the focused.target but these will be ignored by systemd.

All components will be started in parallel where possible.

Lazy.target

Usage

The lazy.target will include background applications that can be delayed until everything else in the system has been started. Additionally the signal "FULLY_RUNNING" will allow blocked lazy activities to start (i.e. disk defragmentation).
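As an illustration of the proposed phase chaining (target names as introduced above; the exact wiring is a sketch, with the units of each phase placed in the corresponding *.target.wants directories): each later phase is ordered after the previous one, so lazy units only run once everything else has been requested.

# unfocussed.target (illustrative)
[Unit]
Description=GENIVI unfocussed boot phase
After=focused.target

# lazy.target (illustrative)
[Unit]
Description=GENIVI lazy boot phase
After=unfocussed.target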

An example

Example of Subsystem Start-up Order using systemd + BootManager

Here, using systemd, it can be seen that the Boot Manager evaluates the LUC and starts the Navigation. This in turn pulls in Positioning, Tuner and HMI.

Appendix

Special system units

basic.target - A special target unit covering early boot-up.
ctrl-alt-del.target - systemd starts this target whenever Control+Alt+Del is pressed on the console. Usually this should be aliased (symlinked) to reboot.target.
dbus.service - A special unit for the D-Bus system bus. As soon as this service is fully started up systemd will connect to it and register its service.
default.target - The default unit systemd starts at boot-up. Usually this should be aliased (symlinked) to multi-user.target or graphical.target.
display-manager.service - The display manager service. Usually this should be aliased (symlinked) to xdm.service or a similar display manager service.
emergency.target - A special target unit that starts an emergency shell on the main console. This unit is supposed to be used with the kernel command line option systemd.unit= and has otherwise little use.
graphical.target - A special target unit for setting up a graphical login screen. This pulls in multi-user.target. Units that are needed for graphical login shall add Wants dependencies for their unit to this unit (or multi-user.target) during installation.
halt.target - A special target unit for shutting down and halting the system. Applications wanting to halt the system should start this unit.
kbrequest.target - systemd starts this target whenever Alt+ArrowUp is pressed on the console. This is a good candidate to be aliased (symlinked) to rescue.target.
local-fs.target - systemd automatically adds dependencies of type After to all mount units that refer to local mount points for this target unit. In addition, systemd adds dependencies of type Wants to this target unit for those mounts listed in /etc/fstab that have the auto and comment=systemd.mount mount options set. systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $local_fs facility.
mail-transfer-agent.target - The mail transfer agent (MTA) service. Usually this should pull in all units necessary for sending/receiving mails on the local host. systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $mail-transfer-agent or $mail-transport-agent facilities, for compatibility with Debian.
multi-user.target - A special target unit for setting up a multi-user system (non-graphical). This is pulled in by graphical.target. Units that are needed for a multi-user system shall add Wants dependencies to this unit for their unit during installation.
network.target - systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $network facility.
nss-lookup.target - systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $named facility.
poweroff.target - A special target unit for shutting down and powering off the system. Applications wanting to power off the system should start this unit.
reboot.target - A special target unit for shutting down and rebooting the system. Applications wanting to reboot the system should start this unit.
remote-fs.target - Similar to local-fs.target, but for remote mount points. systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $remote_fs facility.
rescue.target - A special target unit for setting up the base system and a rescue shell.
rpcbind.target - systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $rpcbind facility.
time-sync.target - systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the $time facility.
shutdown.target - A special target unit that terminates the services on system shutdown. Services that shall be terminated on system shutdown shall add Conflicts dependencies to this unit for their service unit, which is implicitly done when DefaultDependencies=yes is set (the default). systemd automatically adds dependencies of type Conflicts to this target unit for all SysV init script service units that shall be terminated in SysV runlevels 0 or 6.
sigpwr.target - A special target that is started when systemd receives the SIGPWR process signal, which is normally sent by the kernel or UPS daemons when power fails.
sockets.target - A special target unit that sets up all service sockets. Services that can be socket-activated shall add Wants dependencies to this unit for their socket unit during installation.
swap.target - Similar to local-fs.target, but for swap partitions and swap files.
sysinit.target - A special target unit covering early boot-up scripts. systemd automatically adds dependencies of the types Wants and After for all SysV service units configured for runlevels that are not 0 to 6 to this target unit. This covers the special boot-up runlevels some distributions have, such as S or b.
syslog.target - systemd automatically adds dependencies of type After for this target unit to all SysV init script service units with an LSB header referring to the syslog facility.
systemd-initctl.service - This provides compatibility with the SysV /dev/initctl file system FIFO for communication with the init system. This is a socket-activated service, see systemd-initctl.socket.
systemd-initctl.socket - Socket activation unit for systemd-initctl.service.
systemd-logger.service - This is internally used by systemd to provide syslog logging to the processes it maintains. This is a socket-activated service, see systemd-logger.socket.
systemd-logger.socket - Socket activation unit for systemd-logger.service. systemd will automatically add dependencies of types Requires and After to all units that have been configured for stdout or stderr to be connected to syslog or the kernel log buffer.
systemd-shutdownd.service - This is internally used by shutdown(8) to implement delayed shutdowns. This is a socket-activated service, see systemd-shutdownd.socket.
systemd-shutdownd.socket - Socket activation unit for systemd-shutdownd.service.
umount.target - A special target unit that unmounts all mount and automount points on system shutdown. Mounts that shall be unmounted on system shutdown shall add Conflicts dependencies to this unit for their mount unit, which is implicitly done when DefaultDependencies=yes is set (the default).

SysInfraEGLifecycleConceptCgroupsInput

cgroups input collection

This page collects various inputs concerning cgroups. If you want to add something create a new heading with the title of your post and put your name below. See genivi-dev mail for details of the origin.

Use Cases for system behavior

How do you want the system to behave? Come up with use cases (see also https://collab.genivi.org/issues/browse/GT-555).

put your use case here

[Your Name and Date here]

cgroups deployment and experience

Share your experience with cgroups configuration of systems and how you would like deployed GENIVI Linux systems to look like.

The following figure shows a node cgroup hierarchy for CPU-load as a proposal, which will be used in the F2F in Ismaning Nov 2011
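As one concrete example of the kind of input collected here (a sketch only; the service name and values are invented, the directive names are those documented in the systemd cgroup settings linked below): per-service CPU and memory limits can be expressed directly in the unit file.

# navigation.service - illustrative CPU/memory limits via systemd cgroup settings
[Service]
ExecStart=/usr/bin/navigation
CPUShares=512
MemoryLimit=256M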

put your examples and experience descriptions here

[Your Name and Date here]

Multicore behavior of cgroups

We especially would like to get information or involvement in respect to multicore systems and cgroup behavior - if you have experience or want to investigate this topic please contact Torsten Hildebrand. put your experience here

[Your Name and Date here]

Collection of links

Lennart's general behavior recommendations on cgroups [Christian Muck 2011-08-26]
man page for systemd cgroup settings [Christian Muck 2011-08-26]
systemd cgroups limit patch

SysInfraEGLifecycleConceptMultiNode

Subdomain Lifecycle Multi-Node Concept

This page is in review state.

Overview of a multi-node deployment

The following figure shows an abstract 3 node deployment with some aspects from Lifecycle.

Description

The multi-node concept for Lifecycle is based on the idea that each node has its own commander, so each node is first of all independent (like a standalone node). That means each node has its own state-machine which is driven by all incoming internal and external event/messages. Those messages originally are generated by the ECUs connected to the vehicle bus, by ECU owned hard- and software and by the ECU user. There is no master - slave relation between nodes and no states are somehow mirrored on another node.

For GENIVI applications, Lifecycle will offer platform-defined states, sessions, events and messages which can also be used for inter-node communication (INC). Because of OEM-specific business logic, the raw (OEM defined) states, sessions, events and messages are also provided to each node.

Common resources like the power supply have their own commander (realized as a plug-in), controlled by the local state machine. If another node has a request like shutdown or reboot, it will send an event/message to the power supply commander node, and this node will decide, based on its internal state/session and the request priority, whether the request can be satisfied.

Given that the Linux operating system is not used on all nodes (because they have low-performance CPUs, no MMU support, ...), the requested interface of the Lifecycle management has to be fulfilled there as well. The realization of this interface will be a message catalog based on the INC.

Rationale

The component implementation of Lifecycle shall be totally independent from the deployment decision (a system/ECU based on one or more nodes). Local events shall be handled locally. Changes or extensions of typically local events should not require updates/changes on other nodes. Additionally, a decentralized management reduces the complexity a lot. A state-machine owner only needs to take care of the local events and the defined events for shared resources (if they exist at this level).

SysInfraEGLifecycleConceptRefinement

Subdomain Lifecycle Concept Refinement

The orange blocks are the first-level clustering of the subdomain Lifecycle. The blue blocks are the logical components of the subdomain Lifecycle.

This figure defines the refinement of the first level clustering and the logical components.

SysInfraEGLifecycleConceptSystemBorder

Subdomain Lifecycle System Border / Context

Lifecycle as a black box from application point of view

The following figure shows the interfaces of Lifecycle to the rest of the system: provided interfaces, which can be used by applications or others, and requested interfaces, which have to be either fulfilled or stubbed by the product development. Additionally, Lifecycle will be dependent on the subdomains Persistency and Log&Trace.

Comments:
[Markus Boje 2011-04-14: We did plan for specific OEM lifecycle / node states, is this missing as plug-in or is this meant by power state?]
[Torsten Hildebrand 2011-04-26: Those states are visible on the interface and inside the configuration / state machines a.s.o.]
[Christian Muck 2011-04-21: The requirement was about mapping between specific OEM lifecycle/node states and GENIVI node states - I think this is done by "OEM and HW specific Power event forwarder and abstraction" - more details can be found on the next page at Node state management, right?]
[Torsten Hildebrand 2011-04-26: That is correct, part of the power management, which is manifested by the power event forwarder, ...]
[Torsten Hildebrand 2011-10-21: Added the interface extension for setting the last user context]

SysInfraEGLifecycleDesignStructDef

Subsystem Lifecycle Design Structure Definition

Subsystem Lifecycle Structure Scope (current)
Node State Management Structure Definition
Boot Management Structure Definition

Subsystem Lifecycle Structure Scope (current)

The following figure shows the current scope of the structure definition. This will be updated next with the resource management.

Node State Management Structure Definition

The following figure shows the structure definition including the proposals for:

the refinement
the GENIVI compliance stereotype
the component license
the GENIVI priority
the interface relations
the dependencies

Boot Management Structure Definition

The following figure shows the structure definition including the proposals for:

the refinement
the GENIVI compliance stereotype
the component license
the GENIVI priority
the interface relations
the dependencies

In the above diagram the "Application Responsibility" components are placeholder components that are not part of Boot Management but rather are to be provided by each Application in the system. Every Application that is to be started by systemd will need to notify/signal systemd when it has completed its initialization. This signal will then be used by systemd to control the dependency synchronization during the start-up sequence.

They are included in the Boot Management compliancy diagrams as we want to ensure that this application requirement/design constraint is fulfilled as needed.

SysInfraEGLifecycleDesignWhiteBoxUcRealization

Subdomain Lifecycle Design - White Box Use Case Realization

Scope of this page
Boot Management
Resource Management
System Health Monitor
Thermal Management
Node State Management
Questions and Answers
Application blocking system shutdown
Cancel ongoing system shutdown
Consumer blocking system shutdown
System Start-up using Last user mode
System Start-up with User Selection
System Recovery from an application failure

Scope of this page

The scope of this work package is the definition of White Box Use Cases for the Subdomain Lifecycle.

Below you will find a table for each of the Logical Components within the Subdomain Lifecycle. Each table documents the currently available Use Cases for that component and includes a status column. The absolute latest version of the use cases is in Enterprise Architect, but after major changes the attached exported version will be updated.

Below the tables is a space for people to add questions for any use cases.

Boot Management

Alias | Originator (Name, Company) | Use Case | State | Status

UC_LCBM_001 | David Yates, Continental | System Start-up - Last User Mode | Released | David Yates, Continental - Reviewed and released
UC_LCBM_002 | David Yates, Continental | System Start-up - User Selection | Released | David Yates, Continental - Reviewed and released
UC_LCBM_003 | David Yates, Continental | System Shutdown - Application blocking shutdown | Released | David Yates, Continental - Reviewed and released
UC_LCBM_004 | David Yates, Continental | System Shutdown - Consumer blocking shutdown | Released | David Yates, Continental - Reviewed and released
UC_LCBM_005 | David Yates, Continental | System Shutdown - Cancel ongoing shutdown | Released | David Yates, Continental - Reviewed and released
UC_LCBM_006 | David Yates, Continental | System Restart - Synched reboot into SWL mode | Released | David Yates, Continental - Reviewed and released
UC_LCBM_007 | David Yates, Continental | System Shutdown - Cancel shutdown request arrives too late to cancel the shutdown sequence | Released | David Yates, Continental - Reviewed and released
UC_LCBM_008 | David Yates, Continental | System Shutdown - Fast shutdown requested during Coding Phase | Released | David Yates, Continental - Reviewed and released
UC_LCBM_009 | David Yates, Continental | System Shutdown - Normal Shutdown | Released | David Yates, Continental - Reviewed and released
UC_LCBM_010 | David Yates, Continental | System Shutdown - Incoming call forces shutdown cancellation | Released | David Yates, Continental - Reviewed and released
UC_LCBM_011 | David Yates, Continental | System Start-up - Over temperature issue causes shutdown during start-up | Released | David Yates, Continental - Reviewed and released
UC_LCBM_012 | David Yates, Continental | System Start-up - Hangup during start-up causes system reboot | Released | David Yates, Continental - Reviewed and released

Resource Management

Alias | Originator (Name, Company) | Use Case | State | Comment

UC_LCRM_001 | David Yates, Continental | Partition System Resources | Released | David Yates, Continental - Reviewed and released
UC_LCRM_002 | David Yates, Continental | Application requests additional resources | Released | David Yates, Continental - Reviewed and released

System Health Monitor

Alias | Originator (Name, Company) | Use Case | State | Comment

UC_LCSH_001 | David Yates, Continental | System recovery from application failure | Released | David Yates, Continental - Reviewed and released
UC_LCSH_002 | David Yates, Continental | System recovery from failed filesystem in system startup | Released | David Yates, Continental - Reviewed and released
UC_LCSH_003 | David Yates, Continental | Software update process and the interaction with Lifecycle | Released | David Yates, Continental - Reviewed and released

Thermal Management

Alias | Originator (Name, Company) | Use Case | State | Comment

UC_LCTM_001 | David Yates, Continental | Thermal management requires system shutdown | Released | David Yates, Continental - Reviewed and released

Node State Management

Alias | Originator (Name, Company) | Use Case | State | Comment

UC_LCNM_001 | David Yates, Continental | Session state change requested | Released | David Yates, Continental - Reviewed and released

Questions and Answers

Application blocking system shutdown

Question Christian Muck, 2011-05-26: I think the node state manager has the possibility to cancel the shutdown sequence before the synchronous call (response?) before "turn off power" is executed. Is this "validate shutdown allowed"?

Question Christian Muck, 2011-05-26: Is "persist data" from consumers to persistency synchronous or asynchronous?

Answer David Yates, 2011-05-26: I would say that the interface has to be synchronous as the consumer must know when the data has actually been updated before proceeding.

Cancel ongoing system shutdown

Question Christian Muck, 2011-05-26: Are the responses "state change notification (accept)" and "state change accept (shutdown)" basically the same after the call "state change request (shutdown)"?

Question Christian Muck, 2011-05-26: Why is the call "persist data" from a consumer to persistency in this sequence chart? Can this be done only by special consumers or is this normally the task of the node state manager?

Consumer blocking system shutdown

Question Christian Muck, 2011-05-26: I think the node state manager has to validate the "point of no return" before "turn off power".

Answer David Yates, 2011-05-26: I do not think that that is relevant for this use case. The Lifecycle only cares about the "point of no return" in the cancel shutdown use case but you are right that this is not shown in that diagram.


Question Christian Muck, 2011-05-26: "state_change_request(shutdown)" is called from node state manager to persistency. When the persistency will be flushed. Should we adopt this part or why is it here only "persist data"?

Answer David Yates, 2011-05-26: You are right this is inconsistent to the other use cases. The handling should be the same

Question Christian Muck, 2011-05-26: Is "persist data" from consumers to persistency synchronous or asynchronous? - Same question at App Blocking Shutdown

Answer David Yates, 2011-05-26: From my perspective it has to be synchronous as the consumers must know when the data has actually been written before proceeding.

System Start-up using Last user mode

Question Christian Muck, 2011-05-12: Apps signal systemd that they are initialized. We should explain the interfaces of systemd on SysInfraEGSystemd. Did Lennart say something about interfaces of systemd?

Question Christian Muck, 2011-05-26: I added information about D-Bus interfaces, methods and properties of systemd to SysInfraEGSystemd. Man pages and "systemd for developers" give more information about communicating with systemd.

Question Christian Muck, 2011-05-12: What is exactly the difference between non-active and lazy apps?

Question Christian Muck, 2011-05-12: When a non-active app gets the signal from systemd to initialize, should it "run to a pre-working state", similar to the active app "run to working state"?

Answer David Yates, 2011-05-16: Yes, we would envisage that a non-active app will run to a "quiet" state whereby it can start quickly if needed but does not take up too many resources. This was the reason for checking with Lennart whether it was possible to easily pass a parameter via systemd to the component that could be used to tell it which state to initialize in. As this does not seem to be that easy, it will be up to the Application to read the Last User Mode during initialization to determine its running state.

System Start-up with User Selection

Question Christian Muck, 2011-05-12: At SystemInfraEGLifecycleDesignWBLastUserMode there is always a response from an app to systemd 'initialized'. Why did we here give no response 'initialized' to systemd?

Answer David Yates, 2011-05-16: Correct, the component should always send a signal to systemd, that was just an oversight from my side. I will update this in the future

System Recovery from an application failure

Question Manfred Bathelt, 2011-05-13: Which component handles dying applications?

SysInfraEGLifecycleFABlackBoxUcRealization

Subsystem Lifecycle FA Black Box Use Case Realization

Lifecycle Management Black Box export from Enterprise Architect


For the latest architecture regarding the black box view of the Lifecycle Management please check out the Infrastructure branch of the Enterprise Architect model. However, here is an export of the last baseline snapshot from 12th July 2011.

SysInfraEGLifecycleNSMData

Subdomain Lifecycle - Node State Manager

Component Overview

The node state management is the central repository for information regarding the states/sessions inside the node. It collates information from multiple sources and uses this to determine the current states (different states exist for different purposes). This information will be delivered to registered consumers or can be requested via the provided interface. All the raw data gathered by the node state management is also available on request to interested consumers.

The node state management also provides the shutdown management, so one part of the information which is provided is the shutdown request notification event/message to the consumers. The boot management only boots the node and has to stop the boot activity in case of a high priority shutdown request, which is initiated by the node state management. That means boot management is just a consumer of the node state management. The node state management is the last/highest level of escalation on the node, therefore it will command the reset and supply control logic. Additionally, if the system is a multi-node system, the node state management includes a slave (realized as a consumer) which knows the specifics of this configuration and knows which events/messages need to be transferred to other node(s).

Node State Information

The following table details the data that will be used/controlled by the Node State Manager.

Data Name -> name of the data
Data Values -> values that the data can have
Source -> this is the person who will change this data item to this particular value of the data
NSM State Relevant -> this defines whether the NSM will take into account this particular value of the data when evaluating node state changes
NSM Trigger Event -> if the data item is owned by the NSM then this is the interface that will be used to update the value of the data
Distributed -> this specifies whether the Data Item will be distributed by the NSM to interested consumers
User, Seat or Node -> this determines whether the data item relates to the Node, a particular Seat/View or to a particular User
Comment -> general comments about the data value

Data Data Values Source NSM NSM Trigger Distributed Data User, Comment Name State Event consumer Seat Relevant or Node

Node Boot Mode | SWL, Application, Test Software (TSW) | Node State Manager | Yes | Boot mode change (only performed by Node State Manager, triggered by applications) | Yes | Bootloader, NSM and some other Lifecycle components | Node | Recommendation is that session state handling should be used and this is the key data used by applications. Boot Mode is only used by specific enabled applications.

Constraint here is that persistency must be available in early boot phase.

Used by the boot loader to decide which kernel image to start in the system. Maybe not needed by the NSM as each Kernel Image will have its own state machine.

The question is whether we include the current boot mode in the information distributed by the NSM or whether this should always be read from the Persistency. The advantage of the NSM approach is that all consumers would get the current boot mode and not the currently stored boot mode.

Node Application Mode | Parking, Factory, Transport, SWL | Node State Manager (triggered by Diagnostics, SWL or SHM) | Yes | Node Application mode change I/F | Yes | NSM, BMGR and other applications | Node | The NSM will support a selection of "application modes" which when selected can result in different levels of application functionality and state handling.

i.e. Transport mode - only PDC and RVC applications are required, all other functionality is disabled

Node State | Start-up | Default value | Yes | NSM initialization | Yes | NSM, SHM | Node | Default value; value taken until Fully Running automatically sets this value.

Node State | Base Running | Boot Manager | Yes | Node State change I/F | Yes | NSM, SHM and maybe some Lifecycle consumers | Node | Base Running means all mandatory components have been started and the Boot Manager has just been called for initialization.

Node State | Last User Context (LUC) Running | Boot Manager | Yes | Node State change I/F | Yes | NSM, SHM and maybe some Lifecycle consumers | Node | LUC Running will be entered when the Boot Manager has started all components needed within the focused boot phase.

Node State | Fully Running | Boot Manager | Yes | Node State change I/F | Yes | NSM, SHM and some Lifecycle consumers | Node | Fully Running means all requested applications are active but some operational functionality is disabled (i.e. defragmentation).

Node State | Fully Operational | Boot Manager | Yes | Node State change I/F | Yes | NSM, SHM and maybe some Lifecycle consumers | Node | Fully Operational means the system is completely up and running.

Node State | Degraded Power | NSM (via Supply Mgmt) | Yes | Node State change I/F | Yes | SHM and some Lifecycle consumers | Node | This state will be active when the NSM receives a "Poor" state from the Supply Management but shutdown is not currently possible.

Node State | Shutdown Delay | Node State Manager | Yes | Clamp change/Network activity | No | | Node | A shutdown due to Clamp Change/Network activity has been initiated. The Head Unit will delay for a configurable (OEM specific) period of time before initiating the shutdown. This allows for temporary stop/start situations, like a garage stop. It also allows a quick return to a working scenario if the clamp is changed back.

Node State | Shutting Down | Node State Manager | Yes | Clamp change timeout, Diagnostic request, SW request, System Health Monitor, Thermal/Supply Mgmt | Yes | All shutdown consumers (incl. BMGR, SHM, ...) | Node | System shutdown has been received or a clamp change timeout from state "Shutdown Delay" has occurred.

Node State | Fast Shutdown | Node State Manager | Yes | Clamp change, Diagnostic request, SW request, System Health Monitor, Thermal Mgmt, Supply Mgmt | Yes | All fast shutdown consumers (limited) | Node | System is in the process of performing a fast shutdown. This would normally be called only on a Diagnosis request when it is known that there is no persistent data to be written. This would result in no consumers being called in the shutdown sequence.

Supply State | Good, Poor, Bad | Supply Manager | Yes | Generic HW State I/F | No | | Node | Good = Nothing to do, Poor = Shutdown if possible, Bad = Shutdown immediately.

Thermal State | Good, Poor, Bad | Thermal Manager | Yes | Generic HW State I/F | No | | Node | Good = Nothing to do, Poor = Shutdown if possible, Bad = Shutdown immediately.

Node Session State | Heating, Ventilation, AirCon (HEVAC) Active | Application | Yes | Session State change I/F | Yes | NSM and some applications | Seat | This is used by certain OEMs and will also need to be a wake-up reason, as it defines that we start to a point where we can control the HEVAC.

Node Session State | Rear View Camera (RVC) Active | RVC | Yes | Session State change I/F | Yes | NSM and some applications | Seat | This is a product decision but it is likely that RVC active will overrule a Poor and non-critical failure.

Node Session State | Park Distance Control (PDC) Active | PDC | Yes | Session State change I/F | Yes | NSM and some applications | Seat | This is a product decision but it is likely that PDC active will overrule a Poor and non-critical failure.

Node Session State | HMI Active | HMI | Yes | Session State change I/F | Yes | NSM and some applications | Seat | This is an open session state that can be used differently for different products.

Traditionally it is expected that this session state will be set by the HMI when they are completely running and all graphics have been rendered on the appropriate layer.

This could for instance be used to determine when to enable the display or to switch from a Splashscreen to the real HMI to ensure that the user does not see the HMI before it is completely ready.

Node Session State | Network Passive active | Network Manager | Yes | Session State change I/F | No | NSM and some applications | Node | This state will be true when there is Network Activity but the Head Unit is in a user-perceived off state and therefore is not directly using the Network.

Node Session State | Network active | Network Manager | Yes | Session State change I/F | No | NSM and some applications | Node | This state will be true when there is Network Activity and the Head Unit is in a user-perceived on state and therefore is directly reliant on the Network.

Node Session State | Phone active | Phone Application | Yes | Session State change I/F | Yes | NSM and some applications | Seat | Indicates that a phone call is in progress; this would normally cause the NSM to delay the system shutdown.

Node Session State | Permanent / Entertainment Mode active | NSM | Yes | Session State change I/F | Yes | NSM and some applications | Node | Permanent/Entertainment mode is normally active when the user has started the target via the Power On button and the clamp state is not active. This mode would normally allow the target to run for a configurable period of time before an automatic shutdown occurs.

Node Session State | Software Loading (SWL)/Flash Mode active | SWL/Diagnosis | Yes | Session State change I/F | Yes | NSM, SHM, SRM, SWL appl. and some applications | Node | When SWL is in progress we would need to handle reboot requests and recovery requests differently.

Node Session State | Diagnostics Active | Diagnostics | No | Session State change I/F | Yes | NSM, SHM, SWL appl. and some applications | Node |

Node Session State | Additional Product Session States | Product Extensions | Yes | Session State change I/F | Yes | NSM and some applications | Seat | Predefined product states with no defined GENIVI meaning.

Wake-up Reason | Unknown | NSM | No | | Yes | NSM, BMGR, Error Mgmt, DIAG, SHM and maybe some applications | Node | This is the default value and would be used as a catch-all for unhandled exceptions/restarts in the system.

Wake-up Reason | First Switch To Power (FSTP) | NSM | No | | Yes | NSM, BMGR, DIAG, SHM and maybe some applications | Node | This would be the state when the user performs a first switch to power.

Wake-up Reason | CAN Activity | Power Event Controller | No | Power Event I/F | Yes | NSM, BMGR and some applications | Node | This is the normal wake-up reason and would initiate a standard system start-up.

Wake-up Reason | Software | Power Event Controller | No | Power Event I/F | Yes | NSM, BMGR and some applications | Node | A system restart request via software (SWL, Diagnosis, SHM etc.) would result in a "Software" wake-up reason. The NSM would handle it the same as a normal "CAN activity" reason.

Wake-up Reason | Eject | NSM (Front Panel) | Yes | | Yes | NSM, BMGR and some applications | Node | An "eject" wake-up reason would be used by the NSM as the typical use case is that the target is not completely run up. Typically we would run up enough to allow for the eject to be handled. We would then shut down again when the media has been removed completely from the drive.

Wake-up Reason | Media Insertion | NSM (VuC/PIC) | No | | Yes | NSM, BMGR and some applications | Node | Media insertion is not relevant to the NSM state machine as it does not alter the Last User Mode. From the NSM point of view a "Media Insertion" would trigger the same handling as the "ECU Power On button".

Wake-up Reason | ECU Power On button | NSM (Front Panel) | Yes | | Yes | NSM, BMGR and some applications | Node | This wake-up reason would trigger the change of Node Session State to Entertainment Mode.

Wake-up Reason | HEVAC | NSM (Front Panel) | Yes | | Yes | NSM, BMGR and some applications | Node | Input via the HEVAC can wake up the Head Unit (this is OEM optional) and would only activate the HEVAC session state with a reduced set of functionality.

Wake-up Reason | Phone | NSM (via Network) | Yes | | Yes | NSM, BMGR and some applications | Node | The in-car phone can wake up the Head Unit.

Wake-up Reason | Unexpected Reset | NSM | Yes | | Yes | NSM, BMGR, Error Mgmt and some applications | Node | Set when the NSM validates the shutdown reason from the previous lifecycle.

Shutdown Reason | Thermal Poor | Thermal Manager | Yes | Updated by NSM at the point of triggering the shutdown | No | NSM | Node |

Shutdown Reason | Thermal Bad | Thermal Manager | Yes | Updated by NSM at the point of triggering the shutdown | No | NSM | Node | Possible that Thermal Bad will trigger an immediate shutdown which will bypass Persistency storage.

Shutdown Reason | Supply State Poor | Supply Manager | Yes | Updated by NSM at the point of triggering the shutdown | No | NSM | Node |

Shutdown Reason | Supply State Bad | Supply Manager | Yes | Updated by NSM at the point of triggering the shutdown | No | NSM | Node | Possible that Supply Bad will trigger an immediate shutdown which will bypass Persistency storage.

Shutdown Reason | Normal (network) | NSM (based on clamp state) | Yes | Updated by NSM at the point of triggering the shutdown | No | NSM | Node | Normal shutdown procedure when the clamp state change indicator is received.

Restart Reason | Application Failure | NSM | Yes | Updated by NSM at the point of triggering a system restart | Yes | NSM, SHM, BMGR and some applications | Node | Potentially a restart due to Application failure could result in a different start-up procedure depending on escalation policies (for instance we could perform an automatic rollback to a previous version of the SW).

Restart Reason | SW Loading | NSM | No | Updated by NSM at the point of triggering a system restart | Yes | NSM, BMGR and some applications | Node |

Restart Reason | Diagnostics | NSM | No | Updated by NSM at the point of triggering a system restart | Yes | NSM, BMGR and some applications | Node |

Restart Reason | User | NSM | Yes | Updated by NSM at the point of triggering a system restart | Yes | NSM, BMGR and some applications | Node | Potentially a restart triggered by the "User" could result in a different start-up procedure as we could trigger an automatic emergency SWL mode due to problems in the system.
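The node states and wake-up reasons listed in the table above can be summarised as plain enumerations. The C sketch below is purely illustrative; the identifiers are assumptions and do not correspond to a published GENIVI header.

/* Illustrative enumerations derived from the Node State Manager data table. */
typedef enum {
    NODE_STATE_STARTUP,
    NODE_STATE_BASE_RUNNING,
    NODE_STATE_LUC_RUNNING,        /* Last User Context running */
    NODE_STATE_FULLY_RUNNING,
    NODE_STATE_FULLY_OPERATIONAL,
    NODE_STATE_DEGRADED_POWER,
    NODE_STATE_SHUTDOWN_DELAY,
    NODE_STATE_SHUTTING_DOWN,
    NODE_STATE_FAST_SHUTDOWN
} node_state_t;

typedef enum {
    WAKEUP_UNKNOWN,
    WAKEUP_FIRST_SWITCH_TO_POWER,
    WAKEUP_CAN_ACTIVITY,
    WAKEUP_SOFTWARE,
    WAKEUP_EJECT,
    WAKEUP_MEDIA_INSERTION,
    WAKEUP_ECU_POWER_ON_BUTTON,
    WAKEUP_HEVAC,
    WAKEUP_PHONE,
    WAKEUP_UNEXPECTED_RESET
} wakeup_reason_t;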

BMW Lifecycle State Chart

To validate that the GENIVI lifecycle concept in general and the node state management in particular work in real-world scenarios, BMW published its lifecycle state chart.

SysInfraEGLifecycleRAUsecases

Lifecycle vehicle use case definition

This page is under construction.

The following diagram shows the traceability from Vehicle Requirements to Black Box Use cases.

SysInfraEGLifecycleRequirements

Released Subdomain Lifecycle Requirement Definition

Legal note: The rules for contributing confidential information are defined in Section 17.1 of the GENIVI Bylaws. Unless information submitted by members to this page is submitted according to those confidentiality rules, all information on this page will be considered non-confidential.

Scope of this page
Scope of Lifecycle requirements
Lifecycle Owned Requirements - Vehicle (1.1.1)
Lifecycle Owned Requirements - SW Platform (1.2.1)
Lifecycle Owned Requirements - Mechanism (1.3.1)
Lifecycle Owned Requirements - Hardware (1.4.1)
Lifecycle Owned Constraints (1.1.2)
Lifecycle Requirements to be satisfied outside of Lifecycle Subdomain (2.1.1)
Lifecycle Constraints to be satisfied outside of Lifecycle Subdomain (2.1.2)

Scope of this page

The scope of this work package is the definition of all requirements and constraints for the Subdomain Lifecycle, and of all requirements and prerequisites that the Subdomain Lifecycle places on other parties.

The structure, the terms and the lifecycle of the requirements and constraints are described here: (SysInfraEGRequirementsStructDef)

Scope of Lifecycle requirements

Lifecycle requirements describe the rules for how the ECU is supposed to react to different internal and external conditions, including supply voltage level, temperature, vehicle network status, application status, etc. Such requirements are also referred to as Life Cycle, Life Phases, Power Management, etc. (source of scope)

Lifecycle Owned Requirements - Vehicle (1.1.1)

Alias | Area | Originator (Name, Company) | Description | Additional Info | State | Comment | Assigned To

VH-LICY-001 Component Mark Lifecycle In shutdown situations, Lifecycle Released David Yates, NSM Hatle, Management Management must timeout after a Continental - Wind River must override defined time duration and proceed Exception blocking with shutdown even if one or more needs to added consumers on consumers or devices fail to for SWL shutdown respond to a state change (especially in request. Special handling must be scenario where configurable for the Software HU is the Loading consumer to prevent master of the potential system instabilities. SWL Process) Christian Muck, BMW - Okay. Original Req. In shutdown situations, PSM must timeout after a defined time duration and proceed with shutdown even if one or more consumers or devices fail to respond to a state change request

VH-LICY-002 Component Mark Supply Released Munich Supply Hatle, Management Workshop: Wind River must be able Hardware to route power events hardware are part of power events Supply Management Christian Muck, BMW - Not part of GENIVI lifecycle/PSM requirement. Original Req: Assignment not PSM and should be renamed to Supply

David Yates, Continental: Agreed in Telco on 19th May 2011

VH-LICY-003 Component Christian The system Released David Yates, Boot Muck, should detect Continental: Mgmt BMW the wake up Agreed in Telco reason to on 19th May prevent or 2011 modify start-up if functionality is not needed. VH-LICY-004 Component Christian The system Differentiation between normal Released David Yates, NSM Muck, functionality to shutdown (normal persistence) Continental : BMW store and fast shutdown, e.g., from Additional necessary diagnostic. In a fast shutdown information is data is situation we will not do normal needed here depending on persistence. Christian Muck, the shutdown BMW : We mode. This will be a product decision on differentiate consumers will be called. between normal shutdown (normal persistence) and fast shutdown, e.g. from diagnostic. In a fast shutdown situation we will do not normal persistence.

David Yates, Continental: Agreed in Telco on 19th May 2011

VH-LICY-005 Component Christian In diagnostic Note: Incoming messages over Released David Yates, NSM Muck, or flash bus will be ignored, e.g. phone Continental : BMW sessions the call, button pressed. Node internal Does this lifecycle is exceptions from thermal override other controlled by management should not be exceptions, i.e. diagnostic. ignored to save the ECU. Phone call taking place, thermal management etc Christian Muck, BMW : Incoming messages over bus will be ignored, e.g. phone call, button pressed. Node internal exceptions from thermal management should not be ignored to save the ECU.

David Yates, Continental: Agreed in Telco on 19th May 2011

VH-LICY-006 Component Fabien Before a If diagnosis starts a reset/reboot Released David Yates, NSM Hernandez managed procedure, system can give a little Continental: , PSA reset/reboot, time to the user to finish actions. Agreed in Telco the system This time will be a product on 19th May provides a configurable value. 2011 degraded mode in order to maintain main user services.

VH-LICY-007 Component Fabien The system Rational: Released David Yates, NSM Hernandez has the ability Ongoing phone call and the driver Continental: , PSA to delay the asks for ignition off (key) Agreed in Telco shutdown on 19th May depending on 2011 application state. VH-LICY-008 Component Fabien The system Rational: Released Markus NSM Hernandez has the ability When a call is still ongoing a 2011-04-07 , PSA to manage a defined period of time (x minutes) Changed timeout to after the driver asked for ignition wording of shutdown off (key) the call is switch to the rational depending on phone and the system is shut David Yates, application down. Continental: state Agreed in Telco . on 19th May 2011

VH-LICY-009 Component Christian The system Ignition on -> system on. Low Released David Yates, NSM Muck, has to follow battery -> system shutdown. Continental: BMW the global car Agreed in Telco operation on 19th May modes. 2011

VH-LICY-010 Component Christian There must be Full functionality for all user Released David Yates, NSM Muck, a normal interactions with ignition on. Continental: BMW operation Agreed in Telco mode. on 19th May 2011

VH-LICY-011 Component David The system Released David Yates, Boot Yates, can be woken Added more Mgmt Continental up by the user explicit pressing the requirements Power On from Christians button on the original req. ECU Christian Muck, The system can be woken up from network/bus signals.

VH-LICY-012 Component David The system Released David Yates, Boot Yates, can be woken Added more Mgmt Continental up by the user explicit inserting a requirements CD/DVD in the from Christians drive original req. Christian Muck, The system can be woken up from network/bus signals.

Torsten: Add DVD eject button

VH-LICY-013 Component David The system Released David Yates, Boot Yates, can be woken Added following Mgmt Continental up by the user review in F2F pressing the meeting CD/DVD eject button

VH-LICY-014 Component David The system Released David Yates, Boot Yates, can be woken Added more Mgmt Continental up by an explicit active bus requirements signal from Christians original req. Christian Muck, The system can be woken up from network/bus signals.

Danilo: Add a requirement for internal wake up signal (i.e. internal phone) VH-LICY-015 Component David The system Released David Yates, NSM Yates, can be Added more Continental shutdown by a explicit specific Clamp requirements State from Christians notification original req. Christian Muck, The system can be woken up from network/bus signals.

VH-LICY-016 Component David The system Released David Yates, NSM Yates, can be Added more Continental shutdown by explicit the ECU after requirements a product from Christians defined original req. timeout when Christian Muck, there is no The system can active Clamp be woken up State from network/bus signals.

VH-LICY-017 Component David The system Specific diagnostic authentication Released David Yates, NSM Yates, can be and protocols will need to be Added more Continental instructed to handled before this Diagnosis job explicit shutdown by is accepted by the ECU requirements an external from Christians tester original req. Christian Muck, The system can be woken up from network/bus signals.

VH-LICY-018 Component Christian There must be Reduced functionality (i.e. internal Released David Yates, Lifecycle Muck, a parking HU functionality only) during Continental : BMW mode. ignition off but key is present more information needed, is this mode used when HU is turned on by user but clamp state not active and therefore HU only stays on for 30mins? Simon Barnett, Jaguar: Do not agree with reduced functionality, HU wake-up should wake-up other car components (ECU) for High premium system Danilo, Magneti : More detailed requirement needed to define reduced functionality and flexibility based on OEM requirements Christian Muck, BMW : Okay can be rejected. VH-LICY-019 Component Christian There must be No function for the customer (e.g. Released Torsten Lifecycle Muck, an operation navigation, entertainment etc.) Hildebrand: BMW mode for except safety functions (e.g. PDC) Should be split transporting during key presence and ignition into two the cars. on. Different start-up and different requirements functionality to building the cars Christian Muck, possible. Only used in factories BMW : Split into and service. two requirements from original: "There must be an operation mode for building and transporting the cars."

VH-LICY-020 Component Christian There must be Reduced functionality (i.e. Released Christian Muck, Lifecycle Muck, an operation Diagnosis, PDC, etc.) during key BMW : Split into BMW mode for presence and ignition on. Different two building the start-up and different functionality requirements cars in to transporting the cars possible. from original: factories. Only used in factories. "There must be an operation mode for building and transporting the cars."

VH-LICY-021 Component Christian There must be This mode will be accessible Released David Yates, Lifecycle Muck, an operation through a protected interface that Continental : is BMW mode for can only be accessed via there no flashing authorized applications. customer based devices. updates - has this been agreed with Automotive group? Other OEMs require full system update by end user Christian Muck, BMW : I did not found any Automotive EG requirement for flashing devices. Torsten Hildebrand: Reword still needed to be allow for other applications to trigger the mode. Maybe we need a protected interface that can only be called by certain authenticated consumers

VH-LICY-022 Component Christian There must be Offer interfaces for thermal Released Thermal Muck, a thermal management and perform proper BMW management. counter measures if it's too cold/warm.

VH-LICY-023 Component Fabien Following Global requirement related to Released David Yates, Boot Hernandez power up, the availability of rear view camera Continental : Mgmt , PSA system is below. We can think also about reword to state gradually first picture availability, audio, multi staged functional video, ... (time constraints) boot process to prioritize component start-up order All : Agreed as is VH-LICY-024 Component David The lifecycle Released NHM Yates, must provide Continental watchdog functionality for system applications.

VH-LICY-025 Component Christian The ECU must When this occurs it is required Released Torsten: We NSM Muck, handle an that some critical persistency data accept this and BMW immediate is stored using the remaining will drive sub power off. battery charge requirements to other domains

Maybe more understandable as

"The ECU must handle an emergency power off"

VH-LICY-026 Component Christian The lifecycle It must be shown that at all times Released David Yates, Boot Muck, must ensure in the system that safety critical Continental : Mgmt BMW safety critical applications can be started within Updated to be a applications the same timescales (KPI's) as true Vehicle can always be agreed for a first switch to power. requirement and started by the moved original driver. Specifically during the shutdown req regarding sequence there can not be a canceling window of opportunity where shutdown into safety critical applications are SW Reqs blocked from running.

NOTE: Exception allowed for when Head Unit is in Flash Mode (i.e. SWL)

VH-LICY-027 Component Christian To provide Each application/component Released David Yates, Lifecycle Muck, Log&Trace should provide version information Continental : BMW data for and build details in DLT at start-up Reword problem time. needed, analysis the Lifecycle will DLT must be use DLT but supported. Additional Info field is wrong Christian Muck, BMW : No additional info field is needed, e.g. the first log messages of a consumer should contain version information and build details.

VH-LICY-028 Component Fabien Accordingly to Released NSM Hernandez its state, the , PSA system could generate a reset/reboot Component Christian The system Open David Yates, Muck, must support a Continental : BMW real time clock Discussion and an uptime needed here, clock. we will forward real time clock information from Vehicle Network but Lifecycle should not be responsible for more

Torsten: This is Infrastructure but is not really Lifecycle, maybe we need a new subdomain to handle this kind of item

Danilo: Will check with Sys Arch team how clocks should be handled in the system

Component Fabien For specific Install, upgrade states Duplicate David Yates, ZReject Hernandez states, the Continental : , PSA system is not Similar allowed to requirement reboot. above from BMW - exceptions allowed for Thermal Management?

Component Christian There must be Full functionality and no system Duplicate Fabien ZReject Muck, a shutdown during phone calls. Hernandez With BMW communication these mode. requirement do we need to maintain *1 or put it as a SW platform RQ ? All, agreed it is a duplicate of other requirements

Component Christian There must be Systems prepares shutdown, no Duplicate David Yates, ZReject Muck, a shutdown key is present and no functionality Continental : BMW delay mode. is offered to the user. what is the use case here, who requires this mode? Christian Muck, BMW : Duplicate to which requirement?

Component Fabien The Lifecycle Rejected Munich ZReject Hernandez Management Workshop: can stop Can be applications or Rejected, reboot the agreed with system if a Fabien CPU load Markus threshold is 2011-03-31: I reached think we should create generic node health requirement instead of this explicit one. Component Christian The Must not be disturbed by Rejected David Yates, ZReject Muck, emergency call entertainment or system Continental : BMW must be shutdown. exceptions possible in all needed for SWL situations Note: Exceptions are allowed for and Diagnostic where the SWL and Diagnostic sessions customer is Simon Barnett, present. Jaguar : Emergency calls must be done by another external ECU and therefore emergency calls are out of the scope of IVI Fabian Hernandez, PSA, Maybe it could be fulfilled later with an additional integration of components in the ECU. All, Agreed to reject for now but available later possibly

Component David The system Rejected David Yates, ZReject Yates, can be woken Added more Continental up from explicit network/bus requirements signals. from Christians original req. Christian Muck, The system can be woken up from network/bus signals.

VH-LICY-029 Component David Lifecycle During a fast shutdown it is Released David Yates, NSM Yates, Management accepted that persistence data Continental : Continental must be able from that lifecycle will not be Requirements to perform a stored during the shutdown added based on fast system F2F meeting in shutdown June when requested
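A minimal sketch of how the shutdown timeout demanded by VH-LICY-001 might be enforced inside the NSM: each consumer gets a configurable budget, and the shutdown proceeds even if a consumer fails to acknowledge the state change in time. The structure and helper names below are assumptions for illustration, not a defined implementation.

/* Sketch (assumed helpers): proceed with shutdown despite blocking consumers. */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>

struct consumer {
    const char *name;
    uint32_t    timeout_ms;                 /* product-configured budget   */
    bool      (*request_shutdown)(void);    /* asks the consumer to stop   */
};

/* Assumed helper that waits for the consumer's acknowledgement up to
 * 'timeout_ms' and returns false on timeout. */
extern bool wait_for_ack(const struct consumer *c, uint32_t timeout_ms);

static void shutdown_all(struct consumer *list, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        list[i].request_shutdown();
        if (!wait_for_ack(&list[i], list[i].timeout_ms)) {
            /* Consumer failed to respond in time: log it and carry on so a
             * single misbehaving consumer cannot block the shutdown. */
        }
    }
    /* ...finally command the reset and supply control logic... */
}

Special cases such as an active Software Loading session would, per the comments on VH-LICY-001, need their own configurable handling rather than this default timeout path.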

Lifecycle Owned Requirements - SW Platform (1.2.1)

Alias | Derived from | Area | Originator (Name, Company) | Description | Additional Info | Priority | State | Comment

SW-LICY-001 | OEM | Component | Christian Muck, BMW | The Node State Manager of the Automotive Controller must provide mapping between OEM-specific system states and GENIVI States. | The Lifecycle Manager API will distribute the OEM and GENIVI states as part of the interface. | P1 | Released | Munich Workshop: Original OEM state and the mapped GENIVI state - maybe one API provides both states. FH (PSA): Do we need this RQ, the Automotive controller is seen like any other device and doesn't hold more features. Original Req.: The Lifecycle Manager of the Automotive Controller knows the OEM-specific system states. If the system state will be changed, the Lifecycle Management of the infotainment controller should be informed about the new system state to trigger the state change.

SW-LICY-002 Internal API Mark The Node State P1 Released Christian Muck, BMW Hatle, Manager must - Okay. Wind River send a notification to all registered consumers after a state change has completed

SW-LICY-003 API Mark The definition and P1 Released Christian Muck, BMW Hatle, structure of - Okay. Wind Rive Lifecycle API'smust be hardware independent

SW-LICY-004 API Mark Lifecycle P1 Released Christian Muck, BMW Hatle, Management API - Requirement for Wind River must provide testing. functionality to get a list of all registered consumers

SW-LICY-005 VH-LICY-002 API Mark The Node State P1 Rejected Munich Workshop: Hatle, Manager must power state in this Wind River provide an API to requirement does not get the current refer to power power state degradation Christian Muck, BMW - Not part of GENIVI lifecycle requirement.

SW-LICY-006 VH-LICY-002 API Mark The Node State P1 Rejected Munich Workshop: Hatle, Manager must power state in this Wind River provide an API to requirement does not switch the power refer to power state degradation Christian Muck, BMW - Not part of GENIVI lifecycle requirement but the shutdown/reboot sequence should be discussed.

SW-LICY-007 Component Mark The Node State P1 Released Christian Muck, BMW Hatle, Manager must be - Okay. Wind River accessible to multiple consumers SW-LICY-008 Component Mark The Node State P1 Released Christian Muck, BMW Hatle, Manager must be - Better Wind River able to distribute understanding: the node state to Lifecycle all consumers Management must be able to set the state to all consumers. Its not part of GENIVI lifecycle to power up/down individual systems and devices.

Original Req: Lifecycle Management must be able to individually power up/down individual systems and devices

SW-LICY-009 VH-LICY-027 Component Mark All Lifecycle P1 Released Christian Muck, BMW Hatle, components must - Better Wind River provide monitoring understanding: capabilities with Lifecycle DLT for the Test Management must Framework provide monitoring capabilities with DLT for the Test Framework. Original Req. Lifecycle Management must provide monitoring capabilities for the Test Framework

SW-LICY-011 VH-LICY-006 Component Mark The Node State P1 Released David Yates, Hatle, Manager must Continental - VH-LICY-007 Wind River have the ability to Component design trigger immediate and naming not VH-LICY-008 or deferred system finalized hence reboots reworded VH-LICY-028 Christian Muck, BMW - Reworded from schedule to trigger system reboots. Original Req. NHM must have the ability to schedule immediate or deferred system reboots

SW-LICY-012 VH-LICY-026 Component Mark node resource P1 Released David Yates, Hatle, Management must Continental - Wind River have the ability to Component design restrict and monitor and naming not memory levels of finalized hence processes and reworded notify the NHM should react on appropriate system Memory information components provided by the NRM Christian Muck, BMW - Okay. Original Req. NHM must have the ability to monitor memory levels of processes and notify the appropriate system components SW-LICY-013 VH-LICY-026 Component David node resource P1 Released Yates, Management must Continental have the ability to restrict and monitor CPU usage levels of processes and notify the appropriate system components

SW-LICY-014 VH-LICY-024 Component Mark node health P1 Released David Yates, Hatle, Monitor must Continental - Wind River provide software Component design watchdog and naming not functionality to finalized hence reworded applications Christian Muck, BMW - Okay. Original Req. NHM must provide software watchdog functionality to user Space applications

SW-LICY-015 VH-LICY-024 Component Mark node health P1 Released David Yates, Hatle, watchdog timeout Continental - Wind River reaction must be Component design configurable and naming not finalized hence reworded Christian Muck, BMW - Okay. Original Req. NHM Watchdog timeout reaction must be configurable

SW-LICY-016 VH-LICY-024 Component Christian The node health For each process P1 Released David Yates, Muck, Manager shall be it shall be Continental - BMW configurable with configurable how Updated for clarity regards to many tries should Original Req. restarting failing be made by For each process it processes Lifecycle shall be configurable Management to how many tries to be restart a process done by the Lifecycle (i.e. if a process Management to crashes and the restart the process if first attempt to a process is lost and restart fails). the first attempt to restart fails.

SW-LICY-017 VH-LICY-024 Component Mark The node resource P1 Released Munich Workshop: Hatle, Manager must Complete system Wind River monitor memory memory - consumption and configurable plug-in initiate a decides reaction configurable David Yates, recovery Continental - mechanism if the Component design complete system and naming not memory is below a finalized hence configurable reworded threshold Christian Muck, BMW - Okay. Original Req. NHM must monitor memory consumption and initiate a system reboot if the memory is below a threshold

SW-LICY-019 VH-LICY-024 Component Fabien The node resource P1 Released Munich Workshop: Hernandez Manager must Expand for system monitor the CPU and process load and load of the check against the complete system higher requirement and of individual Original Req: processes The Lifecycle Management must monitor CPU load SW-LICY-071 VH-LICY-024 API David The node health P1 Released Yates, Manager must Continental provide an interface whereby Applications can register plugins for observation

SW-LICY-070 VH-LICY-024 Component Mark The node health For instance if a P1 Released Manfred Bathelt Hatle, Manager must safety critical (BMW): Wind River provide a application is not propose to introduce configurable started correctly an abstract recovery/escalation then the system component policy in case that should be responsible for the NHM detects restarted. consistency checks an abnormal start However, if a of applications (e.g. up behavior user/3rd party to detect abnormal application does startup), to provide not start then a proper specific input restart is not to NHM. Such necessary. functionality should not be core responsibility of NHM.

Munich Workshop: Reword to be more obvious David Yates, Continental - Definition of policies/settings required plus component design/naming should be removed Christian Muck, BMW - Definition what should be tracked? What is an abnormal start up behavior? Original Req. NHM must react according to configured system policies in case that the NHM service/application tracker detects an abnormal start up behavior

SW-LICY-020 VH-LICY-024 Component Christian The node health System failures P1 Released Munich Workshop: Muck, Manager response monitored by the Reword to explain BMW to System/Process Lifecycle "lost" a little better failures must be Management will Original Req. configurable. include for For each process it example shall configurable exceptions, what will be triggered excessive by the Lifecycle resource usage Management if this and heartbeat process is lost failures. The (nothing, restart the configurable process or trigger to actions for the reboot the system). Lifecycle Management shall include ignore, restart the process and triggering a reboot of the system.

This configuration will be possible at the OEM/Product level SW-LICY-021 VH-LICY-027 Component Christian The node health Review David Yates, Muck, Manager must Continental BMW trigger a dump of New requirement debug information added in Table 2.1.1 in case of failures. Munich Workshop: Split the requirement to be specific about who does dump and what type of dump Original Req. The core-dump should be logged if a process fails.

SW-LICY-022 RQ 1.14.1 Component Mark The NSC must P1 Released David Yates, Hatle, keep track about Continental - Wind River start up of Component design services/application and naming not finalized hence reworded Christian Muck, BMW - Okay. Original Req. NHM must keep track about start up of services/application

SW-LICY-023 VH-LICY-023 Component Christian The NSC provide a P1 Released Munich Workshop: Muck, mechanism to alter Reword for LUM BMW the start order of dependent start-up applications based Original Req. on Last User Mode The Lifecycle persistent data. Management supports different start priorities for applications.

SW-LICY-066 VH-LICY-023 API David An API must be P1 Released Yates, provided for Continental applications to register themselves in the LUC

SW-LICY-072 VH-LICY-023 API David An API must be P1 Released Yates, provided to read Continental the contents of the current LUC

SW-LICY-024 VH-LICY-026 Component Christian The NSC must The NSC will use P1 Released David Yates, Muck, provide a this for shutdown Continental : Reword BMW mechanism for and cancel needed to be clear starting and shutdown about point of no stopping scenarios. return and term applications during Additionally this shutdown. normal run-time interface will be Christian Muck, BMW used by SWL and : Reword done. node health Monitor.

SW-LICY-067 VH-LICY-026 Component Christian The Node State P1 Released Muck, Manager must be BMW able to trigger the cancelling of an ongoing shutdown

SW-LICY-025 VH-LICY-026 Component David The NSC must P1 Released Yates, ensure that critical Continental applications are prioritized during system start-up to ensure that they are available as early as possible within the system start-up sequence SW-LICY-026 VH-LICY-024 Component Mark The node resource The NRM will P1 Released Munich Workshop: Hatle, Manager must monitor and report replace memory with Wind River provide monitoring when applications ram of the RAM usage fail to get more David Yates, and the "Out of memory (i.e. they Continental - Reword Memory" signal. have reached their needed for allocated max.) clarification but will not take Christian Muck, BMW any further - Not needed. actions. Original Req. Middleware must It is intended that provide monitoring this trace output capability for memory can be used by a usage System Integrator to configure a system as required.

SW-LICY-027 VH-LICY-024 Component Mark The node resource P1 Released Munich Workshop: Hatle, Manager must Replace system with Wind River provide a CPU mechanism to David Yates, control the CPU Continental - Reword usage on a needed for process basis clarification Christian Muck, BMW - Reword needed. Original Req. Middleware should provide a mechanism to control the system usage on a process base

SW-LICY-029 VH-LICY-002 API Mark Supply P1 Released Munich Workshop: Hatle, Management must Covered under Wind River be able to dispatch Supply Management hardware power Christian Muck, BMW events to - Not part of GENIVI registered lifecycle requirement. consumers Original Req: Lifecycle Management must be able to dispatch hardware power events to the related Lifecycle consumers

SW-LICY-030 VH-LICY-002 API David The Supply P1 Released Yates, management API Continental must provide a mechanism for Supply Management plug-ins to communicate power events

SW-LICY-031 VH-LICY-002 Component David The Supply P1 Released Yates, management must Continental be able to map power events received from plug-ins to Platform Supply Management states

SW-LICY-032 VH-LICY-002 Component David The Supply P1 Released Yates, management must Continental communicate Platform Supply Events to the Node State Manager SW-LICY-033 VH-LICY-022 API David The Thermal P1 Released Yates, Management API Continental must provide a mechanism for Thermal Management plug-ins to communicate thermal events

SW-LICY-034 VH-LICY-022 Component David The Thermal P1 Released Yates, management must Continental be able to map power events received from plug-ins to Platform Thermal Management states

SW-LICY-035 VH-LICY-022 API David The Node State P1 Released Yates, Manager must Continental provide an interface with which the Thermal Manager can write the Platform Thermal State

SW-LICY-036 VH-LICY-001 API David The Node State P1 Released Yates, Manager must Continental provide an API mechanism for a consumer to delay the system shutdown

SW-LICY-037 VH-LICY-001 Component David The Node State P1 Released Yates, Manager must Continental provide a mechanism to configure how long a consumer can block the system shutdown

SW-LICY-069 VH-LICY-001 Component David The Node State P1 Released Yates, Manager must Continental enforce that no consumer can block the system shutdown for more than the product configured time

SW-LICY-039 VH-LICY-003 Component David Boot Management P1 Released Yates, must provide a Continental dynamic start-up mechanism based on the reset reason

SW-LICY-040 VH-LICY-023 API David The Node State P1 Released David Yates: Yates, Manager must Updated VH-LICY-003 Continental provide a public Christian Muck: interface whereby Duplicate or is it the wakeup reason listed twice because can be read the first one is assigned to Boot Management

SW-LICY-041 VH-LICY-004 Component David The Node State Based on the P1 Released Yates, Manager must target Continental provide a configuration configurable consumers might dynamic shutdown not be called procedure based before initiating a on the shutdown system shutdown reason and node session state SW-LICY-042 VH-LICY-004 Component David The NSC must P1 Released Yates, provide a Continental configurable dynamic shutdown procedure for legacy and 3rd party applications in the system

SW-LICY-043 VH-LICY-004 Component David The Node State P1 Released Yates, Manager must VH-LICY-017 Continental provide a mechanism for fast shutdown whereby only a small subset of components are informed about the shutdown procedure

SW-LICY-044 VH-LICY-017 API David The Node State For instance this P1 Released Yates, Manager must would be used by Continental provide an SWL, Diagnostics, interface whereby node health applications in the Monitor etc. system can request a system shutdown

SW-LICY-045 VH-LICY-016 API David The Node State For instance this P1 Released Yates, Manager must would be used by Continental provide an SWL, Diagnostics, interface whereby node health applications in the Monitor etc. system can request a system restart

SW-LICY-046 VH-LICY-016 API David The Node State P1 Released David Yates: Yates, Manager must Following review Continental provide an email was sent to interface that can Automotive group to be used by an check handling. external component to Manfred Bathelt notify about (BMW): changes to the wouldn´t that interfere Clamp State with the Vehicle-Interface defined by Auto-EG?

SW-LICY-047 VH-LICY-012 API David The Node State P1 Released Yates, Manager must VH-LICY-013 Continental provide an interface whereby VH-LICY-014 the wake-up reason can be VH-LICY-015 provided by an external component

VH-LICY-011 Component David The Node State This will be P1 Released SW-LICY-049 Yates, Manager must activated by the VH-LICY-016 Continental support a mode user pressing the whereby the Head Power On button Unit will remain on the Unit operational for a configurable period of time regardless of clamp state.

SW-LICY-050 VH-LICY-011 Component David The NSC must be i.e. If the user P1 Released Yates, able to dynamically presses the "Navi" Continental alter the start-up button during the sequence of the system start-up Head Unit to reflect then the operations on user Navigation SW input devices must be given priority during the start-up sequence SW-LICY-051 VH-LICY-026 Component David The node resource P1 Released Yates, Manager must Continental ensure that safety critical applications (i.e. RVC & PDC) will always have enough node resources (CPU and memory) to meet safety requirements

SW-LICY-052 VH-LICY-012 Component David The Bootloader For instance when P1 Released Yates, must use the the wake-up VH-LICY-013 Continental wake-up reason to reason is media determine what insertion then a VH-LICY-014 components to different subset of start during the SW will be boot sequence required and/or the priorities for the start-up will be different

SW-LICY-053 VH-LICY-018 Component David The Bootloader The following P1 Released Yates, must use the modes must be VH-LICY-019 Continental current application supported, each mode to determine with their VH-LICY-020 what components configurable to start during the start-up VH-LICY-021 current boot dependencies : sequence Parking Mode Transport mode Factory/Production mode SWL mode

SW-LICY-054 VH-LICY-018 API David The Node State The following P1 Released Yates, Manager must modes must be VH-LICY-019 Continental provide an supported : interface whereby Parking Mode VH-LICY-020 the application Transport mode mode can be Factory/Production VH-LICY-021 updated mode SWL Mode

SW-LICY-057 VH-LICY-005 Component David The Lifecycle NOTE: the active P1 Released Yates, Management must node sessions Continental support different agreed in the levels of requirement functionality actually refers to dependent on the "Node active node Application Mode", sessions i.e. Transport, SWL etc.

SW-LICY-058 API David The Node State P1 Released Yates, Manager must Continental provide an interface to allow applications to change the Node Boot Mode

SW-LICY-059 VH-LICY-012 Component David The Node State P1 Released Yates, Manager must Continental store the lifecycle wake-up reason

SW-LICY-060 VH-LICY-006 Component David The Node State This value will be P1 Released Yates, Manager must used to delay all Continental provide a product requested system configurable shutdown/resets shutdown delay value SW-LICY-062 VH-LICY-012 Component David The Bootloader For instance when P1 Released Yates, must use the the Bootloader Continental wake-up reason to detects a wake-up specify via the reason indicating kernel command an insertion of a line the required CD/DVD it must applications to be start enough started functionality to allow for the media to be inserted. By default in this use case the target will automatically shutdown after media insertion however this will be dependent on the product state machine in the Node State Manager

SW-LICY-064 VH-LICY-013 API David The Bootloader P1 Released David Yates: Yates, must provide an Rewritten following VH-LICY-014 Continental API whereby it can initial expert group receive the review Wake-up reason from the Automotive Controller

SW-LICY-068 VH-LICY-013 Component David When the Node P1 Released Yates, State Manager Continental receives an indication of a media removal and the wake-up reason was "Media Eject" then it must initiate a system shutdown

- API Christian The Lifecycle Rejected Munich Workshop: Muck, Management Can be Rejected BMW provides a FH (PSA): Do we mechanism so that need this RQ, other components Automotive controller can call the specific is seen like any OEM system state others devices ... and from the Lifecycle doesn't hold more Manager of the features. Automotive Controller.

RQ 9.1.5 API Mark Lifecycle Rejected Munich Workshop: Hatle, Management must Can be Rejected Wind River provide Christian Muck, BMW authentication - Not needed. (prevent accidental API call from affected power state) RQ 9.1.11 API Mark All registered Rejected Munich Workshop: Hatle, devices must No longer necessary Wind River receive a to have requirements notification in case for both consumers of a power and devices - degradation consumers will only be used Hardware power events are part of Supply Management David Yates, Continental - Interruption should be changed for degradation Christian Muck, BMW - Not part of GENIVI lifecycle requirement.

Original Req. All registered devices must receive a notification in case of a power interruption

RQ 9.2.3 API Mark Lifecycle Rejected Munich Workshop: Hatle, Management API Can be Rejected Wind River must provide Christian Muck, BMW functionality to get - Requirement for a list of all testing. registered devices

RQ 9.2.6 API Mark Lifecycle Rejected Munich Workshop: Hatle, Management API Can be Rejected Wind River must provide David Yates, mechanism by Continental - which consumers Rewording as defined may register with below would result in power event duplicate reqs - dispatcher Propose to reject this requirement Wording to be reworked for better understanding Christian Muck, BMW - Maybe two requirements in one sentence. Understanding: First-Requirement: The Lifecycle Management API provides mechanism so that consumers can register themselves at the NSM. Second-Requirement: The Lifecycle Management API provides mechanism to notify consumer about events(state transitions).

Component Fabien The Lifecyle Rejected Munich Workshop: Hernandez Management must Can be Rejected, handle Out Of agreed with Fabien Memory signal to perform specific tasks RQ 9.1.4 Component Mark Lifecycle Rejected Munich Workshop: Hatle, Management must Rejected Wind River be capable of David Yates, remote Continental management of - What is meant by consumers "remote" and who is the controller? Christian Muck, BMW - Not needed.

RQ 9.1.6 Component Mark Lifecycle Rejected Munich Workshop: Hatle, Management Can be Rejected Wind River should perform What is the root authentication cause of this during registration requirement? of consumers Christian Muck, BMW - Not needed.

RQ 9.1.7 Component Mark PSM should Rejected Munich Workshop: Hatle, perform Can be Rejected Wind River authentication What is the root during registration cause of this of devices requirement? Christian Muck, BMW - Not needed.

RQ 9.1.12 Component Mark PSM must be able Rejected Munich Workshop: Hatle, to handle multiple Can be Rejected Wind River states David Yates, Continental - Requirement itself is not a problem. Currently no known definition of states available. Initial "States" proposal to be made by Continental as part of the Architecture work and reviewed at a System Level Christian Muck, BMW - What do you think/ is your understanding about this topic? Mapping of more than 1 OEM state to GENIVI state

RQ 9.1.18 Component Mark PSM must support Rejected Munich Workshop: Hatle, remote power state Can be Rejected Wind River management of David Yates, devices Continental - What is meant by "remote" and who is the controller? Christian Muck, BMW - Not part of GENIVI lifecycle/PSM requirement.

RQ 1.8.41 Component Mark Multi-tasking Rejected Munich Workshop: Hatle, support must Can be Rejected Wind River provide process David Yates, monitoring Continental - What is capabilities meant here? Christian Muck, BMW - Not needed.
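The watchdog requirements above (SW-LICY-014, SW-LICY-015, SW-LICY-016) map naturally onto the systemd service watchdog, which is one possible realisation rather than the mandated one. sd_watchdog_enabled() and sd_notify() are real libsystemd calls; the loop below is only an illustration, and the corresponding unit file would additionally set WatchdogSec= and a restart policy (the configurable reaction).

/* Sketch: a supervised process periodically reporting to the systemd watchdog. */
#include <stdint.h>
#include <time.h>
#include <systemd/sd-daemon.h>

int main(void)
{
    uint64_t usec = 0;

    /* Returns > 0 and fills 'usec' when WatchdogSec= is configured. */
    if (sd_watchdog_enabled(0, &usec) > 0) {
        for (;;) {
            /* ...do one slice of real work... */

            /* Kick the watchdog at least twice per timeout period. */
            sd_notify(0, "WATCHDOG=1");

            struct timespec ts = {
                .tv_sec  = (time_t)(usec / 2 / 1000000ULL),
                .tv_nsec = (long)((usec / 2 % 1000000ULL) * 1000ULL)
            };
            nanosleep(&ts, NULL);
        }
    }
    return 0;
}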

Lifecycle Owned Requirements - Mechanism (1.3.1)

Alias | Derived from | Area | Originator (Name, Company) | Description | Additional Info | State | Comment | Assigned To

SW-LICY-010 | VH-LICY-027 | Component | David Yates, Continental | All Lifecycle components must provide trace output via DLT | | Agreed | Propose to push this to SAT as a general requirement for all GENIVI components | Lifecycle

Lifecycle Owned Requirements - Hardware (1.4.1)

Alias | Area | Originator (Name, Company) | Description | Additional Info | State | Comment | Assigned To

- | Power | Fabien Hernandez, PSA | In the event of a computer unit reset, the input and output of the system are reset by default to guarantee: a "protected" state of input, a "neutral" or "passive" state for output (no supply or current). | | Open | David Yates, Continental: Clarification needed - is this not a HW requirement? If not, then more information is needed | Hardware

- | Power | Fabien Hernandez, PSA | In the event of a reset and during the reset (boot) phase, the system guarantees that its input and output are stable and in a state that does not generate an undesirable event. | | Open | David Yates, Continental: What is the difference to the above requirement? | HW

Lifecycle Owned Constraints (1.1.2)

Alias | Category | Area | Originator (Name, Company) | Description | Additional Info | State | Comment | Assigned To

RQ 9.2.7 | SW Platform | API | Mark Hatle, Wind River | PSM API should provide functionality to modify the set of power events an already-registered consumer is interested in | | Agreed | Munich Workshop: Design Constraint. Christian Muck, BMW - Not needed. | NSM

- | SW Platform | Component | Christian Muck, BMW | The Lifecycle Management needs a dependency graph to resolve start-dependencies. | | Agreed | Munich Workshop: Design constraint | AL

Lifecycle Requirements to be satisfied outside of Lifecycle Subdomain (2.1.1)

Alias | Area | Originator (Name, Company) | Description | Additional Info | State | Comment

VH-LICY-003 | Subdomain | David Yates, Continental | The Bootloader must be able to pass the reset reason as a parameter to the Kernel | | Reassigned |

VH-LICY-012, VH-LICY-013, VH-LICY-014 | Subdomain | David Yates, Continental | The Bootloader must be able to pass the wake-up reason as a parameter to the Kernel | | Reassigned |

VH-LICY-018, VH-LICY-019, VH-LICY-020, VH-LICY-021 | Subdomain | David Yates, Continental | The Bootloader must pass the current application mode to dictate what components to start during the current boot phase | | Reassigned |

RQ 1.14.1 Subdomain Mark Configuration Registry Reassigned Christian Muck, BMW - What's the Hatle, (CR) must be accessible definition of CR and for what is it Wind River from beginning of the especially used for? Not really a PSM Middleware start-up Requirement, move it to persistency. stage, to provide Christian Muck, BMW - Pushed to configuration settings Persistence

RQ 1.20.9 Subdomain Mark User manager must Reassigned David Yates, Continental - Not a Hatle, support export of user lifecycle requirement -move to Wind River profiles persistency Christian Muck, BMW - Not a GENIVI lifecycle requirement - move to persistency. Christian Muck, BMW - Pushed to Persistence

RQ 1.20.14 Subdomain Mark Locale/Internationalization Reassigned David Yates, Continental - out of Hatle, scope of lifecycle - who should own Wind River - Used unit system must this requirement? depend on User Manager Christian Muck, BMW - Not a GENIVI lifecycle/PSM requirement. Maybe persistency. Christian Muck, BMW - Pushed to Persistence

Component Christian It must be possible to set These configuration Reassigned David Yates Continental - Is this not a Muck, factory configurations for settings (protected by an Persistency requirement rather than BMW vehicle specific functions. OEM specific Lifecycle? A reword would be needed authentication/authorization to change the context to Lifecycle process) can't be changed using the settings or persistency raise by the end user. a SW requirement to fulfill Lifecycle section Christian Muck, BMW - Pushed to Persistence

Component Fabien After a system Reassigned David Yates, Continental : This is Hernandez reset/reboot, the user requirement for Personalization - they , PSA should recover its last should create SW requirements from settings or context. this to Persistency and Lifecycle for the storage/retrieval of data either side of a lifecycle Christian Muck, BMW - Pushed to Persistence

RQ 1.8.39 System Mark Multi-tasking processes Rejected David Yates, Continental - What is Hatle, must support rights meant with regards to "multi-tasking" Wind River management that in the context of write management. provides the ability to set Christian Muck, BMW - Not needed. node resource restrictions Christian Muck, BMW - Rejected.

RQ 10.2.1 System Mark The boot and init Review Christian Muck, BMW - What are the Hatle, sequence must be start-up performance requirements? Wind River optimized and fast Assign to application launcher. enough to meet the start-up performance requirements

RQ 1.1.4 Subdomain Mark Rear view camera picture Rejected Christian Muck, BMW - This is a Hatle, must be available 5s after video requirement and not part of Wind River cold boot lifecycle requirements. But maybe a new application launcher requirement: Application launcher supports priorities for starting applications.

RQ 1.16.3 System Mark Middleware must be Rejected Christian Muck, BMW - Not needed. Hatle, capable to restrict device Christian Muck, BMW - Rejected Wind River access from User Space applications on user and group level RQ 1.16.4 System Mark Middleware must be able Rejected Christian Muck, BMW - Not needed. Hatle, to restrict access to Christian Muck, BMW - Rejected Wind River hardware components from User Space applications on user and group level

RQ 1.16.6 System Mark Middleware must be able Rejected Christian Muck, BMW - Not needed. Hatle, to restrict access on the Christian Muck, BMW - Rejected Wind River file system on user and group level

RQ 9.1.13 System Mark All registered PSM Review David Yates, Continental Hatle, consumers should - Registration should be by default Wind River acknowledge a state without handshake but API should change request before allow additional handshake req this change will be Christian Muck, BMW - performed Understanding: the lifecycle manager at the automotive controller side initiates the state change and this command is sent to the PSM. The PSM set the state to all consumers.

RQ 10.1.8 - Area: Subdomain - Originator: Mark Hatle, Wind River
Description: Boot loader, in the normal mode, must load software images to make the system fully functional. It must load operating system and application software into memory and start execution
State: Reassigned
Comments: David Yates, Continental - This is a Boot loader requirement and should not be in the lifecycle Subdomain. Christian Muck, BMW - Boot loader topic and not part of lifecycle requirements. Christian Muck, BMW - No activities started on boot loader yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

RQ 10.1.11 - Area: Subdomain - Originator: Mark Hatle, Wind River
Description: The start-up of boot loader and kernel must not interfere with the devices and services that are initialized and enabled by the HW Configuration. Original Req: The start-up of boot loader and kernel must not interfere with the devices and services that are initialized and enabled by the BIOS
State: Reassigned
Comments: David Yates, Continental - BIOS to be renamed as HW Configuration. Should be split into two requirements as well. Christian Muck, BMW - Boot loader topic and not part of lifecycle requirements. Christian Muck, BMW - No activities started on boot loader yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

RQ 10.1.12 - Area: Subdomain - Originator: Mark Hatle, Wind River
Description: Boot loader must validate the MD5sum of the kernel image before running it
State: Reassigned
Comments: David Yates, Continental - This is a Boot loader requirement and should not be in the lifecycle Subdomain. Christian Muck, BMW - Boot loader topic and not part of lifecycle requirements... but what would happen if the check fails? Christian Muck, BMW - No activities started on boot loader yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

Area: System - Originator: David Yates, Continental
Description: The system must provide an interface to allow the node health Monitor to dump debug information (stack traces, call stacks, etc.) in the event of failures.
State: Reassigned
Comments: Christian Muck, BMW - No activities started on OS/Kernel yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

RQ 1.15.1 - Area: System - Originator: Mark Hatle, Wind River
Description: The GENIVI system must provide Data Storage Quota Management on User level to limit Data Storage Usage per User. Original Req: Middleware must provide Data Storage Quota Management on User level to limit Data Storage Usage per User.
State: Reassigned
Comments: David Yates, Continental - Reword needed for clarification. Christian Muck, BMW - Not needed. Christian Muck, BMW - Pushed to UserMgtPersonalization

RQ 1.15.2 - Area: System - Originator: Mark Hatle, Wind River
Description: Quota Management must be configurable to restrict user quota on each available data storage in the system
State: Reassigned
Comments: Christian Muck, BMW - Not needed. Christian Muck, BMW - Pushed to UserMgtPersonalization

RQ 1.15.3 - Area: System - Originator: Mark Hatle, Wind River
Description: Quota Management must support a notification mechanism to inform user space applications about reached user quota limits
State: Reassigned
Comments: Christian Muck, BMW - Not needed. Christian Muck, BMW - Pushed to UserMgtPersonalization

RQ 1.20.8 - Area: System - Originator: Mark Hatle, Wind River
Description: Middleware must provide user profile management capabilities.
State: Reassigned
Comments: David Yates, Continental - What is the definition of Middleware in the context of GENIVI? Christian Muck, BMW - Understanding that user profile management stores custom user settings. Not a PSM requirement. Christian Muck, BMW - Pushed to UserMgtPersonalization

RQ 1.8.2 - Area: System - Originator: Mark Hatle, Wind River
Description: Operating System must support multiple users
State: Reassigned
Comments: Christian Muck, BMW - Not needed. Christian Muck, BMW - Pushed to UserMgtPersonalization

Area: System - Originator: Christian Muck, BMW
Description: The developer of an application/daemon provides information about the application dependencies.
State: Reassigned
Comments: Munich Workshop - Move to bottom table and assign to Integration. Christian Muck, BMW - No activities started on Integration yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

Area: System - Originator: Fabien Hernandez
Description: Each application launched provides maximum memory, mass storage and CPU load needed
State: Rejected
Comments: Munich Workshop - Duplicate of the above requirement with regards to the integration team

Area: Component - Originator: Christian Muck, BMW
Description: The software update must be able to flash all parts of the system.
Additional Info: Firmware, map data, program code...
State: Reassigned
Comments: Fabien Hernandez - Does this requirement have an impact on lifecycle? It's more boot related (we're looking for RQ in relation with Lifecycle). Christian Muck, 2011-04-19 - This requirement was discussed at SysInfraEGMinutes20110414#Vehicle level requirements Lifecycle (Markus Boje: Indeed it is a software loading requirement but it has some derived impact on lifecycle in terms of boot code updates. E.g. when you are executing a kernel in place (flash) and you want to update exactly that kernel you will need a derived SW platform requirement to lifecycle. Even if finally this requirement ends up in Automotive EG vehicle level space it still has some traces to lifecycle SW platform requirements and therefore should be considered.) Christian Muck, BMW - Pushed to Sys Arch Team

Area: Component - Originator: Christian Muck, BMW
Description: When the software update fails or is interrupted the system must return to a stable state which allows a retry.
State: Reassigned
Comments: Fabien Hernandez - Does this requirement have an impact on lifecycle? It's more boot related (we're looking for RQ in relation with Lifecycle). David Yates, Continental - SWL requirement with a SW requirement to Lifecycle in the area of system reliability and system start-up. All - Add a requirement for Lifecycle to always be able to go to a stable state for SWL. Christian Muck, BMW - Pushed to Sys Arch Team

Area: Component - Originator: Christian Muck, BMW
Description: Security relevant functions must be available two seconds after start-up.
Additional Info: For example PDC and rear view cameras.
State: Reassigned
Comments: If we go through that kind of constraint we also need to take into account the time for: audio (e.g., parking functionality), video (e.g., parking functionality), start-up screen, radio, complete system... David Yates, Continental - This is a system requirement and not accepted in Lifecycle. Needs to be pushed into the Sys Arch team. Danilo - Has tried this but there is no interest/ownership in the team for these issues. Christian Muck, BMW - No activities started yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

Area: Component - Originator: Fabien Hernandez, PSA
Description: After network wake-up, the welcome page is displayed within 1 second.
Additional Info: Constraint?
State: Reassigned
Comments: David Yates, Continental - This is a system requirement and not accepted in Lifecycle. Christian Muck, BMW - No activities started yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

Area: Component - Originator: Fabien Hernandez, PSA
Description: After network wake-up, the MMI system including Visual, Audio, Phone, Video... is available in less than 4 seconds.
Additional Info: Constraint?
State: Reassigned
Comments: David Yates, Continental - This is a system requirement and not accepted in Lifecycle. Christian Muck, BMW - No activities started yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

RQ 2.1.1 - Area: System - Originator: Mark Hatle, Wind River
Description: Framework must be running for High-Priority audio within 1s from start-up initiation
State: Rejected
Comments: Christian Muck, BMW - Not needed.

RQ 2.1.2 - Area: System - Originator: Mark Hatle, Wind River
Description: Framework must be running for normal audio within 3s from start-up initiation
State: Rejected
Comments: Christian Muck, BMW - Not needed.

RQ 2.1.4 - Area: System - Originator: Mark Hatle, Wind River
Description: Framework's time of audio unavailability from system shutdown 'point-of-no-return' back to High-Priority Audio Availability must not exceed High Priority Audio Availability time constraints
State: Reassigned
Comments: Christian Muck, BMW - Requirement of the whole system. Christian Muck, BMW - No activities started yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

RQ 1.1.20 - Area: System - Originator: Mark Hatle, Wind River
Description: HMI operability must be available in less than 4s
State: Rejected
Comments: Christian Muck, BMW - Not needed.

RQ 1.8.1 - Area: System - Originator: Mark Hatle, Wind River
Description: Must use a Linux Operating System
State: Rejected
Comments: David Yates, Continental - Not needed. Christian Muck, BMW - Not needed.

Area: Component - Originator: Christian Muck, BMW
Description: The system must support several graphical layers.
Additional Info: Graphical layers may be used for full-screen video, HMI, browser or navigation map rendering.
State: Reassigned
Comments: Fabien Hernandez, PSA - External RQ, we are in the owned chapter. David Yates, Continental - This is not Lifecycle. Christian Muck, BMW - Pushed to MediaGraphis/AudioEG. Christian Muck, BMW - Ludovic's answer: Normally the LM will be able to do that.

Area: Component - Originator: Christian Muck, BMW
Description: There must be a Layer Management Component to control layer visibility.
State: Reassigned
Comments: David Yates, Continental - This is not Lifecycle. Christian Muck, BMW - Pushed to MediaGraphis/AudioEG. Christian Muck, BMW - Ludovic's answer: It's already in the Compliance statement.

RQ 1.1.7 - Area: Subdomain - Originator: Mark Hatle, Wind River
Description: Early audio: last entertainment source must be playing after start-up within 3s
State: Reassigned
Comments: Christian Muck, BMW - Requirement of the audio topic and not a lifecycle requirement. Christian Muck, BMW - Pushed to MediaGraphis/AudioEG. Christian Muck, BMW - Ludovic's answer: Requirements AM_ELA01 (now it's < 2s and not 3s)

RQ 2.1.3 - Area: System - Originator: Mark Hatle, Wind River
Description: Framework must be running for High-Priority audio during system shutdown
State: Reassigned
Comments: Christian Muck, BMW - Not needed. Christian Muck, BMW - Pushed to MediaGraphis/AudioEG. Christian Muck, BMW - Ludovic's answer: Requirements AM_EL05

VH-LICY-025 - Area: Component - Originator: Christian Muck, BMW
Description: Persistency must monitor the power interrupt line and trigger the flushing of critical data when an immediate power off occurs
State: Review
Comments: David Yates - Added based on Vehicle requirements

Lifecycle Constraints to be satisfied outside of Lifecycle Subdomain (2.1.2)

Alias | Area | Originator (Name, Company) | Description | Additional Info | State | Comment | Assigned To

Area: Component - Originator: Fabien Hernandez, PSA
Description: Time between a network command (coming from network) and MMI feedback (visual, audio): < 70ms
State: Rejected
Assigned to: System
Comments: Fabien Hernandez, PSA - Also a constraint. David Yates, Continental - This is not Lifecycle

Area: Component - Originator: Christian Muck, BMW
Description: The system has to provide user feedback on user interaction within 100ms.
State: Reassigned
Assigned to: System
Comments: Fabien Hernandez, PSA - It looks like a constraint. May be not only linked to lifecycle? David Yates, Continental - This is not Lifecycle. Christian Muck, BMW - No activities started yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

Area: Component - Originator: Christian Muck, BMW
Description: The system must provide a video watchdog functionality.
Additional Info: The video source is checked for video signal availability and current transmission.
State: Reassigned
Assigned to: System
Comments: Fabien Hernandez, PSA - Global system constraint? Christian Muck, BMW - No activities started yet. Pushing when such an activity starts. Christian Muck, BMW - Pushed to Sys Arch Team

SysInfraEGSystemd systemd

For Beginners

If you are new to this Expert Group start here: SysInfraEGStarterKit.

General Information

Short Introduction

systemd is a system and service manager for Linux, compatible with SysV and LSB init scripts. systemd provides aggressive parallelization capabilities, uses socket and D-Bus activation for starting services, offers on-demand starting of daemons, keeps track of processes using Linux cgroups, supports snapshotting and restoring of the system state, maintains mount and automount points and implements an elaborate transactional dependency-based service control logic. It can work as a drop-in replacement for sysvinit.

It's called systemd, not system D or System D, or even SystemD.

Maintainer: Lennart Poettering

Latest stable release: 189

License: LGPL2.1+ (changed from GPLv2; see the license change commit)

EG Responsible: David Yates

Quick Links

General links

systemd-Homepage - http://freedesktop.org/wiki/Software/systemd
systemd-man - http://0pointer.de/public/systemd-man/
systemd-Project - http://0pointer.de/blog/projects/systemd.html
Interface stability promise - http://www.freedesktop.org/wiki/Software/systemd/InterfaceStabilityPromise
Git Web Front-end - http://cgit.freedesktop.org/systemd/

Lennart's terse slides from linux.conf.au 2011, A video of his talk
Mailing Lists - General Development and Discussion Mailing List
Bug Reports - Existing Bug Reports
IRC - #systemd on irc.freenode.org
Follow systemd on Google+

Publications

Control Centre: The systemd Linux init system
Booting up: Tools and tips for systemd, a Linux init tool

systemd documentation for developers

Part I, socket activation Part II, socket activation II

On hostnamed
On timedated
On localed
On multi-seat
PaxControlGroups - How to behave nicely in the cgroupfs trees
How to Write syslog Daemons Which Cooperate Nicely With systemd
Incompatibilities with SysV/LSB
systemd and Storage Daemons for the Root File System
Interface Portability And Stability Chart
systemd optimizations
The Container Interface of systemd
The initrd Interface of systemd

systemd documentation for admins

Part I, verifying bootup
Part II, Which Service Owns Which Processes
Part III, how do i convert a sysv init script into a systemd service file
Part IV, killing services
Part V, the three levels of "off"
Part VI, changing roots
Part VII, the blame game
Part VIII, the new configuration files
Part IX, On /etc/sysconfig and /etc/default
Part X, Instantiated services
Part XI, Converting inetd services
Part XII, Securing Your Services
Part XIII, Log and Service Status
Part XIV, The self-explanatory boot
Part XV, Watchdogs


Introduction

Detailed

Process identifier 1

See Lennart's blog story for a longer introduction, and the two status updates since then. Also see the Wikipedia article.

For a fast and efficient boot-up two things are crucial

to start less: start fewer services, or defer the starting of services until they are actually needed

to start more in parallel: we should not serialize start-up (as sysvinit does), but run everything at the same time, so that the available CPU and disk I/O bandwidth is maxed out.

Most current systems that try to parallelize boot-up still synchronize the start-up of the various daemons. Example: since Avahi needs D-Bus, D-Bus is started first, and only when D-Bus signals that it is ready is Avahi started too. This start-up synchronization results in the serialization of a significant part of the boot process.

Parallelizing socket services

Necessary for improvement

Understanding what exactly daemons require from each other and why their start-up is delayed

Traditional UNIX daemons: a daemon waits until the socket of the other daemon offers its services and is ready for connections. D-Bus clients example: /var/run/dbus/system_bus_socket

Solution: systemd creates the listening sockets before the actual daemon is started and then just passes the socket to the daemon.

systemd can create all sockets for all daemons in one step. Next step: run all daemons at once.

If a service needs another and the other service isn't fully started up: That's completely okay!

The connection is queued in the providing service and the client will potentially block on that single request. But only that one client will block, and only on this single request.

Kernel socket buffers help to maximize parallelization

Ordering, buffering and synchronization of requests is done by the kernel without any further management from userspace

Dependency management becomes redundant (or at least secondary)

Daemon already started: immediate success
Daemon in the process of being started: success (unless a synchronous request)
Daemon not running: success (it can be auto-spawned)
From the requesting daemon's perspective: no difference

Is this kind of logic new?

Apple's launchd idea: the listening on the sockets is pulled out of all daemons and is done by launchd, so the services can all start up in parallel. The idea is older than launchd:

inetd listens on designated ports used by Internet services such as FTP, POP3, etc. When a TCP or UDP packet arrives with a particular destination port number, inetd launches the appropriate server program to handle the connection. inetd's focus: Internet services.

Syslog example

Assumption: systemd created all sockets

systemd starts syslog and the various syslog clients all at the same time. The messages of the clients will be added to the /dev/log socket buffer. As long as that buffer doesn't run full, the clients will not have to wait in any way and can immediately proceed with their start-up. As soon as syslog itself has finished start-up, it will dequeue all messages and process them.

Parallelizing bus services

Modern daemons on Linux provide services via D-Bus instead of plain sockets. Can the same parallelizing boot logic be applied as for traditional socket services?

D-Bus already has all the right hooks for it: using bus activation, a service can be started the first time it is accessed.

Example

CUPS uses Avahi to browse for printers. Starting Avahi at the same time as CUPS is possible: if CUPS is quicker than Avahi, D-Bus queues the request until Avahi manages to establish its service name.
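As an illustration of how bus activation is wired up (the service name and paths below are hypothetical, not part of any GENIVI deliverable), a D-Bus system service activation file can point the bus at a systemd unit via SystemdService=:

File: /usr/share/dbus-1/system-services/org.example.Foo.service

[D-BUS Service]
Name=org.example.Foo
Exec=/usr/bin/foo-daemon
User=root
SystemdService=foo.service

With this in place, the first method call to org.example.Foo makes the bus ask systemd to start foo.service, so consumers like CUPS can be launched in parallel with the services they talk to.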

Parallelizing services summary

Socket-based service activation and bus-based service activation enable us to start all daemons in parallel, without any further synchronization.

Activation also allows us to do lazy loading of services: if a service is rarely used, we can just load it the first time somebody accesses the socket or bus name, instead of starting it during boot.

Parallelizing file system jobs

Current distributions

More synchronization points than just daemon start-ups. Most prominently: file-system related jobs

On boot-up a lot of time is spent idling to wait until all devices that are listed in /etc/fstab show up in the device tree and are then fsck'ed, mounted,...

Only when that is finished do we go on and boot the actual services. Improvement: through autofs mount points.

Like a connect() call shows that a service is interested in another service, an open() (or similar call) shows that a service is interested in a specific file or file-system.

Interested apps wait only if a file-system they are looking for is not yet mounted and readily available

systemd sets up an autofs mount point, and when the file-system has finished fsck and quota checks during normal boot-up it will be replaced by the real mount. While the file-system is not ready yet, the access will be queued by the kernel and the accessing process will block. But only that one daemon, and only that one access.

Integrating autofs in an init system is fragile and weird?

Example: an application tries to access an autofs file-system. If it takes very long to replace it with the real file-system, the access will hang in an interruptible sleep -> you can safely cancel it. If the mount point should not be mountable in the end (maybe because fsck failed), autofs returns a clean error code.

Keeping the first user PID small

Shell is evil. Shell is fast and shell is slow

It is fast to hack, but slow in execution. Classic sysvinit boot logic is modeled around shell scripts. Every time commands like grep, awk, cut, sed and others are called, a process is spawned, the libraries searched, some start-up stuff set up, and more. After seldom doing more than a trivial string operation the process is terminated again. Result: incredibly slow. systemd wants to get rid of shell scripts in the boot process.

Figure out what they are currently actually used for. systemd's perspective: quite boring things; most of the scripting is spent on trivial setup and tear-down of services and should be rewritten in C, either in separate executables, or moved into the daemons themselves, or simply done in the init system.

Rewriting shell scripts in C takes time, in a few cases does not really make sense, and sometimes shell scripts are just too handy to do without.

Keeping track of process

A central part of a system that starts up and maintains services should be process babysitting.

It should watch services, restart them if they shut down and, if they crash, collect information about them and keep it around for the administrator. systemd uses cgroups for this.

cgroups allow the creation of a hierarchy of groups of processes. cgroups let you enforce limits on entire groups of processes, e.g. limit the total amount of memory or CPU. The hierarchy is directly exposed in a virtual file-system and hence easily accessible; the group names are basically directory names in that file-system. If a process belonging to a specific cgroup fork()s, its child will become a member of the same group. Unless the child is privileged and has access to the cgroup file system, the child cannot escape its group.
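For example, the resulting hierarchy can be inspected from a shell; both commands below are part of the standard systemd tooling described in the admin documentation listed above:

systemd-cgls                        # print the control group tree, one branch per service
ps xawf -eo pid,user,cgroup,args    # list processes together with the cgroup each one belongs to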

Activation methods

activation on boot: To make sure a service or other unit is activated automatically on boot, it is recommended to place a symlink to the unit file in the .wants/ directory of either multi-user.target or graphical.target, which are normally used as boot targets at system start-up.
socket-based activation: Special target unit sockets.target. It is recommended to place a WantedBy=sockets.target directive in the [Install] section of the socket unit (see the sketch after this list).
bus-based activation: A service is started the first time its bus name is requested (see the D-Bus activation example above).
device-based activation: It is possible to bind activation to hardware plug/unplug events.
path-based activation: Provides a way to bind service activation to file system changes.
timer-based activation: Some daemons that implement clean-up jobs that are intended to be executed in regular intervals benefit from timer-based activation.
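A minimal sketch of the socket-based case, assuming a hypothetical foo daemon (unit names and paths are illustrative only):

File: /etc/systemd/system/foo.socket

[Unit]
Description=Foo activation socket

[Socket]
ListenStream=/run/foo.sock

[Install]
WantedBy=sockets.target

File: /etc/systemd/system/foo.service

[Unit]
Description=Foo daemon

[Service]
ExecStart=/usr/bin/foo-daemon

Enabling foo.socket (i.e. creating the symlink in sockets.target.wants/) makes systemd create /run/foo.sock at boot; foo.service itself is only started on the first connection.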

Unit files

systemd provides a dependency system between various entities called "Units". Units encode information about

a service
a socket
a device
a mount point
an automount point
a swap file or partition
a start-up target
a file system path
a timer controlled and supervised by systemd

Units are named as their configuration files. Instead of bash scripts, systemd uses unit configuration files. The configuration file syntax is inspired by XDG Desktop Entry Specification .desktop files, which in turn are inspired by Microsoft Windows .ini files.

Comparison of sysvinit, upstart and systemd

Blog from Lennart - why systemd? http://0pointer.de/blog/projects/why.html

Video introduction

Lennart Poettering - Hacking Core OS (@OSEC Barcamp)

Lennart Poettering: "It's a very long recording (1:43h), but it's quite interesting (as I'd like to believe) and contains a bit of background where we are coming from and where are going to. Anyway, please have a look. Enjoy!" Source:http://www.youtube.com/watch?v=9UnEV9SPuw8

Video - systemd, beyond init

Source: http://www.youtube.com/watch?v=TyMLi8QF6swh

Video - Interview with Lennart

Source: http://www.youtube.com/watch?v=qPlVNjUq_hU

Slide - Open source development

Source: https://docs.google.com/present/view?id=dd4d9j2z_1r8fjkqc7

Booting userspace in less than 1 second

Koen's slides from ELC-E/T-DOSE 2011 on systemd in the embedded world

Source of video: http://www.youtube.com/watch?feature=player_embedded&v=RFVlbaDqll8&noredirect=1

PDF slides: http://elinux.org/images/b/b3/Elce11_koen.pdf

man pages

Here's the full list of all man pages: http://0pointer.de/public/systemd-man/

Packages

Fedora (Where we are: http://lists.fedoraproject.org/pipermail/devel/2010-July/138855.html)
OpenSUSE (Instructions: http://en.opensuse.org/SDB:Systemd)
Debian (Wiki: http://wiki.debian.org/systemd)
Gentoo
ArchLinux
Ubuntu

Daemons and systemd

Writing and packaging system daemons

Read http://0pointer.de/public/systemd-man/daemon.html for

What's a daemon
Writing new-style daemons for systemd
Daemon activation types in systemd (socket-, bus-, device-, path-, timer- or other-based activation)
Porting existing daemons to new-style systemd daemons

Reference implementation of various APIs for new-style daemons

http://0pointer.de/public/systemd-man/sd-daemon.html
http://0pointer.de/public/systemd-man/sd-.html
http://0pointer.de/public/systemd-man/sd_booted.html
http://0pointer.de/public/systemd-man/sd_is_fifo.html
http://0pointer.de/public/systemd-man/sd_listen_fds.html
http://0pointer.de/public/systemd-man/sd_notify.html

The reference implementation from Lennart Poettering is distributed under the MIT license. See also: systemd for developers.

Example for socket activation

General links about units, services and sockets

Writing and packaging system daemons
man pages
Reference implementation of various APIs for new-style daemons
systemd for developers

You can download a simple example project to get familiar with systemd and for quick demonstrations. Features of the project:

compatible with systemd and non-systemd systems
socket-based activation
automatic start-up of the service on boot
both, whatever happens first on start-up
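A minimal sketch of such a socket-activated daemon in C, assuming a socket unit that passes exactly one listening stream socket (the daemon itself is hypothetical; only sd_listen_fds(), sd_notify() and SD_LISTEN_FDS_START come from the reference implementation listed above):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/socket.h>
#include <systemd/sd-daemon.h>

int main(void)
{
        /* Ask how many file descriptors systemd passed to us (0 = keep the env vars). */
        int n = sd_listen_fds(0);
        if (n != 1) {
                fprintf(stderr, "expected exactly one socket from systemd, got %d\n", n);
                return EXIT_FAILURE;
        }

        /* The first passed descriptor is always SD_LISTEN_FDS_START (fd 3). */
        int listen_fd = SD_LISTEN_FDS_START;

        /* Tell systemd that initialization is complete (only evaluated with Type=notify). */
        sd_notify(0, "READY=1");

        for (;;) {
                int client = accept(listen_fd, NULL, NULL);
                if (client < 0)
                        continue;
                write(client, "hello\n", 6);
                close(client);
        }
}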

Example of start-up notification

Start-up notification: to signal systemd that a daemon has finished starting up, use the following call (using Lennart's reference implementation):

sd_notify(0, "READY=1")

Extended start-up notification

sd_notifyf(0, "READY=1\n STATUS=Processing requests...\n MAINPID=%lu",(unsigned long) getpid());

It is important that "Type=notify" is set in the unit file.
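For reference, a corresponding unit file fragment (the daemon path is hypothetical; NotifyAccess=main restricts notifications to the main process):

File: /etc/systemd/system/foo.service

[Service]
Type=notify
NotifyAccess=main
ExecStart=/usr/bin/foo-daemon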

Writing systemd unit/service files

Please read the section "Integration with systemd - Writing systemd service files" - http://0pointer.de/public/systemd-man/daemon.html

http://0pointer.de/public/systemd-man/systemd.unit.html
http://0pointer.de/public/systemd-man/systemd.service.html
https://fedoraproject.org/wiki/Systemd_Packaging_Draft#Writing_unit_files

Installing systemd service files

Kernel

Because kernel 2.6.35.9 is used, you have to apply the patch http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commitdiff;h=676db4af043014e852f67ba0349dae0071bd11f3

cp -rpv                                    # Copy your whole kernel source directory
cd
vi ../Patch.systemd.patch                  # Delete the beginning and copy the rest of the patch to here
patch -p1 < ../Patch.systemd.patch         # Include the patch (see above) if necessary
vi arch/arm/configs/_defconfig             # for e.g. CONFIG_QUOTA=y
export ARCH=arm
export CROSS_COMPILE=/home//arm-2010.09/bin/arm-none-linux-gnueabi-
make distclean
make _defconfig
make                                       # see https://wiki.edubuntu.org/systemd#Kernel_requirements
make -j8 uImage
cp arch/arm/boot/uImage /tftpboot/_uImage__discovery.quota.systemd

Root FS

Start with an Ubuntu 11.10 minimal FS. All steps for this are not part of systemd. See (Ubuntu 11.10 Oneiric) https://wiki.ubuntu.com/ARM/RootfsFromScratch

sudo \
rootstock \
--fqdn REA-marzen \
--login renesas \
--password ******* \
--imagesize 2G \
--seed ubuntu-minimal,fbset,g++,ltrace,nfs-common,strace \
--dist oneiric \
--serial ttySC2

Continue after boot and a working Ethernet connection on your target with the following

Install the systemd ECO system on your ARM SDB:

apt-get install intltool    # 46 packages
apt-get install gperf
apt-get install libcap-dev
apt-get install pkg-config
apt-get install libudev-dev
apt-get install libdbus-1-dev
apt-get install make

Get the following packages from http://packages.debian.org/sid/libkmod2

dpkg -i libkmod2*
dpkg -i libkmod-dev*

Before starting ./configure of systemd, you have to install the udev-181 eco system on your ARM SDB:

apt-get install uuid-dev
dpkg -i libblkid1_*
dpkg -i libblkid-dev*

Get newest udev from http://www.kernel.org/pub/linux/utils/kernel/hotplug/udev-181.tar.xz (here version 181)

export TARGET=udev-181
unxz $TARGET.tar.xz
tar xvf $TARGET.tar
cd $TARGET
./configure --disable-introspection \
            --disable-gudev
make -j4
sudo make install

Get newest systemd from http://www.freedesktop.org/software/systemd (here version 43)

export TARGET=systemd-43
unxz $TARGET.tar.xz
tar xvf $TARGET.tar
cd $TARGET
./configure --localstatedir=/var \
            --disable-manpages \
            --with-dbuspolicydir=/etc/dbus-1/system.d \
            --with-dbussessionservicedir=/usr/share/dbus-1/services \
            --with-dbussystemservicedir=/usr/share/dbus-1/system-services \
            --with-dbusinterfacedir=/usr/share/dbus-1/interfaces
make
sudo make install

Configure uboot

If your uboot environment from the SD card looks similar to this (output of pri):

bootargs=console=ttySC0,115200n8n rootwait rw noinitrd root=/dev/mmcblk0p2 ip=143.103.22.48

You have to add: init=/usr/lib/systemd/systemd

set bootargs console=ttySC0,115200n8n rootwait rw noinitrd init=/usr/lib/systemd/systemd root=/dev/mmcblk0p2 ip=143.103.22.48
sav

Please read the section "Integration with systemd - Installing systemd service files" - http://0pointer.de/public/systemd-man/daemon.html
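In practice this boils down to copying the unit file into a unit directory and enabling it; a short sketch with a hypothetical foo.service:

cp foo.service /etc/systemd/system/   # or /lib/systemd/system/ for files shipped by a package
systemctl daemon-reload               # make systemd re-read its unit files
systemctl enable foo.service          # create the .wants/ symlink so it starts on boot
systemctl start foo.service           # start it right away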

Convert SysVInitScripts to systemd service files

Please read Lennart's blog Part III, how do i convert a sysv init script into a systemd service file
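The gist of the conversion, sketched for a hypothetical foo daemon that forks and writes a PID file (real init scripts mostly differ only in boilerplate):

File: /etc/init.d/foo (fragment of the old SysV script)

case "$1" in
  start)
        /usr/sbin/foo-daemon --pidfile /var/run/foo.pid
        ;;
esac

File: /lib/systemd/system/foo.service (systemd replacement)

[Unit]
Description=Foo daemon
After=syslog.target network.target

[Service]
Type=forking
PIDFile=/var/run/foo.pid
ExecStart=/usr/sbin/foo-daemon --pidfile /var/run/foo.pid

[Install]
WantedBy=multi-user.target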

How to debug systemd problems

Please read https://fedoraproject.org/wiki/How_to_debug_Systemd_problems and http://freedesktop.org/wiki/Software/systemd/Debugging

Analyzing boot-up

Change the log level of systemd (verbose mode on): modify /etc/systemd/system.conf to increase verbosity. Example:

LogLevel=debug <--- Uncomment this line and use "debug" (default: commented and "info")
LogTarget=syslog-or-kmsg <--- Uncomment this line (default: commented)
SysVConsole=yes <--- Uncomment this line (default: commented)

(Changing the log level to debug has an impact on your start-up performance)
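If editing /etc/systemd/system.conf is not convenient (for example because the root file system is not writable yet), the same verbosity can be requested on the kernel command line; both options are standard systemd options:

systemd.log_level=debug systemd.log_target=kmsg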

When it has finished booting up, systemd automatically writes a log message to syslog/kmsg with the time it needed and sends a broadcast message (D-Bus). Example:

systemd[1]: Start-up finished in 2s 65ms 924us (kernel) + 2s 828ms 195us (initrd) + 11s 900ms 471us (userspace) = 16s 794ms 590us.

Please read http://0pointer.de/blog/projects/blame-game.html to avoid misunderstanding of the command "systemd-analyze blame":

systemd-analyze blame Example:

$ systemd-analyze blame
6207ms udev-settle.service
5228ms cryptsetup@luks\x2d9899b85d\x2df790\x2d4d2a\x2da650\x2d8b7d2fb92cc3.service
735ms NetworkManager.service
642ms avahi-daemon.service
600ms abrtd.service
517ms rtkit-daemon.service
478ms fedora-storage-init.service
396ms dbus.service
390ms rpcidmapd.service
346ms systemd-tmpfiles-setup.service
322ms fedora-sysinit-unhack.service
316ms cups.service
310ms console-kit-log-system-start.service
309ms libvirtd.service
303ms rpcbind.service
298ms ksmtuned.service
288ms lvm2-monitor.service
281ms rpcgssd.service
277ms sshd.service
276ms livesys.service
267ms iscsid.service
236ms mdmonitor.service
234ms nfslock.service
223ms ksm.service
218ms mcelog.service
...

systemd-analyze plot shows more high-level data: which service took how much time to initialize, and what needed to wait for it. Example:

systemd-analyze plot > plot.svg eog plot.svg

Creates a dependency graph of the units. Example:

systemctl dot | dot -Tsvg > systemd.svg

Color legend: black = requires; dark blue = requisite; dark grey = wants; red = conflicts; green = after

systemd commands

Control the systemd system and service manager http://0pointer.de/public/systemd-man/systemctl.html

Analyze system boot-up performance http://0pointer.de/public/systemd-man/systemd-analyze.html

Query the systemd journal http://0pointer.de/public/systemd-man/journalctl.html

Special journal fields http://0pointer.de/public/systemd-man/systemd.journal-fields.html

Native systemd configuration files

Add a hostname Example:

File: /etc/hostname

MyHostname

Console and keymap settings: the /etc/vconsole.conf file configures the virtual console, i.e. keyboard mapping and console font. Example:

File: /etc/vconsole.conf

KEYMAP=us
FONT=lat9w-16
FONT_MAP=8859-1_to_uni

OS info /etc/os-release contains data that is defined by the operating system vendor and should not be changed by the administrator. Example:

File: /etc/os-release

NAME=Archlinux
ID=arch
PRETTY_NAME="Arch GNU/Linux"
ANSI_COLOR="1;34"

Locale settings (read man locale.conf for more options) Example:

File: /etc/locale.conf

LANG=en_US.utf8
LC_COLLATE=C

Configure kernel modules to load during boot systemd uses /etc/modules-load.d/ to configure kernel modules to load during boot in a static list. Each configuration file is named in the style of /etc/modules-load.d/.conf. The configuration files should simply contain a list of kernel module names to load, separated by newlines. Empty lines and lines whose first non-whitespace character is # or ; are ignored. Example:

File: /etc/modules-load.d/virtio-net.conf

# Load virtio-net.ko at boot
virtio-net

Configure kernel module blacklist: systemd uses /etc/modprobe.d/ to configure the kernel module blacklist. Each configuration file is named in the style of /etc/modprobe.d/.conf. Empty lines and lines whose first non-whitespace character is # or ; are ignored. Example:

File: /etc/modprobe.d/snd_hda_intel

blacklist snd_hda_intel

SysVinit to systemd cheatsheet

Source: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet

Services

sysvinit Command | systemd Command | Notes

service frobozz start | systemctl start frobozz.service | Used to start a service (not reboot persistent)

service frobozz stop | systemctl stop frobozz.service | Used to stop a service (not reboot persistent)

service frobozz restart | systemctl restart frobozz.service | Used to stop and then start a service

service frobozz reload | systemctl reload frobozz.service | When supported, reloads the config file without interrupting pending operations.

service frobozz condrestart | systemctl condrestart frobozz.service | Restarts if the service is already running.

service frobozz status | systemctl status frobozz.service | Tells whether a service is currently running.

ls /etc/rc.d/init.d/ | ls /lib/systemd/system/*.service /etc/systemd/system/*.service | Used to list the services that can be started or stopped

chkconfig frobozz on | systemctl enable frobozz.service | Turn the service on, for start at next boot, or other trigger.

chkconfig frobozz off | systemctl disable frobozz.service | Turn the service off for the next reboot, or any other trigger.

chkconfig frobozz | systemctl is-enabled frobozz.service | Used to check whether a service is configured to start or not in the current environment.

chkconfig frobozz --list | ls /etc/systemd/system/*.wants/frobozz.service | Used to list what levels this service is configured on or off

Runlevels/Targets

Source: https://fedoraproject.org/wiki/SysVinit_to_Systemd_Cheatsheet

sysvinit Runlevel | systemd Target | Notes

0 runlevel0.target, poweroff.target Halt the system.

1, s, single runlevel1.target, rescue.target Single user mode.

2, 4 runlevel2.target, runlevel4.target, multi-user.target User-defined/Site-specific runlevels. By default, identical to 3.

3 runlevel3.target, multi-user.target Multi-user, non-graphical. Users can usually login via multiple consoles or via the network.

5 runlevel5.target, graphical.target Multi-user, graphical. Usually has all the services of runlevel 3 plus a graphical login.

6 runlevel6.target, reboot.target Reboot

emergency emergency.target Emergency shell

Changing runlevels:

sysvinit Command | systemd Command | Notes

telinit 3 | systemctl isolate multi-user.target (OR systemctl isolate runlevel3.target OR telinit 3) | Change to multi-user run level but has no effect on the next boot.

sed s/^id:.*:initdefault:/id:3:initdefault:/ | ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target | Set to use the multi-user runlevel on the next reboot

Figure out current runlevel

Note that there might be more than one target active at the same time. So the question regarding the runlevel might not always make sense. Here's how you would figure out all targets that are currently active:

$ systemctl list-units --type=target

If you are just interested in a single number, you can use the venerable runlevel command, but again, its output might be misleading.

systemd-devel

The following table gives a short overview of what is missing in systemd for use in GENIVI, e.g. missing functionality or patches.

Area | Requirement | Patch Thread | Short description of patch | Comment/Status

SHM | LC_VEH_024 | restart patch I | In case a service failed to start properly (it exited with a code not 0 - not successful) it has to be restarted a predefined number of times, until it ends successfully or until the predefined number of restarts is reached. For that, there is the option "MaxRestartRetries" which accepts an integer (the number of restarts). This applies only to ".service" units. | Contributed outside of GENIVI

SRM | LC_VEH_024 | cgroup limit patch I, cgroup limit patch II | Patch adds functionality to apply certain cgroup limits, such as shares of CPU, memory limit and blkio. | Contributed outside of GENIVI
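For comparison, the stock way to get a bounded restart behaviour (distinct from the GENIVI "MaxRestartRetries" patch listed above) is the standard Restart= logic together with the start rate limiting options; a minimal sketch for a hypothetical foo.service:

[Service]
ExecStart=/usr/bin/foo-daemon
Restart=on-failure          # restart the daemon whenever it exits with a non-zero code
StartLimitInterval=60s      # time window in which restart attempts are counted
StartLimitBurst=5           # stop retrying after 5 restarts within that window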

systemd performance analysis

This section will be used to detail performance results achieved from tests using systemd.

Optimization

Please read https://wiki.archlinux.org/index.php/Systemd

< TODO: Insert examples >

systemd questions and answers

This section will be used to detail questions and answers with regards to systemd. The table can be used by anyone to add questions/answers for systemd issues.

Area Originator Question State Comments/Answers

Area: Performance - Originator: David Yates - State: Open
Question: Which system lib is planned to be used finally in GENIVI? i.e. glibc or...? The reason this is important for systemd is that it has a large impact on the system start-up, as systemd is dependent on this library and must load this before starting anything else.

Area: Dependencies - Originator: Christian Muck - State: Open
Question: When does systemd exactly resolve the dependencies between units and do we have (read) access to the dependency graph?

Area: Dependencies - Originator: David Yates - State: Closed
Question: Is it possible for systemd to dynamically alter the start-up order (dependency graph) during start-up based on run time events?
Answer: Lennart, AMM Dublin - This could be handled within systemd via CPU prioritization, however this functionality is not currently available. This can be added to the systemd road map but this will not be a short term delivery.

Area: Dependencies - State: Closed
Question: How can user interaction be communicated to systemd?
Answer: Lennart, AMM Dublin - It is just done by starting a service by e.g. a socket request. David, AMM Dublin - E.g. the HMI could send a D-Bus message to the tuner and this would trigger a priority change.

Area: Dependencies - State: Closed
Question: Is the cancellation of an ongoing start-up procedure possible?
Answer: Lennart, AMM Dublin - With systemd there is no start-up phase any longer from the systemd standpoint because we have bus activation. So if you want to cancel a service you can send a signal. So if you write your services right the service can be terminated at any time.

Area: Dependencies - State: Closed
Question: How is the shutdown with systemd realized?
Answer: Lennart, AMM Dublin - In systemd there is no shutdown. So it works like that you just have a target conflicting with everything else, so everything first gets closed down and then the system is shut down (the target is called shutdown). So if you use this and adopt it you can control which services should go down and which just should keep on until the shut down is realized. So what systemd does is that it kills all the processes. You could just change the description file and change the shutdown signal to SIGHUP or something and react in your service adequately.

Area: Communication - State: Closed
Question: Is it possible that a service can send something back to systemd informing that shutdown is finished?
Answer: Lennart, AMM Dublin - What systemd should not be is a generic message broadcast service. So this does sound like you should probably use something else. This reminds me of something we wanted to have for suspend on desktops so that something can be delayed. So this is not implemented but we were thinking also about having a very high level module doing this kind of stuff. The question is if we can get this done in the next cycle.

Area: Units - Originator: Torsten Hildebrand - State: Closed
Question: Can you talk about snapshots and systemd instances?
Answer: Lennart, AMM Dublin - Snapshots are dynamic and can be taken at any time. You can store it in memory under a name, do any change on your system and afterwards return to the snapshot. You guys wanted to have this on disk. So this would be a little fit and convert the snapshots to a target and seems to be a good feature so I will add this. And I just implemented a feature that you can have an option that you can ignore a service within a snapshot. It is not dynamically. You can also create units on the fly and solve the problem by that. These are called generators. So generators run very early in the boot process. So you can get different systemd instances like system-systemd and user-systemd. So it is already implemented to run systemd as a user. So what we want to do is make systemd responsible for user log-ins so that we can exchange the proprietary solutions with a generic one. So this user systemd would just run as a normal service from the view of the system systemd. So they would be completely separate. This would also fit to automotive because you also need user management.

Area: Dependencies - State: Closed
Question: Applications should change their behavior in different start modes.
Answer: Lennart, AMM Dublin - So you have several options: you can use generators, you can create several service configurations, or you can use instantiated services by having one unit file and creating multiple instances, so you can have variables in the service file and pass your variable as parameters to create real instances.

Proof of Concept (POC)

To verify the currently proposed usage of systemd within the GENIVI consortium, a Proof of Concept (POC) will be created to validate the achievable start-up times when using systemd.

This POC will be detailed on this sub-page along with startup figures when they are available.

Sources

The content of this page is a collection of various wiki pages and Lennart's blog.

Systemd installation guides etc

Installing an Ubuntu 11.10 based file system with systemd on Freescale i.MX53 QSB Installing an Ubuntu 11.10 based file system with systemd in a Virtual Machine

SysInfraEGSystemdPOC systemd POC

Introduction

As part of the proposal to use systemd within the GENIVI consortium as defined by the Lifecycle Management architecture it is proposed that a Proof of Concept (POC) is created.

This will, as far as possible, validate the expected system start-up times with defined measurement points.

All interested GENIVI Members are free to create their own POC and add test configurations and figures to this page. It is proposed that this page details the standard measurement points and provides a location for any test results and descriptions.

Obviously it is intended to only record information for POCs where optimized systemd configurations are in place, as opposed to standard distributions, e.g. Fedora 15.

Proposed Measurement Points

The following measurement points should be recorded:

Network Initialization - CAN/MOST initialization and transmission/reception of a message
Welcome Screen - Display of a static screen on the display
First Audio - This is the point where we can output any audible source (mimics the starting of the Park Distance Control (PDC))
First Video - This is the point where a video source can be displayed on the screen (mimics the displaying of the Rear View Camera (RVC))
BASE_RUNNING - This is the point where all mandatory components (base & early features) are started

LUC_RUNNING - This is the point where components required for the Last user Context are up and running (for this POC it is proposed that the LUC requires three applications to be started)

Shell - This is the point where the shell is available for first input

FULLY_RUNNING - This is the point where the start-up is complete, i.e. we have started all components required in the system

For an explanation of the systemd concept and the contents of some of these measurement points please look here.

NOTE: The contents of the point FULLY_RUNNING can not be completely defined as it will always be dependent on the product, however it is measured to simply get a feeling for how much more in the system needs to be started from the previous boot phase.

Continental systemd POC

A proof of concept was created based on the i.MX53 Quick Start board. This was presented at the San Jose AMM and the following use case definition details the start-up process and the timings achieved. The unit files used in the POC are attached to this page. The root directory of the tar file is "systemd" and was located in the directory /lib.