Copyright © 2006. Organising committee of the TRIZ Future Conference 2006, October 9th - 11th, 2006, represented by Simon Dewulf, CREAX, Maarschalk Plumerlaan 113, B-8900 Ieper, Belgium.

All rights reserved. No part of the contents of this book may be reproduced or transmitted in any form or by any means without the written permission of the publisher.

The articles, diagrams, captions and photographs in this publication have been supplied by the contributors or delegates of the conference. While every effort has been made to ensure accuracy, the editors, the organising committee, CREAX and the Katholieke Universiteit Leuven do not under any circumstances accept responsibility for errors, omissions or infringements.

The editing and reviewing procedure of the scientific contributions of the TFC 2006 proceedings has been organised and coordinated by K.U.Leuven under the responsibility of Prof. Dr. Ir. Joost R. Duflou.

Printed in Belgium by Dejonghe
ISBN 90-77071-05-9

ETRIA TRIZ Future Conference 2006

International Scientific Committee
Prof. J. Duflou, K.U.Leuven, Belgium
Prof. D. Cavallucci, INSA Strasbourg, France
Prof. G. Cascini, Univ. of Florence, Italy
Prof. W. H. Elmaraghy, Univ. of Windsor, Canada
Prof. S. C.-Y. Lu, Univ. of Southern California, USA
Prof. G. Schuh, Aachen Univ. RWTH, Germany
Prof. G. Seliger, TU Berlin, Germany
Prof. M. Shpitalni, Technion, Israel
Prof. R. Teti, Univ. of Naples Federico II, Italy
Prof. S. Tichkiewitch, Instit. Nat. Polyt. Grenoble, France
Prof. T. Tomiyama, TU Delft, Netherlands
Prof. R. De Guio, INSA Strasbourg, France
Prof. C. Rizzi, Univ. of Bergamo, Italy
Prof. N. Leon-Rovira, ITESM Tecnologico de Monterrey, Mexico
Prof. B. Bitzer, South Westphalia Univ. of Appl. Science, Germany
Prof. M. Ogot, Pennsylvania State University, USA
Prof. K.-W. Lee, Korea Polytechnic University, South Korea
Dr. Ir. F. D'Hulster, Hogeschool West-Vlaanderen, Belgium
Dr. Ir. W. Dewulf, K.U.Leuven, Belgium
Dr. Ir. K. Hadeli, K.U.Leuven, Belgium
Ing. B. Willems, K.U.Leuven, Belgium

Organising Committee

CREAX: Simon Dewulf, Nele Dekeyser
K.U.Leuven: Prof. Joost Duflou, Ir. Joris D'hondt, Ing. Tom Devoldere
ETRIA: Prof. Denis Cavallucci, Prof. Gaetano Cascini
HOWEST PIH: Ir. Lode De Geyter, Dr. Ir. Frederik D'hulster, Ing. Cies Vanneste, Hilde Van Maele
VCK: Annette Geirnaert, Marjolijn De Geest, Martine Vanremoortele
Stad Kortrijk: Jean de Bethune, Heleen Allegaert

Preface: Academic Quality

Dear TFC2006 Participant,

One of the strengths of the ETRIA TRIZ Future Conferences over the past years has clearly been the strong attendance by practitioners, bringing valuable field experience to the ETRIA community. For this year's edition it was an explicit objective to reinforce the conference as a forum for scientific work dedicated to TRIZ-related research, while preserving the strong practitioner involvement. This dual goal is reflected in the two volumes of the proceedings: one part dedicated to more theoretical research contributions and a second volume reserved for relevant practitioner experience.

The papers in this volume have been submitted to a rigorous screening and review by members of the International Scientific Committee. This committee, composed of internationally reputed academics active in the field of design theory and methodology, was instrumental in assuring the quality of this conference output. We would like to thank the committee members for their valuable feedback. However, without the dedicated effort of all contributing authors these conference proceedings would be but an empty shell. As author, your contribution is greatly appreciated.

Guaranteeing high-quality proceedings is only possible through the logistical support of dedicated people. We would like to acknowledge the significant efforts invested by Tom Devoldere, Joris D'hondt and Nele Dekeyser in this respect: sincere thanks on behalf of the editors and the complete organising committee for making these proceedings print ready.

Printing hard copies of proceedings has become less obvious in this digital age. However, the organising committee is convinced that having a paper copy at hand is beneficial for the efficiency of conference attendance. Furthermore, the availability of hard copies reinforces the more permanent character envisaged for the valuable research contributions communicated at the conference. It is our sincere hope that you will benefit from this book, both during the conference and as a reference in the years to come.

Wishing you a productive and enjoyable conference,

Prof. Joost Duflou, TFC2006 Academic Host
Mr. Simon Dewulf, TFC2006 Industrial Host

Preface: ETRIA's next S-curve

After six years of existence, ETRIA has taken a curve these last months in applying an evident rule: evolution through better harmonisation with its supersystem. While TRIZ is thought to be an unavoidable ingredient of organisations' strategies when facing the difficulties of the innovation era, we can paradoxically observe a stronger rejection from formal authorities such as political systems, scientific communities and management voices of authority. To put it plainly, the popularity of TRIZ grows as much as its unpopularity. Which role does ETRIA need to play in this conflicting and unpleasant situation? Since our aim and common interest reside in a better harmony with the world, let us understand which reproaches are made of TRIZ and react:

1. a lack of clearly expressed theoretical foundations, preventing scientific acceptance in existing communities;
2. a lack of a clearly defined and worldwide recognised body of knowledge, which leads to fuzziness, misinterpretations and consequently reduced usability.

This image, fortunately not shared by everyone, impacts significantly on management decisions regarding investment in relation to TRIZ and in certain countries is slowing down the process of TRIZ dissemination and evolution. To face this situation and evolve, ETRIA has decided to move in two parallel directions. The first is a more scientifically grounded evaluation of its contribution to the world; the second is to participate in the definition of TRIZ's body of knowledge together with two other major worldwide associations.

To achieve the first objective, worldwide reference researchers in scientifically established communities of engineering design have been contacted and invited to join and constitute the TFC2006 scientific committee. As a result, a significant reviewing effort has been made throughout the contributions and has led to the constitution of two parallel sessions in the coming conference (professional experience and scientifically aimed).
To play an important role in the second, a working group has been constituted and will interact with these other worldwide entities for a wider recognition of TRIZ's body of knowledge.

In order to create the possibility for the main actors of the TRIZ community to cooperate and evolve in their mutual understanding, hopefully resulting in an interesting paradigm shift for everyone, ETRIA's contributors have shown this year their openness to being evaluated by their peers, their openness to criticism and their capacity to learn from others; those who are building the world's most significant

advances in engineering design and industrial organizations from all over the world having significant successes in TRIZ implementation.

We are confident that improving the quality and validity of the work presented at ETRIA's conference is an important step towards a more general acknowledgement of TRIZ, both in the scientific and in the industrial world. Eventually this will positively impact TRIZ effectiveness and usability. Indeed, we hope that this step will result in more open doors for being integrated with others, debating and contributing, using TRIZ, to the evolution of our capacity to apprehend the difficulties of the innovation era.

Denis Cavallucci, ETRIA board
Gaetano Cascini, ETRIA board

Partners

ETRIA (www.etria.net)

CREAX, Belgium (creativity for innovation)

K.U.Leuven (Katholieke Universiteit Leuven), Belgium

HOWEST PIH, Belgium

VCK, Belgium

Ondernemerscentrum Stad Kortrijk, Belgium

'ESF: the European contribution to the development of employment, by promoting employability, entrepreneurship, adaptability and equal opportunities, and by investing in human resources'

Table of Contents

Committees ...... I
Preface: Academic Quality ...... II
Preface: ETRIA's next S-curve ...... III
Partners ...... IV
Table of contents ...... VI

Scientific Contributions

. On the complementarity of TRIZ and axiomatic design: from decoupling objective to contradiction identification ...... 1
Joost R. Duflou, Wim Dewulf
. Using TRIZ in the forecasting of the computer role playing games evolution ...... 9
Michal Kurela, Pascal Crubleau, Henry Samier
. Directed variation: variation of properties for new or improved function, product DNA, a base for 'connect and develop' ...... 15
Simon Dewulf

. Towards a rhetoric of TRIZ ...... 23
Conall Ó Catháin
. Fractality of knowledge and TRIZ ...... 31
Victor D. Berdonosov
. OTSM-TRIZ problem network technique: application to the history of German high-speed trains ...... 37
Nikolai Khomenko, Eric Schenk, Igor Kaikov
. Practice-based methodology for effectively modeling and documenting search, protection and innovation ...... 45
Roberto Nani, Daniele Regazzoni
. Systematic design through the integration of TRIZ & optimization tools ...... 53
Gaetano Cascini, Paolo Rissone, Federico Rotini, Davide Russo
. TRIZ based tool management in supply networks ...... 63
Roberto Teti, Doriana D'Addona
. Using TRIZ and human-centered design for consumer product development ...... 71
Alan Van Pelt, Jonathan Hey

. Structuring knowledge in inventive design of complex problems ...... 77
Denis Cavallucci, Thomas Eltzer
. TRIZ for systems architecting ...... 87
Maarten Bonnema
. TRIZ for software architecture ...... 93
Daniel Kluender

. Natural world contradiction matrix: how biological systems resolve trade-offs and compromises ...... 99
Darrell Mann
. Innovation and creativity on logistics besides TRIZ methodology ...... 109
Odair Oliva de Farias, Getulio Kazue Akabane
. Contributions of TRIZ and axiomatic design to leanness in design: an investigation ...... 117
Rohan A. Shirwaiker, Gül E. Okudan

. Conceptual design using axiomatic design in a TRIZ framework ...... 123
Madara Ogot
. Law - Antilaw ...... 133
Vladimir Petrov

Keywords ...... 141
Authors ...... 143


ON THE COMPLEMENTARITY OF TRIZ AND AXIOMATIC DESIGN: FROM DECOUPLING OBJECTIVE TO CONTRADICTION IDENTIFICATION

Joost R. Duflou Dept. Mechanical Engineering, K.U.Leuven, Belgium [email protected]

Wim Dewulf Dept. Mechanical Engineering, K.U.Leuven, Belgium [email protected]

Abstract
Axiomatic Design (AD) has been recognized as a technique for enhancing the analytical capabilities in iterative design procedures, and as such can complement the synthesis support offered by TRIZ. Identification of deficiencies in an existing design through AD, however, does not result in a straightforward formulation of contradictions between engineering parameters as a starting point for problem solving. Translating identified conflicts from the coupled design parameters, as identified by means of AD matrix analysis, to engineering parameter contradictions often requires an intermediate step of abstraction. In this article this observation is illustrated by means of a case study dedicated to heavy duty laser cutting with reactive gas support. A systematic mapping of design-specific, feature-related, independent design parameters to dependent, concept-related engineering parameters is proposed as a means to integrate both methodologies.

Keywords: Axiomatic Design, TRIZ, decoupling, contradiction identification, sheet metal cutting

1. Introduction
While numerous cases illustrate that TRIZ provides a methodology capable of systematic idea generation, the most commonly used TRIZ tools seem less focussed on the analytical step that necessarily precedes the creative phase. Although it is generally recognised that a proper formulation of the problem to be solved offers half the solution, the emphasis in this TRIZ toolbox approach is often more oriented towards solution generation than towards problem analysis. Su-field analysis provides a possible route to problem identification if the system elements are of such a nature that the interactions between them can be recognised based on an expert's insight into the system. Often, however, the negative influences between subjects are insufficiently known to perform the analysis exercise without further means of support. Comparison with the more systematic nature of the analysis methodology offered by Axiomatic Design [Suh, 1988] has led a number of authors to propose procedures in which both methods are used as complementary tools [Mann, 1999], [Young, 2004], [Shin, 2006]. Most models of design activities indeed contain, besides an initial need recognition and problem definition step, consecutive phases that are to be interpreted as iterative loops (e.g. [Pahl & Beitz, 1996], [Dieter, 1996]). A similar scheme is included in reference [Suh, 1988], positioning the proposed AD analysis step in a broader design context. While other authors limit the possible role of TRIZ to decoupling a design in support of an AD procedure, e.g. [Shin, 2006], Mann [Mann, 1999] stresses the possible integration of TRIZ and AD as, respectively, the creative synthesis and the feedback analysis steps in a generic design procedure (Figure 1).

Figure 1: Iterative design procedure after [Mann, 1999]

When focussing on the data streams in this scheme, the compatibility of the respective in- and outputs can, however, be questioned. Does a functional specification, for example, form sufficient input for a TRIZ-based synthesis step? As a problem solving approach, the commonly encountered TRIZ procedures using Su-field analysis start from one or more observed conflicts surfacing as the outcome of initial design efforts. These can be either the result of dissatisfaction with the limited functional performance of an existing design, in which case a design with shortcomings is available as input, or contradictions between engineering parameters perceived during one of the stages of an ongoing design project. In both cases preceding design activities are expected.

While this observation may be of theoretical interest only, the following, more profound question determines the applicability of the proposed integrated procedure: can the output of an AD-based analysis provide improved input for a TRIZ-based synthesis iteration?

In the next sections it will be illustrated by means of a case study that, for more complex systems, the integration of AD and TRIZ as complementary methodologies can be obstructed by a number of hurdles. As illustrative case the design of a laser cutting system capable of processing thick steel plates was chosen. In the following section this application and the state-of-the-art design solution are described in order to provide the necessary background knowledge for reading Sections 3 and 4.

2. Case description: Laser cutting of thick steel plates A short historic overview of the different dominant processing techniques for sheet metal cutting is provided here as an illustration of consecutive development steps that show a high correspondence with one of the TRIZ trends of evolution. Indeed, the consecutive steps that can be predicted based on the ‘Object segmentation’ trend can be clearly recognised in the following summary. Early efforts for systematic cutting of sheet metal foils were typically based on scissor like tools. For heavy duty tasks, guillotine shears with large, monolithic tools form the industrial implementation of this concept (Fig. 2A).

Figure 2: Evolutionary steps in sheet metal cutting and corresponding object segmentation trend (A: guillotine shearing; B: punching; C: nibbling; D: abrasive waterjet cutting; E: pure waterjet cutting; F: oxyfuel cutting; G: plasma cutting)

To increase flexibility, and to allow cutting out blanks rather than cutting through sheets along straight lines only, punching with a limited set of simply shaped tools was introduced. In this process shearing is typically performed by repetitively cutting out rectangular contours in a single stroke using a punch and die set (Fig. 2B). To further enhance flexibility towards non-straight contour edges, nibbling was more recently introduced as a process variant: high frequency punching, normally using a circular tool applied with sufficient overlap between consecutive cut-outs, thus allowing free-form contours to be approximated with a preset accuracy (Fig. 2C). Productivity, tool costs, minimum achievable dimensions and the limited sheet thicknesses that can be processed form some shortcomings of these processes.

Emerging in the 1980s, abrasive waterjet cutting forms the next step in the object segmentation trend. Abrasive particles are used as consumables for cutting away sheet material in an accumulative way. Carried by a high speed waterjet, which transfers the necessary kinetic energy to the abrasive, this particle stream allows cutting with limited cut widths and, given sufficient time, can support processing plates of considerable thickness (>50mm) (Fig. 2D). With a sufficiently high pressure, waterjet cutting with pure water forms the transition to a fully liquid phase cutting tool (Fig. 2E).

Oxyfuel cutting can be considered as a form of cutting based on a gas as the cutting medium (Fig. 2F). While the principle of using local heating through combustion of a gas to melt away plate material along a contour had already existed as a cutting principle since the early decades of the 20th century, the achievable quality was typically not considered sufficient to compete with the techniques mentioned above. Only where the plate thickness would not allow shearing or punching would oxyfuel cutting form an alternative. The output quality of the process also benefited from the introduction of numerically controlled positioning systems in the second half of the century.

Plasma (arc) cutting emerged in the late 1950s as a cutting technique suitable for processing thick plates. The process uses the conductive nature of metal plasma to locally induce energy in the material to be cut (Fig. 2G). The process allows generating much narrower and straighter cuts than oxyfuel cutting. The technique gradually improved towards the high density plasma cutting that became dominant in the early 1990s. The process is capable of cutting plates up to 75-100mm, but typically produces asymmetric cuts.

In the late 1990s laser cutting started taking over the market as the dominant technology. For thin sheets the flexibility, precision and productivity achievable through localised heating by means of a small diameter, high intensity IR light beam (Figure 3) have made this technology the preferred choice for new investments in recent years.

Figure 3: Laser cutting principle (left) and process parameters (right)

For more background information on sheet metal cutting processes, references [Serruys, 2002] and [Trumpf, 1996] provide useful overviews. While these short descriptions and the chronological order in which the evolutionary steps occurred already match the object segmentation trend in a remarkable way, the dominant nature of this evolutionary trend is stressed even more when contemplating the increasing investment cost of the consecutive hardware solutions.

When taking a close look at the state of the art in this domain of manufacturing, it is obvious that the goal of ideality has not been achieved yet: laser cutting is typically limited in the thickness of the plates that can be processed. Hybrid forms of laser and waterjet cutting, as well as laser and oxyfuel cutting, have been suggested to overcome this problem, but have not proven to offer the high quality output and productivity expected from pure laser cutting. While the laser cutting principle, as depicted in Fig. 3, may seem simple, the number of process parameters, and thus the complexity of the process control, is high. Furthermore, technological limitations to the available laser power in combination with the beam quality (intensity distribution) still impose the use of a process variant for heavy duty work. In this so-called reactive gas assisted laser cutting the cutting gas, typically oxygen, not only provides an expulsion force, but also acts as an oxidant, causing an exothermic reaction with the steel plate to be cut when ignited by the laser. These observations and the desire to shift the process window beyond the present boundaries in terms of plate thicknesses that can be processed form the starting point for the inventive problem solving case study presented in the next section.

3. Axiomatic Design analysis of the existing configuration
Analysing the current hardware and control solution in function of compliance verification according to the first AD axiom leads to a mapping of functional requirements and design parameters as shown partially in Figure 4.

Figure 4: Partial view on functional requirement and design parameter mapping

As can be seen from Figure 4, some of the functional requirements (FR's) can be met by individual design parameters (DP's), and thus can be considered as uncoupled (FR3,4←DP3,4). For others, decoupling can easily be achieved by properly sequencing the design parameter tuning (FR5,6←DP5,6). However, for the crucial functionalities of material melting and expulsion a strong coupling can be observed. Indeed, while in the existing configurations the required expulsion force can only be provided by the cutting gas, the gas flow simultaneously determines the heat input into the plate (through exothermic oxidation). Increasing the gas pressure, in order to assure systematic expulsion when cutting thick plates with a narrow kerf width, typically generates burning defects, leading to wide kerfs with an unacceptable surface roughness or even complete failure to cut the plate. The expulsion of molten material is determined, among others, by the viscosity of the melt, which is temperature dependent, and by the width of the molten zone. Both the laser and the oxidation heat source influence these factors. Since the obvious solution of uncoupling the design by switching to cutting with an inert assist gas would require a higher laser power and intensity than can be obtained at economically acceptable cost, laser cutting capabilities are de facto limited to 25-30mm depending on the plate material to be processed.

In contrast with the fairly straightforward AD approach, systematically listing all relationships between substances and influencing fields would require an in-depth insight into the complex nature of the process. Although the process has been intensively studied over the past decades, the difficulty of observing the laser cutting phenomena in process does not allow obtaining the transparent insight required for a systematic Su-field exercise.
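The distinction between uncoupled, decoupled and coupled FR-DP relations made above can be sketched in code. The following minimal example (a hypothetical helper, not part of the paper's method) classifies a boolean AD design matrix with FRs as rows and DPs as columns: a diagonal matrix is uncoupled, a triangular matrix is decoupled (a proper tuning sequence exists), and anything else is coupled, as with the melting/expulsion pair in this case study.

```python
import numpy as np

def classify_design(matrix):
    """Classify an AD design matrix (rows = FRs, cols = DPs).

    Uncoupled: diagonal -> each FR is met by one individual DP.
    Decoupled: triangular -> FRs can be satisfied by sequencing
               the design parameter tuning.
    Coupled:   anything else -> DPs interfere with several FRs.
    """
    A = np.asarray(matrix, dtype=bool)
    off_diag = A & ~np.eye(A.shape[0], dtype=bool)
    if not off_diag.any():
        return "uncoupled"
    # Triangular if either the upper or the lower off-diagonal part is empty.
    if not np.triu(A, k=1).any() or not np.tril(A, k=-1).any():
        return "decoupled"
    return "coupled"

# Hypothetical 2x2 block for the crucial FRs of the case study:
# both melting and expulsion depend on laser power and gas pressure.
print(classify_design([[1, 1], [1, 1]]))  # -> coupled
print(classify_design([[1, 0], [0, 1]]))  # -> uncoupled
print(classify_design([[1, 0], [1, 1]]))  # -> decoupled
```

The 2x2 matrices shown are illustrative stand-ins; the paper's Figure 4 matrix is larger and only partially reproduced.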

4. TRIZ as a decoupling instrument
The use of TRIZ as a tool for resolving coupled design conflicts has been suggested and demonstrated by several authors [Young, 2004], [Zhang, 2004]. In these application-oriented publications, however, the transformational step required to use AD output as a TRIZ contradiction problem specification is not discussed. In the more systematic study by Shin and Park [Shin, 2006], different outcome categories from AD analyses are linked to appropriate TRIZ modules for problem resolution. However, in the examples accompanying this study the translation of the design parameters characterising a coupled design situation into a TRIZ engineering parameter vocabulary is not systematically dealt with. In this section the requirements to link the output of an AD analysis step to a TRIZ redesign procedure are therefore discussed.

While it is obvious in the case study example that the functional requirements are not properly met, and although the conflicting design parameters have been clearly identified, at this stage the identification of an appropriate input for a TRIZ conflict resolving exercise is not evident. On the one hand, functional requirements are insufficient to describe an actual contradiction to be resolved. On the other hand, the conflicting design parameters, as illustrated in Figure 4, do not match the engineering parameters used in TRIZ. A systematic transformation of the AD output is therefore required if a procedural design method, based on AD and TRIZ integration, is envisaged.

Typical for the TRIZ engineering parameters is that they are 'dependent parameters' that can be influenced in many different ways. A parameter such as 'weight', for example, can be affected by a range of independent parameters, such as the material selection (via specific weight), by volume adjustment, by varying the porosity, etc.
While independent parameters are typically specified in a detailed design stage, as part of the final technical part or product specification, underlying, dependent parameters are closely related to the functional specification defined at the outset of a design project. Systematic generation of input for TRIZ problem solving therefore requires re-determination of the functional parameters underlying the independent design parameters. For the case study, the transformation requires zooming out to a higher level of abstraction than the detailed design level used in the example of Figure 4. Table 1 illustrates this for the coupled FR’s and DP’s of the case study.

Table 1: Transformation from independent DP’s to dependent engineering parameters

| AD functional requirements (functional spec.) | TRIZ engineering parameters (dependent, conceptual design) | AD design parameters (independent: embodiment design / detailed design) |
| localised melting (localised heat supply) | temperature distribution | laser source outcoupling / laser power control |
| melt expulsion (gas supply) | pressure distribution | gas flow velocity profile / gas supply pressure valve control |

While abstracting the design parameters involved in the coupling to be eliminated, at a given point the one-to-one mapping between design-specific and abstracted parameters (Table 1) is no longer possible. At this point the transition from a coupling conflict to a contradiction situation is typically reached: the coupled relations that can be detected in the AD design matrix are replaced by contradictions between the identified, abstracted parameters. Further abstraction, until the level of dependent physical parameters is reached, allows identifying the appropriate TRIZ engineering parameters involved in the contradiction to be resolved.

Using the contradiction table, the case study problem can now be solved by applying the appropriate TRIZ inventive principles. In the case of a wanted (increased) pressure without the negative influence on the temperature distribution (unwanted burning effects), the Matrix 2003 version of the contradiction matrix advises, respectively, inventive principles 35: Parameter changes, 3: Local quality, 19: Periodic action, and 2: Taking out.

A scan of recent publications and patent applications identifies at least two attempts to deal with the described problem. One is using a specially designed nozzle configuration in which a second gas stream is created, oriented deep into the cutting kerf and locally reinforcing the gas stream to increase the expulsion pressure [Neidhardt, 1993]. Depending on the cutting direction, the location of the second gas stream relative to the laser beam and main gas stream is adjusted by an additional numerically controlled axis. This concept obviously makes use of the 'Local quality' inventive principle: to prevent an escalating oxidation process the expulsion jet is only reinforced where strictly needed.
A second patent involves cutting with an oscillating gas pressure [Fernandes, 1993], which is claimed to lead to improved surface quality and thus to an expanded process window for given quality requirements. This proposal fits the 'Periodic action' principle and, in terms of AD, can be considered an effort towards decoupling the thermal energy input from the ejection pressure by introducing the oscillation frequency and amplitude as extra design parameters. In terms of complexity, both proposed solutions require additional subsystems and control functions to be integrated, obviously increasing the total complexity of the overall design.
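The contradiction-table lookup described above can be sketched as a dictionary mapping an (improving, worsening) parameter pair to a list of inventive principle numbers. This is only an illustration: the parameter strings and helper function are hypothetical, and the single entry encoded is the one reported in the case study (Matrix 2003, increased pressure vs. worsened temperature distribution), not the actual Matrix 2003 data.

```python
# Classical TRIZ numbering for the four principles named in the text.
PRINCIPLES = {
    35: "Parameter changes",
    3: "Local quality",
    19: "Periodic action",
    2: "Taking out",
}

# One (improving, worsening) entry, taken from the case study; a full
# contradiction matrix would hold such a list for every parameter pair.
CONTRADICTION_MATRIX = {
    ("pressure", "temperature distribution"): [35, 3, 19, 2],
}

def suggest_principles(improving, worsening):
    """Return the suggested inventive principles for a contradiction."""
    ids = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [f"{i}: {PRINCIPLES[i]}" for i in ids]

print(suggest_principles("pressure", "temperature distribution"))
```

Both patented solutions discussed above instantiate principles from exactly this list ('Local quality' and 'Periodic action' respectively).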

5. Conclusions
By means of the case study of reactive gas assisted laser cutting, it was shown that a direct transformation from identified, coupled FR-DP relations in AD to a contradiction problem description is not evident. The design-specific DP's used when analysing an existing design solution with AD are situated at a different level of abstraction than the generic engineering parameters used in TRIZ. As was illustrated in Table 1, for later stages of a design procedure the number of steps required to reach a generic engineering parameter level is higher.

A systematic mapping of independent, design-specific parameters to dependent, generic engineering parameters allows this hurdle to be overcome. This effort can be limited, since the mapping is only required for design parameters involved in coupled relationships, as identified by a proper AD analysis. At the level of abstraction where one-to-one mapping becomes impossible, the contradiction to be solved becomes visible. Additional steps may still be required to obtain a clear dependent engineering parameter contradiction problem that can be systematically solved by appropriate TRIZ techniques.

While the case study used in this article may illustrate these observations in a comprehensive way, the analysis of this problem does not result in a systematic solution for the need to abstract design-specific parameters. The distinction that can be made between independent and dependent parameters, however, is helpful to determine the need for further steps in the abstraction process. As such, the suggested transformation procedure can contribute to the integration of AD and TRIZ in a systematic design methodology, making optimal use of both the analytical strength of Axiomatic Design and the systematic support of synthesis activities offered by TRIZ.
On a higher level, the philosophical question can be posed whether the heuristics-based evolutionary trends recognised in TRIZ can indeed be compatible with the belief in absolute design quality as expressed in the AD axioms. Recognising the dominance of either approach may be inevitable to avoid this dilemma. Further research and extensive practitioner feedback are therefore required to validate the effectiveness and efficiency of the proposed integration scheme.

6. References
1. Dieter, G.E., 1996, Engineering design: a materials and processing approach, McGraw-Hill, Boston (Mass.)
2. Fernandes, A.V., Gabzdyl, J.T., 1993, Improved apparatus for the thermic cutting of materials, European patent application EP0533387
3. Mann, D., 1999, Axiomatic Design and TRIZ: Compatibilities and Contradictions, TRIZ Journal, June 1999, TRIZ Institute
4. Pahl, G., Beitz, W., 1996, Engineering design: a systematic approach, Springer, Berlin
5. Neidhardt, G., 1993, Laser nozzle, US patent US005220149
6. Serruys, W., 2002, Sheet metalworking: State of the art, LVD Company, ISBN 90-807224-2-1
7. Shin, G.-S., Park, G.-J., 2006, Decoupling process of a coupled design using the TRIZ, Proceedings of the 4th Internat. Conf. on Axiomatic Design, ICAD2006, Firenze, June 2006
8. Suh, N.P., 1988, The principles of design, Oxford University Press
9. Trumpf GmbH Ditzingen, 1996, Faszination Blech: Flexible Bearbeitung eines vielseitigen Werkstoffs, Dr. Josef Raabe Verlags GmbH, ISBN 3-88649-187-0
10. Young, J.K., Skuratovich, A., Pyeong, K.C., 2004, TRIZ Applied to Axiomatic Design and Case Study: Improving Tensile Strength of Polymer Insulator, Proceedings of the TFC2004 Conference, Firenze, November 2004
11. Zhang, R., Tan, R., Cao, G., 2004, Case Study in AD and TRIZ: A Paper Machine, TRIZ Journal, March 2004, www.triz-journal.com

USING TRIZ IN THE FORECASTING OF THE COMPUTER ROLE PLAYING GAMES EVOLUTION

Michal KURELA Laboratory Presence & Innovation [email protected]

Pascal CRUBLEAU Laboratory Presence & Innovation [email protected]

Henry SAMIER Laboratory Presence & Innovation [email protected]

Abstract
This research aims to find the patterns that exist in computer role-playing game (CRPG) design and to determine whether the system evolution laws of TRIZ (Theory of Inventive Problem Solving) are applicable to them. Only part of the technical evolution laws was explored, and only for selected subsystems of CRPGs, because a complete analysis would constitute a much longer paper. The research was essentially qualitative. In conclusion, it shows that the TRIZ evolution laws match many instances of CRPG subsystem evolution paths, which makes it possible to propose directions for the future development of CRPGs.
Keywords: CRPG, role-playing game, system evolution, TRIZ

1. Introduction

1.1. Role Playing Game (RPG)
Computer Role Playing Games (CRPGs), considered in this article, are themselves a genre within the bigger family of games, so their origins lie long before the creation of computers. Pen and Paper (PnP) RPGs existed before CRPGs; they appeared at the beginning of the 1970s in the USA. Historically the first RPG, "Chainmail", was created by Gary Gygax (1). The first CRPG was created around 1975 by Don Daglow on PDP-10 mainframe computers (2).

Role-playing games (RPGs) allow gamers to play the part of a character and interact in the game world. They typically send the players' characters (PCs) on a major quest, often made up of smaller adventures (quests). Players are able to develop their characters, earning new skills and abilities by fighting battles or completing quests. While RPGs are traditionally associated with swords-and-sorcery fantasy, they can be set in any place or time (3).

1 "History of Dungeons and Dragons", http://www.planetadnd.com/historyofdnd.php, last access: 1/04/2006
2 http://en.wikipedia.org/wiki/Dungeon_%28computer_game%29, last access: 1/4/2006
3 Definition of CRPG, http://www.allgame.com/cg/agg.dll?p=agg&SQL=GXD|||||||26, last access: 1/04/2006

According to John H. Kim (4), real RPG games can be distinguished by the fact that the player is able to "detach" himself from his real self and think as his game character; the genre is thus very much centred on the notion of psychological immersion.

1.2. TRIZ and its laws of system evolution
TRIZ (Theory of Inventive Problem Solving) contains, among many other tools, the laws of system evolution formulated by G. S. Altshuller (5) and then developed by numerous scientists. They are used in this paper to define the steps leading to the Ideal Final Result (IFR) for CRPGs, and their origins; the IFR fixes the terminal point on the path of the evolution. TRIZ's technical system definition was used to describe the different subsystems and to help structure the research.

2. Defining CRPGs as a Technical System
TRIZ describes technical systems (TS) as entities containing a working tool, an engine, a transmission, a control and a casing. CRPG systems contain software and hardware components, and each of them consists of the five parts mentioned above. This study is focused on the software's working tool and engine:

The working tool, here, is the CRPG's user interface: the graphical and physical interface generated by the software, which allows the player to control his PC and to get feedback on his actions. On the hardware side, the peripherals (keyboard, mouse, screen, speakers, etc.) allow interaction with the game through the human senses and manipulating body parts (e.g. hands). The engine consists of the game engine software generating data, which is converted to human-readable form by the working tool. The whole system is supported by hardware, which requires electrical energy to function.

Even if the artistic part of CRPG systems is not technical, studies such as Zlotin et al.'s (6) show that TRIZ can be applied to many non-technical areas, including poetry, music and cartoons, as well as the system of Science Fiction idea classification created by G. Altshuller, which is analogous to his classification of invention levels. Altshuller used an approach based on reading, discussing and creating fantastic ideas to help inventors increase their creative imagination; the process can be reversed and used for idea generation in fantastic worlds (as was done by Boris Strugatskiy). Emotional aspects, used for example in CRPG quests, can be treated as shown by Kowalick (7).

Modern CRPGs are generally composed of: quests (their descriptions and algorithms); dialogues between characters (contents, scripts, interface); characters which interact actively with the game world to perform quests and with one another (by fighting, discussing, trading, etc.); the game world and its objects; game rules (e.g. defining the chance to hit an enemy); and the physical system defining how the player can interact with the game.

4 Kim J. H., "What is RPG", http://www.darkshire.net/~jhkim/rpg/whatis/, last access: 1/4/2006
5 Altshuller G. S., 1984, "Creativity as an Exact Science: The Theory of the Solution of Inventive Problems", Gordon and Breach Science Publishing, New York
6 Zlotin B., Zusman A., Kaplan L. et al., 1999, "TRIZ Beyond Technology: The theory and practice of applying TRIZ to non-technical areas", Ideation International Inc., Detroit, MI, www.triz-journal.com/archives/2001/01/f/index.htm, last access: 1/04/2006
7 Kowalick J. F., "The TRIZ Approach - Case Study: Creative Solutions to a Human Relations Problem", TRIZ Journal, www.triz-journal.com/archives/1997/11/b/index.html, last access: 1/04/2006

3. Mapping CRPGs to the TRIZ evolution patterns

3.1. Game population
Since 1975, several hundred CRPGs have been created for different hardware platforms. To conduct the present study, its perimeter was reduced to CRPGs made for Personal Computers in a non-Japanese style (Japanese games are very different in terms of gameplay and story). Massively Multiplayer Online RPGs (MMORPGs), which are played essentially over the Internet, were also excluded. The scope of this study includes CRPGs produced between 1981 and 2003.

3.2. Methodology
According to S. Savransky (8), because of the lack of time dependence in the evolution of a TS, it is impossible to predict the evolution of such systems using quantitative methods. CRPGs as a system depend on a very wide range of factors, which have varied over time. For example, the simplest measure of the level of innovation might be the sales of these games; however, these numbers are meaningless given the context of the market (9), which was close to ideal competition in the 1980s and is constrained in the 2000s by an oligopoly structure. Quantitative research on patents is also pointless in the case of CRPGs, since many of their innovations concern objects which cannot be patented (storyline, quest types, game rules as mathematical formulae, etc.).

The only way to evaluate the evolution of CRPGs is currently through qualitative research. It is based on detailed tests of a set of games for functional analysis and appreciation of the games' features. It includes a review of the opinions of players and industry experts found in 5 important knowledge bases on computer games (10), to diminish the bias of the testers. A cumulative knowledge base was developed to trace the differences in games' features, taking into account their relations (such as games' publication dates or development teams' composition). It used semantic and statistical analysis to populate, organise and exploit the data, because most CRPG information sources do not apply a scientific approach to the games' classification and description. Milestone games were selected to illustrate the technology transitions described in the next sections. The current results of the comparison are organised by theme of the chosen evolution laws of TRIZ.

8 Savransky S. S., "Forecast of Technical Systems", in "Engineering of Creativity - Introduction to TRIZ Methodology of Inventive Problem Solving", p. 347
9 Cook D., 2005, "My Name is Daniel and I am a Genre Addict - The impact of psychological addiction on the game industry", http://www.gamedev.net/reference/articles/article2227.asp
10 http://www.jeuxvideo.fr/, http://www.gamerankings.com, http://www.allgame.com, http://www.mobygame.com, http://www.gamekult.com
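As a minimal sketch of the cumulative knowledge base described above, games can be modelled as dated records whose distinct values along one feature axis yield a transition path. All entries, feature names and the schema here are illustrative placeholders drawn from milestones quoted later in the paper, not the actual database used in the study.

```python
from dataclasses import dataclass

@dataclass
class GameRecord:
    title: str
    year: int
    features: dict  # feature name -> qualitative value

# Toy entries; titles and years follow milestones quoted in the text.
KB = [
    GameRecord("Dungeon", 1975, {"space": "text (1D)"}),
    GameRecord("Ultima IV", 1990, {"space": "2D"}),
    GameRecord("Baldur's Gate", 1998, {"space": "2D isometric"}),
    GameRecord("Morrowind", 2002, {"space": "3D"}),
]

def transitions(kb, feature):
    """Return the dated sequence of distinct values of one feature."""
    path = []
    for g in sorted(kb, key=lambda g: g.year):
        value = g.features.get(feature)
        if value is not None and (not path or path[-1][1] != value):
            path.append((g.year, value))
    return path

print(transitions(KB, "space"))
# [(1975, 'text (1D)'), (1990, '2D'), (1998, '2D isometric'), (2002, '3D')]
```

Sorting by publication date and collapsing repeated values is enough to expose the 1D-to-3D path discussed in section 4.3.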

4. Selected results

4.1. How did the idea of the CRPG appear?
CRPGs originate from the axiom of Technical Evolution (TE) stating that "both the quantity and quality of human needs, as well as requirements for humans, increase with time".

Fig.1 - Principal TS contributing to the creation of CRPGs: chess (ancient, around the 2nd century BC); performing arts (since the 6th century BC: theatre, drama, comedy, music, dance, opera, magic, motion pictures); literature (from ancient myths, epics and romances to modern novels); military wargames (1780); hobby wargames by H. G. Wells (1913); fantasy and SF literature (beginning of the 20th century); computer games (1947); RPG (1973) by Gary Gygax; CRPG (1975).

Fig.2 – Multidimensional character of the CRPGs' IFR: ideality increases along the dimensions of physical interaction, intellectual interaction, dynamics, controllability, immersion and tangibility of the system.

Taking into account human needs, there was a strong trend in the 1970s to dream of and imagine other worlds, because of the sociological, political and economic context. RPGs appeared in the USA (fig.1), for which that time was a hard period: defeat in the Vietnam War, stagflation and high inflation rates caused by the oil shocks of 1973 and 1979, and increasing poverty. The appearance of personal computers then allowed the emergence of CRPGs.

4.2. Ideality
The law of increase of the degree of ideality was defined by Petrov (11) as follows:

I = Σi Fi / Σj Pj    (1)

where I is the degree of ideality, Fi a function delivering a positive effect, Pj a negative effect or expense, i the index over the functions F and j the index over the negative effects P.
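Formula (1) can be evaluated directly as a ratio of summed effects; the effect scores below are invented placeholders for illustration, not measured values.

```python
def ideality(positive_effects, negative_effects):
    """Degree of ideality: sum of positive effects F_i over sum of negative effects P_j."""
    return sum(positive_effects) / sum(negative_effects)

# Hypothetical scores for a CRPG: delivered functions vs. costs and harms.
F = [8.0, 6.0, 7.0]  # e.g. immersion, interactivity, story depth
P = [3.0, 2.0]       # e.g. hardware cost, interface friction
print(ideality(F, P))  # 21.0 / 5.0 = 4.2
```

Evolution towards the IFR then corresponds to raising this ratio: adding delivered functions while shrinking costs and harms.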

11 Petrov V., 2002, "The Laws of System Evolution", TRIZ Journal, www.triz-journal.com/archives/2002/03/b/index.htm, last access: 1/04/2006

The ideal CRPG (fig.2) is a game which does not show any interface (physical immersion), places the player in an infinite number of worlds and contexts which are original, and allows total intellectual immersion (total association of oneself with the played character). The player has ideal interactivity with the game world, including story-related elements (e.g. intelligent dialogue responses) and ideal interaction with all physical senses. The game allows doing things that are impossible in the real world at our current level of technology (changes in physical laws, use of magic). From an engineering point of view, the idea of "The Matrix" would express an ideal solution very well.

4.3. Regularity of using space
This is one of the most visible evolution factors. CRPGs have developed from text-based (1D) games, e.g. "Dungeon" (1975), through different 2D stages to fully 3D games which feature 3D characters and items, for example "Morrowind" (2002).

4.4. Transition to the super-system
This rule has quite often been the reason for major breakthroughs in the CRPG world: joining several computer game genres together into bi- or multi-systems resulted in major advances. A very good example is given by Real Time Strategy (RTS) games such as "Dune 2" (1992) and "Command and Conquer" (1995), whose tactical control of units (PCs in CRPGs) resulted in the creation of the real-time isometric CRPGs of the "Baldur's Gate" (1998) family, which replaced most of the more complicated mechanisms of party control existing before, such as the systems in "Ultima IV" (1990) or "Eye of the Beholder" (1991).

4.5. The law of increasing su-field interactions in a system
In the context of CRPGs this law concerns the increase in the dynamics of intellectual and physical exchanges.
Early CRPGs, such as "Ultima I", exchanged mostly intellectual, abstract terms and ontologies (the substance) using a specific form, such as the style of dialogues (the field). The su-field model is used here analogously to D. Mann (12) in the business context, where substances are business partners and fields are communication means.

Physical interactions were eventually developed by the inclusion of graphical and physical components of a higher level (first 2D and then 3D environments). Control system development, i.e. the use of the mouse instead of the keyboard starting from "Ultima VI", allowed including the player's strength, agility and precision (for example in "Morrowind" and "Arx Fatalis"). The evolution in this domain can follow the development of Virtual Reality technology, gradually including new senses and body parts of the player in the interaction with the game world.

Intellectual interactions (ideas being the substance) increased their dynamics by artistic means such as specific game level design (style being the field). This was very simple in the beginning because of hardware barriers (screen resolutions, numbers of colours and computation power) and later became much more sophisticated, allowing artistic liberty (e.g. "Planescape: Torment"). Music in CRPGs followed the same schema: very simple and of low quality at the beginning of CRPG history, by the end of the 1990s game music had reached a quality level previously found only in the cinema or music industry (e.g. "Fallout"). A drama context can be introduced using "cutscenes" (scripted staging of game characters using the game engine's capacities). A very good example is "Baldur's Gate 2" (2000), which builds its ambiance on cutscenes of the PC's dreams.

12 Mann D., "Application of TRIZ Tools in a Non-Technical Problem Context", TRIZ Journal, www.triz-journal.com/archives/2000/08/a/index.htm, last access: 1/04/2006

This evolution will continue as CRPGs become part of the arts, as has happened with cinema. Driven by demographics, the community will decrease its psychological inertia towards CRPGs, which are still considered toys for children, adolescents and unserious "kidults" (13), since those groups themselves treat CRPGs seriously.

5. Conclusion

Despite the lack of patents in the CRPG domain and the relatively chaotic information structure of this industry, it was possible to match examples of CRPG evolution paths with the TE laws of TRIZ, which allows further prediction of CRPG development. It was equally possible to express the artistic, intangible part of CRPGs in terms of substance (ideas, values, ontologies) and field (form of expression), which constitutes a base for many TRIZ tools and opens this part of CRPGs to potential TRIZ-based improvement. TRIZ can be used by game developers to treat software and hardware problems and, in addition, art content and game mechanics problems. The research will be continued to match the su-field interactions law with both the physical and the artistic part of CRPGs.

All these facts indicate that the underlying approach, using a knowledge base and TRIZ together with semantic and statistical analysis, is a promising direction of research for the resolution of inventive problems in domains where there are few patents and other structured data sources.

6. References
Altshuller G. S., 1984, "Creativity as an Exact Science: The Theory of the Solution of Inventive Problems", Gordon and Breach Science Publishing, New York
Cook D., 2005, "My Name is Daniel and I am a Genre Addict - The impact of psychological addiction on the game industry", http://www.gamedev.net/reference/articles/article2227.asp, last access: 1/04/2006
Crubleau P., 2002, Doctoral thesis: "The future generations identification of industrial products. Proposition of a method using the TRIZ evolution laws"
"History of Dungeons and Dragons", http://www.planetadnd.com/historyofdnd.php, last access: 1/04/2006
Kim J. H., "What is RPG", http://www.darkshire.net/~jhkim/rpg/whatis/, last access: 1/4/2006
Kurela M., 2005, "Projet Teacher 3D", ISTIA Innovation
Petrov V., 2002, "The Laws of System Evolution", TRIZ Journal, www.triz-journal.com/archives/2002/03/b/index.htm, last access: 1/04/2006
Savransky S. S., 2000, "Evolution of Technique", in "Engineering of Creativity - Introduction to TRIZ Methodology of Inventive Problem Solving", Chap. 7, CRC Press LLC
Savransky S. S., 2000, "Forecast of Technical Systems", in "Engineering of Creativity - Introduction to TRIZ Methodology of Inventive Problem Solving", App. B, CRC Press LLC
Zlotin B., Zusman A., Kaplan L. et al., 1999, "TRIZ Beyond Technology: The theory and practice of applying TRIZ to non-technical areas", Ideation International Inc., Detroit, MI, www.triz-journal.com/archives/2001/01/f/index.htm, last access: 1/04/2006

13 http://www.game-research.com/statistics.asp, last access: 1/04/2006

DIRECTED VARIATION: VARIATION OF PROPERTIES FOR NEW OR IMPROVED FUNCTION. PRODUCT DNA, A BASE FOR 'CONNECT AND DEVELOP'

Simon Dewulf Managing Director CREAX [email protected]

"The lexicon is really an appendix of the grammar, a list of basic irregularities" - Bloomfield (1933)

Abstract: This paper builds up a procedure to connect previously unrelated domains in order to transfer existing knowledge to a given guest domain. The connections are based on properties (what a system is or has) and functions (what it does or undergoes). The abstraction of any system into its property-function strings reveals a system or 'product DNA', a base for charting out innovation directions as described in Directed Variation®. Based on product DNA, related domains and/or products can be identified that act as a source for knowledge transfer. Combining the strengths of language technology and directed variation, the process can largely be automated. The process brings a new capability to TRIZ-based methodologies; the theory behind it is explored in this paper.
Keywords: properties, functions, directed variation, TRIZ, product DNA, computer aided innovation

1. Introduction
The methodology of TRIZ is rooted in the comparison of domains, products and systems, extracting patterns for problem solving and idea generation. How relevant domains are identified in order to connect them to an existing domain is documented in this paper, which proposes a base for connection between products, processes and systems based on the comparison of their properties and functions. DNA is a breakdown of a system into genes (which represent properties) that are combined into amino acids that shape a protein, being the function. This model in the abstract fits the theory of directed variation, from which the concept of 'Product DNA' was derived.

What is brittle, shaped like a plate, transparent, and mainly used to look through while insulating against rain and wind? It is not very difficult to solve the riddle: a window. By listing the properties of a system, one can distinguish one object from another. According to Princeton WordNet (1), a property is an attribute or dimension (a construct whereby objects or individuals can be distinguished). This is the first part of our product DNA string. Secondly, adding the main use of the system, 'to look through', brings the function: the purpose, role or use (what something is used for) (1). In order to connect a 'window' with other products or systems, commonality is needed. A window is as transparent as glasses, or it is as brittle as ceramic. Through the first connection, one could transfer self-shading glasses technology into windows, or add a strengthening coating developed in the ceramics area. A function connection also exists with glasses, as both windows and glasses are made to look through. One might think of changing the optical properties of the window, similarly to corrective glasses, to see further through the window.

Fig. 1. General model for connection through property or function (left); example procedure for windows (right)

Figure 1 shows the basic model. Extracting abstract properties and functions from a product or system brings generic terms. These terms can connect to products or systems that abstract in a similar way. The connected products are surrounded by knowledge or technology (specifications, patents, ...) that can be transferred to the product or system of interest. A connection is defined as a relation between things, whereas a relation is defined as an abstraction belonging to, or characteristic of, two entities or parts together (1).
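The connection step of this model can be sketched as a lookup over shared abstract terms. The toy catalogue below is built from the window/glasses/ceramic example in the text; the term names are illustrative, not a real knowledge base.

```python
# Toy catalogue: each product maps to its abstracted property/function terms.
CATALOGUE = {
    "window": {"transparent", "brittle", "look-through"},
    "glasses": {"transparent", "look-through", "corrective"},
    "ceramic tile": {"brittle", "hard"},
}

def connections(catalogue, product):
    """List other products sharing at least one abstract property/function term."""
    terms = catalogue[product]
    return {other: sorted(terms & other_terms)
            for other, other_terms in catalogue.items()
            if other != product and terms & other_terms}

print(connections(CATALOGUE, "window"))
# {'glasses': ['look-through', 'transparent'], 'ceramic tile': ['brittle']}
```

Each shared term is a candidate channel for knowledge transfer: the 'transparent' link suggests importing self-shading glasses technology, the 'brittle' link a strengthening coating from ceramics.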

2. TRIZ Relation - Directed Variation

TRIZ relation – Trends

Genrich Altshuller's work identified some 8-12 original patterns of evolution in patents. Work at CREAX proposed 19 newly identified trends (2). The patterns (also referred to as trends of evolution) describe a path of changing property. An example of an original pattern is solid – hollow – porous – capillary – (active capillary). An example of a newly proposed pattern is opaque – translucent – semi-transparent – transparent – (active transparent) (2). These patterns uncover nothing more than paths of changing property: porosity and transparency in the quoted examples. This insight provides an efficient opening to identify any remaining patterns, simply by evaluating all possible properties. Moreover, the finding reduces the trends to changing properties of systems or products. The approach has led to the development of the Directed Variation® method (3). Some TRIZ patterns combine two properties: for example, 'Dynamisation' combines flexibility with state, 'Object segmentation' combines fragmentation with state, and 'Controllability' combines 'amount of information' with 'degree of automation'. Directed Variation has purified these patterns into single-property patterns (3). Changes in properties relate to changes in function of the product or system. This relates to the 'benefits' (2) in trends of evolution, and will be elaborated below.

TRIZ relation – Principles
One of the oldest tools in the TRIZ methodology is the set of 40 inventive principles (77 including the Special and Combined ones (4)). All principles except 22, 'Blessing in disguise', and 13, 'The other way round', fit into a property change: for example, 'self-service' → automation, 'hole' → porosity, 'dynamisation' → flexibility. Principles 22 and 13 are more related to the thinking mechanism of a creative attitude. Reducing both principles and trends to property changes turns the tool into a straightforward procedure that is much more accessible to any engineer. The relation between principles, trends and standards (discussed below) is thereby defined by their abstract relation to property changes. The theory of the '40 inventive principles' is hereby greatly challenged by this checklist of property changes (many more than 40, e.g. 'temperature'), elaborated in Directed Variation.

TRIZ relation – Conflicts and contradictions
Thinking in conflicts and contradictions has been a great contribution of Altshuller's theory and constitutes an axiom of TRIZ. Be it through the contradiction matrix, Matrix 2003 (4) or thinking in physical contradictions, the process indicates that conflicting requirements can be solved using a set of inventive principles. Referring to the two paragraphs above, a conflict can be expressed in 'conflicting properties' (5). The resolution of conflicting properties is reduced to changing a property, which is linked to su-fields below. E.g., strength conflicting with weight can be resolved by changing fragmentation, homogeneity or porosity.

TRIZ relation – Function database
As accessible through a free web resource (6), knowledge can be classified by function (rather than alphabetically). This classification opens a wide range of solutions for a challenge function. The classical example: in how many ways can one empty a glass of water without touching the glass? A brainstorm brings up: use a straw, heat the environment, or just wait. The function database answers with 48 ways to move a liquid (6). In order to achieve this function, a property needs changing. Directed Variation® indicates that 10 properties can be changed to achieve this function; i.e. porosity (+ concentration) includes solutions like osmosis and absorption, and temperature (+ pressure) includes boiling and evaporation. All 48 solutions can be classified within these 10 property changes.
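The classification itself is a grouping of solutions under the property change that achieves the function. In the sketch below, only the osmosis/absorption and boiling/evaporation pairs come from the text; the remaining entries and their tags are hypothetical placeholders.

```python
from collections import defaultdict

# Hypothetical subset of the 48 "move a liquid" solutions, each tagged with
# the property change (Directed Variation spectrum) that achieves the function.
SOLUTIONS = {
    "osmosis": "porosity + concentration",
    "absorption": "porosity + concentration",
    "boiling": "temperature + pressure",
    "evaporation": "temperature + pressure",
    "siphoning": "pressure",       # tag is an assumption
    "pouring": "orientation",      # tag is an assumption
}

def by_property_change(solutions):
    """Group solutions under the property change that delivers the function."""
    groups = defaultdict(list)
    for solution, prop in solutions.items():
        groups[prop].append(solution)
    return dict(groups)

print(by_property_change(SOLUTIONS))
```

Extended to all 48 solutions, such a grouping collapses the function database onto the 10 property changes the text mentions.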

TRIZ relation – Su-fields
Su-field analysis suggests changes in a substance (i.e. changing a property of a substance), adding a new substance (i.e. adding a substance exhibiting different properties to the whole system) or adding a field (a state change). The completeness of a working system here relates to completing the necessary properties for the required function. Again this is part of the Directed Variation theory, as it comes down to changing properties and thus functions.

3. Directed Variation®: the basic process
Through Directed Variation® CREAX aims to reduce most tools of the TRIZ methodology to a straightforward procedure (7). In itself, this is more of an enrichment than a reduction, as the tool can now be expanded to any domain and brings a much more accessible framework for the inventive engineer; any domain is defined by its specific properties and functions. As clarified above, most of the functional tools of TRIZ can now be summarised as a negotiation of the properties of a system for new or improved function. Rather than describing 'changing property paths' or the TRIZ trends of evolution, Directed Variation® identifies pure property spectra. A property spectrum depicts the variety, range or scale in which a property is variable. Take the property spectrum 'state': it includes the properties solid – liquid – gas – field. A summary of the method (8) is given in Fig. 2 below:

X the product, the process or service, the system, thing or article subject to study; mainly expressed in a noun. Examples: table, pen, car, bank, restaurant.

function the purpose of X, its useful action, what X does or undergoes, mainly expressed in verbs and related to the technologies. Examples: joining, cleaning, wearing, measuring.

property a variable; its attributes, what X is or has, mainly expressed in adjectives and related to the sciences. Examples: hollow, smooth, transparent, strong, flexible.

spectrum property spectrum, the variety, the range or scale in which a property is variable Examples: porosity, surface, flexibility, strength.

property X => function

example: X = ruler; property = jointed; spectrum = flexibility; function = folding. E.g. a jointed ruler folds.

Fig 2. Basics of Directed Variation

The beauty of the system is that, whilst the relation between a property and a function is solid, it is independent of products. This abstraction gives the grounds for connecting products, systems and/or technologies (3). Any transparent system connects to the function 'to look through', any jointed system connects to 'folding', any protruded system connects to more surface or grip. This allows the construction of idea-generating matrix texts in which the variations are connected to any product, to be interpreted by the reading engineer (3). An example formulation is given below in Fig. 3.

Your porous PRODUCT is easier to transport as it reduces the weight. Your hollow PRODUCT can contain a related substance. Making your PRODUCT flexible or jointed allows you to fold or direct your PRODUCT to be more precise or compact. A protruded surface on your PRODUCT gives more surface area, which can provide better heat transfer or grip.

Fig. 3. Extract of idea generation matrix text; fill in any product (3)

4. Product DNA® – graphical representation

Fig 4. Product DNA real and stretched out

As an example, consider sugar cubes. If the property spectra porosity, surface, flexibility and some more, as shown in Fig. 4 above, are stretched out, one can indicate (with dots) the specific properties of our product 'sugar cubes'. Again, looking at the Product DNA above, combined with its main function, sugar can be identified as in a riddle. Below (Fig. 5) is a top view of Figure 4, in which every spectrum is twisted 16 degrees. The map can be used as a base for connection. Through a Product DNA search in patents, medical products, chocolate pellets, ice cubes, effervescent tablets and dishwash tablets appear as connected.


Fig 5. Product DNA of sugar, viewed from above. Right: Product DNA of a close match (72%), a dishwash pellet

Figure 5 (right) shows a 72% (8 of 11) matching connection: the dishwash pellet. The 28% mismatch of the dishwash pellet is inspiration for the sugar cube: for example, add a second layer with milk powder (components), or indicate the difference with colour (colour and information). By evaluating the range of dishwashing products, their variations may inspire liquid sugar pads or liquid/spray dispensers. Since these new variations have solved problems in their area, they can solve similar problems for sugar, for example liquid sugar dissolving faster. Other domains above bring effervescent sugar cubes, or sugar ice cubes for iced coffee, as further examples. These examples show that the property or function connections schematically depicted in Fig. 1 form ideal connections for comparative innovation. Since these related domains can be identified by computer algorithms, the searches are unbiased by human knowledge and thereby often produce many unexpected solutions.
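The matching itself reduces to counting the spectra on which two DNA profiles coincide. In the sketch below, only the 8-of-11 ratio mirrors the text; the individual spectrum values for sugar and the dishwash pellet are invented for illustration.

```python
def dna_match(dna_a, dna_b):
    """Count shared property spectra on which two products coincide."""
    shared = [s for s in dna_a if s in dna_b]
    hits = sum(1 for s in shared if dna_a[s] == dna_b[s])
    return hits, len(shared), hits / len(shared)

# Invented spectrum values for 11 spectra; 3 deliberately differ.
sugar = {"porosity": "porous", "state": "solid", "shape": "cubic",
         "unity": "pressed grains", "colour": "white", "surface": "rough",
         "flexibility": "rigid", "transparency": "opaque",
         "components": "single", "information": "none", "scent": "none"}
pellet = dict(sugar, colour="green", components="layered", information="printed")

hits, total, score = dna_match(sugar, pellet)
print(hits, total, round(score, 2))  # 8 11 0.73
```

The mismatching spectra (colour, components, information here) are exactly the dimensions the text mines for inspiration, such as the second layer of milk powder.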

5. Computer Aided Creativity: data mining through directed variation (patent pending)
A small but important finding: remembering Bloomfield's quote at the top of this paper, a good structure for automated data mining was uncovered. Since properties are not explicitly listed in patents, language structure brings the distinction. A property is what a product is or has, i.e. a transparent window, a hollow window, a brittle window or a double window. Each property here is expressed as an adjective, a property-adjective. Some exceptions are verbal adjectives, such as breakable glass or cleanable glass, mainly ending in -able. Nevertheless, by searching adjectives, one can distill all related properties of a system; an example case was described in reference (8). An adjective is, moreover, defined as a word that expresses an attribute of something (1). Adjectives, and thus properties, are related to the science of the system. The functions of a system, for example holding, cutting, cleaning or draining, are described through verbs. Verbs are defined as content words that denote an action or a state (1). By charting all verbs connected to a system, one can structure all related functions. In order to research property-function connections, adjective-verb relations are investigated.
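Since the actual mining tooling is not public, a toy sketch with hand-made lexicons (standing in for a real part-of-speech tagger) can show the shape of the extraction. All word lists and the sample claim below are hypothetical.

```python
import re

# Toy lexicons standing in for a real part-of-speech tagger; the word lists
# reuse property-adjectives and function-verbs quoted in the text.
PROPERTY_ADJECTIVES = {"transparent", "hollow", "brittle", "porous", "flexible", "smooth"}
FUNCTION_VERBS = {"holding", "cutting", "cleaning", "draining", "folding"}

def extract_property_function(text):
    """Split a patent-style sentence into property-adjectives and function-verbs."""
    words = re.findall(r"[a-z]+(?:-[a-z]+)?", text.lower())
    # Verbal adjectives ending in -able also count as properties, per the text.
    props = sorted({w for w in words if w in PROPERTY_ADJECTIVES or w.endswith("able")})
    funcs = sorted({w for w in words if w in FUNCTION_VERBS})
    return props, funcs

claim = "A transparent, breakable pane for cleaning and draining."
print(extract_property_function(claim))
# (['breakable', 'transparent'], ['cleaning', 'draining'])
```

Counting co-occurrences of the extracted adjectives and verbs over a patent pool would then yield the adjective-verb (property-function) relations the section describes.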

Fig. 6. DIVA-generated property plot of propeller patents (limited patent pool). Numbers depict the number of patents; positions show the variation.

Figure 6 illustrates an automated search (9) identifying the patents that describe propellers with distinct properties. The survey indicates 94 transparent propeller patents, 94 fully flexible propeller patents, 230 smooth propeller patents and 92 surfaced ones. The main patented area appears to be porous propellers. If this search is segmented over time, one can evaluate the evolution of propeller design, as similarly discussed in reference (8) on piston rings. An important input relates to the non-patented properties: for example, no patents were identified on scented propellers, where one could imagine their use in air conditioning systems.

6. Solution Marketing: systematic identification of new markets for existing products
Starting from Product DNA, one can identify, for every property and its connected functions, which other domains require the product. The sugar property of self-dissolving in water, for example, has been used in road construction: the sugar, mixed into the concrete, dissolves with rainfall, giving the necessary porosity to drain the water. Also in concrete, the emulsifying property of soy peel is used to create a more homogeneous material. Referring to the paragraph above, this process can largely be automated and has successfully been applied in the raw materials industry. The general process is shown in Fig. 7.

Fig. 7. Solution Marketing process based on Product DNA: example from windscreen wiper to floor cleaning

7. Conclusion and application of Directed Variation®
In order to connect two items, they have to have similarities, as defined above. Properties and functions form a string of product DNA, as shown below.

Property to function (first 2)          Function to property (first 2)

Porosity:     1. contain   2. cool      Transport:  1. porosity     2. unity
Surface:      1. hold      2. cool      Hold:       1. surface      2. shape
Flexibility:  1. fold      2. hold      Store:      1. flexibility  2. unity
Transparency: 1. view      2. inspect   View:       1. color        2. transparency
Shape:        1. assemble  2. fit       Assemble:   1. unity        2. porosity
State:        1. dissolve  2. flow      Dissolve:   1. state        2. unity

Fig. 8. Linked property – function relationships in Directed Variation

The first column illustrates a process that starts from each property and changes that property to gain a new or improved function; for example, hollow sugar can contain something (10). The right column is the opposite process: knowing the desired function, which properties should be changed? If I want the sugar to dissolve, I change the property 'state' to obtain liquid sugar, or 'unity' to obtain finer powder sugar. Related to the TRIZ methodology, Directed Variation® brings a measurable, rigorous approach to innovation. The proposed methodology transforms the functionality of TRIZ tools into basic property-function relationships. Through the use of language technology, Directed Variation® allows property-function searches to be automated. This makes it possible to identify related products and technology domains that can act as inspiration for product or system innovation. Through the properties and functions of an existing system (Product DNA®), new markets can be identified that require those properties and functions. This research can be largely automated.
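The two lookup directions can be sketched as a pair of tables. The code below is an illustrative toy in plain Python, not the DIVA software: it is seeded with the property-function pairs listed above, and the inverted table is derived from those toy pairs only, so its entries need not match the paper's right-hand column exactly.

```python
# Toy "product DNA" lookup in both directions, seeded with the illustrative
# property -> function pairs from the left column above.

property_to_function = {
    "porosity":     ["contain", "cool"],
    "surface":      ["hold", "cool"],
    "flexibility":  ["fold", "hold"],
    "transparency": ["view", "inspect"],
    "shape":        ["assemble", "fit"],
    "state":        ["dissolve", "flow"],
}

# Invert the table: for a desired function, which properties could be varied?
function_to_property = {}
for prop, funcs in property_to_function.items():
    for func in funcs:
        function_to_property.setdefault(func, []).append(prop)

print(property_to_function["porosity"])   # ['contain', 'cool']
print(function_to_property["hold"])       # ['surface', 'flexibility']
```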

8. References
1. WordNet, http://wordnet.princeton.edu/
2. Mann, D., (2002), Hands-on Systematic Innovation, CREAX Press, Ieper.
3. Dewulf, S., Directed Variation, Solving Conflicts in TRIZ Part 3, TRIZ Journal, Nov. 2005.
4. Mann, D., Dewulf, S., Zlotin, B., Zusman, A., (2003), Matrix 2003: Updating the TRIZ Contradiction Matrix, CREAX Press, Ieper.
5. Dewulf, S., Directed Variation, Solving Conflicts in TRIZ Part 1, TRIZ Journal, Sept. 2005.
6. http://function.creax.com
7. Dewulf, S., Directed Variation, Solving Conflicts in TRIZ Part 2, TRIZ Journal, Oct. 2005.
8. Dewulf, S., Lahousse, B., Theeten, V., Directed Variation® Piston Ring Case, TRIZ Journal, Jan. 2006.
9. CREAX, Directed Variation®, DIVA innovation suite, released September 2006.
10. Dewulf, S., Directed Variation® Talent of the Product, TRIZ Journal, Dec. 2005.

Directed Variation, registered ® 0773180 BE. Product DNA ® 1111891 BE. © CREAX 2000-2006. All rights reserved.

TOWARDS A RHETORIC OF TRIZ

Conall Ó Catháin Senior Lecturer in Architecture, Queen’s University, Belfast, N. Ireland. [email protected]

Abstract “The function of Rhetoric, then, is to deal with things about which we deliberate, but for which we have no systematic rules.” (Aristotle: Rhetoric). If we substitute the word ‘Design’ for the word ‘Rhetoric’ this statement could be the introduction to a text on design theory. This paper puts forward the view that a parallel can be drawn between rhetoric, design and TRIZ in particular. Aristotle taught that rhetorical communication involved three components: the speaker, the audience, and the speech itself. The paper goes on to describe briefly the system of Aristotle's Rhetoric in order to give some insights into the parallel. A prominent part of rhetoric is Invention. This has been variously interpreted at different times as the discovery of ways of persuading the audience of the speaker's point of view, or alternatively, the discovery of ways of improving mutual understanding between them. There is a clear parallel with design. The paper suggests that the conceptual and check-list structure of TRIZ may be seen to resemble some of the technical and other aspects of rhetoric, yielding what might be termed a rhetoric of TRIZ. Keywords: Rhetoric, Invention, Design, Communication, User, TRIZ

1. Introduction
The word ‘rhetoric’ has come to mean showy or florid, often misleading, language. In this paper it is used in its strict meaning of the art of discovering the means of persuading an audience to agree with a speaker’s point of view, including the associated techniques that were first elaborated in antiquity. In spite of a ‘bad press’ in recent centuries, the teaching of rhetoric has persisted in the areas of English composition and Law right up to the present day, admittedly in an attenuated form. In fact it has recently had something of a revival. The evolution of the Greek πολις (polis) or city-state of the fifth century BC was unprecedented. The polis was a small, politically independent, democratic state. Centred around the Aegean Sea there developed an enormous number of these πολεις with what we would now regard as very small populations. When a polis grew too big, a group of its citizens would leave the μητροπολις (metropolis) or mother-city and found a colony elsewhere. Kitto (1951, p66) says that only three had a population of more than 20,000 citizens, i.e. adult male citizens, excluding women, children, slaves and foreigners. How this may have come about is not clear. Kitto explains it as follows. The typical Greek citizen was a farmer who preferred to live in the town and walk out to work in the field, spending his leisure talking in the town or village square. The physical geography of Greece made the transport of goods difficult. There was therefore no economic interdependence strong enough to overcome the desire to live in small communities. According to Kitto, the gods, “arranged for the Greeks to have the eastern Mediterranean almost to themselves long enough to work out what was almost a laboratory experiment to test how far, and in what conditions, human nature is capable of creating and sustaining a civilization. 
… Therefore this lively and intelligent Greek people was for some centuries allowed to live under the apparently absurd system which suited and developed its genius … and made it what afterwards became, a race of brilliant individuals and opportunists.” (Kitto, pp69/70) This produced the conditions for the birth of δημοκρατία (demokratia) or democracy, that is, ‘people-rule.’ People made decisions in town meetings. Everyone could speak and vote.

2. Rhetoric: the technology of persuasion
In this kind of society, where decisions were made as a result of public discussion, meetings and speeches, it is easy to see why the art of rhetoric developed. For an individual it was very important to be able to make good speeches in order to convince others of his point of view. Rhetoric first received systematic study in the Greek city-states of Sicily after the fall of the tyrants there (Burn, 1966, p250). The methodology, brought to Athens in 427 BC by Gorgias, became a central part of every Athenian’s education. Numerous books were written on the subject, which was perfected by Aristotle. Briefly, three categories of rhetoric evolved, each with a distinct function:

- Judicial: for judging past actions
- Deliberative: for deciding future actions
- Epideictic: for praising the virtuous or blaming wrongdoers

In each case there is a speaker, an audience to be convinced and an argument. Aristotle identified three aspects of persuasion which should be considered in getting an audience to agree with the speaker’s point of view. The ’εθος (ethos) of a speaker referred to his character or standing and credibility. Then as now the ethos of a speaker was a critical success factor. The λογος (logos) was the argument employed. Arguments were carefully chosen from a structured array of prepared strategies. Finally the παθος (pathos) of the audience was its degree of receptiveness to emotional appeals to their sense of loyalty, sympathy, fairness and so forth. The successful ‘ρητωρ (rhétór) or orator would convince the listeners through the conscious and simultaneous targeting of all three persuasive aspects.

’εθος (ethos)
It is essential to the design process that the client trust the designer to satisfy his wishes. There are some designers who have a great deal of credibility with certain audiences, whose names are even household words. The ’εθος (ethos) of a designer can be considered to be his broad reputation and standing. People assume that if an object has been designed by a famous designer then it must be good. People will pay more for a ‘designer’ watch. Every city with pretensions to being ‘world-class’ should have a gallery or museum designed by a member of an elite group of famous architects. The client wants some of the mystique that the famous designer exudes. Interestingly, a strong reputation also inhibits criticism: it takes courage to suggest that the Emperor has no clothes. The needs of the audience or user are often forgotten. This straightforward formulation does not apply in some circumstances: competitions and forgeries. In architectural competitions the designer’s identity is kept hidden. Each anonymous entry has to convince a “jury” that it is the best design. The “jury” normally contains people of high, possibly international, standing. These competitions may also have separate “technical assessors,” who make sure that the winning design is technically viable, in case the great and the good of the “jury” should choose a design that turns out to be deficient. The winning design then assumes the collective ethos of the “jury.” In the rhetoric of forgery, people try to impress others by wearing a fake Rolex watch, for example. Here is a form of communication that is false but still serves its purpose precisely.

λογος (logos)
People will naturally evaluate a design before buying it. Consumers make comparisons. The fashion-conscious have a list of “must-have” brands for personal items such as trainers and clothing. Manufacturers go to great lengths to create brand loyalty. Where something cannot be bought in the market, some procurement method has to be followed, as in the cases of architecture and space travel. In all these cases people become convinced that they want a particular product. The various forms of communication and persuasion that bring about this conviction constitute the λογος (logos). Obviously this does not automatically equate to logical thinking or objective truth, as the space shuttle disaster showed. There have been a great many cases where an architect’s client trusted him to produce a satisfactory design, only to be disappointed with the result, as studies have shown (Ó Catháin, 2003).

παθος (pathos)
From the original meaning, “to suffer,” this is the passion or susceptibility of the consumer or client. Fashion ‘victims’ epitomise this aspect of design. Other manifestations exist too, for example people’s desire to have the latest mobile phone or other gadget. People can be convinced by an appeal to their emotions to buy or accept something unsuitable. In the United Kingdom consumer law allows for a “cooling-off period” that, for a short time after a purchase is made, lets people change their mind about certain kinds of purchase. In architecture it is not uncommon for clients to be persuaded to accept design proposals that sacrifice their emotional needs to those of the architect (Ó Catháin, 2003).

3. The organization of knowledge and reasoning
Aristotle draws a parallel between dialectical logic and rhetoric. He states that where the former uses induction and the syllogism, rhetoric uses example and the enthymeme (a syllogism in which one part of the argument, thought to be so obvious as not to need stating, is missing), example corresponding to induction and enthymeme to deduction. “When we base the proof of a proposition on a number of similar cases, this is induction in dialectic, example in rhetoric; when it is shown that, certain propositions being true, a further and quite distinct proposition must also be true in consequence, whether invariably or usually, this is called syllogism in dialectic, enthymeme in rhetoric.” (Rapp, 2002). Writing about common sense as a tool of reasoning, Lonergan (1957) says that there is certain communal knowledge shared among ordinary people; that they can reason and make judgements about practical affairs; and that they can have a high probability of being right in their particular deliberations. But because of the certainty that comes from deductive methods, and the standing of science in the modern world, many people have been under the illusion that the only certain knowledge is scientific knowledge, and that all other knowledge is inferior to it (the extreme positivism of Lord Kelvin is an example of this position). If reasoning is not deductive, that does not mean that it cannot be rational or logical or indeed practical. This kind of reasoning is widely employed in rhetoric. Aristotle’s section on ‘relative expediency’ below gives an exhaustive list of cases. TRIZ is essentially a system for organising human knowledge to facilitate creativity and retrieval. It has been devised in such a way that users can access relevant knowledge even in areas previously unknown to them. Whereas TRIZ developed out of technical systems, rhetoric is concerned with human nature, as the importance given to the three aspects of ethos, logos and pathos has already attested. Both TRIZ and rhetoric also extend to taking human behaviour into account. In a similar way, rhetoric organises human knowledge for its purpose: it is, in the words of Aristotle, “the technique of discovering the persuasive aspects of any given subject-matter.” (Aristotle: Lawson-Tancred, 1991, p65). After setting out the most important subjects to be mastered by the orator, Aristotle3 then describes deliberative rhetoric, which, as will be seen, has close similarities to TRIZ in its organisational approach. “The business of deliberation and advice is to present the advocated course of action as likely to promote some desired end, so that by investigating the ends of conduct, those things which men tend to seek out, that we will discover the sources of deliberative persuasiveness.” (Aristotle: Lawson-Tancred, p86). Aristotle goes on to discuss the components of happiness:

- good birth
- wealth
- good repute
- honour
- health
- good old age
- friends
- good fortune
- virtue

Some of the above are subdivided into further elements in the course of the discussion. Of course these are mostly not under the control of the individual. “One does not deliberate about whether, but about how, to be happy.” (Lawson-Tancred, p91). What should be done to promote happiness? Aristotle again: “Now the political or deliberative orator's aim is utility: deliberation seeks to determine not ends but the means to ends, i.e. what it is most useful to do. Further, utility is a good thing. We ought therefore to assure ourselves of the main facts about Goodness and Utility in general. We may define a good thing as that which ought to be chosen for its own sake; or as that for the sake of which we choose something else …” (Aristotle: Rapp). The Good and the Expedient are then discussed, some further subdivided:

- self-sufficiency
- a greater in lieu of a lesser good
- the virtues
- happiness
- virtues of the soul
- virtues of the body
- wealth
- friendship
- recognition
- verbal and practical capacity
- native wit
- being alive
- justice

3 This paper does not presume to précis the work of the great man: it uses parallels where it finds them.

Aristotle’s next section on ‘relative expediency’ provides reasons and thus arguments for preferring one thing to another.

- if the greatest advantage of one kind exceeds that of another
- when a accompanies b, but not b a
- things that exceed by more are greater
- that whose productive cause is greater
- the eligible in itself more than the not eligible in itself
- if one thing were to be an end and the other not
- what needs less of some other thing
- when one thing does not exist or cannot come into existence without a second
- what is relatively rarer
- what is harder
- that is the greater good whose contrary is the greater evil
- things whose functions are more or less noble are greater goods
- those things are greater goods, superiority in which is more desirable or honourable
- the excess of better things are better
- a thing is more honourable or better than another if it is more honourable to desire it
- things of which the sciences are nobler or more serious
- the properties of better men
- what the better man would choose
- the more pleasant is greater than the less pleasant
- the nobler is greater than the less noble
- those things also are greater goods which men desire more earnestly
- longer-lasting things are better than shorter-lived ones
- what all choose is greater than what not all choose
- that is the better thing which is considered so by competitors or enemies
- things that are more praised
- things for which the punishments are greater are greater evils
- things that are better than others admitted or believed to be good
- the same effect is produced by piling up facts in a climax
- the home-grown is better than the acquired
- the best part of a good thing is particularly good
- things more needed are more useful
- of two things that which leads more directly to the end in view is the better
- the possible rather than the impossible
- those things which are of service when the need is pressing
- what aims at reality is better than what aims at appearance
- all things that we wish to be rather than to seem
- things more useful for many purposes
- what is relatively painless and produces pleasure
- of two things that which added to the original makes the whole greater
- things whose presence is noticed rather than not
- that which is dearly prized is better than what is not

These arguments are directly applicable to design, and display the sort of reasoning often used intuitively by designers, especially where quantities are not involved. In some cases what this list brings out is not perhaps so much the designer’s brief as the flavour of the advertising world with its promise of the good life obtainable from consuming.

4. Concluding Observations
Lonergan’s analysis makes it plain that the kind of commonsense knowledge just described, and codified for practical use by Aristotle, is not only equal in value to deductive knowledge, but also a necessary part of scientific progress. “To regard them as rivals or competitors is a mistake, for essentially they are partners and it is their successful co-operation that constitutes applied science and technology, that adds invention to scientific discoveries, that supplements inventions with organisations, know-how, and specialized skills.” (Lonergan, 1957, p298) It is clear that the kinds of deliberative rhetorical reasoning needed for deciding future actions are exactly the same as those needed by designers when designing, and by consumers when evaluating existing designs. “The problem … is not the construction of the arguments but rather the initial discovery of these premises. For this activity rhetorical tradition has the technical concept of invention and invention can be said to be the primary subject of Aristotle’s Rhetoric. … The Rhetoric might indeed be called an encyclopaedia of invention.” (Lawson-Tancred, p19). “Usually TRIZ thought has been seen as a ‘product’ of the laws of dialectics by Marx and Engels who, in turn, used Hegel's metaphysical dialectics as a basis for their advancement.” (Anon., 2006). Although he was born in 1926, it is tempting to speculate that Altshuller, the founder of TRIZ, may have had some exposure to Aristotelian ideas. Even if he did not, there is still a debt to Aristotle, as can be inferred from this comment on Hegel's work: “Hegel came to be one of the main targets of attack by the founders of the emerging ‘analytic’ movement, Bertrand Russell and G. E. Moore. For Russell, the revolutionary innovations in logic starting in the last decades of the nineteenth century had destroyed Hegel's metaphysics by overturning the Aristotelian logic on which it was based … .” (Redding, 2006). 
It is not being suggested here that there is any straightforward mapping between rhetoric and TRIZ: rhetoric is only part of the story. The core of TRIZ is the identification of contradictions. But Altshuller did not invent this idea: it is precisely the method of Dialectic used by Aristotle and many others. The identification of a contradiction leads to a new formulation. Aristotle states that Rhetoric is a counterpart to Dialectic: they are tools to be used as appropriate. This paper has no more than scratched the surface of the subject, but it is clear that, just as TRIZ makes extensive use of lists (usually) backed up by scientific knowledge, rhetoric also makes extensive use of lists, backed up by practical advice on how to choose arguments tailored to the particular circumstances and audience or user. The Rhetoric, backed up by Aristotle’s great corpus of other writings, makes a formidable system, in his words, “to deal with things about which we deliberate, but for which we have no systematic rules.” Hence it offers designers a way of reasoning practically about issues during the process of design.

5. References
Anonymous, referee, (2006).
Burn, A. R., (1966), The Pelican History of Greece. Harmondsworth, Penguin Books.
Kitto, H. D. F., (1951), The Greeks. Harmondsworth, Penguin Books (1974 edition consulted).
Lawson-Tancred, H. C., (1991), Aristotle: The Art of Rhetoric (Translated with an Introduction and Notes). London, Penguin Books (2004 edition consulted).
Lonergan, B., (1957), Insight, A Study of Human Understanding. London, Longmans, Green and Co., Ltd. (edition consulted: New York, 1970, Philosophical Library).
Rapp, C., (2002), “Aristotle's Rhetoric,” The Stanford Encyclopedia of Philosophy (Summer 2002 Edition), Edward N. Zalta (ed.), URL =
Redding, Paul, “Georg Wilhelm Friedrich Hegel,” The Stanford Encyclopedia of Philosophy (Fall 2006 Edition), Edward N. Zalta (ed.), forthcoming URL = .
Ó Catháin, C., (2003), Designer Denial – the Dark Side of Architectural Practice, Journal of the Asian Design International Conference, Vol. 1, Asian Society for the Science of Design, Tsukuba, Japan, Oct. 2003, ISSN 1348-7817. http://www.6thadc.com/webmaster/re_e/CD/6thADC.html

FRACTALITY OF KNOWLEDGE AND TRIZ

Victor D. Berdonosov, Komsomolsk-na-Amure State Technical University, [email protected]

Abstract
This paper considers one possible way of resolving the contradiction “volume of knowledge versus mastering”. A procedure for systematizing knowledge on the basis of its fractality is proposed, supporting the assumption that knowledge, like everything in nature, is fractal. There are analogues in the development of traditional natural objects, such as crystals and plants, and of knowledge. The procedure for systematizing applied knowledge is illustrated by the example of the development of dynamic-type core storage. Keywords: contradiction “volume of knowledge - mastering”, systematization of applied knowledge, self-similarity, fractal.

1. Introduction
The material offered here develops the idea of using TRIZ for systematizing education /1/. The main contradiction within any educational system is between the volume of delivered knowledge and the time required for mastering it. Any subject contains both knowledge and data. For example, the Archimedean principle is knowledge, whereas the relative densities of water and of gold are data. There are no fundamental difficulties in accumulating data. It is not necessary to remember them, because there are a great number of reference books, both paper and electronic. These books can be compared with a bag or a box in which the data are stored.

[Figure 1 residue: the diagram showed a person systematizing knowledge through numbered blocks, (1) selection of the field of knowledge, (2) formulation of initial concepts, (3) rules of “transition”, (4) set of concepts of a field of knowledge, (5) a “new” concept, (6) check of the reliability of the “new” concept, drawing on a data domain (electric drive, computer system engineering, etc.) and on methods of development (theory of development of creative imagination, DCI, TRIZ, LDS).]

Figure 1. Structure scheme of the systematization of knowledge using the TRIZ methodology

Any datum can easily be found there. The question of knowledge is considerably more difficult. We are not satisfied with “a bag” of knowledge; it takes a lot of time to sift through it. Besides, knowledge is constantly developing. A good specialist differs from a bad one (or a student) by his systematized knowledge, yet systematized knowledge as a rule is not taught. The logical system of TRIZ helps eliminate this weakness and solve the contradiction between the volume of knowledge and the time required for mastering it (see figure 1).

2. About system fractality
The development of knowledge can be compared with the development of natural objects: plants, crystals, animals. Mandelbrot showed in his works that everything in nature is self-similar, that is, fractal. The self-similarity observed in the fern is presented in figure 2.

Figure 2. Examples of self-similar objects in nature

Simply put, development is realized by self-recurrence, self-imitation of an initial specimen or pattern. Let us illustrate this position with the example of crystal growing. The shape of a crystal is defined by the seeding grain (the initial specimen). Three components are necessary for crystal growing: a seeding grain, constructional material and the rules of construction. We try to obtain the structure of a fern using Mandelbrot’s fractal geometry. We take the geometrical object represented in figure 3a as “the seeding grain” (only the three upper lines are considered the “seeding grain”; the vertical line located below does not belong to it). Let us formulate the rules of “transition” from the current into the higher state of the system, i.e. the rules of growing the “fractal fern”: a proportionally reduced copy of the entire model substitutes each element (line) of the prototype; thus, one step of the iteration is realized (see figures 3b, 3c).

Figure 3. One-step realization of the iteration of the construction of a fractal image

In general, the number of iterations is unbounded, and the more iterations are realized, the more adequately the fractal model matches the real object and the nearer “the fractal fern” is to the real one. In figure 4 the sequence of iterations is represented: zero (prototype), the third, the fifth and the eighth.
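The "seeding grain plus transition rule" scheme can be sketched in a few lines of code. This is an illustrative toy, not the author's software: the seed geometry is invented, and the rule is the one quoted above, replacing every line segment with a proportionally reduced copy of the whole seed.

```python
# Illustrative toy: grow a fractal figure by replacing every line segment of
# the current figure with a similarity-transformed copy of the whole seed.
# Points are complex numbers; a segment is a pair (start, end).

def iterate(segments, seed):
    """One 'transition' step: map the seed onto every segment."""
    grown = []
    for a, b in segments:
        t = b - a                        # similarity taking the unit segment 0->1 to a->b
        for p, q in seed:
            grown.append((a + t * p, a + t * q))
    return grown

# A "Y"-shaped seeding grain drawn on the unit segment (made up for the sketch)
seed = [(0, 0.5), (0.5, 0.5 + 0.3j), (0.5, 0.5 - 0.3j), (0.5, 1)]

figure = seed
for _ in range(3):
    figure = iterate(figure, seed)       # each pass multiplies the segment count by 4

print(len(figure))                       # 4 * 4**3 = 256 segments after three passes
```

As in the fern example, a handful of iterations already produces a figure whose segments could be plotted to approximate the self-similar shape.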

Figure 4. A fractal model of a fern

It should be noted that even in plants as simple as the fern, the prototypes and the rules of “transition” from iteration to iteration are more complex than in the example given above. However, the basic principle of self-similarity is present in all plants. Animals have a more complex process of self-similarity. It is possible to assume that all the necessary information about the prototypes (patterns) in animals is placed in the genes /2/, and that the laws of nature determine the rules of “transition”. Incidentally, it is the knowledge of these laws that is the basic purpose of education.

3. About knowledge fractality
The evolution of self-developing (“living”) organisms can be represented in the following sequence: crystals, algae, corals, ferns, fishes, higher plants, birds, mammals, man. That is, at the present moment man possesses the highest level of complexity. It is possible to assume that in man fractality should appear not only on the physical level, but also at the spiritual level, i.e. in consciousness. A man experiences the world with his consciousness, i.e. a system of knowledge is constructed. It is then natural to assume that knowledge is also fractal: knowledge is the reflection of the world picture, and if the world is fractal, knowledge is fractal too. We now present the analogues of the concepts pattern, resources and iterative rules for the objects above. In Mandelbrot’s fractal geometry a pattern is an initial geometrical object (see table 1). The prototypes (images) of fractal (fundamental) knowledge are the axioms of the corresponding field of knowledge. In fractal geometry “the rules of construction” are the iterative rules, according to which a proportionally reduced copy of the initial geometrical object replaces each fragment of the geometrical object. The iterative rules of fractal (applied) knowledge are Substance-Field (Su-Field) conversions and the ways of solving contradictions. Actually, all new knowledge appears after solving a standard contradiction between the old knowledge and new facts that have appeared as a result of observation of the world. Boris Zlotin and Aleksandr Lyubomirskii /3/ mentioned this in their work on the solution of research problems. “Construction materials” are resources in the TRIZ sense. When the resources for the development of a science become exhausted, an association of sciences appears to obtain a new volume of resources, in strict correspondence with the law of transition to the super-system.

Table 1. Analogues of pattern, resources and iterative rules

The Fractal Geometry. Pattern: initial geometrical object. Resources: geometric space. Iterative rules: exchange an element of the specimen for a topologically transformed copy of the specimen.
Crystals. Pattern: the seeding grain (image). Resources: salt solution. Iterative rules: doubling of the seeding-grain lattice.
Plants (fern). Pattern: protogene. Resources: soil mineral substance. Iterative rules: cell division.
Animals. Pattern: genes. Resources: proteins, fats, carbohydrates from food. Iterative rules: cell division.
Fractal fundamental knowledge. Pattern: axioms, starting positions. Resources: observations, facts. Iterative rules: developing axioms according to the principles of integration (“Dao bears one, one bears two, two bear three, three bear all” (parable 42) /4/).
Fractal applied knowledge. Pattern: fundamentals of the corresponding problem domain. Resources: resources of the problem domain. Iterative rules: developing axioms according to Su-Field conversion and the ways of solving contradictions.

4. Examples of the systematization of knowledge
Here are some examples from the area of computer technology. The discussion deals with the systematization of already known knowledge, not the discovery of new knowledge. The first example relates to dynamic-type core storage, DRAM (Dynamic Random Access Memory) /5/. One of the main advantages of this memory is its extraordinary simplicity and, as a consequence, low cost. The nucleus of a dynamic memory microcircuit consists of many cells, each of which stores only one bit of information. The “heart” of a cell is a simple device consisting of one transistor and one capacitor. One of its central shortcomings is its relatively low speed, which is determined by the system of addressing. Problems with addressing appear as a result of the huge quantity of cells in the memory microcircuit (up to a billion). The applied knowledge here relates to core storage: the knowledge of recording, storing and reading out information. This knowledge needs to be mastered only once, since these processes are realized identically in all DRAM; only the system of addressing changes. The simplest and most natural system of addressing is direct addressing, in which each cell has its own address line. A natural contradiction arises here: as the capacity of the storage (the quantity of storage cells) increases, the quantity of address lines increases inadmissibly. This contradiction is solved by the method of “transition into another dimension”. At the physical level the cells are united into a rectangular matrix whose horizontal lines are ROWs and whose vertical lines are COLUMNs or PAGEs. In this solution the quantity of address lines is reduced to the order of the square root of the quantity of storage cells. Nevertheless, the contradiction formulated above appears again, but now at the level of the supersystem: as the capacity of the storage increases, the quantity of address lines (outlets) of the microcircuit increases inadmissibly. 
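The scaling argument above can be made concrete with a toy calculation. The function names and the cell count are ours, and the counting follows the text's model of one physical line per cell, row or column, not a real DRAM pinout.

```python
import math

# Back-of-envelope sketch of the addressing contradiction described above:
# number of address lines needed for N one-bit cells under three schemes.

def direct_lines(n_cells):
    """Direct addressing: a dedicated address line per cell."""
    return n_cells

def matrix_lines(n_cells):
    """Square row/column matrix: one line per row plus one per column."""
    side = math.isqrt(n_cells)
    return 2 * side

def multiplexed_lines(n_cells):
    """Rows and columns share the same lines, strobed by RAS/CAS."""
    return matrix_lines(n_cells) // 2

n = 1_048_576                            # a 1 Mbit cell array
print(direct_lines(n))                   # 1048576
print(matrix_lines(n))                   # 2048
print(multiplexed_lines(n))              # 1024
```

The three numbers trace the two solution steps in the text: the matrix cuts the line count to the order of the square root, and RAS/CAS multiplexing halves it again at the price of two extra strobe pins and a slower access.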
This contradiction is solved by combining several methods: "the principle of universality" and "the principle of preliminary action". The columns and rows of the memory matrix share a single set of address lines. The number of address lines is thus halved, though selecting a particular storage cell takes twice as long, because the column and row numbers must be transferred sequentially. To indicate what is on the address lines at any given moment (a row number or a column number), two additional pins were added: RAS (Row Address Strobe) and CAS (Column Address Strobe). In the quiescent state a high signal level is maintained on both pins, which tells the microcircuit that there is no information on the address lines and no action needs to be taken. Further contradictions are connected with increasing the speed of memory microcircuits; they are illustrated below by timing diagrams. The first diagram relates to the basic DRAM microcircuit with RAS and CAS pins (figure 5a).

[Figure 5a-c: timing diagrams showing the RAS and CAS strobes, row and column numbers on the address bus, and the resulting data-bus activity for (a) basic DRAM, (b) FPM-DRAM and (c) EDO-DRAM, with the RAS-to-CAS latency, CAS latency and RAS precharge intervals marked.]
Figure 5. Timing diagrams illustrating the operation of several memory types.

Let us formulate the contradiction: as the speed of memory increases, its cost increases inadmissibly. A comment on this contradiction: according to the recommendations of G. S. Altshuller /6/, in forming the right side of a contradiction one should use simple, obvious solutions, which only strengthen the contradiction. In this case, to increase speed one would propose using static memory microcircuits, which are not only "faster" than dynamic ones but also more expensive. To resolve the formulated contradiction, "the principle of continuity of useful action" was used: Fast Page Mode DRAM (FPM-DRAM) was developed. Support for shortened addresses became its main difference from the previous generation of memory. If the next required cell is in the same row as the previous one, its address is uniquely determined by the column number alone, and the transfer of the row number is no longer required (see figure 5b). The new type of memory microcircuit resolved the contradiction only temporarily; the same contradiction appeared again. To resolve it, the law of coordination of system rhythms was used: synchronous dynamic memory (SDRAM) was developed. SDRAM microcircuits work synchronously with the controller, which guarantees the completion of a cycle within a strictly assigned period (see figure 6). An advanced burst mode of exchange is also realized in SDRAM: the controller can request one or several consecutive storage cells, or, if desired, an entire row.
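The Fast Page Mode gain can be illustrated with a small latency model. The cycle counts below are assumed, illustrative values, not taken from any datasheet; the point is only the structure of the saving: a repeated row number costs nothing.

```python
# Illustrative latencies in clock cycles (assumed values).
RAS_TO_CAS = 3
CAS_LATENCY = 2

def classic_dram_cycles(accesses):
    # Basic DRAM: every access retransmits both the row and the column number.
    return len(accesses) * (RAS_TO_CAS + CAS_LATENCY)

def fpm_dram_cycles(accesses):
    # Fast Page Mode: if the next cell lies in the currently open row,
    # only the column number is transferred.
    cycles, open_row = 0, None
    for row, col in accesses:
        if row == open_row:
            cycles += CAS_LATENCY
        else:
            cycles += RAS_TO_CAS + CAS_LATENCY
            open_row = row
    return cycles

burst = [(1, c) for c in range(8)]  # eight cells in the same row
print(classic_dram_cycles(burst), fpm_dram_cycles(burst))
```

For the eight-cell burst within one row, the classic scheme costs 8 full accesses while FPM pays the row setup once, so the advantage grows with the length of the in-row burst.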

[Figure 6: timing diagram for SDRAM, showing the clock (CLK), the RAS and CAS strobes, the row and column numbers on the address bus, and burst data 1-4 on the data bus, with the RAS-to-CAS latency and CAS latency marked.]

Figure 6. Timing diagram illustrating the operation of a contemporary memory type.

The standard "speed versus cost" contradiction was then solved by "the principle of universality": DDR-SDRAM (Double Data Rate SDRAM) was developed. The doubling of speed was achieved by transmitting data on both the rising and the falling edge of the clock pulse (in SDRAM data is transmitted only on the rising edge). Thus, to master the knowledge of dynamic memory microcircuits, it is necessary to understand the operation of a storage cell and then to follow how the contradictions were solved one by one, using the methods /7/: "transition into another dimension", "the principle of universality", "the principle of preliminary action", "the principle of continuity of useful action", the law of coordination (harmonization) of rhythms, and again "the principle of universality".
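The doubling achieved by using both clock edges can be written out as a one-line peak-rate estimate. The clock frequency and the 64-bit bus width are assumed, illustrative values.

```python
def peak_rate_mb_s(clock_mhz, edges_per_cycle, bus_bytes=8):
    # One data word transferred per used clock edge; an 8-byte (64-bit)
    # bus width is an assumed, illustrative value.
    return clock_mhz * edges_per_cycle * bus_bytes

sdr = peak_rate_mb_s(100, edges_per_cycle=1)  # SDRAM: rising edge only
ddr = peak_rate_mb_s(100, edges_per_cycle=2)  # DDR: rising and falling edge
print(sdr, ddr)
```

At the same clock frequency the DDR figure is exactly twice the SDRAM one, which is the whole content of the "principle of universality" step described above.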

5. Conclusion The proposed approach to the systematization of knowledge was tested in teaching a cycle of TRIZ subjects (Development of Creative Imagination, Dialectics of Computer Systems, and Technology of Computer Creation) within the "Computer Engineering" specialty at Komsomolsk-on-Amur State Technical University.

6. References
1. TRIZ Future Conference 2005, ETRIA World Conference, Graz, Austria. Leoben: Leoben University Press, 2005, pp. 453-466. ISBN 3-7011-0057-8.
2. Capra, Fritjof: The Web of Life: A New Scientific Understanding of Living Systems. New York: Anchor Books, 1996.
3. Journal of TRIZ, No. 10, 1995. ISSN 0869-3943.
4. Lao-zi: Dao De Jing: The Book of the Way. University of California Press. 235 pages. ISBN 0520242211.
5. Prince, Betty: High Performance Memories: New Architecture DRAMs and SRAMs - Evolution and Function. John Wiley & Sons, 1999. ISBN 0471986100.
6. Altshuller, G. (1984): Creativity As An Exact Science: The Theory of the Solution of Inventive Problems. Translated by Anthony Williams. Gordon and Breach Science Publishers.
7. Zlotin, B., Zusman, A., Altshuller, G., Philatov, V. (1999): Tools of Classical TRIZ. Ideation International Inc.

OTSM-TRIZ PROBLEM NETWORK TECHNIQUE: APPLICATION TO THE HISTORY OF GERMAN HIGH-SPEED TRAINS

Nikolai Khomenko INSA Strasbourg, France EIfER, Karlsruhe, Germany Insight Technologies Lab, Toronto, Canada [email protected]

Eric Schenk INSA Strasbourg, France; [email protected]

Igor Kaikov EIfER, Karlsruhe, Germany [email protected]

Abstract Research is a human activity that involves many intellectual instruments needed to solve various problems. These intellectual products are also used to continue the research in order to obtain theoretical results that can be practically applied. Historical research often yields a wide field for the analysis of problems that were solved in the past as research on a particular subject evolved. Any research in the domain of management and investment needs this kind of analysis. Instruments based on classical TRIZ and OTSM (referred to as OTSM-TRIZ hereafter) are dedicated to dealing with networks of problems, contradictions and parameters. All of these networks are most efficient when used as a system; however, each of them can also be used efficiently as an independent instrument in certain specific situations. For instance, the Network of Problems can be used as an instrument for the historical analysis of various problem situations or project developments. This paper briefly presents the "Network of Problems" technique and provides a short overview of its application in scientific research relevant to the area of Investment in Innovation. The subject of analysis, in this paper, is the "History of German High Speed Trains Project", and the brief glossary at the end explains some OTSM terms. Keywords: TRIZ, OTSM, Network of problems, Investment, Innovation.

1. Introduction: What is the "Network of Problems"? There is a set of main and auxiliary instruments based on OTSM and TRIZ findings [Altshuller 1969, 1973, 1984, 1986, 1991, 1999; Khomenko 1997-2000, 2005], one of which is the "Problem Flow Networks" (PFN) approach. The "Network of Problems" is used at the first stage of problem situation analysis within the frame of the PFN approach (1). It is used to obtain an overall understanding of the problem situation, a so-called "big picture" of the situation. When this big picture is presented, according to a set of OTSM rules, as a network of problems (a kind of semantic network), the network can be analysed by OTSM rules. Traditionally, the initial problem situation analysis yields a set of problems that should be the first to be solved. However, the process of selecting these problems is not obvious, especially when dealing with complex, cross-disciplinary, non-typical (also known as non-standard or non-routine) problems. The "Network of Problems" technique was developed to increase the level of formalisation in managing this process by analysing sub-graphs of the network. In carrying out historical research on certain investment projects, one also needs an overview of the problems that existed and of their solutions in the course of the project. It must be kept in mind that it was not a goal of our research to find alternatives or better solutions, which is why only a fragment of the PFN approach was used, as an independent instrument. Within the frame of the PFN approach, a complex problem situation is viewed as a network of problems. In the most common case, before anything else is done, all relevant problems should be collected and listed. In our case this was done in the paper of Llerena and Schenk [Llerena and Schenk, 2005]. As soon as the list of initial problems is ready, we can start developing the OTSM Network of Problems. While this process is taking place, some further problems, not previously listed, may be discovered.

(1) Networks of Contradictions and Parameters are used at the next stages of the complex interdisciplinary problem solving process. However, the OTSM Network of Problems can be used as a separate instrument for historical analysis, which is a traditional approach in many scientific researches in various domains of science.
This often happens when the network of problems is used in a project where a complex problem is being solved. One possible sub-graph indicator of a missing problem is shown in Fig. 1.

[Figure 1: a Super-Problem node linked to two Sub-Problem nodes, with a missing problem indicated between the Super-Problem and one of the Sub-Problems.]

Fig. 1. A direct link indicates that at least one problem was missed.

Here are some rules used to construct and analyse the Network of Problems for various projects. Collect all available problems from various sources: books, papers, interviews, your own judgement (tested afterwards with Specific Knowledge Experts). Describe the problems briefly. In some cases, a network of problems can be constructed without an initial list of problems. Usually this happens during OTSM problem solving coaching sessions, at various meetings, and sometimes in the process of brainstorming. In these cases, as soon as a problem is mentioned by a participant, it is discussed and linked with other problems of the network according to the rules provided below. For each problem, its super- and sub-problems should be identified (see the brief glossary). In the beginning, the graph called the Network of Problems appears as several hierarchical non-linked sub-graphs, also known as trees of problems and partial solutions.

[Figure 2: a Super-Problem node linked to a Sub-Problem node and a Partial Solution node; the Partial Solution is in turn linked to a New Sub-Problem node.]

Fig. 2. Relationships between problems and partial solutions, and an indicator of a contradiction.

As soon as relationship arrows appear between trees, the graph usually loses its tree structure. An arrow starts from the bottom of a Super-Problem node and arrives at the top of a Sub-Problem node or a Partial Solution node. This link between the two nodes can be read as follows: in order to solve the super-problem, the sub-problem should be solved or the partial solution should be implemented. Some partial solutions give rise to new sub-problems; in this case the arrow goes out of the bottom of the Partial Solution node and comes into the top of the Problem node (Fig. 2). The sub-graph Problem -- Partial Solution -- New Problem indicates an opportunity to discover a contradiction that has to be solved. A Super-Problem -- Sub-Problem relationship is identified between two and only two problems. This is an important point in the construction of the OTSM Network of Problems: a hierarchical relationship is considered for just one pair of problems at a time. Sometimes we may arrive at a closed circle of problems. This indicates the presence of a hidden contradiction that should be discovered and resolved. A closed circle can be seen as a "bottleneck" of a set of problems and should be resolved immediately in order to eliminate the initial problem situation (see Fig. 3).

[Figure 3: Problem A, Problem B and Problem D linked by arrows into a closed circle.]

Fig. 3. Closed circle of problems.
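A closed circle of the kind shown in Fig. 3 can be detected mechanically once the network is stored as a directed graph. The sketch below is a minimal illustration; the node names and edges are taken from the figure, and the depth-first search is a standard cycle-finding routine, not part of the OTSM rules themselves.

```python
# A minimal sketch of a Network of Problems as a directed graph.
# An edge reads: "to solve the key problem, solve the value problems".
network = {
    "Problem A": ["Problem B"],
    "Problem B": ["Problem D"],
    "Problem D": ["Problem A"],  # closes the circle: a hidden contradiction
}

def find_closed_circle(graph):
    # Depth-first search that reports the first closed circle it encounters.
    def dfs(node, path):
        if node in path:
            return path[path.index(node):] + [node]
        for nxt in graph.get(node, []):
            cycle = dfs(nxt, path + [node])
            if cycle:
                return cycle
        return None
    for start in graph:
        cycle = dfs(start, [])
        if cycle:
            return cycle
    return None

print(find_closed_circle(network))
```

When a circle is reported, the OTSM recommendation above applies: the problems on the circle hide a contradiction that should be formulated and resolved first.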

A network of problems is an oriented graph consisting of nodes and of arrows linking the nodes. An arrow comes out of a super-problem into a node that symbolizes either a sub-problem that must be solved in order to eliminate the super-problem, or a partial solution that could be implemented to solve the super-problem. An arrow coming out of a partial solution may also indicate a sub-problem that must be solved in order to implement that partial solution. The OTSM Network of Problems is a graph that can be viewed as a semantic network. The semantics of the network is organized in such a way that it allows us to formally recognize a bottleneck problem (node) that may contain a contradiction to be resolved. A node that has two or more inputs from different super-problems or partial solutions indicates the presence of a bottleneck for the initial problem situation (Fig. 4). We should emphasize that the Network of Problems should not be viewed as a cause-effect diagram, a root-cause diagram, or even a system or process diagram, although all of those diagrams can be useful for developing the initial list of problems. It is important to keep in mind that the Network of Problems presents relationships between problems and partial solutions. It should be constructed and analysed according to the OTSM rules that were discovered in the course of our research and are presented in this paper.

[Figure 4: Super-Problems A, B and C each linked by an arrow to a single Bottleneck Problem node.]

Fig. 4. Bottleneck problem.
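The bottleneck rule above (a node with two or more inputs from different super-problems or partial solutions) is also easy to check automatically. The sketch below is illustrative; the edge list mirrors Fig. 4, and the extra "Other Sub-Problem" node is a hypothetical addition used only to show a non-bottleneck case.

```python
from collections import defaultdict

# Edges read "super-problem -> sub-problem"; names mirror Fig. 4,
# except "Other Sub-Problem", which is a hypothetical non-bottleneck node.
edges = [
    ("Super-Problem A", "Bottleneck Problem"),
    ("Super-Problem B", "Bottleneck Problem"),
    ("Super-Problem C", "Bottleneck Problem"),
    ("Super-Problem A", "Other Sub-Problem"),
]

def bottlenecks(edge_list):
    # A node fed by two or more different parents is a candidate bottleneck.
    parents = defaultdict(set)
    for src, dst in edge_list:
        parents[dst].add(src)
    return [node for node, srcs in parents.items() if len(srcs) >= 2]

print(bottlenecks(edges))
```

Such a scan becomes useful exactly when, as the text notes, the number of nodes and links outgrows what visual analysis by a human can handle.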

As soon as partial solutions are inserted into the network of problems, we should consider, and put onto the map below each partial solution as sub-problems (Fig. 2), all the disadvantages of that partial solution that prevent us from viewing it as a Final Conceptual Solution that can be prototyped or implemented [Khomenko, Kucharavy 2002]. While constructing each of the hierarchical trees of problems, we should consider how the nodes of each tree, if any, are linked to the nodes of other trees. Eventually, the set of hierarchical trees will start transforming into a Network of Problems to be analysed and developed further according to the specific application. If we work in a domain of cross-disciplinary research, the network of problems should be maintained throughout the entire research process. If the network of problems is used for problem solving, it should be transformed into a Network of Contradictions and developed further according to the OTSM problem solving process [Khomenko, De Guio 2006]. If the network of problems is used for a Ph.D. project, both of the above approaches can be utilized: first for the literature research and for presenting the state of the art in the domain of the Ph.D. project, and then as the instrument for choosing the problems for the Ph.D. research and solving them according to the OTSM-TRIZ problem solving process. While going through the previous steps, one may encounter additional sub- and super-problems, as well as partial solutions not mentioned before. These additional problems and partial solutions should be inserted into the appropriate places of the Network of Problems. Also, partial solutions should be collected and used in arriving at the Description of the Final Conceptual Solution [Khomenko, Kucharavy 2002]. Each link that comes into or out of a node should start from a separate connection point.
One is not allowed to start or finish more than one link at a single connection point. This rule applies only when the visual analysis is performed by a human. If the analysis is to be carried out with the help of a computer, one can neglect this rule and use just one connection point. Usually, at least in the beginning, the analysis is carried out visually by a human; as the number of nodes and links grows, more computer support is used. Today we have a prototype of software constructed to support this analysis [Khomenko, De Guio 2006]. For beginners we recommend following the basic rule about connection points: one arrow per connection point.

2. The construction of a Network of Problems for the research of investment problems in the course of the German "High Speed Train" project. This network is based on the information available in the paper presented by Llerena and Schenk [Llerena and Schenk, 2005]. Additional information from various other sources helped us clarify some points and answer some questions generated as the network was constructed and analysed. That paper presents a historical overview of the project's development from the point of view of investment. While creating the network of problems, we took into consideration only the relationships between problems and partial solutions. As usual in constructing a Network of Problems, we did not pay much attention to the hierarchy of systems, historical aspects or the timeline, but used all of these models to obtain the initial list of problems. In this particular situation, we were primarily interested in the cause-and-effect relationships between problems and partial solutions, as well as the way in which they could generate a new situation with new problems and new partial solutions. It should be noted that this is a key point of the Network of Problems approach. This Network of Problems is important for further research, as it can be used to collect and generalize previous experience and to develop practical instruments based on the generalization of past networks of problems, in order to better understand the present situation and be able to discuss future opportunities and changes. In the case of the German High Speed Train project, we primarily focused on the evolution of the Network of Problems throughout the project, on the fashion in which these problems were solved, and on the new problems to which these solutions led.
Later on, analysis and generalisation based on previous research and on the experience of individual experts led us to certain important hypotheses and ideas that had to be tested in follow-up research about investments made in various projects. For instance, we have identified one of the contradictions that lie at the basis of the problem of choosing the direction of investment. This hypothetical contradiction should be thoroughly tested and developed. It is also necessary to identify the typical ways of resolving this contradiction that have accumulated in the past and can be used more purposefully in the future; for that, additional research has to be carried out. This illustrates how applying the Network of Problems can reveal interesting and promising directions for research whose purpose is to improve the theory and, on that basis, to better the existing instruments for investors. The contradiction encountered by investors can be preliminarily formulated in the following way. In the beginning, an investor can be very flexible and choose among several options for investing; however, at the beginning of a fairly innovative project there is usually little information available to help the investor make a good choice, and the risk in such a situation is high. On the other hand, the more investment is made in the course of the project, the more information becomes available to help one make the right choices, but the flexibility of investment is no longer as high as it was in the beginning. In other words, the more investment is made, the more information one has to make the right choice, but, at the same time, the less flexible one becomes in operating among the various options. It can be useful, in future research, to collect the typical solutions to this contradiction found by investors and project managers in the past, in order to develop them further and perhaps implement them in the future.
Some basic OTSM-TRIZ solutions can also be applied to this contradiction, and the efficiency of such solutions is itself a subject for research. Therefore, one can say that the application of the OTSM Network of Problems technique can be quite useful for research. It is always helpful to carry out an analysis of past experience and to plan the follow-up research to be conducted. It is also effective to propose ideas about the ways in which the typical solutions of the past could be improved, and new solutions generated, through the use of other techniques based on OTSM and TRIZ. Some other conclusions relevant to the investment problems are provided below.

3. Conclusion. The support for public innovation (e.g., research and development funding) can be directed towards various objectives: for example, at early stages of the innovation process, the exploration of technological opportunities is sought, while at later stages public support often seeks to foster the adoption of the new technology. Even though these objectives may be distinct, they can also overlap, for instance when several technologies are being supported simultaneously. In this case, an essential point is to select "the right technology at the right time". When the supported technologies are in competition with ones already existing on the market, the situation becomes even more intricate. The OTSM-TRIZ-based Network of Problems technique was useful in the analysis of the difficulties that can be encountered in such situations. From this point of view, an interesting case is provided by the history of the German High-Speed Train programs. The case of the German High Speed Train project is also interesting because in it two innovations, an incremental innovation (improvement of the existing system) and a radical innovation (the Magnetic Levitation system), were developed simultaneously. The project therefore yielded some findings about the relationships between incremental and radical innovations and the investment problems that arise in situations where it is not clear in the beginning which innovations should be invested in. Several stages can be distinguished in this history since the early 1970s, involving various actors in an evolving environment. During the first stage (1971-1977), innovations in the Magnetic Levitation (MagLev) and Wheel/Rail technologies were pursued under the sponsorship of the Federal Ministry for Research and Technology (BMFT). In 1977, the 'generic' programme was split into two separate projects.
The BMFT was responsible for the further development of the Magnetic Levitation technology, while the Federal Ministry of Transport (BMV) took responsibility for the development of the more traditional Wheel/Rail system. From that time, the two projects proceeded on two separate paths. At the end of 2000, despite the maturity of the MagLev technology, the Transrapid was not adopted for the Hamburg-Berlin line. Among the reasons given were the high cost of the technology, its small performance advantage over the existing ICE, and the uncertainty of demand. An alternative outlet for this technology, namely the 31.5-km Chinese project linking Pudong airport to the Long Yang road station in Shanghai, was found only recently. In the meantime, China's announcement that a locally made MagLev train has been successfully tested shows that this country has been able to catch up, at least partially, with Germany in terms of the technological know-how regarding MagLev trains. The sequence of events that led to the present situation, and specific points in that sequence, have been documented [see Llerena and Schenk, 2005]. The OTSM Network of Problems technique described above was used for an in-depth analysis of the interplay between these events. This should contribute to a better understanding of the management of large projects. 4. Brief Glossary Partial Conceptual Solution (Partial Solution) is a solution that can potentially bring at least one positive contribution to the problem-solving process. A partial solution can also be viewed as a system that has advantages and disadvantages, but whose disadvantages are significant and cannot be accepted in their present form. Therefore, such a solution cannot be considered a Final Conceptual Solution. Neither can it be viewed as a problem. The phenomenon of the Problem-Solution dichotomy should be discussed more precisely in a separate paper.
Here we just need to mention that, as soon as partial solutions can be viewed as hypothetical systems, we are able to converge those hypothetical systems according to the TRIZ rules of convergence for engineering systems [Vertkin 1984] in order to obtain the description of the Final Conceptual Solution. Final Conceptual Solution is a conceptual solution that is feasible and is chosen for developing a prototype of a new system. A detailed OTSM classification of solutions was presented in [Khomenko, Kucharavy 2002]. Super-Problem is a goal to be reached, which helps us clarify what should be solved in each of its sub-problems. Sub-problem is a problem that should be solved in order to find a solution for the super-problem.

5. References:
Altshuller, Genrich S. (1969, 1973). "Algorithm of Invention". Moscow: Moscowskiy Rabochy.
Altshuller, Genrich S. (1984). "Creativity as an Exact Science: The Theory of the Solution of Inventive Problems". Translated by Anthony Williams. Gordon and Breach Science Publishers. ISBN 0-677-21230-5.
Altshuller, Genrich S. (1986, 1991). "To Find an Idea: Introduction to the Theory of Inventive Problem Solving". Novosibirsk: Nauka.
Altshuller, Genrich S. (1999). "The Innovation Algorithm: TRIZ, Systematic Innovation, and Technical Creativity". Worcester, Massachusetts: Technical Innovation Center. 312 pages. ISBN 0964074044.
Khomenko, N. (1997-2000). "The Inventive Problem Solving Theory". Handbook for an OTSM-TRIZ course. LG-Electronics and Samsung.
Khomenko, Nikolai, Kucharavy, Dmitry (2002). "OTSM-TRIZ Problem Solving Process: Solutions and Their Classification". In Proceedings of the TRIZ Future Conference, Strasbourg, France.
Khomenko, Nikolai (2005). "Utilisation de la theorie TRIZ dans les metiers du BTP". Extrait du Rapport Final pour le Ministere de l'Equipement, des Transports, du Logement, du Tourisme et de la Mer. In Les Cahiers de l'INSA de Strasbourg, Numero 1. ISSN 1776-3363. Strasbourg: INSA.
Khomenko, Nikolai, De Guio, Roland (2006). "OTSM-TRIZ Based Computer Support Framework for Complex Problem Management". International Journal of Computer Applications in Technology, Special Issue: Computer Application in Innovation. ISSN (Online) 1741-5047, ISSN (Print) 0952-8091. (Submitted)
Llerena, Patrick, Schenk, Eric (2005). "Technology Policy and A-Synchronic Technologies: The Case of German High-Speed Trains". In Knowledge-Based Economy, Innovation Policy. Berlin, Heidelberg, New York: Springer. ISBN 3-540-25581-8.
Vertkin, Igor (1984). "Mechanisms of Convergence in the Evolution of an Engineering System". Manuscript, 23 July 1984.

PRACTICE-BASED METHODOLOGY FOR EFFECTIVELY MODELING AND DOCUMENTING SEARCH, PROTECTION AND INNOVATION

Roberto Nani Bergamo, www.scinte.com [email protected]

Daniele Regazzoni University of Bergamo, Industrial Engineering Dept. [email protected]

Abstract This work relates to a methodology for effectively modeling an Action-and-Problem System and documenting a search path built by means of patent databases. The aim of this work is to provide an improved method and operative tool for quick and reliable patent investigation driven by Boolean algorithms. The method has been tested in several projects for companies from different industrial areas. Moreover, in recent months the method has been used in case studies by students from the University of Bergamo, with good results after only a few hours of training. Two specific case studies are discussed in this paper in order to clarify the operative value of the method and to show the results obtained in terms of the solutions found and the effort required. Keywords: kinetic model, potential model, patent investigation, patent databases.

1. Introduction In recent years, Intellectual Property (IP) issues have been getting increasing attention from companies and research centres. This is mainly due to two factors: (1) the need of companies for better protection of the results of R&D activities and (2) the emerging awareness of the unexploited value of their patents, e.g. for non-competitive applications in other industrial sectors. As IP is increasingly becoming a key asset for SMEs as well, research on methods and tools to perform patent search, analysis and circumvention represents a crucial matter. The aim of this work is to provide an improved method and operative tool for quick and reliable patent investigation driven by Boolean algorithms [1]. The proposed method took its origins from existing methods such as FOS (Function-Oriented Search), developed by Litvin [2], and APOS (Function and Problem Oriented Search), by Axelrod [3].

2. Proposed method The proposed method consists in defining:
- the intrinsic and extrinsic factors capable of describing the context that is the object of the search, protection and innovation;
- the relationship between said intrinsic and extrinsic factors;
- an Action-and-Problem system as a physical expression of its combination with said intrinsic and extrinsic factors and their relationship;
- the results of a constraining action on the model developed.
Three different models are used to carry out this new method for patent investigation: the Kinetic Model, the Potential Model and the Forced Model. The terms used in the following are freely inspired by physics and should be understood in a general sense. A Kinetic Model [M] = f(V, ρ) of a system is an expression of the Class1 V to which said system refers and of the intrinsic characteristic ρ of said class V. A Potential Model [K] = f(A, E) of a system is an expression of the Subclass or Group A to which said system refers and of the extrinsic properties E of said subclass or group A. No relationship exists between the Kinetic Model [M] and the Potential Model [K] taken separately. The Forced Model is the model obtained by combining the class V with its intrinsic characteristic ρ and the subclass A with its extrinsic properties E; a force [F] is exerted on it to gather the required set of patents. The analogy with Lagrange's equations, as expressions of kinetic and potential energy and their transformation into the equation of motion [M]ẍ + [K]x = f, supports inventors in determining the meaning of the intrinsic and extrinsic factors and of their relationships in the presented method. Thus, the parameters of a Kinetic Model [M] = f(V, ρ) are analogous to the Volume (V) and Density (ρ) of the masses of a discrete system, while the parameters of a Potential Model [K] = f(A, E) are analogous to the Area (A) and Modulus of Elasticity (E) of the springs of a discrete system.

3. Methodology application The method has been tested with several projects for companies of different industrial areas. Moreover in the last months the method has been used in case studies by students from the University of Bergamo with good results after a very few hours of training. Two specific case studies are discussed in the followings in order to clarify the operative value of said method and to show the results obtained in terms of solutions found and of efforts requested. In order to show the wide range of application of the method the chosen cases belong to different industrial field (ICT and mechanical) and provide a solution to different goals. The former case relates to the analysis of a European patent application that does not meet the requirements of the European Patent Convention (EPC); an investigation of the state of the art has been enterprised in order to answer to a Communication pursuant to article 96(2) of EPC. The latter relates to the field of Pressure Die Casting or Injection Die Casting: thermal expansion and corrosion of an automotive piece generate problems, which are analysed within the process comprising firstly die casting, and secondly galvanisation. 3.1 Study case 1: Communication pursuant to article 96(2) of EPC This case refers to a real issue some inventors had to face to convince the European Auditors of the innovative value of their software product. The key topic consists in demonstrating that sending an end users based in a specific location ads from companies

1 Classes, Subclasses and Groups are taken from the IPC (International Patent Classification), www.wipo.int/classifications/ipc/en/.

based in the same area is clearly an innovative feature of the messaging service. Thanks to the Kinetic and Potential Model approach, the problem system will be modelled and forced, and a set of meaningful patents will be quoted. The documentation about the problem of this case study can be found through the Customer Service of the European Patent Office, which includes an "online Public File Inspection" (http://ofi.epoline.org/view/GetDossier). This site allows retrieving documents by publication number (in this case EP1363430, European patent application No. 02425294.2), such as those reported below. The first claim reads: [System for managing and transmitting digital information by means of a messaging system [] characterized in that comprises a protocol [] being able to be automatically executed in order to send [] a set of pieces of information comprising advertising messages that can be divided into geographical communication environments, said environments being in particular Nation, Province or Municipality of the user.] From the Annex to the Communication (March 2006), par. 1: [The statement "a set of pieces of information comprising advertising messages that can be divided into geographical environments" used in claim 1 is vague and unclear and leaves the reader in doubt as how a piece of information is divided into geographical environments … (Article 84 EPC)]. A Kinetic Model [M] of this statement, referring to the object of the patent application, is represented by the following Boolean expression:

[M] = f(V, ρ) = ((string OR command) AB) AND ((protocol OR draft OR record) TI) (1)

where:
- protocol, draft, record = class V of the system object of the patent application;
- string, command = intrinsic characteristic ρ of said class V.
The classes (according to IPC-R) characterizing the Boolean algorithm (1) applied to a patent DB are shown in fig. 1.

Result Set for Query: ((string OR command) AB) AND ((protocol OR draft OR record) TI)
Collections searched: US (Granted), European (Applications), European (Granted), WIPO PCT Publications, US (Applications)
4,603 matches found of 9,546,891 patents searched

IPC-R Code (4 digit) | Description | Items | %
G11B | G — Physics; Information Storage | 1486 | 23.3 %
H04N | H — Electricity; Electric Communication Technique; Pict | 936 | 14.7 %
G06F | G — Physics; Computing; Calculating | — | —
H04L | H — Electricity; Electric Communication Technique; Tran | 674 | 10.6 %
H04Q | H — Electricity; Electric Communication Technique; Sele | 193 | 3.0 %

Fig. 1: main IPC-R classes of [M]

A Potential Model [K] of this statement is represented by the following Boolean expression:

[K] = f(A, E) = ((discrimin* OR divid* OR separat*) AB) AND ((manag*) AB) (2)

where:
- management = subclass A of the object of the patent application;
- discriminating, dividing, separating = functional actions, i.e. the extrinsic properties E of subclass A.
The classes according to IPC-R characterizing the Boolean algorithm (2) applied to a patent DB are shown in the table of fig. 2.

Result Set for Query: ((discrimin* OR divid* OR separat*) AB) AND ((manag*) AB)
Collections searched: US (Granted), European (Applications), European (Granted), WIPO PCT Publications, US (Applications)
6,246 matches found of 9,551,220 patents searched

IPC-R Code (4 digit) | Description | Items | %
G06F | G — Physics; Computing; Calculating | 2173 | 26.5 %
H04L | H — Electricity; Electric Communication Technique; Tran | 1085 | 13.2 %
H04N | H — Electricity; Electric Communication Technique; Pict | 567 | 6.9 %
G11B | G — Physics; Information Storage | 525 | 6.3 %
H04Q | H — Electricity; Electric Communication Technique; Sele | 483 | 5.8 %

Fig. 2: main IPC-R classes of [K]

[M] and [K] respect the following condition:

Main IPC-R classes of [M] ≠ Main IPC-R classes of [K] (3)

Main classes of [M]: G11B (INFORMATION STORAGE), H04N (PICTORIAL COMMUNICATION, e.g. TELEVISION)
Main classes of [K]: G06F (ELECTRIC DIGITAL PROCESSING), H04L (TRANSMISSION OF DIGITAL INFORMATION)
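Condition (3) lends itself to a mechanical check: the top-ranked IPC-R classes of [M] and [K] must not coincide. A minimal sketch, with the class rankings of figs. 1 and 2 hard-coded and the cutoff (top two classes) chosen as an assumption:

```python
# Checking condition (3) programmatically: the leading IPC-R classes of the
# Kinetic Model [M] and the Potential Model [K] must differ.
# Rankings below are taken from figs. 1 and 2 of the paper.

main_M = ["G11B", "H04N", "G06F", "H04L", "H04Q"]   # fig. 1, by item count
main_K = ["G06F", "H04L", "H04N", "G11B", "H04Q"]   # fig. 2, by item count

def leading_classes_differ(m, k, top=2):
    """Compare only the highest-ranked classes of each model.

    The cutoff (top=2) is an assumption of this sketch, not a rule
    stated in the paper."""
    return set(m[:top]) != set(k[:top])

print(leading_classes_differ(main_M, main_K))  # True: G11B/H04N vs G06F/H04L
```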

Condition (3) makes it possible to combine [M] and [K] into a Model of the class V with its intrinsic characteristic ρ and of the subclass A with its extrinsic properties E. An example of a bad Model, due to the coincidence of the main IPC-R classes of [M3] and [K3]:
[K3] = ((discrimin* OR divid* OR separat*) AB) AND ((protocol OR draft OR record) TI);
[M3] = ((geograph*) AB) AND ((environment) TI);
is represented by the tables of fig. 3.
Result Set for Query: K3 = ((discrimin* OR divid* OR separat*) TI) AND ((protocol OR draft OR record) AB)

Collections searched: US (Granted), European (Applications), European (Granted), WIPO PCT Publications, US (Applications)
1,023 matches found of 9,525,564 patents searched

IPC-R Code (4 digit) | Description | Items | %
H04L | H — Electricity; Electric Communication Technique; Tran | 70 | 60.9 %
H04B | H — Electricity; Electric Communication Technique; Tran | 10 | 8.7 %
H04M | H — Electricity; Electric Communication Technique; Tele | 10 | 8.7 %

Result Set for Query: M3 = ((geograph*) AB) AND ((environment) AB)

IPC-R Code (4 digit) | Description | Items | %
H04L | H — Electricity; Electric Communication Technique; Tran | 73 | 12.3 %
G06F | G — Physics; Computing; Calculating | 71 | 11.9 %
G06Q | G — Physics; Computing; Calculating | 68 | 11.4 %

Fig. 3: main IPC-R classes of [M3] = main IPC-R classes of [K3]

The main IPC-R group obtained by the Boolean algorithm (2) is H04L12/56 (TRANSMISSION OF DIGITAL INFORMATION, PACKET SWITCHING SYSTEMS). The table of fig. 4 shows the IPC-R codes of the classes constituting the main group H04L12/56.

Result Set for Query: K = ((discrimin* OR divid* OR separat*) AB) AND ((manag*) AB)
Work File: K --> H04L12/56

IPC-R Code (4 digit) | Description | Items | %
H04L | H — Electricity; Electric Communication Technique; Tran | 394 | 64.2 %
H04Q | H — Electricity; Electric Communication Technique; Sele | 137 | 22.3 %
G06F | G — Physics; Computing; Calculating | 34 | 5.5 %
H04J | H — Electricity; Electric Communication Technique; Mult | 18 | 2.9 %
H04B | H — Electricity; Electric Communication Technique; Tran | 14 | 2.2 %
H04M | H — Electricity; Electric Communication Technique; Tele | 12 | 1.9 %
H04H | H — Electricity; Electric Communication Technique; Broa | 2 | 0.3 %
H04N | H — Electricity; Electric Communication Technique; Pict | 2 | 0.3 %
(Below cutoff) | | 1 | 0.163 %

Fig. 4: IPC-R class codes of main group H01L12/56

Combining every class forming the main group H04L12/56 with the Boolean algorithm (1) allows the identification of two relevant IPC-R classes (fig. 5). The corresponding global Models are represented by the Boolean algorithms:

[K(G06F)] AND [M] = ((G06F) IC) AND ((string OR command) AB) AND ((protocol OR draft OR record) TI) (4a)
[K(H04N)] AND [M] = ((H04N) IC) AND ((string OR command) AB) AND ((protocol OR draft OR record) TI) (4b)

The Models represented by algorithms (4a) and (4b) can be forced. In this specific case, the geographic management of the software, discussed in the Communication pursuant to Article 96(2) EPC, is a force applied to algorithm (4a):

[F] = ((geograph* OR regional) AB) (5)

The exerted Model is:

[K(G06F)] AND [M] AND [F] = ((G06F) IC) AND ((string OR command) AB) AND ((protocol OR draft OR record) TI) AND ((geograph* OR regional) AB) (6)
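The restriction and forcing steps, (4a) through (6), are again plain string composition. A minimal sketch, with hypothetical helpers `restrict_to_class` and `force` mirroring the query strings of the paper:

```python
# Sketch of the restriction [K(class)] AND [M] and the forcing step [F]
# (hypothetical helpers, mirroring the Boolean strings of eqs. (4a)-(6)).

def restrict_to_class(query, ipc_class):
    # [K(class)] AND [M]: prepend an IPC class filter to an existing query
    return "((" + ipc_class + ") IC) AND " + query

def force(query, force_terms):
    # [F]: conjoin the forcing terms, searched in abstracts
    return query + " AND ((" + " OR ".join(force_terms) + ") AB)"

M = "((string OR command) AB) AND ((protocol OR draft OR record) TI)"
exerted = force(restrict_to_class(M, "G06F"), ["geograph*", "regional"])
print(exerted)
```

Modulo parenthesization, the printed string is the exerted Model of eq. (6).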

[Figure 5 — bar chart: number of patents (0-1400) per IPC-R class (H04L, H04Q, G06F, H04J, H04B, H04M, H04H, G01N, H04N) for the combination of [K(H04L12/56)] with [M]; chart not reproducible in text.]

Fig. 5: [K(IPC-R Class H04L12/56)] + [M]

Results of the exerted model:

WO03077116A2 — SYSTEM MANAGEMENT CONTROLLER NEGOTIATION PROTOCOL. A computer system module includes a system management controller that negotiates with other system management controllers to determine the controller's initial operational state. In an embodiment, negotiation with other system management controllers is based at least in part on one of controller capability, user-configured preference, module type, and geographical address.

WO03021461A1 — SYSTEM AND METHOD FOR INTEGRATING VOICE OVER INTERNET PROTOCOL NETWORK WITH PERSONAL COMPUTING DEVICES. A method and system for the integration of a personal computing device with a private VoIP network and the PSTN (202) to control voice sessions on telephony devices (102, 106) of residential and business PC users across a geographic region.

3.2 Case study 2: Corrosion plus thermal expansion of an automotive part
The problem of thermal expansion and corrosion of an automotive part involves two phases of the process: die casting and galvanisation. Thus, two Kinetic Models are defined, while only one Potential Model is needed to represent the extrinsic features of both phases, because they refer to the same global process. The Kinetic Model [Mdc] of die casting is expressed by:

[Mdc] = f(Vdc, ρdc) = ((metal*) AB) AND ((fluid*) TI) (7)

where:
- metal = object of die casting, corresponding to the class V of die casting;
- fluid = a characteristic of die casting (e.g. spray, powder, etc.), corresponding to the intrinsic factor ρ of said class V.
The classes according to IPC-R characterizing the Boolean algorithm (7) applied to a patent DB are shown in fig. 6.
Result Set for Query: Mdc = ((metal*) AB) AND ((fluid*) TI)

Collections searched: US (Granted), European (Applications), European (Granted), WIPO PCT Publications, US (Applications)
4,430 matches found of 9,555,221 patents searched

IPC-R Code (4 digit) | Description | Items | %
C10M | C — Chemistry; Metallurgy; Petroleum, Gas or Coke Indus | 539 | 7.5 %
B01J | B — Performing Operations; Transporting; Physical or Ch | 418 | 5.8 %
C09K | C — Chemistry; Metallurgy; Dyes; Pai | 379 | 5.3 %
F16L | F — Mechanical Engineering; Lighting; Heating | 337 | 4.7 %
B01D | B — Performing Operations; Transporting; Physical or Ch | 263 | 3.6 %

Fig. 6: main IPC-R classes of [Mdc]

The Kinetic Model [Mg] of galvanisation is:

[Mg] = f(Vg, ρg) = ((metal*) TI) AND ((coat*) TI) (8)

where:
- metal = object of galvanisation, corresponding to the class V of galvanisation;
- coat = a characteristic of galvanisation, corresponding to the intrinsic factor ρ of said class V.
The classes according to the International Patent Classification (IPC-R) characterizing the Boolean algorithm (8) applied to a patent DB are shown in fig. 7.

Result Set for Query: Mg = ((metal*) TI) AND ((coat*) TI)

Collections searched: US (Granted), European (Applications), European (Granted), WIPO PCT Publications, US (Applications)
7,757 matches found of 9,555,221 patents searched

IPC-R Code (4 digit) | Description | Items | %
C23C | C — Chemistry; Metallurgy; Coating Metallic Material | 2672 | 20.1 %

Fig. 7: main IPC-R classes of [Mg]

A common Potential Model [K] can be used for both the casting and the galvanisation phases. Its Boolean representation is:

[K] = f(A, E) = ((35, Parameter Change) AB) AND ((zinc OR copper) AB) (9)

where:
- zinc, copper = subclasses of metal, corresponding to the subclass or group A of the process;
- Parameter Change = action of the process, corresponding to the extrinsic properties E of said subclass or group A.
The classes according to the International Patent Classification (IPC-R) characterizing the Boolean algorithm (9) applied to a patent DB are listed in fig. 8.
Result Set for Query: K = ((35 Parameter Change) AB) AND ((zinc OR Copper) AB)

Collections searched: US (Granted), European (Applications), European (Granted), WIPO PCT Publications, US (Applications)
4,434 matches found of 9,555,221 patents searched

IPC-R Code (4 digit) | Description | Items | %
H01L | H — Electricity; Basic Electric Elements; Semiconductor | 434 | 6.0 %
B01J | B — Performing Operations; Transporting; Physical or Ch | 357 | 4.9 %
C23C | C — Chemistry; Metallurgy; Coating Metallic Material | 323 | 4.5 %
H05K | H — Electricity; Electric Techniques Not Otherwise Provided For | 237 | 3.3 %
C07C | C — Chemistry; Metallurgy; Organic Chemistry | 216 | 3.0 %

Fig. 8: main IPC-R classes of [K]

[Mdc] and [K] respect the following condition:

Main IPC-R classes of [Mdc] ≠ Main IPC-R classes of [K] (10)

Main classes of [Mdc]: C10M (LUBRICATING COMPOSITIONS), B01J (CHEMICAL OR PHYSICAL PROCESSES, e.g. CATALYSIS, COLLOID CHEMISTRY)
Main classes of [K]: H01L (SEMICONDUCTOR DEVICES), B01J (CHEMICAL OR PHYSICAL PROCESSES, e.g. CATALYSIS, COLLOID CHEMISTRY)

Condition (10) allows combining [Mdc] and [K] into a Model representative of die casting. However, this model has some limits due to the coincidence of one main class (B01J). [Mg] and [K] respect the following condition:

Main IPC-R classes of [Mg] ≠ Main IPC-R classes of [K] (11)

Main classes of [Mg]: C23C (COATING METALLIC MATERIAL), C09D (COATING COMPOSITIONS)
Main classes of [K]: H01L (SEMICONDUCTOR DEVICES), B01J (CHEMICAL OR PHYSICAL PROCESSES, e.g. CATALYSIS, COLLOID CHEMISTRY)

Condition (11) allows combining [Mg] and [K] into a Model representative of galvanisation. The main IPC-R group obtained by the Boolean algorithm (9) is H01L21/02 (Manufacture or treatment of semiconductor devices). The table of fig. 9 shows the IPC-R codes of the classes constituting the main group H01L21/02.

IPC-R Code (4 digit) | Description | Items | %
H01L | H — Electricity; Basic Electric Elements; Semiconductor | 206 | 54.8 %
C25D | C — Chemistry; Metallurgy; Electrolytic or Electrophore | 38 | 10.1 %
C23C | C — Chemistry; Metallurgy; Coating Metallic Material | 29 | 7.7 %
C09G | C — Chemistry; Metallurgy; Dyes; Pai | 7 | 1.8 %
C25F | C — Chemistry; Metallurgy; Electrolytic or Electrophore | 7 | 1.8 %
B08B | B — Performing Operations; Transporting; Cleaning | 5 | 1.3 %
C08L | C — Chemistry; Metallurgy; Organic Macromolecular Compo | 2 | 0.5 %
H01J | H — Electricity; Basic Electric Elements; Electric Disc | 2 | 0.5 %

Fig. 9: IPC-R Class Code of main group H01L21/02

The combination of every class constituting the main group H01L21/02 with the Boolean algorithms (7) and (8), respectively, allows identifying the relevant classes and the corresponding global Models:

[K(B01J)] AND [Mdc] = ((B01J) IC) AND ((metal*) AB) AND ((fluid*) TI) (12)
[K(C09D)] AND [Mg] = ((C09D) IC) AND ((metal*) TI) AND ((coat*) TI) (13)

The global Model of the process (die casting + galvanisation) represented by algorithms (12) and (13) can be forced so as to address the thermal expansion and corrosion problems. The Model exerted with the force

[F] = ((galvanis*) AB) (14)

is:

[K(C09D)] AND [Mg] AND [F] = ((metal*) TI) AND ((coat*) TI) AND ((C09D) IC) AND ((galvanis*) AB) (15)

Results of the exerted model (15):

WO03076534A1 — SURFACE BASE-COAT FORMULATION FOR METAL ALLOYS. Chromium-free coating composition with anti-corrosion and anti-fingerprint properties, particularly suitable for metal alloys, especially galvanized steel, and coated articles. The composition comprises an aqueous resin emulsion, a hazardous-air-pollutant-free co-solvent, an organo-functional silane, a metal chelating agent, a chromium-free corrosion inhibitor and, optionally, a pH adjusting agent.

WO9920696A1 — METHOD FOR COATING METALS AND METAL COATED USING SAID METHOD. The invention relates to a method for coating surfaces consisting of steel, tinned steel, galvanised steel, zinc-alloy-coated steel or aluminium. A solution or a dispersion of a source of ions of bivalent to tetravalent metals and an organic film former, and a solution or a dispersion of a source of phosphate ions and an organic film former, are applied to the metal surface in any order, together or one after the other, and dried in, giving a total dry layer thickness of 0.2 to 3 g/m2. The invention also relates to metal parts coated using the inventive method.

4. Conclusion
A structured model of the language applied to a patent database, according to the guidelines described in this paper, allows the development of new research, protection and innovation strategies by investigating the technological fields that share the same intrinsic parameters. The method makes it possible to find patent technological classes according to IPC-R, going beyond the literal meaning of the words. Although the method is still subject to refinement, it has already been successfully applied to a number of cases, showing the high potential of the original ideas. The way of performing patent investigation presented in this paper has also been tested by final-year students of the Faculty of Engineering of the University of Bergamo, demonstrating good consistency, ease of use and efficiency. Thanks to the reliability of the results and to the possibility of certifying the approach applied, the method has been accredited for checking investment plans developed by the main Preliminary Investigation Bodies for Enterprise Investors. Future activities will aim at providing a wider set of case studies and expertise on the pros and cons of the approach developed so far, in order to continuously improve it. Furthermore, the opportunity to develop a tool enhancing the automation of the definition of parameters and classes will be studied.

5. References
[1] R. Nani, "Boolean Combination and TRIZ criteria. A practical application of a patent-commercial Data Base", TRIZ Future Conference 2005.
[2] S. S. Litvin, "New TRIZ-Based Tool – Function-Oriented Search (FOS)", TRIZ Future Conference 2004.
[3] B. M. Axelrod, "New Search and Problem Solving TRIZ Tool: Methodology for Action & Problem Oriented Search (APOS) Based on the Analysis of Patent Documents", TRIZ Future Conference 2005.
[4] J. Odenbaugh, "Models", Department of Philosophy, Lewis and Clark College, 0615 SW Palatine Hill Rd, Portland, Oregon.
[5] Delphion, www.delphion.com – patent collections & searching options on patent databases.

SYSTEMATIC DESIGN THROUGH THE INTEGRATION OF TRIZ AND OPTIMIZATION TOOLS

Gaetano Cascini, Paolo Rissone, Federico Rotini, Davide Russo University of Florence - Department of Mechanics and Industrial Technologies Methods and Tools for Innovation Lab {name.surname}@unifi.it

Abstract Marketing strategies are focusing on innovation as the key for being competitive; as a consequence, product development processes must be improved in order to have a link as close as possible between conceptual design and detailed design activities. Within this context, TRIZ and TRIZ-based methodologies and tools are still poorly integrated with product embodiment means: CAD/CAE systems are not suited for supporting the designer in the conceptual design phase and at the same time inventive/separation principles, standard solutions etc. can hardly be translated into a modification of a CAD model and the only opportunity is to restart the modeling process. A small consortium of Italian Universities is analyzing the opportunity to use Design Optimization tools as a means for linking Computer-Aided Innovation (CAI) tools with Product Lifecycle Management (PLM) systems: www.kaemart.it/prosit. Among the specific objectives of the project, this paper describes how to analyze TRIZ technical contradictions by means of Design Optimization tools, with the aim of translating them into physical contradictions. The suggestions provided by inventive/separation principles are therefore converted into a new Design Optimization problem for the development of a novel solution. Keywords: Systematic Design, TRIZ, Computer-Aided Innovation, Topological Optimization, Shape Optimization

1. Introduction Market competitiveness through innovation is the common strategy of developed countries, even if the concept of innovation is very often abused and misused. Certainly, in order to release novel and valuable products, a crucial aspect for a company is the efficiency of its product development cycle from the so-called fuzzy front end to the detailed design. In other terms companies have to implement not just means for generating new ideas with a systematic approach, but also an integrated environment where effective ideas are efficiently converted into products. Computer applications play a relevant role for increasing the efficiency of the whole process, but up to now systematic innovation methodologies like TRIZ are still poorly integrated with product embodiment means [1]: CAD/CAE systems are not conceived for supporting the designer in conceptual design activities and at the same time the outputs of a TRIZ problem solving tool (e.g. inventive/separation principles, standard solutions etc.) can hardly be translated into a modification of a CAD model and the only opportunity is to restart the modeling process. A few preliminary experiments to integrate TRIZ principles within CAD systems have been attempted with promising, but still not satisfactory, results [2, 3]. Besides, a small consortium of Italian Universities has started the PROSIT project (www.kaemart.it/prosit), “From Systematic Innovation to Integrated Product Development”, with the aim of bridging systematic innovation practices and Computer-Aided Innovation (CAI) tools with Product Lifecycle Management (PLM) systems, by means of Design Optimization tools. 
Innovation and optimization are usually conceived as conflicting activities; in this project the topology and shape generation capabilities of modern design optimization technologies are adopted as a means to speed up the embodiment of innovative concepts, but also as a way to support the designer in the analysis of conflicting requirements, for an easier implementation of TRIZ tools. In this paper the use of Design Optimization tools is proposed for identifying the physical contradictions underlying a mechanical system, i.e. the generalized problem model according to traditional TRIZ theory; then, TRIZ general solutions (i.e. inventive/separation principles, etc.) are converted into new optimization problems in order to implement a novel solution. Section 2 reports a brief overview of Design Optimization tools; section 3 describes how the logic of ARIZ has been reproduced by means of these tools and details the overall procedure. Section 4 shows an exemplary application, while the conclusions are briefly presented in section 5.

2. Design Optimization
Designing by optimization techniques means translating a design task into a mathematical problem with the following basic entities:
• An objective function, i.e. the performance of the system that the designer wants to reach or improve.
• A set of design variables, i.e. the parameters of the system affecting the objective function.
• A set of loading conditions and constraints representing the requirements the system has to satisfy.
The optimization algorithm finds the values of the design variables which minimize, maximize or, in general, "improve" the objective function while satisfying the constraints. The use of computers for design optimization has been rather common in several fields since the 1980s; moreover, in recent years new optimization tools have been developed to solve specific design problems [4]. The main features of these techniques are summarized below. In a shape optimization process the outer boundary of the structure is modified according to the optimization task. The shape of the structure, modeled through the finite element method, is modified via the node locations: the optimization algorithm, according to the loads and boundary conditions applied to the FE model, changes the coordinates of the nodes defined as design variables. The result of the optimization cycle is a deformed geometry of the starting shape. Size optimization is a special type of parametrical optimization in which the design variables are properties of structural elements such as shell thickness, beam cross-sectional properties, spring stiffness, mass, etc. During the optimization process these parameters are modified by the algorithm until the expected goal is reached. Topology optimization is a technique that determines the optimal material distribution within a given design space, by modifying the apparent material density, which is considered as a design variable.
The design domain is subdivided into finite elements and the optimization algorithm alters the material distribution within the design space at each iteration, according to the objective and constraints defined by the user. The external surfaces defined as "functional" by the user are kept out of the optimization process and considered as "frozen" areas by the algorithm. Topography optimization is an advanced form of shape optimization in which a distribution of ribs and pattern reinforcements is generated on a specific design region. The approach is similar to that of topology optimization, but shape variables (node coordinates) are used instead of density variables. The large number of shape variables allows the user to create any reinforcement pattern within the design domain. Moreover, manufacturing constraints may be set in order to take into account the requirements related to the manufacturing process. Sliding planes and preferred draw directions may be imposed for molded, tooled and stamped parts, as well as minimum or maximum sizes of the structural elements (i.e. ribs, wall thicknesses, etc.). However, since the design process has a multidisciplinary character, improving one performance of a system may degrade another. This kind of conflict cannot be solved using Design Optimization, because these techniques can focus the design task only on one specific performance to be improved. More precisely, Design Optimization tools allow the management of multiple goals only by defining complex objective functions where a weight must be assigned to each specific goal [5]. Thus, the best compromise solution is generated on the basis of an initial assumption made by the designer about the relative importance of the requirements, without taking into account their reciprocal interactions.
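The three entities listed above (objective, design variables, constraints) can be illustrated with a toy size-optimization run. The cantilever geometry, loads and limits below are illustrative assumptions, and the brute-force grid scan stands in for a real optimization algorithm:

```python
# Toy size optimization in the sense of section 2: one design variable
# (plate thickness t), objective = mass to be minimized, constraint = tip
# deflection of a cantilever under a given load. All numbers are
# illustrative assumptions, not taken from the paper.

L, b = 0.5, 0.05          # beam length and width [m]
E = 210e9                  # Young's modulus of steel [Pa]
rho = 7850.0               # density [kg/m^3]
F = 500.0                  # tip load [N]
d_max = 2e-3               # deflection constraint [m]

def deflection(t):
    I = b * t**3 / 12.0                 # second moment of area
    return F * L**3 / (3.0 * E * I)     # tip deflection of a cantilever

def mass(t):
    return rho * L * b * t              # objective function

# crude "algorithm": scan thicknesses 1-50 mm, keep the lightest feasible one
feasible = [t * 1e-3 for t in range(1, 51) if deflection(t * 1e-3) <= d_max]
best = min(feasible, key=mass)
print(best, mass(best), deflection(best))
```

The scan settles on the thinnest plate that still satisfies the deflection constraint; a real size-optimization code replaces the scan with a gradient-based or heuristic search, but the problem structure is the same.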

3. The logic of ARIZ through Design Optimization tools
The typical tool TRIZ newcomers encounter is the Contradiction Matrix. However, the effectiveness of such a tool is limited for at least two basic reasons:
• The reliability of the matrix depends on the technical field of application, since it was built from patents related to inventions of the 1950-70s.
• Very often, identifying the most suitable parameters among the classical 39 to describe a technical contradiction is hard, due to their overlapping and fuzzy definitions.
More experienced practitioners learn the logic of ARIZ, the algorithm for inventive problem solving, which leads to the transformation of a technical contradiction into a physical contradiction, i.e. opposite requisites for a design parameter. The authors' experience reveals that such a transformation requires a certain "assimilation" of the methodology by the user. By contrast, the following step, i.e. the adoption of separation and inventive principles as solution triggers to overcome a well-identified contradiction, is much easier. Indeed, TRIZ requires a paradigm change of designers: from a traditional approach focused on the definition of the "optimal" solution, i.e. the best compromise among a set of even fuzzily identified conflicting requirements, to a process aimed at the identification of the conflicting design parameters, in order to generate solutions that overcome those conflicts. Such a paradigm shift is rather hard for technicians involved in architectural/layout design tasks, and it is even harder for designers operating in the subsequent phases of the product cycle, i.e. when the shape of the "mechanical" parts must be defined. While sitting in front of the screen of a CAD system, a strong inertia barrier is constituted by the CAD interface itself, suited for modeling already conceived geometries but too rigid to support the designer in the fuzzy front end of the process. Therefore, the cultural reluctance towards
changing the design approach from optimization to conflict overcoming, combined with the rigidity of the design tools, constitutes a major limit to the introduction of systematic innovation methodologies in these design phases. Moreover, as summarized in section 2, new design optimization tools are nowadays available which are pretty close to the actual designer's perspective, therefore representing a further push towards "design for compromise" practices. Nevertheless, TRIZ teaches that the harm of a system is the best resource to be adopted for improving the ideality of the system itself. On the basis of this suggestion, the authors have developed the following procedure, further detailed in sections 3.1-3.3. Two possible starting situations are assumed:
0a. Design of a brand-new component: the designer receives functional requirements and technical constraints; the expected output is a detailed geometry.
0b. Redesign of a component/sub-assembly: an already existing design should provide higher performance and/or new requirements must be satisfied (e.g. reduced energy losses, noise emission, etc.).
1. Whatever the starting situation, a set of specific design goals and constraints should be defined. The first step consists in defining a set of optimization problems, i.e. defining functional surfaces, available volumes, loads and constraints acting on the system. Clearly, there are some differences between 0a) and 0b): in the first case the designer has greater freedom, while in the second the functional surfaces and the boundary conditions are inherited from the existing design.
2. If the optimization activities bring satisfactory results, the design task is accomplished; if, instead, the optimization problems lead to contradictory solutions, a conflict analysis has to be performed.
More precisely, the results of the optimization activities are translated into a set of physical contradictions, thereby overcoming the main obstacle for non-TRIZ experts, i.e. identifying the core of the conflict.
3. The physical contradictions can be approached by means of the separation principles or by a transition to a super/sub-system. These design hints should suggest to the designer a direction for overcoming the existing trade-off. As a consequence, a new Design Optimization problem can be defined.
3.1 From technical contradictions to design optimization tasks
The task of defining one or more design optimization problems on the basis of the design requirements is not complex, since goals and constraints should already be identified. Instead of trying to fit the description of a design problem into a pair of improving/worsening features, it is much easier to define the objective and the boundary conditions of an optimization problem. In Table 1, an exemplary list of external requirements is reported with the corresponding available optimization approaches.

Table 1 - Exemplary list of external requirements and their representation in optimization tasks
(B = Bead/Topography Opt., T = Topological Opt., S = Shape Opt., P = Parametrical Opt.)

External requirement | Objective | Constraint
Static stiffness | B, T, S, P | B, T, S, P
Dynamic stiffness | B, T, S, P | -
Weight - Mass | B, T, S, P | B, T, S, P
Stress - Strength | S, P | S, P
Size - Volume | T, B, S, P | B, T, S, P
Draw direction / tool accessibility | - | B, T, S, (P)
Surface pressure | S, P | S, P
Thermal flow (cond.) | T | -
Thermal flow (conv.) | S, P | S, P
Center of mass pos. | T | T
Inertia properties | T | T
Buckling | T, B, S | T, B, S

3.2 From design optimization tasks to physical contradictions
According to the design requirements, one or more optimization problems have been defined. Obviously, if the results of these analyses point in the same direction, the design task does not hide any conflict and a technical solution can be implemented easily. However, it may happen that satisfactory results cannot be reached, in either of the following situations: (i) two or more optimization problems lead in opposite directions; (ii) a single optimization problem has been defined, but the algorithm does not meet the objective. In the first case, the designer has to analyze the results of the optimization tasks and list all the contradictory directions assigned to the optimization variables (e.g. the value of a dimension high and low, the direction of a rib, presence/absence of material in a certain region, etc.). In the second case, the contradiction must be sought between the objective and one or more constraints: it is thus suggested to define a new set of optimization problems where the objective is kept constant while the constraints are deleted alternately. After such an analysis the contradiction should be expressed in the form: "the geometry should be … in order to respect the constraint … and should not be … in order to reach the objective …".
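Case (ii) of section 3.2 suggests a simple mechanical procedure: re-run the optimization with each constraint removed in turn and see which removal lets the objective be met. A sketch, with a hypothetical `solve()` callback standing in for the actual optimizer:

```python
# Locating the conflicting constraint by relaxing constraints one at a time,
# as suggested in section 3.2, case (ii). solve(constraints) is a
# hypothetical callback returning the best objective value reachable under
# the given constraints.

def locate_conflict(solve, objective_target, constraints):
    """Return the constraints whose individual removal lets the objective be met."""
    culprits = []
    for i in range(len(constraints)):
        relaxed = constraints[:i] + constraints[i + 1:]
        if solve(relaxed) <= objective_target:
            culprits.append(constraints[i])
    return culprits

# toy stand-in for an optimizer: minimizing mass with target 1.0; the
# "max width" constraint is what keeps the optimum stuck at 1.5
def toy_solve(cs):
    return 1.5 if "max width" in cs else 0.9

print(locate_conflict(toy_solve, 1.0, ["max width", "min stiffness"]))
# -> ['max width']
```

The contradiction is then stated between the objective and each returned constraint, in the form quoted above.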
3.3 From physical contradictions to a new design task
The identification of a physical contradiction leads from a conflict between external requirements to one or more “internal” design parameters with contradictory definitions. Such a paradox can be systematically approached according to classical TRIZ tools: 1. identify the operational zone (i.e. the region where the contradiction occurs) and the operational time (i.e. the interval where the contradiction occurs); 2. check if the contradictory requirements for the design parameter co-exist in the whole operational zone (separation in space) and operational time (separation in time); 3. check if those contradictory requirements co-exist under any condition (separation upon condition); 4. evaluate the opportunity to overcome the contradiction by means of a transition to the system components (sub-systems) or to its environment (super-system). All these suggestions can be enriched by means of the guidelines provided by the inventive principles that can be associated with each separation principle. As a result, the designer should be able to define a new set of optimization problems according to the following exemplary list of actions: separating and/or segmenting the functional surfaces; dynamizing an assembly in order to obtain a different mechanical behavior under different operating conditions; moving to another dimension; etc. (a complete list of suggestions is not compatible with the available space).
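Steps 2-4 of the checklist above amount to a small decision procedure. The sketch below is our own illustrative encoding (the function and argument names are ours); the mapping from co-existence checks to separation principles follows the classical TRIZ scheme described in the text.

```python
def suggest_separations(same_zone, same_time, same_condition):
    """Given whether the contradictory requirements co-exist in the whole
    operational zone, in the whole operational time, and under all
    conditions, return the applicable separation hints."""
    hints = []
    if not same_zone:
        hints.append("separation in space")
    if not same_time:
        hints.append("separation in time")
    if not same_condition:
        hints.append("separation upon condition")
    if not hints:
        # Requirements truly co-exist everywhere: change system level.
        hints.append("transition to sub-system or super-system")
    return hints

# Snips example from section 4: lever arm and shear length can be assigned
# to different regions of the blade, so the requirements need not co-exist
# in the same zone -> separation in space.
print(suggest_separations(same_zone=False, same_time=True, same_condition=True))
# -> ['separation in space']
```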

4. Exemplary application of the proposed approach
In order to clarify the whole process described in section 3, its basic (but not trivial) application to sheet metal snips is reported here. Snips are hand shears used for cutting sheet metal, usually up to 0.5 mm thick. The typical layout of a pair of snips is essentially the same as that of common scissors, though with appropriate dimensions. Among the requirements sheet metal snips have to satisfy, it is worth highlighting: minimal force required from the user and maximum length of cut at each operation (as design objectives); light weight, limited overall size according to ergonomics, and limited width for reducing sheet deformation (as design constraints). According to this problem situation, two initial optimization problems can be defined (omitted for space limitations); as a result, two opposite directions are suggested: the shear length should be small in order to maximize the lever arm (minimize the requested effort), but the shear length should be large in order to cut a long piece of metal in a single operation. Such a physical contradiction can be overcome by means of a separation in time (e.g. with a ratchet mechanism) or by a separation in space (e.g. separating the lever arm, i.e. the distance between the shear edge and the fulcrum, from the shear length). The latter guideline led the authors to two conceptual solutions: • moving to another dimension, i.e. designing the snips with a shear edge orthogonal to the lever arm; • increasing the curvature of the edge, i.e. building a circular blade like a can opener. An exemplary embodiment of the first concept is shown in figure 1 (right), obtained at the end of a second optimization problem where the functional surfaces have been defined according to the separation in space/another dimension principle (figure 1, left).

Fig. 1 – Design space of a redefined optimization task (left); optimized design of sheet metal snips (right).

5. Conclusions
In this paper, the adoption of Design Optimization tools as a means for bridging systematic innovation methods with CAD systems has been presented: the proposed procedure fits the standard approach to design, but at the same time leads systematically to the identification of conflicting design parameters, thus overcoming the major difficulty of newcomers to TRIZ, i.e. the capability to describe design problems in terms of physical contradictions related to design parameters and not just conflicting (external) requirements. Due to the space constraints of the manuscript, only an exemplary application of the procedure has been shown to demonstrate the validity of the proposed approach. For the same reason, some details about the definition of the optimization problems, as well as the translation of the separation principle into a novel optimization problem, have been omitted. Further details and examples will be presented during the oral presentation at the conference.

References
[1] Cascini G., “State-of-the-Art and trends of Computer-Aided Innovation tools – Towards the integration within the Product Development Cycle”, Building the Information Society, Kluwer Academic Publishers (ISBN 1-4020-8156-1), 2004, pp. 461-470.
[2] Chang H.T., Chen J.L., “The conflict-problem-solving CAD software integrating TRIZ into eco-innovation”, Advances in Engineering Software, 35, 2004, pp. 553-566.
[3] Leon N., Cueva J.M., Gutiérrez J., Silva D., “Automatic shape variations in 3D CAD environments”, Proceedings of the 1st IFIP TC5 Working Conference on Computer Aided Innovation, November 14-15 2005, Ulm, Germany, ISBN 3-00-017325-0.
[4] Saitou K., Izui K., Nishiwaki S., Papalambros P., “A survey of structural optimization in mechanical product development”, Journal of Computing and Information Science in Engineering, v 5, n 3, September 2005, pp. 214-226.
[5] Spath D., Neithardt W., Bangert C., “Optimized design with topology and shape optimization”, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, v 216, n 8, 2002, pp. 1187-1191.

TRIZ BASED TOOL MANAGEMENT IN SUPPLY NETWORKS

R. Teti, D. D’Addona Department of Materials and Production Engineering, University of Naples Federico II, Piazzale Tecchio 80, 80125 Naples, Italy [email protected], [email protected]

Abstract The Multi-Agent Tool Management System (MATMS), based on intelligent agent technology for automatic tool procurement in supply networks and provided with a Flexible Tool Management Strategy (FTMS) for optimum tool inventory control, is presented in this paper. The MATMS operates in the framework of a negotiation-based, multiple-supplier network where a turbine blade producer (client) requires dressing jobs on worn-out CBN grinding wheels from external tool manufacturers (suppliers). Tool supply cost minimization involves inventory level minimization, whereas stock-out risk prevention entails inventory constrained maximization. The MATMS integrates the inventive problem solving method, known as the TRIZ method, that can suggest invention directions based on heuristics or principles to resolve the contradictions between production needs and cost minimization. Keywords: Tool management, Supply network, Inventive problem, Multi-agents, TRIZ agent

1. Introduction
The Multi-Agent Tool Management System (MATMS) [Teti, 2003a], based on intelligent agent technology for automatic tool procurement in supply networks and provided with a Flexible Tool Management Strategy (FTMS) for optimum tool inventory control, is presented. The MATMS operates in the framework of a negotiation-based, multiple-supplier network where a turbine blade producer (client) requires dressing jobs on worn-out CBN grinding wheels for Ni base alloy turbine blade fabrication from external tool manufacturers (suppliers) [Teti, 2003b, c; Fox, 2000]. Its intelligent Resource Agent is responsible for inventory management, and its problem solving function is the FTMS. Such an environment is affected by significant non-random uncertainty involving fluctuations in tool delivery time and unpredictable tool demand. Moreover, a trade-off between tool supply cost and stock-out risk prevention characterizes the inventory sizing dilemma with reference to reusable tools such as grinding wheels. The formulation of the control task is intrinsically ambiguous, as conflicting control objectives need to be pursued concurrently: tool supply cost minimization involves inventory level minimization, whereas stock-out risk prevention entails inventory constrained maximization. The MATMS integrates the inventive problem solving method known as the TRIZ method, which can suggest invention directions based on heuristics or principles to resolve the contradictions between production needs and cost minimization. The MATMS includes the Invention Agent (TRIZ Agent), the Coordination Agent (Enterprise Level Agent), and Domain Specific Problem Solving Agents (Resource Agent, Dressing Time Prediction Agent, Order Distribution Agent).

2. Traditional and innovative CBN grinding wheel tool management
Turbine blade fabrication is carried out in parallel along several production lines, each dedicated to an engine model requiring a specific set of CBN grinding wheel types (part-numbers), planned to work a maximum number of pieces. Each time a CBN grinding wheel reaches its end-of-life, a dressing operation is required from an external tool manufacturer in a supply network, and the worn-out tool is unavailable for a time defined as the dressing cycle time. For each part-number, a sufficient number of CBN grinding wheels (serial-numbers) must be available at all times to prevent production interruptions due to tool run-out. The current strategic plan for CBN grinding wheel part-number (P/N) inventory sizing is based on the selection of an appropriate number of serial-numbers (S/N), for each P/N, sufficient to cover production needs during the number of months required without new or dressed tool supply. This traditional tool management procedure does not always reliably ensure correct CBN grinding wheel management [Teti, 2003a, b, c]. The turbine blade producer, aware of the drawbacks of the traditional tool management practice, proactively modifies the P/N inventory level on the basis of past experience. The results of this policy, founded on skilled staff knowledge, are the historical inventory size trends. In some cases, the expected trend matches the historical one. In other cases, it is underestimated, with a risk of stock-out, or overestimated, with excessive capital investment. A Flexible Tool Management Strategy was proposed in [Teti, 2003b, 2005; D’Addona, 2005a, b] as an alternative to the traditional tool management procedure for tool inventory control.
The FTMS paradigm is configured as a domain specific problem solving function operating within the intelligent agent of the MATMS, the Resource Agent, holding the responsibility for optimum tool inventory sizing and control, tool supply cost and stock-out risk minimization. The purpose of the FTMS is to ensure that the P/N on-hand inventory size, I, remains within an interval defined by two real-time control limits (Fig. 1): the P/N demand during the purchase order lead time (lower limit, Imin) and the P/N demand calculated using the dressing cycle time prediction (upper limit, Imax). The inventory level, I, is left free to oscillate within the limits [Imin, Imax], provided neither of them is crossed. Whenever I decreases due to a tool wear-out event, the worn CBN grinding wheels are sent out for dressing. If the lower control limit is crossed (I < Imin), the turbine blade producer must provide additional S/N; if the upper control limit is crossed (I > Imax), the P/N on-hand inventory is reduced by suspending dressing job order allocation, so as to bring the stock level back within the control range. The FTMS optimization in terms of the two diverging minimization requirements of stock-out risk and tool supply cost is based on a dynamic management purpose assignment paradigm aimed at preventing panic buying as a result of stock-out risk aversion [D’Addona, 2006]. The attraction band, i.e. the range delimited by the two control limits in Fig. 1, plays the role of attractor for the inventory level trend. The global system behaviour is described by an oscillating pattern of the inventory level trend in the vicinity of the attraction band. The oscillation amplitude mainly depends on the attractor bandwidth as well as on the peaks attained by the tool demand rate during the tool management period.
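The two-limit control rule can be sketched as follows. This is our own illustration of the behaviour described in the text, not the published FTMS implementation; the event sequence and the limit values are hypothetical.

```python
def ftms_action(inventory, i_min, i_max):
    """Control action for the current P/N on-hand inventory level I:
    crossing the lower limit triggers procurement of additional
    serial-numbers; crossing the upper limit suspends dressing job
    order allocation; otherwise operation proceeds normally."""
    if inventory < i_min:
        return "procure additional serial-numbers"
    if inventory > i_max:
        return "suspend dressing job order allocation"
    return "allocate dressing jobs normally"

# Simulate a sequence of tool wear-out (-1) and tool delivery (+1) events
# for one part-number, with an attraction band [3, 7].
inventory = 5
for event in (-1, -1, -1, +1, +1, +1, +1, +1, +1):
    inventory += event
    print(inventory, ftms_action(inventory, i_min=3, i_max=7))
```

The printed trace shows the inventory level oscillating around the attraction band: dropping to 2 triggers procurement, climbing to 8 suspends order allocation, and everything in between is handled normally.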

Figure 1: Generic part-number on-hand inventory level, I, vs. time, t. w = tool wear-out event (•); d = tool delivery event (o); h = control limit crossing due to a w-event (*).

3. Architecture of the Multi-Agent Tool Management System
The adoption of multi-agent technology is based on the three most important system domain characteristics: data, control, expertise or resources are inherently distributed; the system is naturally regarded as a society of autonomous cooperating components; or the system contains legacy components that must interact with other, possibly new, software components [Bond, 1988]. Supply network management by its very nature has all the above domain characteristics [Yuan, 2001]. A supply network consists of suppliers, factories, warehouses, etc., working together to fabricate products and deliver them to customers. The parties involved in the supply network have their own resources, capabilities, tasks, and objectives. They cooperate with each other autonomously to serve common goals but also have their own interests. A supply network is dynamic and involves constant flows of information and materials across multiple functional areas, both within and between network members. Multi-agent technology therefore appears to be particularly suitable to support collaboration in supply network management. The MATMS was developed by making use of agent development tools integrated in a Generic Agent Shell that provides off-the-shelf but customisable services for communication, coordination, reasoning and problem-solving [Fox, 2000]. The Generic Agent Shell provides several layers of reusable services and languages. These are concerned with agent communication services to exchange messages with other agents, specification of coordination mechanisms (shared conventions about exchanged messages during cooperative action with other agents), services for conflict management and information distribution (voluntary or on-request distribution of information of interest to other agents), and reasoning and integration of purpose-built or legacy application programs.
The glue that keeps all layers together is a common knowledge and data management system on top of which these layers are built. The approach allows for a clear distinction between an agent’s social know-how (communication services, coordination mechanisms, information distribution services and other) and its domain level problem solving capability. Purpose-built application programs can make use of this agent architecture to enhance their problem solving and to improve their robustness through coordination with other agent based applications. Pre-existing (legacy) application programs can also be incorporated with little adaptation and can experience similar benefits. This latter point is important because in many cases developing the entire application afresh would be considered too expensive or too large a change away from proven technology. The MATMS activities are carried out according to the multi-agent interaction and cooperation protocols described below. In Figure 2, the block scheme of the developed MATMS, subdivided into three functional levels, is reported [Teti, 2003a]: (a) the Supplier Network Level, comprising the external tool manufacturers in the supply network, is responsible for carrying out the dressing jobs on worn-out grinding wheels; (b) the Enterprise Level, comprising the logistics of the turbine blade producer, is responsible for coordinating the MATMS activities to achieve the best possible results in terms of its goals, including on-time delivery, cost minimization, and so forth; (c) the Plant Level, comprising the production lines of the turbine blade producer.
The agents of the Enterprise Level and their functions are: (a) the Resource Agent (RA) combines the functions of inventory management, resource demand estimation and determination of resource order quantities; (b) the Order Distribution Agent (ODA) selects the external supplier to which the CBN grinding wheel dressing job orders should be allocated on the basis of negotiations and constraints, through its domain specific problem solving function for order allocation implemented in ILOG OPL Studio 3.5 [Van Hentenryck, 1999]; (c) the Dressing Time Prediction Agent (DTPA) carries out the predictions of CBN grinding wheel dressing cycle times, founded on historical data [Teti, 2003b, c]; the DTPA employs as domain specific problem solving function a neuro-fuzzy paradigm known as the Adaptive Neuro-Fuzzy Inference System (ANFIS) [Jang, 1993, 1995]; (d) the Knowledge & Data Base Agent (K&DBA) collects all the information relevant for tool management activities, including the updating of historical data on grinding wheel dressing cycles; (e) the Warehouse Timer Agent (WTA) takes care of the incoming and outgoing CBN grinding wheels, including the evaluation of actual dressing cycle times; (f) the Invention Agent or TRIZ Agent (TA) uses the TRIZ theory in the invention process in order to suggest principles to improve or resolve the contradictions between production needs and cost minimization [Soo, 2005]; (g) the Coordination Agent (CA) coordinates each individual agent’s goal searching activity by balancing different objectives, finds the “all-agreed” solution in a short time period and cooperates with the User and such Domain Agents (DA) as the DTPA, RA, ODA and K&DBA to obtain a feasible solution [Soo, 2005].

4. Theoretical Development

4.1 TRIZ Agent and Coordination Agent
The inventive problem solving method, known as the TRIZ method, is configured as a domain specific problem solving function operating within the intelligent agent of the MATMS, the TRIZ Agent, holding the responsibility for suggesting invention directions based on heuristics or principles to resolve the contradictions between production needs and cost minimisation. These contradictions allow, through the contradiction matrix (an m×n matrix Mc, where m = n = 31 for Management Systems and m = n = 39 for Technical Systems), the identification, among the 40 inventive principles established by Altshuller, of the most adequate principle for each contradiction [Farias, 2005]. The contradiction matrices are constantly updated, reviewed, and extended for application to many fields (management, food, social) beyond the original applications in engineering and technology [Altshuller, 1994, 1995]. After positioning the contradictions indicated by Altshuller’s contradiction matrix, adapted to the logistics problems, the inventive principles which are most adequate to the logistic scenario are identified. The first step of an invention process is to identify the problem. The TRIZ Agent allows the User to express the problem to be solved in terms of physical contradictions. The User must identify the technical characteristics to be improved and those that will be worsened when the problem is solved. The related attributes and possible conflicts are communicated to the CA through Agent Communication Language (ACL) messages [FIPA97]. In the present tool management case, the engineering parameters to be improved are the tool run-out risk and the tool demand forecast. These parameters are input by the User into the TRIZ Agent (first step), which returns the results in terms of 2 principles (second step): (a) Principle #25: Self-service. This principle consists of enabling a system to perform its own functions or to self-organise.
In the MATMS, the agents give feedback to the K&DBA, allowing real-time updating. (b) Principle #36: Phase transition. This principle proposes that transition phases might be capitalised on in order to clarify the real necessities of the MATMS system: the optimal availability of CBN grinding wheels. The domain specific problem solving function (TRIZ) of the TRIZ Agent identifies the principles on the basis of the contradiction matrix for Management Systems [Farias, 2005]. In the MATMS this phase is carried out through the FTMS paradigm, which is configured as a domain specific problem solving function operating within the MATMS intelligent Resource Agent, holding the responsibility for optimum tool inventory sizing and control of CBN grinding wheels. In the third step, the User considers principle #25 less relevant to the problem and thus chooses principle #36 to solve the problem. In the fourth step, the TRIZ Agent maps the Domain Ontology to the Patent Claim Ontology instances. The TRIZ Agent suggests to the CA, according to principle #36, that the engineering parameter “Inventory level” is to be modified. The role of the CA is to coordinate and cooperate with the User and such Domain Agents (DA) as the DTPA, RA, ODA and K&DBA to obtain a feasible solution [Soo, 2005]. The DAs contain domain knowledge to find the best solution. They can conduct the engineering analysis to verify whether the solutions proposed by other agents violate the domain constraints or the production plans. For example: the CA receives from the TRIZ Agent the suggestion to modify the CBN grinding wheel inventory level according to principle #36. The CA sends a request ACL message to the RA to verify the inventory level size. After several iterations of communication protocols, by collecting solutions from the DAs, the CA makes a decision on the CBN grinding wheel inventory level size.
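The lookup performed by the TRIZ Agent can be sketched as a table query. This is a heavily hedged illustration: the real management-system matrix is 31×31 [Farias, 2005], whereas here only the single cell used in this case study is encoded, populated with the two principles (#25 Self-service, #36 Phase transition) reported in the text; the parameter names and the pairing of the two improved parameters as matrix coordinates are our own simplification.

```python
PRINCIPLES = {25: "Self-service", 36: "Phase transition"}

# Fragment of a contradiction matrix:
# (improving parameter, worsening parameter) -> candidate principle numbers.
CONTRADICTION_MATRIX = {
    ("tool run-out risk", "tool demand forecast"): [25, 36],
}

def suggest_principles(improving, worsening):
    """Return (number, name) pairs for the cell, or [] if no entry exists."""
    numbers = CONTRADICTION_MATRIX.get((improving, worsening), [])
    return [(n, PRINCIPLES[n]) for n in numbers]

print(suggest_principles("tool run-out risk", "tool demand forecast"))
# -> [(25, 'Self-service'), (36, 'Phase transition')]
```

In the scenario above, the User would then discard #25 and retain #36, after which the Coordination Agent queries the Resource Agent about the inventory level.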

4.2 MATMS functioning
In Figure 2, the block scheme of the MATMS functioning is reported. Initially, the CA receives from the TRIZ Agent the suggestion to modify the CBN grinding wheel inventory level according to principle #36. The RA in the Enterprise Level receives information on grinding wheel end-of-life events from the PAi in the Plant Level through Pj-R inform communicative acts, containing the P/N and S/N of the worn-out CBN grinding wheels, and a request act from the CA. For each worn-out grinding wheel, the RA is informed of the current monthly production, P/N tool life, and P/N inventory level, I, by regularly interrogating the K&DBA through a R-K subscribe act, followed by a K-R inform reply. Simultaneously, the RA obtains the information about the supplier-independent dressing cycle time predictions from the DTPA through a R-D request act, followed by a D-R inform reply. To issue a dressing cycle time prediction, the DTPA asks for the updated historical dressing cycle times from the K&DBA through a D-K subscribe act, followed by a K-D inform reply. On the basis of supplier-independent dressing cycle time predictions, current monthly production data and part-number tool life values for the relevant part-number, the RA evaluates the demand for CBN grinding wheel dressing in order to carry out the part-number inventory sizing and control through its domain problem solving function: the FTMS. If the RA does not consider the dressing operation necessary for a certain part-number, it informs the K&DBA through a R-K inform act that the worn-out CBN grinding wheel serial-number is kept on hold in the enterprise warehouse. If the RA deems it necessary to issue a dressing job order, the RA sends a R-O request act to the ODA asking it to start a procedure for dressing job order allocation; this request is followed by an O-R agree reply.
The task of the ODA consists in allocating the required dressing job order to one of the external tool manufacturers in the supply network on the basis of negotiations and constraints. To this purpose, the ODA needs to gather the information necessary for supplier selection from the relevant agents: DTPA, K&DBA, SOAAi. The ODA starts negotiating with the SOAAi in the Supply Network Level to obtain from each of them the dressing price and time (offers) for the required worn-out CBN grinding wheel part-number. The negotiation is initiated by an O-Si call for proposals act, to which the SOAAi respond with their offers through Si-O propose acts. Simultaneously, the ODA obtains from the DTPA the supplier-dependent dressing cycle time prediction (one for each supplier in the network) and from the K&DBA the dressing job reference price, due-time, etc., through O-D and O-K request acts followed by D-O and K-O inform replies. On this basis, the ODA ranks the responding suppliers and allocates the dressing job order to the first supplier in the rank through an O-Si accept proposal act. The selected SOAAi accepts (refuses) the dressing job order through a Si-O agree (refuse) act. In case of order refusal, the ODA contacts the second supplier in the rank list, and so on. At the end of this procedure, the ODA informs the K&DBA about the order allocation results through an O-K inform act and requests the WTA to send the worn-out CBN grinding wheel for dressing to the selected supplier using an O-W request act, followed by a W-O agree reply. The WTA records the delivery and reception dates of each CBN grinding wheel for actual dressing cycle time evaluation. These data are regularly fed to the K&DBA, which makes them available for further requests and interrogations by the relevant agents; this is obtained through a K-W subscribe act, followed by W-K inform replies.
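The ODA's negotiation round (call for proposals, collect offers, rank, walk down the ranking until a supplier agrees) can be sketched as follows. This is our own contract-net-style illustration, not the ILOG OPL implementation used by the ODA; the supplier names, the offers, and the simple price-then-time ranking criterion are hypothetical.

```python
def allocate_dressing_job(offers, accepts):
    """offers: {supplier: (price, dressing_time)} gathered from Si-O
    propose acts; accepts: supplier -> bool, mocking the Si-O agree/refuse
    reply. Returns the supplier the order is allocated to, or None."""
    ranking = sorted(offers, key=lambda s: offers[s])  # by price, then time
    for supplier in ranking:
        if accepts(supplier):        # Si-O agree act
            return supplier          # order allocated, K&DBA informed next
    return None                      # every ranked supplier refused

offers = {"S1": (120.0, 14), "S2": (100.0, 21), "S3": (100.0, 12)}
# S3 outranks S2 (same price, shorter time); suppose S3 refuses the order,
# so the ODA falls back to the next supplier in the rank list.
print(allocate_dressing_job(offers, accepts=lambda s: s != "S3"))
# -> S2
```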

5. Conclusions
The Multi-Agent Tool Management System (MATMS), based on intelligent agent technology for automatic tool procurement in the framework of a negotiation-based, multiple-supplier network where a turbine blade producer requires dressing operations on worn-out CBN grinding wheels from external tool manufacturers, was presented. In the MATMS, tool supply cost minimization involves inventory level minimization, whereas stock-out risk prevention entails inventory constrained maximization. The inventive problem solving method, known as the TRIZ method, was integrated into the existing MATMS architecture to suggest invention directions, based on Altshuller’s principles, to resolve the contradictions of the above inventory dilemma. Initial experimental results on the functioning of the TRIZ-agent-integrated MATMS have been obtained only recently and will be presented in forthcoming publications.

Figure 2. Block scheme of the Multi-Agent Tool Management System (MATMS) integrated with the TRIZ Agent (TA) and the Coordination Agent (CA).

6. References
- Altshuller, G., 1994, “The Art of Inventing (And Suddenly the Inventor Appeared)”, translated by Lev Shulyak, Technical Innovation Center, Worcester, MA, ISBN 0-9640740-1-X.
- Altshuller, G., Fedoseev, Y., 1995, “40 Principles: TRIZ Keys to Technical Innovation”, Technical Innovation Center, Inc., Vol. 1, ISBN 0964074036.
- Bond, A.H., Gasser, L., 1988, Readings in Distributed Artificial Intelligence, Morgan Kaufmann.
- D’Addona, D., Teti, R., 2005a, “Flexible Tool Management Strategies for Optimum Tool Inventory Control”, 1st Int. Virtual Conf. on Intell. Prod. Mach. & Syst. – IPROMS 2005, 4-15 July: 639-644.
- D’Addona, D., Teti, R., 2005b, “Intelligent Tool Management in a Multiple Supplier Network”, Annals of the CIRP, Vol. 54/1: 459-462.
- D’Addona, D., Bontempi, A., Teti, R., 2006, “Emergent Synthetic Approach to the Inventory Dilemma in a Complex Manufacturing Environment”, 6th International Workshop on Emergent Synthesis in cooperation with CIRP Workshop on Engineering as Collaborative Negotiation, August 18-19, 2006, Kashiwa, Japan, pp. 1-8.
- Farias, O., 2005, “The Logistic Innovation Approach and the Theory of Inventive Problem Solving”, CLADEA, Asamblea Anual 2005, 20-22 October, Santiago de Chile.
- “FIPA97 Specification”, part 2, ACL, Foundation for Intelligent Physical Agents, October 1997.
- Fox, M.S., Barbuceanu, M., Teigen, R., 2000, “Agent-Oriented Supply-Chain Management”, Int. J. of Flexible Manufacturing Systems, 12, pp. 165-188.
- Jang, J.-S. R., 1993, “ANFIS: Adaptive-Network-Based Fuzzy Inference System”, IEEE Trans. on Systems, Man, and Cybernetics, Vol. 23/3: 665-685.
- Jang, J.-S. R., 1995, “Neuro-Fuzzy Modelling and Control”, Proceedings of the IEEE, 83: 378-405.
- Soo, V., Lin, S., Yang, S., Lin, S., Cheng, S., 2005, “A Cooperative MA Platform for Invention based on Ontology and Patent Document Analysis”, 9th Int. Conf. on CSCW Design, Coventry, UK.
- Teti, R., D’Addona, D., 2003a, “Agent-Based Multiple Supplier Tool Management System”, 36th CIRP Int. Sem. on Manufacturing Systems – ISMS 2003, Saarbrücken, 3-5 June, pp. 39-45.
- Teti, R., D’Addona, D., 2003b, “Grinding Wheel Management through Neuro-Fuzzy Forecasting of Dressing Cycle Time”, Annals of the CIRP, Vol. 52/1, pp. 407-410.
- Teti, R., D’Addona, D., 2003c, “Multiple Supplier Neuro-Fuzzy Reliable Delivery Forecasting for Tool Management in a Supply Network”, 6th AITEM Conf., Gaeta, 8-10 Sept., pp. 127-128.
- Teti, R., D’Addona, D., Segreto, T., 2005, “Flexible Tool Management in a Multi-Agent Modelled Supply Network”, CIRP Journal of Manufacturing Systems, Vol. 34/3: 203-218.
- Van Hentenryck, 1999, “The OPL Optimisation Programming Language”, Massachusetts Institute of Technology, Boston, USA.
- Yuan, Y., Liang, T.P., Zhang, J.J., 2001, “Using Agent Technology to Support Supply Chain Management: Potentials and Challenges”, M. G. De Groote School of Business, Working Paper No. 453.

USING TRIZ AND HUMAN-CENTERED DESIGN FOR CONSUMER PRODUCT DEVELOPMENT

Alan Van Pelt Berkeley Institute of Design University of California, Berkeley [email protected]

Jonathan Hey Berkeley Institute of Design University of California, Berkeley [email protected]

Abstract TRIZ is increasingly being applied to consumer product development, in which products have to solve more than just technical problems: they have to provide compelling solutions to consumer needs. This paper discusses the use of TRIZ together with Human-Centered Design (HCD), a design methodology that evolved for consumer product development. Using a case study, we illustrate why understanding user needs in consumer product development is particularly important, and then compare the TRIZ and HCD methodologies. To better understand the appropriate use of TRIZ in consumer product development, we present a framework of Use, Usability and Meaning. Design situations where the emphasis is on a product’s Use stand to benefit most from TRIZ methods, whereas for product areas with strong Meaning attached, HCD methods provide the most guidance. We finish by presenting some opportunities for successful integration of the two methodologies. Keywords: TRIZ, Human-Centered Design, Consumer products, Use, Usability, Meaning

1. Introduction
TRIZ (Altshuller, 1984) has gained much commercial acceptance over the past decades, and can now be found contributing to product development in such multinational corporations as Samsung, Motorola, Xerox and others. This paper considers this expanding use of TRIZ within consumer product design (for example, Hipple, 2006; Jana, 2006) and its integration with a Human-Centered Design (HCD) methodology. We believe that just as HCD practitioners can benefit from the introduction of TRIZ methods, so too can TRIZ practitioners benefit from the introduction of HCD methods. TRIZ largely focuses on the innovation of physical devices, yet it is human needs that drive people to buy products: the need to go quickly from point A to point B; the need to remain clean; the need to feel unique. While some needs may appear obvious to designers – the need to drink coffee on the move – others will remain hidden or dormant – the need to be perceived as a busy, caffeinated go-getter. Translating from individuals’ needs to TRIZ’s language of functions is a key, but tricky, step. We argue that successful consumer product development requires understanding subtle, below-the-surface factors such as human values, product meanings, and unspoken needs. TRIZ was not developed to provide a deep understanding of consumer needs. Several TRIZ researchers, therefore, have made efforts to integrate TRIZ with other design methods and tools. Integration of the Voice of the Customer in the problem definition phase has been proposed (for example, Mann, 2006), including the use of models such as the Quality Function Deployment (QFD) House of Quality. Others have proposed integration with the Kano model (Runhua, 2002) or Neuro Linguistic Programming to understand customers (Mann, 2002).
Hipple (2006) interprets many consumer products from the perspective of the TRIZ methodology, and Mann (2002) provides an extension of the classical 9-Windows tool to include consideration of behaviour, capability, and beliefs, values, and identity. Each of these integrated methods, however, requires a well-understood product space. They do not include a robust means of discovering the needs of a particular customer group or a framework for working with those needs. In contrast, Human-Centered Design is a design methodology emphasizing a deep understanding of users, a prototype-often approach, and a fail-early-to-succeed-sooner mentality. Its “enlightened trial and error” approach contrasts with TRIZ’s emphasis on careful analysis to solve the right problem. Perhaps this explains why few designers actively combine the two methodologies in their design approach. The expanding application of TRIZ to consumer product development and the increasingly competitive nature of product design make their integration especially important now. We begin by illustrating with a case study how a thorough understanding of user needs during product development is critical to successful adoption. We then provide an overview of HCD and compare the methodologies side by side. Finally, we discuss the integration of the two methodologies by presenting a framework of Use, Usability and Meaning to determine when each is most appropriate. 1.1 Consumer Needs Over Technical Superiority: a Case Study In early 1975, Sony Corporation revolutionized the home entertainment market by introducing the BetaMax, a personal video cassette recording system that allowed users to record video from cameras or television. Nearly two years later, JVC introduced a competing technology known as VHS. While VHS had a recording time twice that of Beta’s one hour, most considered Beta to be the superior technology, with higher video resolution and more compact cassette tapes.
The Beta format enjoyed a large lead in the early 80s, but by 1985 the market had turned sharply towards VHS, and in 1988 Sony began marketing their own VHS machines, all but abandoning BetaMax. Given Sony’s technology advantage, how then did VHS manage to defeat Beta? The answer will continue to be debated, but there appear to be three primary reasons: 1) unsuccessful, but disruptive, legal actions by Universal and Disney Studios in the late 1970s, which named only Sony; 2) the lower cost of VHS in the late 1970s; and 3) perhaps the biggest factor, Sony’s failure to appreciate that, in consumers’ minds, extended taping time outweighed recording quality and reduced tape size. Misjudging which technological attributes mattered most to VCR customers was a critical mistake, and unfortunately mistakes such as these remain common. This case exemplifies the fate of thousands of technologically superior products that failed to become a success, and it allows us to make the following observations: 1) Superior technologies don’t always prevail in the marketplace. 2) Companies often don’t know their customers as well as they think they do. We believe these two observations account for many of the difficulties in applying TRIZ to consumer product development. 2. Comparing TRIZ and HCD

2.1 What is Human-Centered Design? A modern approach to HCD described by Patnaik and Becker (1999) requires that stakeholders be studied at a very deep level. We aim to understand their particular activities, beliefs, preferences, emotions, motivations, troubles, and environments. We want to see users’ messy realities; to understand their mistakes, short-cuts, and work-arounds; and to learn about the personal significance and meaning they attach to their activities and product interactions. When one has a complete picture of consumers, the most important set of unmet needs, appropriate to both the user and the organization’s business strategy, can be identified and addressed. Once concept solutions have been generated, the rich understanding of a consumer group can be used to evaluate those solutions by the criteria most important to consumers, the very people paying for the product. 2.2 HCD Methodology The primary research method of HCD is ethnography, an anthropological technique in which people are studied in their natural context. It can mean observing consumers doing what they normally do, as if a fly on the wall; talking to consumers about their activities as they do them; or becoming an active participant in the same activities as the consumers. In all cases, the point is to learn what consumers do in their natural context, what matters to them, and why – to see the world through their eyes by building empathy (Leonard and Rayport, 1997). Contextual research is necessary because various studies have shown that consumers are unknowingly prone to influence from outside factors that can be difficult to identify in a focus group or survey. In one experiment performed by Cheskin Research (Gladwell, 2005), a beverage was placed into two different bottles, one looking rather plain, the other quite fancy.
Taste testers reported the contents of the nicer looking bottle as tasting better, despite the fact that the two bottles contained the same liquid. Moreover, what users say that they do and what they actually do can vary significantly, especially when they are removed from the relevant context. People have a hard time reflecting on, or even noticing, the ordinary and habitual activities of everyday life. They report with conviction on what they believe to be true, not necessarily on what is true. They alter responses to sound more competent or agreeable to interviewers. Superficial research techniques are simply insufficient to uncover deeper customer needs. While much of this research is done prior to development, HCD also requires ongoing learning from stakeholders through the use of frequent prototypes. 2.3 TRIZ and HCD Side-by-Side Whereas problem abstraction is central to TRIZ, HCD takes the specific user and context as its main tenets. The HCD practitioner views needs, arising from subtleties and contradictions in human behaviour, as the focal point for development. The TRIZ practitioner places physical and technical contradictions and potential at the forefront. Both methodologies adhere to similar development frameworks, in which research and analysis phases are followed by solution generation and then evaluation phases. Mann (2002) breaks the systematic innovation process into four steps, which repeat iteratively: Define, Select Tool, Generate Solutions, Evaluate. Within problem definition, HCD emphasizes problem finding. While an array of TRIZ tools support the problem definition phase, in our experience TRIZ practitioners often define the problem space based on insufficient information drawn from management imperatives, marketing perspectives, or their own experience, instead of user research. Doing so places the product’s future success at risk.
TRIZ is a highly structured approach to innovation, while HCD follows a less structured approach and, in particular, provides very little structure for generating solutions. Too often, HCD practitioners leave the technological innovation to traditional brainstorms and the experience of the design team. As a result, the “what” and “how” of concept development can suffer. In contrast, TRIZ’s structured approach and leveraging of other successes result in a more robust exploration of the solution space. TRIZ’s development from a historical analysis of design solutions also provides it with strong technology forecasting. While HCD provides no formal means of technology forecasting, an in-depth understanding of consumer needs allows one to evaluate which types of products are more likely to succeed in the marketplace. A summary of several key differences between TRIZ and HCD is shown in Table 1.

Table 1: Comparison table of key differences between TRIZ and Human-Centered Design

TRIZ                                            Human-Centered Design
Focus on functionality and the technical side   Focus on human needs
Leverages prior technical successes             Leverages anthropological techniques
Emphasizes abstraction                          Emphasizes context
Highly structured approach                      Loosely structured approach
Prescribes what and how                         Describes why

3. Integrating TRIZ and HCD Having described the relative strengths of the two methodologies, we now present the framework of Use, Usability and Meaning (Barry and Patnaik, 2000, adapted from Sanders, 1992) as a means of deciding when each is most appropriate. 3.1 The Use, Usability, Meaning Framework ‘Use’ refers to what users can do with a product and the main functional benefits it provides. In this sense, it is synonymous with the TRIZ notion of function. ‘Usability’ refers to the ways and ease with which users interact with a product. It covers more than just how easy a product is to use: also the senses that are engaged, the contexts in which engagement occurs, and the affordances the product provides. ‘Meaning’ is the most esoteric of the three. A consumer product is more than the sum of the functions it performs; in addition to buying the functionality of a product, users are purchasing a system of meanings, either intentionally embedded in the product or associated by the individual through happenstance. Meaning is created through a product’s context of use and usability, as well as through advertising and branding. But for a product to hold meaning, the user must create internal associations between the product and personal experiences or widely-held cultural beliefs. The relative importance of each of Use, Usability, and Meaning depends on the product application and the context of its use. The internal workings of a lawnmower engine probably carry little Meaning for users, who simply care that it turns the blades fast enough to cut grass. The sounds that a Harley-Davidson motorcycle engine makes, however, carry much Meaning for its users. Indeed, the appearance and sound of Harley-Davidson engines contribute to a feeling of adventure, power, and oneness with the road that creates customer loyalty matched by few other products.
HCD practitioners look for unmet needs within each of Use, Usability, and Meaning, but also for gaps between the three: for example, does the Usability reinforce the Meaning? Does the Meaning create Use problems? 3.2 When TRIZ and HCD Apply We propose that the importance of studying consumers for the development of a product depends on the amount and type of interaction between the consumer and the technology. The more direct interaction a customer will have with the product, the stronger the need for HCD approaches to understanding and soliciting frequent feedback from customers. Products can be classified according to the relative importance of Use, Usability, and Meaning to the product’s adoption. Products in which Use is the primary driver, and Meaning and Usability are of little importance, are “under the hood” technologies, characterized by little direct user interaction. This is the case with technical problems such as those inside an engine, or the mechanics of a refrigerator. These problem-solution spaces can effectively be explored using a minimal amount of user research and thus can rely heavily, if not exclusively, on TRIZ methodologies. Usability issues require studying users to determine where and when users encounter problems, what their workarounds are, and the precise differences between their expectations and what they encountered. But TRIZ also contributes here, particularly if usability issues are physical, as in ergonomics, where, for example, the Trends of Technological Evolution are effective for pointing designers towards improved solutions. When correctly identified, TRIZ is also powerful at solving usability trade-offs such as the Flexibility-Usability trade-off seen in many remote controls: as the flexibility of the controller increases, the increased number of buttons needed makes it more complex to use (Lidwell et al., 2005).
Products that require conscious adoption and interaction by users, such as cell phones or clothing, often must fit within a system of user meanings and require a greater emphasis on HCD methods than on TRIZ methods. A thorough understanding of consumer attitudes, beliefs, values, and expectations is important, as is an understanding of how changes to Use and Usability will affect these. The three realms are closely intertwined, each impacting the others. Handing an innovative technical solution to marketing to “attach meaning to it” will rarely produce truly successful products. Even products that appear primarily Use-focused, such as a hammer, can benefit from HCD techniques; where Meaning does not already exist, there is the opportunity to create it and enhance customer loyalty.

Figure 1: Diagram illustrating the different emphases of TRIZ and HCD
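Purely as an illustration, and not part of the paper's method, the decision the diagram implies can be caricatured in a few lines of code: given weights for how strongly Use, Usability and Meaning drive a product's adoption, suggest which methodology should lead. The function name, weights and thresholds are our own hypothetical assumptions.

```python
# Illustrative sketch only: thresholds and weights are invented, not from the paper.

def suggest_emphasis(use: float, usability: float, meaning: float) -> str:
    """Suggest a leading methodology from the relative adoption drivers."""
    total = use + usability + meaning
    if total == 0:
        raise ValueError("at least one dimension must matter")
    # normalize so the three weights sum to 1
    use, usability, meaning = (x / total for x in (use, usability, meaning))
    if meaning >= 0.4:                 # strong system of user meanings
        return "HCD-led, TRIZ supporting"
    if use >= 0.6 and meaning < 0.2:   # 'under the hood' technology
        return "TRIZ-led"
    return "balanced TRIZ + HCD"

# Refrigerator mechanics vs. a cell phone, with made-up weights:
print(suggest_emphasis(use=0.8, usability=0.15, meaning=0.05))  # TRIZ-led
print(suggest_emphasis(use=0.3, usability=0.3, meaning=0.4))    # HCD-led, TRIZ supporting
```

The point of the sketch is only that the choice is a matter of degree along the three dimensions, not a binary switch between methodologies.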

This is not to say that TRIZ cannot be usefully applied to issues related to Meaning. Indeed, for each of the three dimensions of Use, Usability, and Meaning, TRIZ is useful in providing structure to clarify relationships and end goals. The five pillars of TRIZ (Ideality, Functionality/Attributes, Space/Time Interface, Resources, and Contradiction elimination) are all, at least philosophically, useful concepts with which to approach each dimension, and would benefit from further research. During problem definition, for example, one might consider the following questions regarding Ideality: based on customer research, what are the ideal final results for each Use, Usability, and Meaning issue? Can we meet a need without any cost or harm? What is stopping us from achieving the ideal result, and why? Functional and Attribute Analysis (FAA) can also be applied to non-technical problems (as in Mann, 2004) to better understand the relationships between the components of an issue, whether they are physical and related to Use, behavioural and related to Usability, or psychological and related to Meaning. Visually mapping those relationships allows the designer to see the whole picture and is a powerful means of making sense of information. The TRIZ philosophy of no-compromise contradiction elimination is also very powerful for resolving situations where users have conflicting needs, whether they relate to Use, Usability, or Meaning: for example, a user's need for adrenaline rushes, but also for safety.

4. Conclusion We recommend that care be taken when using TRIZ methods alone for consumer product development, as TRIZ does not provide tools to understand and learn from consumers, who may have complicated, beneath-the-surface needs beyond simple functionality, particularly with regard to systems of meanings. We believe TRIZ and HCD methods complement each other well. We have suggested a means of evaluating the appropriateness of each through the framework of Use, Usability, and Meaning, but believe there is still much opportunity for research in adapting the specific tools and processes of each for the other.

5. References
Altshuller, G., (1984), “Creativity As An Exact Science”, Gordon & Breach.
Barry, M., Patnaik, D., (2000), “ME216a Needfinding”, course at Stanford University, Department of Mechanical Engineering.
Gladwell, M., (2005), “Blink: The Power of Thinking without Thinking”, Little, Brown, New York.
Hipple, J., (2006), “The Use of TRIZ Principles in Consumer Product Design”, TRIZCON 2006, June.
Jana, R., (2006), “The World According to TRIZ”, BusinessWeek Online, May.
Leonard, D., Rayport, J.F., (1997), “Spark Innovation through Empathic Design”, Harvard Business Review, 75(6), pp. 102-113.
Lidwell, W., Holden, K., Butler, J., (2005), “Universal Principles of Design”, Rockport Publishers, Gloucester, MA.
Mann, D.L., (2002), “Hands-on Systematic Innovation”, CREAX Press, Ieper.
Mann, D.L., (2004), “Hands-on Systematic Innovation for Business and Management”, IFR Press, Clevedon.
Mann, D.L., (2006), “Unleashing the voice of the product and the voice of the process”, TRIZ Journal, June.
Patnaik, D., Becker, R., (1999), “Needfinding: The why and how of uncovering people’s needs”, Design Management Journal, Vol. 10, No. 2, pp. 37-43.
Runhua, T., (2002), “Voice of Customers Pushed By Directed Evolution”, TRIZ Journal, June.
Sanders, L., (1992), “Converging Perspectives: Product Development Research for the 1990s”, Design Management Journal, Vol. 3, No. 4, pp. 49-54.

STRUCTURING KNOWLEDGE IN INVENTIVE DESIGN OF COMPLEX PROBLEMS

Denis Cavallucci INSA Strasbourg Graduate School of Science and Technology – LgéCo 24 Boulevard de la Victoire 67084 Strasbourg Cedex, France [email protected]

Thomas Eltzer INSA Strasbourg Graduate School of Science and Technology – LgéCo 24 Boulevard de la Victoire 67084 Strasbourg Cedex, France [email protected]

Abstract Current Research and Development activities in enterprises, building on findings from design engineering studies, postulate that complex systems need to be decomposed to achieve an a priori useful reduction of complexity. However, this assumption and the work engaged so far have not addressed the problem of linking problem formulation, problem-solving stages and existing knowledge. In response to this concern, we postulate that, in a context of inventive challenges, specific knowledge decomposition and structuring has to be organized for an appropriate and efficient problem-solving process to be engaged. This article focuses in particular on the gathering stage of a generic framework for knowledge representation and reorganization. These representations use several grounding hypotheses of TRIZ and OTSM-TRIZ, combined with acknowledged rules of artificial intelligence and graph theory. Furthermore, a procedure for conducting the gathering stage of a complex situation’s investigation is described. Keywords: Inventive Design, Knowledge, Complex problem, R&D activities.

1. Introduction Today's industrial world faces new challenges regarding its capacity to respond to society’s evolution. To cope with the dramatic acceleration in the demand for new artefacts, Research and Development activities need to evolve from a capacity to answer quality needs to a capacity to answer innovation needs. The concerns of the quality era were mostly directed towards optimizing the knowledge, means and procedures existing within the enterprise. To face what now imposes itself as the innovation era, R&D departments are required to make important changes, driven mostly by two new problems: • Dealing with the increasing complexity of industrial situations (coming both from artefact and from organizational complexity); • Dealing with the search for, forecasting of and management of knowledge previously unknown to the enterprise, in order to efficiently provoke the birth of innovations. 1.1 About complexity challenges in inventive design Complexity can be seen as a recurrent topic in contemporary research, mostly oriented towards complexity reduction (Simon, 1967) (Suh, 2001), so as to reach a level at which it can be mastered without losing any precision in the description of the initial situation. We will not attempt to provide an additional definition of complexity in this paper, but simply employ the term to qualify an engineering situation where the number of components in a system is large, as is the number of technological fields involved in the problematic situation this system covers. An important finding of current research in design theories and methodologies resides in the formalization of computerized logics to face the difficulties that complexity brings to R&D activities (Lee, 2003).
This research is mostly oriented towards an exhaustive axiomatisation of engineering attributes (Suh, 2001) and thus encounters difficulties when applied to complex industrial situations where time savings are sought in representing realities. Our research aims at proposing a new way of representing knowledge in the context of R&D activities, for an efficient evolution of their capacity to meet the challenges of the innovation era. In the next section we underline the grounding differences between three typologies of design activity. Optimizing design A majority of R&D activities address the issue of obtaining the best out of the elements currently known to the company (or its immediate competitive surroundings). By applying the laws of physics and following rules imposed by "six sigma-like" formalisms, enterprises try to reach an optimum ratio between existing means and the expenditures for producing added value. In terms of design, this philosophy is widely supported by known design approaches and theories and numerous research findings. Inventive design Sometimes associated with creative design or breakthrough design, it is commonly said of inventive design that the inventiveness of the activity relies mostly on the creative capacity of individuals. Research efforts supporting this vision are thus mostly oriented towards finding new approaches to foster designers' creativity, based on the assumption that a more creative person will certainly lead to a more inventive design. A counter-hypothesis can be stated as follows: what if enhancing creative capacity only led to a more prolific generation of concepts, resulting in a more time-consuming R&D activity to investigate all the proposed possibilities, and thus in efficiency losses?
Our research orientations take these statements into consideration and propose a knowledge-oriented means of driving human creative thinking in the search for inventive design. This knowledge-oriented model proposes to monitor a complex situation through specific representation means and, using grounded theories, to provoke appropriate R&D decisions and activities, resulting in an enhancement of efficiency. The means of measuring this efficiency will be discussed in a further section. From Inventive to Innovative design The place of inventive design within innovative design is critical to state. Inventive design targets the novelty of a design concept through the fact that new knowledge has been introduced into a solution in order to satisfy, inventively (without compromises), the driving evaluating parameters of a given R&D activity. That the efficiency of inventive design is likely to contribute to the enhancement of innovative design remains to be proven, although we can build a positive hypothesis on it. Innovative design is commonly stated in relation to successful adoption by society. The novelty of the solution proposed by the artefact can then be appreciated relative to the space of what was previously totally, partially or not at all done by other artefacts. Our definition of inventive design contradicts innovative design when compromise-laden existing solutions proposed by a company are successfully adopted by society. As an example, an insulating polyurethane body around a beverage can (to preserve coolness) can be recognized as resulting from an innovative design, owing to its marketing success, but in our sense has no inventive content; it therefore cannot result from an inventive design, and could simply have been obtained through optimizing design.
1.2 Knowledge manipulation for driving R&D issues in Inventive Design Knowledge management appears as a key element of research into innovative challenges for the evolution of R&D practices. Within this research it is clearly stated that the level of understanding varies as knowledge evolves from tacit individual experience to accurately defined data. In our research, we focus on all possible elements to be extracted from an initial statement (knowledge, information and data) that are useful for filling the structured domains we have established. Three stages of knowledge treatment are distinguished in our approach: • Gathering: extraction, collection and retrieval of all possible elements from what is known, in order to document the initial statement and start understanding the problem. • Representing: appropriate storage, completion and validation/refinement of the elements gathered in the previous stage. • Reorganizing: graphical reorganization, layout and display of the elements in order to build the appropriate tools to ease R&D decisions. 1.3 Towards a new way of structuring knowledge Various research findings and theoretical groundings already exist; having been scientifically explored and tested, they become grounding elements of our proposed model. • Graph theory: Graph theory states that “A graph is a symbolic representation of a network and of its connectivity. It implies an abstraction of the reality so it can be simplified as a set of linked nodes” (Mineau et al., 1993). In our approach, the axiomatic fundamentals of graph theory will enable the computerization of the data (attributes) associated with contradictions, in order to ease their graphical representation and manipulation. • OTSM-TRIZ: What has been published around OTSM constitutes the major grounding of our approach. We have learned from OTSM the importance of a multi-disciplinary approach and the notion of problem flow in knowledge management (Khomenko et al., 2006).
• TRIZ: The theory (Altshuller, 1991) proposes the holistic building of an ideal system’s portrait and the solving of the contradictions standing in the way of its logical evolution. Described by its main axioms (laws of technical system evolution, contradiction, specific conditions), TRIZ is essential to our approach in the sense that contradiction is a means of expressing a problem through a specific formalism that allows an inventive goal to be targeted. The way knowledge will be formalized allows us to postulate that the contradiction formalism will significantly reduce knowledge complexity and ease a clearer general understanding among actors, resulting in more pro-active participation. The axiom of the laws of engineering system evolution will enable us to reorganize and prioritize contradictions, in the sense that contradictions standing in the way of ideality can be scheduled and their solving made appropriate to pursuing a specific goal.
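To make the graph-theory grounding concrete, here is a minimal sketch, under our own naming assumptions, of contradictions and evaluating parameters stored as a set of linked nodes, in the spirit of the Mineau et al. definition quoted above. The class name and labels are illustrative only; the TC/Pe labels mirror the fuel-tank case study later in the paper.

```python
# Illustrative sketch (not the authors' implementation): contradictions and
# the evaluating parameters they involve, held as a set of linked nodes so
# they can be computed over and laid out graphically.

from collections import defaultdict

class ContradictionGraph:
    def __init__(self):
        self.edges = defaultdict(set)   # node -> set of linked nodes

    def link(self, contradiction: str, parameter: str) -> None:
        """Attach an evaluating parameter to a technical contradiction."""
        self.edges[contradiction].add(parameter)
        self.edges[parameter].add(contradiction)

    def contradictions_on(self, parameter: str) -> set:
        """All contradictions whose resolution impacts this parameter."""
        return {n for n in self.edges[parameter] if n.startswith("TC")}

g = ContradictionGraph()
g.link("TC1: openings absent AND present", "Pe1: emitted hydrocarbons")
g.link("TC3: holes present AND absent",    "Pe1: emitted hydrocarbons")
g.link("TC3: holes present AND absent",    "Pe3: leakage importance")

# Two contradictions compete over the same evaluating parameter:
print(g.contradictions_on("Pe1: emitted hydrocarbons"))
```

Once contradictions and parameters share one node set, reorganizing and prioritizing contradictions becomes a matter of ordinary graph traversal.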

2. Key characteristics of our proposed approach The knowledge useful for efficiently driving an inventive R&D activity is plentiful and multi-disciplinary, and it appears to designers at random. Our approach proposes to allocate gathered elements of knowledge, at various stages of clarity of formulation, into appropriate spaces, all spaces being related to the others. Organizing the relation both with knowledge holders and with existing documents, in order to optimize the efficiency of the gathering stage and its impact on the reliability of problem formulation, is eased by the contradiction formalism. Computerization and manipulation of the allocated knowledge is ensured by the graph formalism. Finally, we postulate that contributing to the efficiency of R&D's inventive challenges through dynamic problem representation and management will favour innovation strategies in organizations. 2.1 Developing the methodology As presented in (Cavallucci et al., 2006), four layers of knowledge belonging to four domains are proposed. In order to synthesize the procedure applied in the overall approach, we will use a graphical representation and detail its first stage (gathering). Several definitions need to be given in order to follow the algorithm presented in Appendix 1. Documents: These include all possible written elements in which knowledge at different stages of expression can be located. Patents, project requirement lists and norms are probably dominant in this area, but any internal corporate document can also be included. The objective is to exploit them semantically in order to extract partial data to assist contradiction definition. Elements: These represent any knowledge, data or information, gathered both from documents and from knowledge holders, at various stages of definition (from totally fuzzy to clearly stated). Processing: This term defines an operative stage of data extraction from documents; a text-mining procedure for partial contradiction extraction is currently in progress.
Knowledge holders: These represent all the people within the company available for the questioning sequences operated by the expert. The expectations regarding these people are of two orders: to extract their tacit knowledge related to the studied subject and transform it into explicit, exploitable information (or data); and, using their know-how, to conduct with their help the reformulations needed for an appropriate allocation of elements. Ontology: A clearly defined ontology needs to be achieved for a proper domain definition of terms and interactions. In our case, this ontology concerns the domain of expression of all the terms used in formulating and solving complex engineering situations. Eligibility: This operation determines whether or not an element is appropriate for allocation in the layers of knowledge. It is a crucial point of the algorithm, since the overall stability of the knowledge-manipulation model depends on the reliability of this operation. Partial solutions: These represent any existing and known solution (whether fully or partially solving a given problem) extracted from knowledge holders or documents. 2.2 Description of the deployment Using flowchart formalism, we have drawn an algorithm illustrating the processing of knowledge in complex inventive problem-solving situations (Appendix 1). The expert conducts the deployment of actions. A computerized procedure for assisting data extraction from documents is under validation, and the other phases of our approach (representation, reorganization) will be developed in further publications.
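A rough sketch of how the gathering stage described above could translate into code: eligible elements are allocated to one of the four domains (Problem, Partial Solutions, Contradiction, Parameter). The keyword rules below are hypothetical placeholders for the expert's judgement and the text-mining procedure, not the authors' actual algorithm.

```python
# Hypothetical sketch of the gathering stage: eligibility check, then
# allocation of each element to one of the four knowledge domains.
# The rules are crude stand-ins for expert judgement / text mining.

DOMAINS = ("Problem", "Partial Solution", "Contradiction", "Parameter")

def is_eligible(element: str) -> bool:
    # placeholder: reject statements that are still too fuzzy/short
    return len(element.split()) >= 3

def allocate(element: str) -> str:
    e = element.lower()
    if " and " in e and "should" in e:       # TC formalism: X should be A AND B
        return "Contradiction"
    if e.startswith(("place", "make", "add")):
        return "Partial Solution"
    if e.endswith(("emission", "importance", "capacity")):
        return "Parameter"
    return "Problem"

store = {d: [] for d in DOMAINS}
for el in ["Hydrocarbons diffuse through polymer materials",
           "Place a protecting layer over the plastic shell",
           "Openings should be absent AND present"]:
    if is_eligible(el):
        store[allocate(el)].append(el)

print({d: len(v) for d, v in store.items() if v})
```

The loop mirrors the dynamic of the algorithm: whenever a new element arrives, it passes the same eligibility and allocation steps and the model iterates.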
The loop expressed in the overall algorithm (arrival of a new element) concerns the dynamics of the representation, allowing a new element (a new patent, new inputs from norms, a new technological discovery) to be taken into consideration in the overall framework of representation and to iterate the model from Tn to Tn+1. 2.3 Metric of evaluation In the worldwide literature, R&D efficiency is traditionally evaluated as the company's operating profit in past years divided by the company's internally used R&D expenditure in those same years (Naoto, 1991). This evaluation approach is widely accepted for managerial concerns, but we would like to set up an "engineering-oriented" means of evaluation, allowing the impact of our approach to be monitored at all levels. Since we aim to improve the reliability of R&D activity, a factor impacting the emergence of relevant inventive solutions should become visible. Thus, we might observe an evolution from a random to a more evenly distributed appearance of inventive solutions. Through better predictability of these emergences, the mastering of inventive strategies may be improved and their organization made more pertinent. Time savings and cost reductions can also be measured as a consequence, since more reliable elements are taken into account in the R&D activity and less is spent on dead-end directions.


Figure 1: Flow of R&D results in traditional mode (a) and in Inventive mode (b). Both panels plot the amount of R&D results in each typology (inventive solutions, optimization efforts, dead-ends) over time.
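For reference, the traditional metric cited from Naoto (1991) amounts to a simple ratio. The function below is a sketch of that ratio; the figures used are invented purely for illustration.

```python
# Traditional R&D efficiency (Naoto, 1991): operating profit over the past
# years divided by internally used R&D expenditure over the same years.
# The numbers below are made up for illustration only.

def rnd_efficiency(operating_profit: list, rnd_expenditure: list) -> float:
    if len(operating_profit) != len(rnd_expenditure):
        raise ValueError("series must cover the same years")
    return sum(operating_profit) / sum(rnd_expenditure)

# e.g. three years of profit vs. three years of R&D spend (arbitrary units):
print(rnd_efficiency([120.0, 150.0, 180.0], [40.0, 50.0, 60.0]))  # 3.0
```

The authors' point is precisely that this managerial ratio says nothing about the distribution of inventive solutions over time, which motivates their engineering-oriented metric.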

3. Example of knowledge gathering: Case study of a Standard Fuel System

3.1 Presentation of the initial situation Due to ecological constraints, significant changes are forecast in the design of future cars, and European norms have been planned to reduce polluting waste emissions; they are referred to as Euro 0 to Euro 5 (Table 1). These norms are specific requirements that automobile designers need to introduce as an element of knowledge in our representation framework (in our case, accurate data).

Norm                               Euro 0   Euro 1   Euro 2   Euro 3   Euro 4   Euro 5
Nitrogen oxides (NOx) [g/kWh]      14.4     8        7        5        3.5      2
Carbon monoxide [g/kWh]            11.2     4.5      4        2.1      1.5      1.5
Hydrocarbons [g/kWh]               2.4      1.1      1.1      0.66     0.46     0.25
Particles [g/kWh]                  -        0.36     0.15     0.1      0.02     0.02
Applied for cars sold in period    1988-92  1993-96  1996-00  2000-04  2005-07  2008-09

Table 1: Extract of the European emission norms

Fuel systems for automotive applications have risen in complexity over the last decade, integrating many components and fulfilling many sub-functions (Figure 2). One of the sub-problems linked with the R&D activity on fuel tanks is the following. First, as hydrocarbons diffuse through polymer materials, a protective layer is added in order to avoid excessive hydrocarbon emission. Second, hydrocarbon emission is facilitated by any discontinuity in the wall of the plastic shell. Hence, the norms place strong constraints on the plastic shell manufacturing process: if the shell is made of two glued halves, hydrocarbons will easily diffuse through the glue; on the other hand, blow-moulding a fuel system plastic shell in one piece is difficult due to the size of such a part. To address this problem, the competing area is covered by patents aiming at reducing hydrocarbon emissions. As an example, a method for fixing a component onto a plastic shell is shown in Figure 2 (Patent WO 2006/032672). The main problem addressed is hydrocarbon emission through accessory mounting flanges (electrical connections, venting lines, recirculation to the top of the filling pipe, etc.). A large hole is cut in the plastic shell to introduce the mounting flanges without damaging the molten plastic material, but this provokes leakage due to seal ageing.

3.2 Exploiting the documents and the company's know-how to partially fill the four layers

Figure 3 illustrates the elements extracted from the situation described in the previous section, allocated to the appropriate space in the four domains:

Problem Domain (PbD):
Pb1: Polluting waste emissions should reach at least a normalized value.
Pb2: Hydrocarbons diffuse through polymer materials.
Pb3: Clearance between two joints in welded plastic shells increases hydrocarbon emission.
Pbn: ...

Partial Solutions Domain (PsD):
Ps1: Place a protective layer over the plastic shell.
Ps2: Make the plastic shell of a single blow-moulded part.
Psn: ...

Contradiction Domain (CD):
TC1: The plastic shell's openings should be absent (to avoid any hydrocarbon emission) AND present (to allow substance and information flows out of the plastic shell).
TC2: Plastic shells should be separated (to ease manufacturing precision) AND unified (to reduce hydrocarbon emissions).
TC3: Holes in the plastic shell should be present (to introduce mounting flanges) AND absent (to avoid any leakage due to seal ageing).
TCn: ... (TC stands for Technical Contradiction.)

Parameter Domain (PaD):
Pe1: Emitted hydrocarbons; Pe2: Quality of information flow; Pe3: Leakage importance; Pe4: Mounting capacity; ... Pen. (Pe stands for Evaluating Parameter.)

Figure 3: Elements gathered and allocated.

4. Discussion

The key benefits of our approach can be expressed through different aspects. First, knowledge elicitation (clarity and levers for R&D decisions) is enhanced, since we move from random, intuitive decisions to purposeful orientations of R&D activity (in the context of Inventive Design). Second, the harmonisation of the organisation's strategy and its R&D activities is enhanced through the elicitation of links between engineering parameters and the key business drivers of a given market.
We also postulate that graphical representations and complexity reduction will lead to more proactive behaviour from knowledge holders and company actors in sharing viewpoints, resulting in a more robust shared model of knowledge representation thanks to the actors' acceptance. Finally, a means of knowledge storage and representation that eases (accelerates while increasing the pertinence of) the teaching of newcomers is essential to a company's know-how. It is also useful for retaining the company's know-how after the turnover of key actors. The limitations of the proposed approach, and future research, can be summarised as follows. Redundancies between knowledge elements need to be watched for, since they can constitute useless noise reducing the accuracy of the representation. To address this problem, tools and methods from Artificial Intelligence are currently being investigated and may be partially used. Furthermore, the time needed to establish the state of the art of a competing area in a specific situation (document and know-how extraction, analysis and summarising) encounters difficulties of acceptance in enterprises. Toward computer assistance for this stage (resulting in time reduction), some research work related to patents has been carried out (Cascini, 2004) but needs to be pursued and completed. Finally, it has been shown that cognitive proximity favours innovation in organisations (Boschma, 2004). Nevertheless, proximity of key actors in worldwide companies cannot be taken for granted; associating them in common thinking using accepted and shared models of knowledge representation is part of a difficult but necessary cultural change that logically accompanies any paradigm shift in our industries.

5. References

Altshuller G.S., 1986, "To Find an Idea: Introduction to the Theory of Inventive Problem Solving", Nauka, Novosibirsk (in Russian).
Cascini G., Neri F., 2004, "Natural Language Processing for patents analysis and classification", Proceedings of the TRIZ Future 4th World Conference, Florence, 3-5 November, Firenze University Press, ISBN 88-8453-221-3.
Cavallucci D., Khomenko N., 2006, "From TRIZ to OTSM-TRIZ: Addressing complexity challenges in inventive design", Int. J. Product Development, in press.
Khomenko N., De Guio R., Cavallucci D., 2006, "Enhancing ECN's abilities using OTSM-TRIZ", IJCE, in press.
Larson A., 2004, "How can R&D strategy be shaped, integrated and monitored to support corporate strategy?", Doctoral Thesis, Luleå University, Sweden, ISSN 1402-1773.
Lee T., 2003, "Complexity Theory in Axiomatic Design", Ph.D. Thesis, MIT, 182 p.
Mineau G. W., Moulin B., Sowa J. F., 1993, "Conceptual Graphs for Knowledge Representation", First International Conference on Conceptual Structures, ICCS '93, Quebec City, Canada, August 4-7.
Minsky M., 1975, "A Framework for Representing Knowledge", in Winston P. (ed.), The Psychology of Computer Vision, McGraw-Hill, New York.
Naoto Jinji, 1991, "Essay on Product Quality and Strategic Policies in Open Economies", Ph.D. Thesis, University of British Columbia, 153 p.
Boschma R., 2004, "Does geographical proximity favour innovation?", 4th Congress on Proximity Economics, Marseille, June 17-18.
Simon H., 1969, "The Sciences of the Artificial", MIT Press, Cambridge, MA.
Sowa J. F., 1984, "Conceptual Structures: Information Processing in Mind and Machine", Addison-Wesley, Reading, MA.
Suh N. P., 2001, "Axiomatic Design: Advances and Applications", Oxford University Press, New York.

Appendix 1: Gathering stage through a basic flowchart diagram

[Flowchart: the gathering stage begins at Tn (or Tn+x) from three input sources: knowledge holders, patents and requirements. Documents are processed and knowledge holders are questioned (using a questioning template where possible) in order to isolate Problems, Partial Solutions, Parameters and Technical Contradictions. Each compiled element is checked against the element ontologies for eligibility, its formulation being refined if necessary, and is then allocated: problems are stored in the Problems layer; partial solutions in the Partial Solutions layer; technical contradictions are expressed with the contradiction formulation template (the technical explanation being clarified where needed) and stored in the Technical Contradiction layer; influences between parameters are expressed and stored in the Parameter layer. The loop repeats while new elements arrive, yielding a partial filling of the four layer domains, after which the gathering stage ends at Tn, Tn+x.]
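The allocation step of the flowchart can be sketched as a simple routing routine (a hypothetical sketch; the element kinds and layer names follow the four-domain model of section 3.2, and the example elements are taken from Figure 3):

```python
# Hypothetical sketch of the flowchart's allocation step: each eligible
# element is routed to one of the four layers of the representation.
LAYERS = {"problem": [], "partial_solution": [],
          "technical_contradiction": [], "parameter": []}

def allocate(element: dict, layers: dict) -> str:
    """Route an element to its layer based on its declared kind.

    An unknown kind corresponds to the flowchart's "element not eligible"
    branch: the formulation must be refined before re-submission.
    """
    kind = element["kind"]
    if kind not in layers:
        raise ValueError(f"element not eligible, refine formulation: {kind}")
    layers[kind].append(element["text"])
    return kind

allocate({"kind": "problem",
          "text": "Hydrocarbons diffuse through polymer materials"}, LAYERS)
allocate({"kind": "technical_contradiction",
          "text": "Shell openings should be present AND absent"}, LAYERS)
```

Iterating this over all compiled elements produces the partial filling of the four layer domains at Tn.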

TRIZ FOR SYSTEMS ARCHITECTING

G. Maarten Bonnema Laboratory of Design, Production and Management, Department of Engineering Technology, University of Twente, The Netherlands [email protected]

Abstract TRIZ has gained interest over the past decades, as expressed by, among others, this conference. It is widely used and acknowledged for dealing with technical issues on component level. However, decisions on system level have a much greater impact than those on component level. It is therefore worthwhile to investigate applying TRIZ early in the design process. The article explores the benefits of and possibilities for applying TRIZ in the architecting phase. For this, system architecting is treated in short. An architecting approach presented earlier, which connects the customer's key drivers with functions to be performed, will be treated and elaborated upon. This approach provides leads for integrating TRIZ in the system architecting phase; these will be discussed in detail as the main subject of the paper. Examples, conclusions and ideas for future work complete the paper. Keywords: System architecting, Function, Key driver, Method, Coupling matrix, TRIZ

1. Introduction

System architecting will become ever more important as new products have to be created in ever shorter cycles, accompanied by higher functional requirements and increasing multidisciplinarity. Also, the chances of failure have to be reduced from the outset of the project. Therefore, more attention to the conceptual phase is required. As French [1985, p.3] states about conceptual design: 'It is the phase where engineering science, practical knowledge, production methods, and commercial aspects need to be brought together, and where the most important decisions are taken'. However, system architecting is currently not well supported by tools and methods. Most system architects have acquired their knowledge and competences during their career as a (system) designer [Muller 2004]; no particular education or specific tools are used by the system designers.

The goal of the present research project is to devise a method (and preferably implement that method in a tool) that aids the system designer in creating and evaluating system architectures. In Bonnema [2006] an approach for supporting system architects is presented that uses a coupling matrix C to connect the system functions to the customer key drivers described by Muller [2004]. This method, still under development, will be elaborated upon in section 3, as it provides opportunities to connect to TRIZ, after first having looked at what system architecting is in section 2. The strategy is shown in figure 1, which shows the general TRIZ approach of generalising the problem and finding a generalised solution. The first step, coming from a specific problem, as described by the architecting method, to a generalised problem that can be solved by TRIZ, is what we will concentrate on in section 4. Section 5 contains examples of application of the method. Section 6 draws conclusions and provides an outlook on future research.

[Figure 1: Overview of the approach to be presented, in contrast with the "normal" (system) design route: translate the key drivers to TRIZ parameters (specific problem to generalised problem); use the contradiction matrix or priority matrices to select suitable principles; apply a TRIZ principle (generalised solution); adapt it to the specific situation (specific solution). The main contribution is in the first step: generalising the specific problem and selecting suitable TRIZ principles, based on the system architecting information captured with the coupling matrix C.]

2. System architecting

In designing complex systems, architecting is an essential step [Maier and Rechtin 2000, Muller 2004]. Complex systems by definition perform many functions. In moderately complex systems, ensuring proper fit, balance and cooperation between sub-systems can be supervised by one person; for present-day, highly complex systems this is impossible. A team is required, because the only way to create these systems successfully is by divide and rule. Determining the division lines is what systems architecting is about. We therefore proposed the following definition [Bonnema 2006]: System architecture defines the parts constituting a system and allocates the system's functions and performance over its parts, its user, its supersystem and the environment in order to meet the system requirements. System architecting, then, is the process of defining a system architecture. Paradoxically, a system's functions can be allocated to its supersystem or the environment. The cooling of a hard disk drive is not performed by the drive itself but by its supersystem: the computer system it is part of, which has a cooling sub-system. This is familiar in TRIZ as "use available resources".

3. FunKey: an architecting method using functions and key drivers

As mentioned, we have proposed in [Bonnema 2006] a method for system architecting. This method uses a coupling matrix C to connect functions to key drivers; we will therefore call the method FunKey from now on. Functions, well known in TRIZ, are tasks to be performed by the system: expose wafer, transport sand, create image. Key drivers are generalised requirements that express the customers' interests [Muller 2004], where it should be noted that the customer can be the end-user or a company downstream in the supply chain. Examples of key drivers are image quality for a medical imaging device, and load capacity and cost per ton per kilometre for a truck. The FunKey architecting procedure is as follows (see figure 2):

[Figure 2: The FunKey architecting method. To the right, the coupling matrix C connects the functions f1..f4 in the block diagram to the key drivers kd1..kd3; to the left, one architecture (subsystems A, B, C and a utility) is shown, with the subsystems marked in the coupling matrix. On the top level, functions can also be assigned to the user, the environment and the supersystem.]

1. Identify the functions and the key drivers on system level.
2. Create a table with the functions as rows and the key drivers as columns.
3. Check for every cell whether the function contributes to the key driver.
4. Create architectures by naming subsystems and assigning functions to subsystems.
5. Create system budgets.
6. Repeat for the next hierarchical level.

After the initial matrix C has been filled with crosses or ones (where there is a contribution from the corresponding function to the key driver), we proposed to quantify the contributions using either numbers or symmetrical triangular fuzzy numbers (STFN) [Chan and Wu 2005]. To facilitate the coupling to TRIZ at an early stage, the crosses or ones can be replaced by +es or –es to indicate useful or harmful contributions, respectively. This will be used in section 4. The FunKey procedure visualises implicit architectural decisions. It is therefore a valuable tool for a team of architects and for communicating architectural decisions between architect and specialist and/or detail designer. For more information, and particularly the relation between the presented method and other methods like Axiomatic Design [Suh 1990] and QFD [Chan and Wu 2002, and references therein], the reader is referred to the earlier mentioned reference [Bonnema 2006].
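The coupling matrix C built in steps 2 and 3 can be sketched as a sparse map from (function, key driver) pairs to marks (a minimal sketch; the function and key-driver names are borrowed from the wafer scanner example, and the "+" marks are illustrative):

```python
# Hypothetical sparse FunKey coupling matrix C: keys are
# (function, key driver) pairs, values are the marks ("+" useful,
# "-" harmful); absent pairs mean no contribution.
C = {
    ("load wafer", "throughput"): "+",
    ("align wafer", "throughput"): "+",
    ("align wafer", "image quality"): "+",
    ("expose wafer", "throughput"): "+",
    ("expose wafer", "image quality"): "+",
}

def contributions(function: str) -> dict:
    """Key drivers a function contributes to, with the sign of each mark."""
    return {kd: mark for (f, kd), mark in C.items() if f == function}
```

A sparse map keeps the matrix readable when most cells are empty, which is typical once functions are assigned to subsystems.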

4. Connecting TRIZ to FunKey

The FunKey matrix provides information on the relations between functions and the performance of the entire system. Two strategies that couple this system information to TRIZ will be presented: (1) using a priority matrix [Ivashkov and Souchkov 2004]; and (2) using useful and harmful, or insufficient and excessive, contributions. Each strategy is elaborated in a separate subsection below; their procedures are summarised in tables 1 and 2.

Table 1: Priority matrices in FunKey.
i. Determine which key drivers have to be improved;
ii. Use the priority matrices PM+ and/or PM− to identify applicable innovative principles;
iii. Apply the principles to the corresponding functions, or to the system.

Table 2: Useful/harmful in FunKey.
i. Identify useful/harmful contributions of functions to key drivers in the FunKey matrix C.
ii. Identify contradictions: a useful contribution to one key driver and a harmful contribution to another key driver.
iii. Use the contradiction matrix to identify applicable innovative principle(s);
iv. Apply the principle to the function.

For both strategies the key drivers have to be related to the 39 parameters of a technical system, as defined by Altshuller [1997]. These are generalised properties of a system. The key drivers in the FunKey matrix are generalised requirements; a match between the two therefore seems feasible. The problem is that the 39 TRIZ parameters are well established and fixed, while key drivers will be redefined for every new project. Some reuse might occur, but the method may not rely upon that. Two solutions can be seen: technology or experience. As for technology, one can think of using Artificial Intelligence to recognise one or more TRIZ parameters corresponding to a key driver; this kind of technology is available, see for instance http://www.visualthesaurus.com/. On the other hand, as all TRIZ practitioners know, contradictions are not defined in terms of the 39 parameters from the outset. It takes some experience and associative power to find the parameters that apply directly to the problem at hand. In this respect, which will not be treated further apart from the examples provided, FunKey does not differ from normal use of TRIZ.

4.1. Using the priority matrix

Ivashkov and Souchkov [2004] proposed an interesting use of TRIZ in early stages of design, when the number of available analytical tools is limited. From the well-known contradiction matrix, the number of times a given innovative principle IPk is proposed to improve parameter pi is determined by counting the number of times the principle IPk is mentioned in the row for parameter pi. This yields the score sik. All scores together, presented in a priority matrix PM+, indicate the relevance of each innovative principle for each parameter. Thus, PM+ directs the designer to the most successful principle IPk for improving parameter pi. Extending this, we can also define a priority matrix for worsening features, PM−: if the aim is not to improve an already positive feature but to minimise the impact of a worsening feature, we can, analogously to Ivashkov and Souchkov [2004], count the number of occurrences of IPk in the column of parameter pi instead of in its row.

These two matrices can then be used together with the FunKey approach in the following manner. After the key drivers have been coupled to functions using the coupling matrix C, each identified key driver is connected to a TRIZ parameter. The positive (PM+) or negative (PM−) priority matrix is then used to select one or more promising inventive principles. Each of these principles is then applied to the functions the key driver is associated with in the coupling matrix C, or to the entire system. A generalised solution and then a specific solution (figure 1) are found as in "normal" TRIZ.

4.2. Using useful/harmful or insufficient/excessive contributions

As mentioned in section 3, the coupling matrix C is initially filled with crosses.
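The construction of PM+ and PM− described above amounts to a counting pass over the contradiction matrix (a minimal sketch; the tiny three-parameter matrix below is illustrative, not the real 39x39 Altshuller matrix):

```python
from collections import Counter

# Toy contradiction matrix: cm[i][j] lists the innovative principles
# suggested when improving parameter i worsens parameter j.
# (Illustrative entries only, not the real 39x39 Altshuller matrix.)
cm = {
    1: {2: [10, 35], 3: [10, 1]},
    2: {1: [35, 2], 3: [10]},
    3: {1: [1, 2], 2: [35, 10]},
}

def priority(matrix, by_row=True):
    """Score each principle per parameter: PM+ counts occurrences in the
    parameter's row (improving feature), PM- counts them in its column
    (worsening feature)."""
    pm = {}
    for p in matrix:
        if by_row:
            cells = matrix[p].values()
        else:
            cells = [row[p] for row in matrix.values() if p in row]
        pm[p] = Counter(ip for cell in cells for ip in cell)
    return pm

pm_plus = priority(cm, by_row=True)    # PM+
pm_minus = priority(cm, by_row=False)  # PM-
```

In this toy matrix, `pm_plus[1].most_common(1)` returns `[(10, 2)]`: principle 10 is the top suggestion for improving parameter 1, because it appears twice in parameter 1's row.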
Before analysing the matrix and providing numbers, one can decide to check for every cross whether it is a useful (+) or a harmful (–) contribution. If an identified function fi has a useful contribution to key driver kdj and a harmful contribution to key driver kdk, a contradiction can be formulated between kdj and kdk. Associating each key driver with a TRIZ parameter creates a reference to a cell in the contradiction matrix; the principles given there can be applied to function fi. Alternatively, one can examine whether a cross in the FunKey matrix corresponds to an insufficient or excessive contribution of the given function to the given key driver. These can be marked with an i or an e in the matrix, respectively. An i for key driver kdj that corresponds to TRIZ parameter pi points directly to the row for that parameter in the positive priority matrix PM+; the corresponding innovative principles can then be applied to the functions that have an insufficient contribution to kdj. Analogously, an e points directly to a row in the negative priority matrix PM−. The procedure in table 2 has to be modified accordingly. The procedures mentioned above can be implemented in a computer support program: if the FunKey matrix is created using a computer tool, the computer can suggest appropriate innovative principles to the architect.
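The contradiction scan described above can be sketched as follows (a hypothetical sketch reusing the +/– convention; the entries mirror part of the PUT example in section 5.2):

```python
# Hypothetical FunKey matrix with +/- marks (subset of the PUT example):
# find contradictions, i.e. functions with a useful (+) contribution to
# one key driver and a harmful (-) contribution to another.
C_put = {
    ("maintain posture", "$/km"): "-",
    ("maintain posture", "safety"): "+",
    ("maintain posture", "convenience"): "+",
    ("steer", "safety"): "+",
    ("steer", "convenience"): "-",
}

def contradictions(matrix):
    """Return (function, useful_kd, harmful_kd) triples."""
    found = []
    for f in sorted({f for f, _ in matrix}):
        marks = {kd: m for (g, kd), m in matrix.items() if g == f}
        for kd_plus, m1 in marks.items():
            for kd_minus, m2 in marks.items():
                if m1 == "+" and m2 == "-":
                    found.append((f, kd_plus, kd_minus))
    return found
```

Each triple then indexes a cell of the contradiction matrix once the two key drivers have been associated with TRIZ parameters; e.g. (maintain posture, safety, $/km) corresponds to the parameter 30 vs. 19 contradiction discussed in section 5.2.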

5. Examples

5.1. Wafer scanner

A wafer scanner is the most critical part of a chip manufacturing line. The scanner images an original (called a reticle) many times onto the wafer; the image is reduced in size by a factor of 4. One of the key drivers of a wafer scanner is throughput. Table 3 shows the main functions of the wafer scanner, with one column of the FunKey matrix filled out: the throughput key driver. Two versions are shown: the single case, which was the starting point, and the twin case, the result that can be achieved using TRIZ with FunKey.

Table 3: Connecting functions and key drivers for the wafer scanner case. Only the throughput key driver is shown.

Function                 Throughput (single)  Throughput (twin)
Load wafer               ×
Prealign wafer           ×
Wafer to expose chuck    ×
Align wafer              ×
Expose wafer             ×                    ×
Maintain focus
Position stage           ×                    ×
Unload wafer             ×

One can easily see that most functions influence throughput in the single case. Based on that scheme, one can conclude that to improve throughput, the system architecture has to be modified so that several functions no longer contribute to the throughput key driver. Let us apply the procedure of section 4.1 to improve throughput. Throughput relates to TRIZ parameter 39: productivity. The priority matrix then suggests applying innovative principle 10: prior action, with a score of 20. We can perform all functions but expose wafer and position stage in advance. This is realised in the TwinScan systems by using two simultaneously moving wafer tables [Loopstra et al. 1999]: one performs measurements while the other exposes a wafer. This results in the FunKey matrix shown in the last column of table 3, which is clearly easier to partition.

5.2. Personal Urban Transporter

As a second example, part of the Personal Urban Transporter (PUT) introduced in [Bonnema 2006] will be analysed. A PUT is a small, safe and economical vehicle for commuting. Space prohibits a detailed elaboration of the example. Based on an initial analysis of the system, several functions have been assigned to the PUT. In table 4, part of the FunKey table is filled with +es and –es.

Table 4: FunKey for the Personal Urban Transporter (PUT). (conv.: convenience)

Function              $/km  safety  conv.
Maintain posture      –     +       +
Create light
 – on road            –     +       o
 – to other traffic   –     +       o
Steer                 –     +       –

We can associate key driver $/km with TRIZ parameter 19: use of energy by moving object, safety with 30: object-affected harmful factors, and convenience with 33: ease of operation. For the function maintain posture the contradiction between parameters 30 and 19 is identified, leading to innovative principle 24: mediator. This leads to an airbag around the user, to be used when he is about to lose his posture (i.e. fall over). For the contradiction between parameters 30 and 33 for steer, one of the TRIZ principles is 25: self-service. This leads to using the edge of the road to steer the PUT.

6. Conclusions and future work

We have presented a way to connect TRIZ to the architecting method FunKey. This may help the system architect both in finding implementations for his functions (as in example 2) and in simplifying the system (as in example 1). A simpler system, of course, is easier to partition. Also, the principle appears easy to implement in a computer tool. The main issue is how to connect the key drivers to the TRIZ parameters. This can be achieved with artificial intelligence, a database of related terms, or the experience of the designers; the latter solution is preferred for now. Both the FunKey method and the linking to TRIZ are currently being tested in industrial cases. Results of these cases will be published in due time.

References

Altshuller, G. S. (1997). 40 Principles: TRIZ Keys to Technical Innovation, Volume 1 of TRIZ Tools. Worcester, MA: Technical Innovation Center.
Bonnema, G. M. (2006). Function and budget based system architecting. In TMCE 2006, Ljubljana, Slovenia, pp. 1306–1318.
Chan, L.-K. and M.-L. Wu (2002). Quality function deployment: A literature review. European Journal of Operational Research 143(3), 463–497.
Chan, L.-K. and M.-L. Wu (2005). A systematic approach to quality function deployment with a full illustrative example. Omega 33(2), 119–139.
French, M. J. (1985). Conceptual Design for Engineers. London: Springer-Verlag.
Ivashkov, M. and V. Souchkov (2004). Establishing priority of TRIZ inventive principles in early design. In Design 2004, Dubrovnik.
Loopstra, E. R., G. M. Bonnema, H. v. d. Schoot, G. P. Veldhuis, and Y. B. P. Kwan (1999). Lithographic apparatus comprising a positioning device having two object holders. European Patent Office; EP0900412.
Maier, M. W. and E. Rechtin (2000). The Art of Systems Architecting (2nd ed.). Boca Raton: CRC Press.
Muller, G. (2004). CAFCR: A Multi-view Method for Embedded Systems Architecting. PhD thesis, Delft University of Technology.
Suh, N. P. (1990). The Principles of Design. Oxford Series on Advanced Manufacturing. New York, Oxford: Oxford University Press.

TRIZ FOR SOFTWARE ARCHITECTURE

Daniel Kluender Embedded Software Laboratory RWTH Aachen University [email protected]

Abstract A key element to designing software architectures of good quality is the systematic handling of contradicting quality requirements and the structuring principles that support them. The theory of inventive problem solving (TRIZ) by Altshuller offers tools that can be used to define such a systematic way. This paper describes the idea and preliminary results of using inventive principles and the contradiction matrix for the resolution of contradictions in the design of software architectures. By rearchitecting a flight simulation system these tools are analysed and their further development is proposed. Keywords: software architecture, design principles, patterns, quality attributes, contradiction matrix

1. Introduction

While there is still some discussion about the definition of the term software architecture (1), it is commonly accepted to be defined as the structure or structures of a system, which comprise elements, the externally visible properties of those elements, and the relationships among them [2]. The process of designing a system's software architecture is shown in figure 1.

[Figure 1: Two paths of requirements analysis, according to [2]. Requirements elicitation splits into a technically oriented path, yielding functional requirements and, after analysis, the functional specification (= optimization constraints), and a business oriented path, yielding non-functional requirements and, after analysis, the driving qualities (= optimization criteria). Both feed architecture design (= optimization).]

1 See http://www.sei.cmu.edu/architecture/definitions.html for a discussion.

During requirements analysis, functional and non-functional (or quality) requirements concerning a software-intensive product are distinguished. Quality requirements are desired properties that go beyond correct functionality, such as reliability, integrability, maintainability, testability or modifiability. Non-functional requirements are frequently neglected because experience shows they are harder to analyse, yet they are crucial for the success of software-intensive systems. Since the non-functional requirements are derived in particular from a product's business goals, they cannot be analysed by a purely technically oriented inspection. During requirements analysis the functional specification is written down and the driving qualities are identified. Driving qualities represent the hard-to-implement but most important stakeholder interests in a product; because of their important impact on the architecture, they are also called architectural drivers. Architecture design can be seen as an optimization problem, with the driving qualities being the optimization criteria and the functional specification being the optimization constraints.

It has long been recognized that a system's software architecture has a major impact on the non-functional properties of a system, such as dependability, performance or modifiability [20]. Designing software architectures of good quality is therefore central to software engineering, as is the evaluation of architecture quality. Structuring principles which support certain qualities help the architect in finding the optimal architecture. These structuring principles can be architectural tactics or styles [2], like information hiding, or architectural patterns [6], like a client-server architecture. These are generalized solutions for frequently occurring problems. Most structuring principles affect several qualities, either enabling or inhibiting them; e.g. information hiding supports the maintainability of a system but impairs its performance. While the architect can choose from a set of well-documented principles (see e.g. the work of Booch on a handbook of software architecture [4]), the merging and consolidation of different principles is by and large still an ad hoc and largely unsystematic process to date. Conflicting quality requirements, like performance and maintainability, or conflicting structuring principles compound the design of a system's software architecture. The resolution of these conflicts relies heavily on the architect's experience and knowledge of the structuring principles. A software architecture represents the tradeoffs between the conflicting qualities that are acceptable for all stakeholders. The systematic handling of contradictions between quality requirements or their corresponding structuring principles can ideally result in the elimination or resolution of the conflict. The theory of inventive problem solving (TRIZ) by Altshuller et al. [1] can help to define such a systematic way. This paper describes the idea and preliminary results of using the TRIZ tools inventive principles and contradiction matrix for the resolution of contradictions in the design of software architectures. The rest of the paper is structured as follows: the next section gives a short introduction to TRIZ and previous work on its use in software engineering. In the following sections the aforementioned TRIZ tools and their application to software architecture design are analyzed; these tools are then used to rearchitect a flight simulation system as an example in section 5. Section 6 concludes the paper and gives an outlook on future work.

2. TRIZ for Software

TRIZ has been developed by Altshuller et al. since 1946. By analyzing patents they found that [1]:
• Innovations emerge from the application of a relatively small set of strategies, so-called inventive principles.
• The strongest solutions actively seek out and destroy the conflicts or contradictions most design practices assume to be fundamental.
• They also transform unwanted or harmful elements of a system into useful resources.
• Technology evolution trends are predictable.

The application of TRIZ to software engineering is a relatively new field, hence publications are few. Rea discusses analogies to the inventive principles in software [16,17] and uses them to obtain several patents [19]. These analogies are extended by Fulbright [7] and Tillaart [21]. Most of them are not directly applicable to software architecture and will be further discussed in section 3. Nakagawa reviews topics in software engineering such as structured programming to reason about them using TRIZ [13]; Rea reviews concurrency [15]. Rawlinson discusses the application of contradictions between speed, reliability, energy and complexity [14] but does not go into the implications for software architecture. A more general review of the application of TRIZ to software can be found in [18]. Hartmann et al. emphasize the practicability of TRIZ for software architecture [8]; Muller classifies TRIZ as a possible architecting method [12]. Mann's summary [10] gives a short insight into his upcoming book [11]: he analyzed 40,000 software patents and developed a newly tailored contradiction matrix, slightly modified inventive principles, trends of evolution and other TRIZ tools. Since the paper gives only a short overview, a more detailed discussion must await the book, which is not available at the time of this writing.

3. Inventive Principles The aforementioned observation that innovations emerge from the application of a relatively small set of strategies led TRIZ researchers to the formulation of 40 inventive principles. These are the generalized descriptions of 40 solution strategies that were identified by analyzing patents. Despite their generality, not all of the 40 inventive principles are directly applicable to software architecture design; some are apparent mismatches. Nevertheless these principles subsume solutions to conflicts and contradictions that are successfully applied in other domains, hence a mapping into software architecture terms seems promising. Since TRIZ was developed in hardware-based technology fields, this mapping is not a straightforward task. Others have tried to find analogies [16,17,7,21], but these analogies are concerned with multiple phases of software engineering. Most of them are close to implementation issues of specialized domains and as such not usable for application to software architecture. Mann has analyzed software patents [10,11] but, as said before, his results have not yet been published. Some ideas can also be found in his paper about building architecture [9]. Formulating a new set of inventive principles for software architecture in the same way it was done during the development of the TRIZ theory is hard, because these principles originate from the analysis of patents but there are few patents on software architectures. Instead they can be formulated using patterns. Architectural patterns are a good source for principle mining because they are generalized solutions to frequently occurring problems. As such they contain the heuristics of successful solutions. The analysis of correspondences between inventive principles and patterns shows the following: • Inventive principles are more general than patterns and as such often comprise several patterns.
For example, the principle 'segmentation' is a generalized description of patterns like the layered architecture pattern, and the principle 'copying' comprises several redundancy patterns. Hence inventive principles are no replacement for patterns, but combined with the contradiction matrix they can serve as a navigation aid for selecting patterns or finding new ones. • Correspondences between architectural patterns and inventive principles can mainly be found in the clusters of contradiction resolution in space, time and structure, but not in material. • For some principles there are no correspondences in architectural patterns. For example, 'accelerated oxidation' describes the principle of replacing common air with oxygen-enriched air. Most of these principles belong to the material class.

4. Contradiction Matrix The inventive principles can be used stand-alone to search for solutions, or in the form of the contradiction matrix for a more directed search in the solution space. This matrix allows users to detect side-effects the 40 inventive principles can have on 39 technical parameters such as reparability, reliability or temperature. From an architect's point of view these parameters can be seen as the quality attributes of the system to be designed. The quality attributes of a system are generally arranged in a so-called utility tree as shown in figure 2. The figure also shows possible corresponding technical parameters of the contradiction matrix. Quality attributes can be seen as translations of the technical parameters when applying the contradiction matrix to software architecture. Some of these translations are straightforward, like reliability, availability (durability of an object), adaptability or maintainability (repair friendliness). Others cannot be easily translated into software architecture terms, like mass, length, area or volume of a moving object. Although not all inventive principles and technical parameters of the contradiction matrix can yet be translated into software architecture terms, the remaining extract can be useful for software architecting: it offers a navigation aid when searching for an architectural solution to contradictions, in contrast to common trial-and-error methods that are solely based on the architect's experience. In the software design process the contradiction matrix can be of help if two driving qualities or their supporting architectural principles contradict each other. The architect looks up the corresponding technical parameters in the matrix and tries to apply the listed inventive principles and the architectural patterns that belong to them.
The matrix can equally help to choose an architectural pattern that supports a driving quality with no contradicting quality requirement, by providing information on the effect of a pattern on other qualities. That way the matrix allows a systematic approach to choosing patterns and resolving contradictions between quality requirements, which can easily be integrated into mature architecture design methods such as attribute-driven design [3].
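As a sketch of how such a lookup could be mechanised, the fragment below encodes the quality-to-parameter mapping from the flight simulator example together with a single illustrative matrix cell (the modifiability-versus-performance pair discussed there). The function name, data layout and the restriction to one cell are assumptions of this sketch, not part of any published tool:

```python
# Minimal sketch of a contradiction-matrix lookup for architecture design.
# Only one matrix cell is encoded below; a real tool would hold the full
# 39x39 classical matrix. Parameter numbers follow the classical matrix.

# Map software quality attributes to classical technical parameters.
QUALITY_TO_PARAMETER = {
    "performance": 9,      # speed
    "modifiability": 35,   # adaptability
    "integrability": 32,   # manufacturability
    "testability": 37,     # complexity of control and measuring
}

# (improving parameter, impaired parameter) -> suggested inventive principles.
MATRIX_EXCERPT = {
    (35, 9): ["dynamicity", "prior action", "copying"],
}

def suggest_principles(improve: str, impair: str) -> list[str]:
    """Return the inventive principles listed for a pair of conflicting qualities."""
    key = (QUALITY_TO_PARAMETER[improve], QUALITY_TO_PARAMETER[impair])
    return MATRIX_EXCERPT.get(key, [])

print(suggest_principles("modifiability", "performance"))
# -> ['dynamicity', 'prior action', 'copying']
```

The lookup returns exactly the principles the matrix suggests for improving modifiability at the cost of performance; an architect would then try the architectural patterns belonging to each suggestion.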

5. Example: Flight Simulator To analyze the applicability of the two TRIZ tools and give an example of their usage, the well documented requirements and architecture of the flight simulation system introduced in chapter eight of [2] are used. Rearchitecting the existing system makes it possible to examine whether the inventive principles and contradiction matrix can help in designing the system's architecture. The question of interest is whether the general principles for resolving contradictions found in other areas can also be applied to software architecture.

Figure 2. Quality attributes and possible corresponding technical parameters.

During requirements elicitation and analysis the driving qualities are identified. Afterwards their corresponding technical parameters from the contradiction matrix are denoted: • the system's performance corresponds to technical parameter 9: speed • modifiability to accommodate changes in requirements and scalability of function correspond to technical parameter 35: adaptability • integrability corresponds to technical parameter 32: manufacturability • testability corresponds to technical parameter 37: complexity of control and measuring In general some of these architectural drivers contradict each other. For the flight simulator, improving modifiability could impair the system's performance. To resolve this contradiction the contradiction matrix suggests using the inventive principles dynamicity, prior action or copying. In fact the architecture design suggested in [2] uses a partitioning that maintains a close correspondence between the aircraft partitions and the simulator, virtually copying parts of the aircraft. Other inventive principles that can be found in the suggested architecture include segmentation, extraction, mediator and nesting. The example shows that the inventive principles are no replacement for architectural tactics or patterns but rather an extension that helps in selecting, merging and balancing them. As said before, some parts of TRIZ seem to make no sense for software architecture, e.g. the suggested usage of the inventive principle changing the state of aggregation.

6. Conclusion and Future Work Contradicting quality requirements and the merging of their supporting architectural strategies are core problems in software architecture design and make it a task that is heavily dependent on the architect's experience and knowledge. Using the TRIZ tools inventive principles and contradiction matrix can help direct the search in the solution space into a heuristically promising direction. Hence these tools can be seen as an extension to architectural tactics and patterns. This paper presents the author's approach of finding correspondences between inventive principles and architectural patterns on the one hand and between technical parameters and quality attributes on the other. Although not all 40 principles and 39 parameters have a corresponding pattern or attribute, the remaining ones can be useful in architecture design. In fact some of the general principles for resolving contradictions found in other areas can also be applied to software architecture. Despite these correspondences it does not seem possible to translate the whole contradiction matrix into software architecture terms. However, it seems promising to formulate a domain-specific matrix by rearchitecting successful architectures.

References
[1] G. Altshuller, H. Altov, and L. Shulyak: And Suddenly the Inventor Appeared: TRIZ, the Theory of Inventive Problem Solving. Technical Innovation Center, 1996.
[2] L. Bass, P. Clements and R. Kazman: Software Architecture in Practice. SEI Series in Software Engineering, Addison-Wesley Professional, 2nd edition, 2003.
[3] L. Bass, F. Bachmann and M. Klein: Quality attribute design primitives and the attribute driven design method. In Proceedings of the 4th International Conference on Product Family Engineering, Springer Verlag, Berlin, Germany, 2002.
[4] G. Booch: On architecture. IEEE Software, 23(2), March/April 2006.
[5] L. Dobrica and E. Niemelä: A survey on software architecture analysis methods. IEEE Transactions on Software Engineering, 28(7):638-653, July 2002.
[6] B. Douglass: Real-Time Design Patterns. Addison-Wesley, 2003.
[7] R. Fulbright: TRIZ and software fini. TRIZ Journal, August 2004.
[8] H. Hartmann, A. Vermeulen and M. v. Beers: Application of TRIZ in software development. TRIZ Journal, September 2004.
[9] D. Mann: 40 inventive (architecture) principles with examples. TRIZ Journal, July 2001.
[10] D. Mann: TRIZ for software? TRIZ Journal, October 2004.
[11] D. Mann: TRIZ for Software Engineers. IFR Press, to appear.
[12] G. Muller: CAFCR: A Multi-view Method for Embedded Systems Architecting. PhD thesis, Technische Universiteit Delft, 2004.
[13] T. Nakagawa: Software engineering and TRIZ – structured programming reviewed with TRIZ. In Proceedings of TRIZCON, Altshuller Institute, April 2005.
[14] G. Rawlinson: TRIZ and software. In Proceedings of TRIZCON, Altshuller Institute, March 2001.
[15] K. Rea: Using TRIZ in computer science – concurrency. TRIZ Journal, August 1999.
[16] K. Rea: TRIZ and software – 40 principles analogies part 1. TRIZ Journal, September 2001.
[17] K. Rea: TRIZ and software – 40 principles analogies part 2. TRIZ Journal, November 2001.
[18] K. Rea: Applying TRIZ to software problems. In Proceedings of TRIZCON, Altshuller Institute, May 2002.
[19] K. Rea: TRIZ for software: Using the inventive principles. TRIZ Journal, January 2005.
[20] M. Shaw and D. Garlan: Software Architecture: Perspectives on an Emerging Discipline. Prentice-Hall, 1996.
[21] R. v. d. Tillaart: TRIZ and software – 40 principle analogies, a sequel. TRIZ Journal, January 2006.

NATURAL WORLD CONTRADICTION MATRIX: HOW BIOLOGICAL SYSTEMS RESOLVE TRADE-OFFS AND COMPROMISES

Darrell Mann Director, Systematic Innovation Ltd, UK [email protected]

Abstract The paper describes updates to the TRIZ Contradiction Matrix tool. The tool has been constructed following an extensive programme of research to uncover and codify the strategies used by biological systems to overcome conflicts, trade-offs and compromises. The paper is divided into three main sections. In the first section, we discuss the dynamics of contradiction emergence and resolution in nature. The aim in this section is to define a set of heuristics to help determine when and where nature is likely to experience and therefore have to resolve contradictions. The second section of the paper goes on to present a number of examples of conflict resolution in nature. The third and final section of the paper then moves on to examine the main similarities and differences between the strategies used by nature to resolve trade-offs and compromises and those used by human designers. The basis of this comparison is the Matrix 2003 tool developed from our parallel studies into trade-off resolution in human engineered technical systems. Keywords: discontinuous, evolution, conflict, breakthrough, nature

1. Introduction – Nature, The Great Optimizer 60 years of TRIZ research has clearly demonstrated the importance of contradiction emergence and resolution as the primary driving force of evolutionary advance in man-made systems. Comparable studies of evolutionary advance in nature, although rarely using the term 'contradiction', highlight a remarkably consistent message (References 1, 2). As with human engineered systems operating in a competitive environment, the primary driving forces in nature revolve around 'survival of the fittest'. As often told in the clichéd joke involving two men being chased by a tiger, the problem is not about whether humans can run faster than tigers, but whether one human can run faster than another. In other words, we only have to be slightly better than our immediate competitors in order to be the one to survive to live another day. Ultimately, of course, someone invents a shotgun to shoot the tiger, thus solving a contradiction and forever changing the game in favour of the human. At this point in time, the tiger has not been successful in countering the bullet threat. The point of mentioning this story is that the invention of a game-changing strategy tends not to happen that often. Far more normal in nature is that the overall eco-system sets up natural balances that will cause a given sub-system to 'optimize' itself at a given level. In the case of the tiger-versus-human story, if humans were the only prey of tigers and the tigers got really good at killing humans, then the number of tigers would grow while the number of humans would shrink. Ultimately the number of humans would be insufficient to feed all the tigers, so some parts of the tiger population would starve and the number of tigers would 'naturally' drop. These balances are everywhere in nature. Nature is a great optimizer. Nature represents the ultimate 'self-correcting' system.
Figure 1, for example, presents a reproduction of the classical population dynamics study in nature – the lynx versus hare (Reference 3).
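Self-correcting cycles of the kind shown in Figure 1 are conventionally described by the Lotka-Volterra predator-prey equations. The following is a minimal simulation sketch of that model; the coefficients and starting populations are illustrative choices, not values fitted to the historical lynx-hare records:

```python
# Minimal Lotka-Volterra predator-prey model, integrated with Euler steps.
# Coefficients and starting values are illustrative only, not fitted to
# the historical lynx-versus-hare data.

def simulate(hares=10.0, lynx=5.0, steps=5000, dt=0.01):
    a, b, c, d = 1.1, 0.4, 0.1, 0.4  # prey birth, predation, conversion, predator death
    history = []
    for _ in range(steps):
        dh = (a * hares - b * hares * lynx) * dt
        dl = (c * hares * lynx - d * lynx) * dt
        hares, lynx = hares + dh, lynx + dl
        history.append((hares, lynx))
    return history

trajectory = simulate()
hare_counts = [h for h, _ in trajectory]
# The populations keep cycling: hares repeatedly fall below and rise above
# their starting level, and neither species dies out.
print(min(hare_counts) > 0, max(hare_counts) > 10, min(hare_counts) < 10)
```

The point of the sketch is the qualitative behaviour: the coupled equations produce the endless boom-and-bust oscillation of Figure 1 rather than any discontinuous, game-changing jump.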

Figure 1: ‘Self-Correcting’ Lynx-versus-Hare Population Dynamics

Amazing though such self-correcting systems are, they have little to teach us when we are looking for examples of ‘breaking’ contradictions. This oscillating boom-and-bust population cycle is the ‘natural order’. In the research to identify how and where nature solves contradictions, these cycles have little to tell us. What we are looking for are the moments when the game is changed. We are looking for the natural world’s equivalent of inventing the shotgun. More specifically, we are looking for situations where a natural system makes a discontinuous shift from one way of doing things to another.

1.1 When Nature Changes The Game Not quite a hare, but a good example of the sort of thing we are looking for, is the Skomer rabbit. In most people's minds the rabbit's most famous characteristic is its ability to create other rabbits. Largely driven by a Figure 1-like cycle, the 'normal' rabbit response to the predation threat is very simply to keep on producing as many new rabbits as possible – with typically up to 8 breeding cycles per year, and each cycle potentially producing 10 offspring. This population growth can only continue for so long, however, since sooner or later the amount of food available becomes insufficient to sustain the population. The rabbit population thus goes through its own cyclic periods of boom and bust. On Skomer, a small island off the coast of Wales, however, the rabbit population does not exhibit this boom-bust oscillation. The rabbits on Skomer will typically breed only once a year and each pair will typically only raise three offspring per year (Reference 4). When we see an evolutionary jump away from the 'norm' like this, we can be reasonably certain that nature has successfully found a way of solving a contradiction. The contradiction in the case of the Skomer rabbit is the desire to avoid the 'waste' of boom-bust cycles, which is traditionally prevented by an inability to predict how much food (or how many predators) will be around in the future. The 'discontinuous jump' solution to this problem now present in the Skomer rabbit population is the identification and incorporation of a feedback loop. The urge to breed in the Skomer rabbit population has been linked to population density. In other words, if a Skomer rabbit looks around and sees lots of other Skomer rabbits, the 'breed now' signal somehow gets switched off. From a biological standpoint, this description is somewhat over-simplified of course.
But it does offer us the essence of the research task at hand when we are attempting to codify what nature does when faced with contradictions. What has happened here with the Skomer rabbit case is what we have to do with all of the other discontinuous jumps we find: first we work out what the core contradiction is, and then we reverse-engineer how it has been solved. In this case, then, the entry in our knowledge database would show this problem as a waste ('loss of substance') versus inability to predict future food/predation ('amount of information' and/or 'ability to detect') conflict, the resolution of which involved the addition of a new feedback loop, Principle 23 ('Feedback').
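The structure of that density-to-breeding feedback loop can be caricatured in a few lines of code. Every constant below is invented purely for illustration; only the loop structure (breeding urge suppressed as perceived density rises) mirrors the Skomer story:

```python
# Caricature of the Skomer rabbits' density-dependent breeding feedback.
# All constants are invented for illustration; only the feedback structure
# (Principle 23: density suppresses the 'breed now' signal) mirrors the story.

CARRYING_CAPACITY = 1000.0  # density at which the breeding urge is fully switched off

def next_season(population: float) -> float:
    """Advance the population by one breeding season with the feedback applied."""
    density = population / CARRYING_CAPACITY
    births_per_rabbit = max(0.0, 0.5 * (1.0 - density))  # suppressed as the island fills
    mortality = 0.2
    return population * (1.0 + births_per_rabbit - mortality)

population = 100.0
for _ in range(50):
    population = next_season(population)
print(round(population))  # -> 600: the population settles at a steady level
```

Instead of the boom-and-bust cycling of the feedback-free case, the population converges to the level at which births exactly balance deaths, which is the qualitative effect observed on Skomer.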

1.2 When The Game Changes On Nature Sometimes the contradiction nature has to solve emerges due to a shift in the external environment. Another classic evolutionary biology case study highlighting this effect is the Peppered moth (Figure 2):

Figure 2: Pair Of Peppered Moths – One Carbonaria One White

The basic story of the Peppered moth is straightforward (Reference 5): “The typical form of this species has whitish wings, speckled with black. In 1848, a black form, named carbonaria, was recorded in Manchester. The carbonaria form increased in frequency rapidly, so that by 1895, 98% of Mancunian Peppered moths were black. The melanic form spread to many other parts of Britain. By examination of old collections, Steward mapped the spread of carbonaria, concluding that all British carbonaria probably derived from a single mutation.” The explanation for the rapid transition from white to black moths in Manchester was very simply the Industrial Revolution. A habitat of the Peppered moth that was traditionally pale in colour 'suddenly' became covered in layers of soot. Pale-coloured moths that would normally have been camouflaged thus became increasingly visible to predators. Dark-coloured moths therefore came to have a distinct evolutionary advantage and hence their population grew. The basic contradiction here is one between the desire to hide from predators and the fact that the environment changed. Not surprisingly, the Inventive Principle required to solve the contradiction in such a case was number 32, 'Colour Change'.

1.3 When The Environment Changes Too Quickly… As we shall see later, the colour change 'strategy' – or mutation – is one that is relatively easy for natural systems to deploy. This, plus the fact that moth populations tend to be quite large in terms of numbers, means that there was sufficient time for the Peppered moth to respond to the changing environment. In other situations, however, nature is not so lucky, particularly since mankind has added 'technology' to the evolutionary toolkit. Another classic evolutionary biology case study relevant here is the Dodo shown in Figure 3.

Figure 3: The Now Extinct Dodo

The Dodo used to inhabit the island of Mauritius in the Indian Ocean. Mauritius has a jungle-like habitat and so the Dodo tended to feed by grubbing around the roots of trees. The need to fly wasn't great and so, over time, the bird re-deployed its resources from wings to thighs and as a consequence became flightless. The Dodo also had no natural enemies and so evolved no natural defences. When explorers came, they changed the system. Hunting a flightless, lazy, oblivious-to-danger Dodo was simple, and because the birds tasted pretty good too, it didn't take long before the explorers wiped out the whole population. There was no time for the bird to evolve a solution to the human explorer problem and consequently the contradiction was never solved. Now that man is changing the world seemingly ever more rapidly, we appear to be seeing large numbers of other species becoming extinct before they can solve the contradiction of their changing environment.

1.4 Evolution At Biological Speed… Take mankind and technology out of the equation and we start to find somewhat larger numbers of biological examples of successful contradiction solving. The most obvious ones involve so-called evolutionary arms races. These occur sporadically throughout evolutionary time, and usually involve a changing game between predator and prey. What we are looking for when we see these arms races are the strategies that either predator or prey use to shift the natural balance that previously existed. The Bombardier beetle (Reference 6) is a wonderful example of a whole progression of evolutionary jumps that have now created a formidable poison-spraying solution to the beetle's defence task. Evolutionary arms races shift the ecological balance from one stable position to another. As in the tiger-versus-two-men example, the system will tend to re-stabilise at a new balance point rather than cause extinction. Basically, when we are looking for contradictions we are looking for situations where we see this kind of discontinuous shift. Arms races aren't the only contradiction-finding opportunity however. Arms races tend to take place over many generations of a life-form. We can find many cases of solved contradictions when we zoom in to look at the fine details of changing environments. A good place to look is at the transition from one life stage to another. When a crab is safe inside its shell it has found a good solution to its predation problem. Crab-in-shell is what we might think of as the 'normal' environment. But shells don't tend to grow, while crabs do. Sooner or later during its growth cycle, the crab needs to solve this contradiction. It either needs to find a way of making a hard exo-skeleton grow, or it needs to find a way to make the transition from one shell to another as safely and rapidly as possible.
Marine crabs tend to solve the contradiction by employing already existing, appropriately shaped structures (the Hermit crab, for example) or by creating a temporary exoskeleton by pumping their outer layers full of seawater. A land crab cannot use either of these strategies. A newly moulted blackback land crab (Gecarcinus lateralis) (Figure 4), however, has found a solution to the contradiction (Reference 7). It traps air within its gut and squeezes, firming up its entire body. Besides being the first known example of a gas-powered skeleton, the innovation may have been a key step in the evolution of land-based crustaceans.

Figure 4: Blackback Land Crab Uses Principle 29 To Solve Its Moulting Contradiction

1.5 Summary of Factors Driving Discontinuous Evolution All in all, then, nature tends to opt for optimization over discontinuous breakthrough change. Put more simply, nature has not found ways of solving large numbers of contradictions. Technology, for example, allows mankind to transport 400 tonnes of Boeing 747 across the ocean, whereas nature still hasn't found a way of lifting more than 22kg off the ground. Nature, nevertheless, does make discontinuous jumps. Our job in this research is to find them and then reverse-engineer them. Unlike the world of technology and its fiercely competitive and highly transient commercial environment, the discontinuity rate in nature is relatively low. The most fruitful areas we have found to look for nature solving contradictions are (in no particular order): - evolutionary arms races - nature responding to a dramatic shift in the local environment - nature in transition from one steady state to another (birth, growth, mating, giving birth, attack, defence) - nature at the micro and nano scale This latter topic is driven largely by evolution in bacteria. Evolution rates in these organisms can be tremendously high due to their ability to rapidly 'share genes' between different bacteria. This is an important area both in terms of man's never-ending race to find cures for ever-evolving diseases, and because nature is currently a far more practiced nano-engineer than even the best of mankind's engineers.

2. A Few Mini Case Studies Every month we publish one of the biological case studies emerging from our research programme (Reference 8). The aim is to put together a more comprehensive and more scientifically valid version of the primary Russian TRIZ resource on biological systems (Reference 9). Here are a few random examples not appearing in that source. Let’s start with an ‘arms-race’ example:

2.1 Jellyfish - jellyfish are generally thought to be soft and squishy creatures. So how does what appears to be little more than a fluid-filled bag attack prey that might happen to live inside a tough shell? They must shoot their stinging cells at crustaceans with enough power to puncture the animals' shells. Normal high-speed cameras aren't fast enough to catch the strike, so researchers used an ultra-high-speed camera, which captures 1.4 million frames per second (Reference 10). The results reveal that the stinging cells discharge in 700 nanoseconds, reach an acceleration of 5.4 million g, and strike with the force of some bullets. The lightning assault – which scientists currently believe is driven by a release of energy from stored collagen in the stinging cells' walls – is one of the fastest movements in the animal kingdom. Here we see a conflict operating on two different levels. At the highest level it is all about how a soft thing pierces a hard thing. At the more detailed level it is all about how the jellyfish manages to create the enormously high acceleration needed to solve the higher-level problem. The high-level problem can be viewed as a 'Force' (required to pierce the shell of the prey) versus 'Other Harmful Factors Acting On System' (i.e. the hard shell will damage the soft jellyfish) conflict. The strategy deployed by the jellyfish can then be viewed as an example of Principle 21, 'Hurrying', in action – i.e. perform the action quickly enough and the skin will be pierced. At the more detailed level, the conflict is about the desire to achieve a high enough acceleration ('Force') with the lowest amount of energy and the simplest possible system. At this level, Reference 10 reveals the solution to involve the prior storing of energy in the collagen (Principle 10).
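As a quick sanity check, the two figures quoted above can be combined (assuming, for simplicity, constant acceleration throughout the discharge) to estimate the tip speed and travel distance of the sting:

```python
# Back-of-envelope check on the nematocyst figures quoted from Reference 10,
# assuming constant acceleration throughout the 700 ns discharge.
g = 9.81                       # standard gravity, m/s^2
acceleration = 5.4e6 * g       # 5.4 million g, in m/s^2
duration = 700e-9              # 700 nanoseconds, in seconds

velocity = acceleration * duration           # final tip speed, v = a*t
distance = 0.5 * acceleration * duration**2  # travel during discharge, s = a*t^2/2

print(f"{velocity:.0f} m/s")        # -> 37 m/s
print(f"{distance * 1e6:.0f} um")   # -> 13 um
```

A tip speed of roughly 37 m/s delivered over only about 13 micrometres is consistent with the tiny scale of a stinging cell, and illustrates why only an ultra-high-speed camera could resolve the strike.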

2.2 Collared Lizard - when male collared lizards (Crotaphytus collaris) – Figure 5 – square off, they make sure their rival knows what they're packing. And it's not just teeth on display. Each opens his jaws wide enough to reveal his chomping machinery: jaw muscles that reflect ultraviolet light hint at just how hard the lizard can bite. This dramatic display of 'weapon quality', reported in Reference 11, is a classic 'look fierce without wishing to commit the resources to actually be fierce' contradiction. In more formal TRIZ terms it represents a conflict between Security and the Amount Of Energy required to achieve the desired effect. In taking advantage of UV and reflection, the collared lizard has made a discontinuous jump that can be mapped to applications of Principles 32 ('Colour Change', which includes specific mention of UV) and 13 ('The Other Way Around').

Figure 5: Collared Lizard

2.3 Philoponella vicina – while venom is the weapon of choice for most spiders, the creation and storage of such noxious substances comes with a relatively high resource cost. Philoponella vicina avoids this problem by utilising an alternative existing resource: web silk. This spider (Figure 6) wraps its prey in metres of silk to make a shroud. As the silk dries it shrinks slightly, delivering a crushing force many times the spider's own weight, and enough to break legs and collapse compound eyes (Reference 12). The study is the first to show that wrapping can damage or even kill prey, instead of merely immobilizing it. Lacking poison to finish the job, Philoponella then regurgitates another existing resource, its digestive fluid, into the shroud, thus creating a self-contained liquid meal.

Figure 6: Philoponella vicina

As with the jellyfish story, the Philoponella vicina contradiction story works on two levels. The basic-level contradiction is how to kill prey when the spider doesn't have a poison resource; the more detailed-level problem is then how to get the identified silk resource to create a sufficient crushing force. The higher-level problem is a Productivity (desire to kill prey) versus Amount of Substance (the spider has no poisoning capability) conflict. It is resolved by finding a new use for the web silk (Principle 25B, Self-Service). At the more detailed level, the conflict centres on the need to create a high crushing Force without changing or impeding the Stability and Strength of the silk. The breakthrough solution then lies in the combined strategy of many turns of the silk using its natural propensity to shrink as it dries (Principles 5, 'Merging', and 8, 'Prior Counter-Action').

2.4 Gannet – the gannet is a large seabird renowned for its diving behaviour. The bird is known to reach depths of over 20m by making a vertical dive into the sea from high altitude. An obvious problem here is the enormous forces inflicted on the bird's skull as it enters the water. The obvious solution is to reinforce the skull so that it can withstand such forces. But a heavily reinforced skull takes up valuable material resource and its weight makes it difficult to achieve balance during non-diving flight. The gannet changed the force-versus-weight game. It has a skull that is no heavier than those of birds of comparable size, thanks to a design full of air pockets that cushion the brain and dissipate the impact shock-waves. The gannet evolution story presents a classic illustration of the Principle 31, 'Porous Materials', strategy.

2.5 Leaf-Cutter Ant – the South American leaf-cutter ant, Atta sexdens, has incredibly resilient cuticle at the cutting edges of its mandibles (Figure 7). Atta lives in the tropics, where it harvests vast amounts of vegetation to cultivate an edible fungus. It has some of the toughest teeth in the whole of the biological world. Recent studies (Reference 13) have begun to reveal the remarkable composition and chemistry which underlie the unique toughness and durability of the teeth. A measured six-fold increase in hardness at the cutting edge of the jaw can be traced to impregnation with zinc and manganese at levels of up to 10% by weight.

Figure 7: Close-Up View Of Leaf Cutter Ant Mandibles

Here is another classic material-system trade-off involving the thickness of the tooth: we want it to be thick for strength and durability, and thin for ease of cutting and thus the lowest use of energy. The inclusion of zinc and manganese represents the use of a Principle 40, 'Composite Material', strategy.

3. Putting It All Together According to Reference 14, the original intention of our biological research programme was to create and publish a completely new Matrix. In that paper we discussed the Matrix Parameters – actually a subset of the classical TRIZ tool – that were most relevant in the biological context. That paper was originally published in 2004. At that time it was felt that the strategies used by nature to resolve contradictions were considerably different from those used by engineers, and that this was therefore the justification for publishing a new version of the tool. Almost everything that has happened since that paper was first published has served to highlight two important issues: 1) at some time or another nature has sought to challenge technical contradictions involving all 48 of the Parameters found in the updated TRIZ Matrix, 'Matrix 2003' (Reference 15). The value of selecting a natural world subset of Parameters was thus diminished. 2) When we map solution strategies used by nature onto that Matrix, the correlation between the strategy used by nature and the Inventive Principles observed in the world of technology is very high. In fact, as a global average across all of the boxes in the Matrix, the correlation is about 95%. It is 95% likely, in other words, that the Matrix will already contain a strategy found in nature. Bearing in mind that Matrix 2003 was shown to be 95% accurate when tested against patents granted in 2004 (Reference 16) and was still 93% accurate when a similar exercise was conducted in 2005, the justification for creating a separate tool based on the biological findings becomes rather less compelling. This being said, a 95% average correlation does not mean that every box in the Matrix carries a similar level of accuracy. The worst correlation in fact occurs in the Amount Of Substance versus Strength box reported in Reference 14.
The correlation between Matrix and nature for this conflict pair is 60%, with Matrix 2003 containing three of the five most frequently used strategies in biology. As far as the case studies described in this paper are concerned, only one – the Skomer rabbit – conflicts with the recommendations already found in Matrix 2003. Even then, the only reason Matrix 2003 does not contain a Principle 23 recommendation for the conflict pair solved by the rabbit is that adding feedback is a strategy now used so frequently that in many situations we cease to think of it as inventive. The low cost with which feedback can be added to almost any technical system means that almost all such systems have feedback as a matter of course. This is an area, in other words, where human engineers have little to learn from a solution evolved in nature. When the time comes to review and update Matrix 2003 (we promised ourselves that this would happen when its accuracy dropped below 90%), we will undoubtedly incorporate the findings from our biological research. While it is not yet clear precisely what form such an integration will take, we are clear on two issues that can be described here:
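The overlap figures quoted above (such as "Matrix 2003 contains three of the five most frequently used strategies in biology, i.e. 60%") can be reduced to a simple set-intersection calculation. The sketch below illustrates this; the principle numbers in the example are invented for illustration only and are not taken from the paper's data.

```python
def matrix_overlap(nature_principles, matrix_principles):
    """Fraction of nature's most-used principles already present in a Matrix cell."""
    nature = set(nature_principles)
    return len(nature & set(matrix_principles)) / len(nature)

# Hypothetical example: 3 of nature's 5 strategies appear in the
# Matrix cell, giving a 60% correlation for that conflict pair.
biology_cell = [1, 14, 25, 31, 40]     # invented principle numbers
matrix_cell = [3, 10, 14, 25, 31, 35]  # invented principle numbers
print(f"{matrix_overlap(biology_cell, matrix_cell):.0%}")
```

Averaging this figure over every box in the Matrix would yield the global ~95% correlation reported in the text.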

3.1 Some Jumps Are Bigger Than Others…

Nature, the great optimizer, has to produce viable life-forms. When making a discontinuous jump from one way of doing things to another, there is no middle ground. This is classic 'you can't cross a chasm in many small steps' territory. Engineers, on the other hand, don't mind too much if an experiment fails. We can make mistakes, and those mistakes frequently serve the useful purpose of accelerating our discovery of the big-jump solution. A classic example of a 'big jump' in a technical system would be one involving Principle 28, Mechanics Substitution. Replacing a mechanical wheel with a magnetic levitation system gives a good illustration of a high-magnitude, chasm-like jump. Such jumps are rare, if found at all, in nature. More typical of a natural-world evolutionary 'jump' is the curlew. This example, described in more detail in Reference 17, attempts to explain why some wading birds have bills that point up, while others have bills that point down (Figure 8). Both the Curlew and the Godwit have at some point in their evolution found that there is an advantage in making use of curvature (Principle 14) to improve their feeding efficiency. Transforming a straight thing into a curved thing is relatively easy in evolutionary terms. Once that jump has occurred, the rest – the direction and degree of curvature, for example – is pure optimization to suit the prevailing circumstances. Likewise, changing colour is a relatively easy mutation to make, especially if, as in the case of the earlier Peppered moth description, there is already a mutation that has created the capability to produce black spots on the normally pale-coloured wings.

Eurasian Curlew (Numenius arquata ) Bar-tailed Godwit (Limosa lapponica)

Figure 8: Upward And Downward Curved Bills In Wading Birds

Nature, in other words, provides us with a mechanism to identify which of the Inventive Principles found in TRIZ are likely to give bigger advances than others.

3.2 Nature Knows How To Deploy Some Principles Better Than Engineers…

The converse of the Principle-weighting story is that there are certain breakthrough strategies that nature has deployed far more successfully than the best of mankind's efforts. By some considerable distance, the leaders in this regard are Principle 31, 'Porous Materials', and Principle 25, 'Self-Service'. With Principle 31, the breakthrough potential comes with the profound knowledge of how to get the most functional benefit out of the least amount of material. Making use of empty space is a trick that nature has learned time and time again. Human engineers are only just beginning to learn how nature uses this strategy (and also how nature manages to combine it with other strategies – like Asymmetry). Principle 25 is important for similar reasons. Human engineers (especially ones working in the West!) have traditionally been very wasteful of resources. In nature, there is no such thing as waste. Everything gets used – if not by one part of an eco-system, then by another. There are examples of nature evolving wonderful solutions using both of these strategies in just about every box in the Contradiction Matrix. While we wait for the details of precisely where and how, a simple but potentially significant way to bring the best of nature into our desire to create breakthrough engineering systems is always to be on the look-out for opportunities to deploy these two Principles in nature-like ways.

4. References

1) Margulis, L., Sagan, D., (2000), 'What Is Life? The Eternal Enigma', University of California Press.
2) Mackenzie, A., Ball, A.S., Virdee, S.R., (1988), 'Instant Notes In Ecology', Bios Scientific Publishers, Oxford.
3) Breitenmoser, U., Slough, B.G., Breitenmoser-Würsten, Ch., (1993), 'Predators Of Cyclic Prey: Is The Canada Lynx Victim Or Profiteer Of The Snow Shoe Hare Cycle?', Oikos, 66: 551-554.
4) Bellamy, D., 'Skomer Rabbits', Dyfed Wildlife Trust pamphlet, undated.
5) Majerus, M., (2002), 'Moths', The New Naturalist, HarperCollins, Chapter 9.
6) Mann, D.L., (2004), 'Beetles, Chains And Radar Plots', TRIZ Journal, March 2004.
7) Systematic Innovation e-zine, (2006), 'Blackback Land Crab', Issue 52, July 2006.
8) www.systematic-innovation.com
9) Timokhov, V., (2002), 'Natural Innovation: Examples Of Creative Problem Solving In Biology, Ecology and TRIZ', CREAX Press (original publication in Russian, 1995).
10) Nüchter, T., Benoit, M., Engel, U., Özbek, S., Holstein, T.W., (2006), 'Nanosecond-Scale Kinetics Of Nematocyst Discharge', Current Biology, Volume 16, Issue 9, May 9.
11) Lappin, A.K., Brandt, Y., Husak, J.F., Macedonia, J.M., Kemp, D.J., (2006), 'Gaping Displays Reveal And Amplify A Mechanically Based Index Of Weapon Performance', The American Naturalist, University of Chicago Press, July 2006.
12) Eberhard, W., Barrantes, G., Weng, J.L., (2006), 'Tie Them Up Tight: Wrapping By Philoponella Vicina Spiders Breaks, Compresses And Sometimes Kills Their Prey', Naturwissenschaften, 93(5): 251-254, May.
13) University of Southampton, Bio-Composites, http://www.soton.ac.uk/~pw/research/bytes/bytes.htm
14) Mann, D.L., O Cathain, C., (2006), 'Better Design Using Nature's Successful (No-Compromise) Strategies', TRIZ Journal, May 2006.
15) Mann, D.L., Dewulf, S., Zlotin, B., Zusman, A., (2003), 'Matrix 2003: Updating The TRIZ Contradiction Matrix', CREAX Press.
16) Mann, D.L., (2004), 'Comparing The Classical And New Contradiction Matrix, Part 2 – Zooming In', TRIZ Journal, July 2004.
17) Systematic Innovation e-zine, (2006), 'Curlew', Issue 49, April 2006.

INNOVATION AND CREATIVITY ON LOGISTICS BESIDES TRIZ METHODOLOGY

Odair Oliva de Farias Catholic University of Santos (Unisantos) / Technology Faculty (Fatec), Brazil [email protected]

Getúlio Kazue Akabane Catholic University of Santos (Unisantos) / Santo Andre University (UNIa), Brazil [email protected]

Abstract
Logistics activities have been receiving special attention from scientific management due to the growing demands of the global economy. Achieving different goals among the many participants of increasingly complex logistics networks constitutes the challenge facing the construction of new paradigms for the 21st century. The main initiatives in supply chain management today have to consider widely adopted models and concepts used in the solution of contemporary logistics problems. Logistic systems, as technical systems, can be characterised by an original matrix of contradictions, associated by similarity with inventive principles, models and related technologies. Solutions in this field can be rearranged according to the fundamental logistics variables of time, information and resources. The most frequent logistic principles not related to ordinary solutions are identified in this paper as having important potential for innovative and creative new solutions. In this way, the applicability of the TRIZ model is confirmed here for the field of operations management, especially for the best use of logistic system resources, the applicability of new models, and technological innovation in this area.
Keywords: Logistics, Supply Chain, Complexity, TRIZ, Innovation, Creativity

1. Introduction

As a complex activity, logistics constantly faces the challenge of meeting specific demands according to several parameters of marketing, sales, production and others. These activities repeat at each player in the supply chain and, besides being synchronised, they should accommodate each participant's objectives in a balanced and sustainable way. In many contexts the term "complex" carries the connotation of something complicated or of difficult solution. However, the real sense of the word can be found in its Latin origin, "complexus", meaning something woven together (Morin, 1999). In order to understand contemporary complexity, logistics may be defined through a few key concepts. Logistics describes the entire process of materials and products moving through the firm, including inbound logistics, materials management, and the physical distribution that moves goods outward from the end of the assembly line to the customer. Supply chain management, finally, is a somewhat larger concept than logistics itself, because it deals with managing both the flow of materials and the relationships among channel intermediaries, from the point of origin of raw materials to the final consumer. To meet the demands of contemporary networks it is necessary to use methodologies that have been in development since the appearance of general systems theory; among them, special emphasis is given today to TRIZ, which is based on humans' own inventive capacity. The trend towards globalization and highly intensive competition has in recent years forced companies to look for solutions by which they can further develop their competencies and increase their competitive edge. These competencies are the collective learning that guides business owners and workers in how best to coordinate diverse production skills and integrate multiple streams of technology.
This involves many levels of people and an entire array of functions (Hamel and Prahalad, 1990). When the mere accumulation of knowledge was no longer viewed as valuable, primary importance was attached to renewing and generating new ideas. New ideas are considered to be the main source of national wealth, determining economic, cultural and military potential. Following Salamatov (1999), this became the reason for an active search for new ways to intensify the flow of new ideas. As a result of this study, the main principles are commented on within the logistic scope, subdivided into specific groups, pointing out alternatives that can eventually be adopted in new challenges and presenting potential applicability, with gains in competitive advantage and cost savings in logistics.

2. Logistics Management

Competitive pressures and the resource constraints of today's operating environment have elevated logistics management to an important strategic level within many firms. The function of a logistics network should be to maximize profits, provide a least-total-cost system, and achieve the desired customer service levels. In addition, the Council of Supply Chain Management Professionals (CSCMP) defines logistics as planning, implementing and controlling the efficient and effective flow and storage of goods, services and related information from the point of origin to the point of consumption in order to meet customer requirements. For Dornier (2000), logistics refers to the management of flows between business functions. A modern definition of logistics encompasses a wider range of flows than in the past, including several modes of product transportation and information assessment. Wars have been won or lost on the basis of logistics. Similarly, supply chain management has the power to build companies and the ability to destroy them (Kesteloo et al., 2005). A different point of view explains logistics as a holistic approach in which the management of material and information flows plays a significant role in satisfying customers' needs and requirements. In detail, the three basic components of logistics are resources, information technology, and the time of logistic response. To innovate along these dimensions, a company can streamline the flow of information through the supply chain, change its structure, or enhance the collaboration of its participants. Consider, for example, how the apparel retailer Zara, in La Coruña, was able to create a fast and flexible supply chain by making counterintuitive choices in sourcing, design, manufacturing and logistics.

3. Logistics Innovation

Numerous studies have shown that logistic innovation arises from deep insights originating in the continuous relationship with customers (Arroniz et al., 2005). Interpretation and dissemination of information about those insights lead to the learning of new practices related to these innovations, and also make the innovation processes themselves more familiar. Logistics innovation is relevant only if it creates value for customers and therefore for the firm. Thus creating "new things" is neither necessary nor sufficient for business innovation. Customers are the ones who decide the worth of an innovation by voting with their wallets. It makes no difference how innovative a company thinks it is; what matters is whether customers will pay. Successful innovation requires the careful consideration of all aspects of a business. A great product with a lousy distribution channel will fail just as spectacularly as a terrific new technology that lacks a valuable end-user application. Logistic value is manifested primarily in terms of time and place: products and services have no value unless they reach customers when (time) and where (place) they intend to consume them (Ballou, 2004). Another important demand in terms of logistic innovation is the need to survive the unthinkable. Faced with seasonal fluctuations of demand, or with interruptions caused by strikes, material shortages or natural events such as hurricanes, contingency plans should be elaborated in a way creative and efficient enough to give consistency to risk management. That is what some authors call "knowable unknowns". Given the complexity of managing supply chains, businesses spend approximately nineteen billion dollars annually on information technology for supply chain management, according to International Data Corporation (Laseter and Oliver, 2005).
However, much of the disappointment among supply chain practitioners today comes from companies' failure to internalize principles such as the setting of strategic policies, holistic analysis of trade-offs, and cross-functional support. This argument reinforces the applicability of inventive methodologies such as TRIZ.

4. Methodological Procedures and Comments

To break with what is called psychological inertia, where the solutions being considered are confined to one's own experience, professionals can use TRIZ methodologies alongside psychological tools like brainstorming, intuition and creativity, remembering that psychological tools based on experience and intuition are difficult to transfer. The study began with the observation of the main practices of innovation in logistics, in some particular supply chains and in some of the largest Brazilian retailers and distributors. Such practices were evaluated through a theoretical review of publications and case studies in this area of interest, as well as through the study of the respective patent registrations. This empirical study explored in depth the resources offered by the methodology, trying to offer a wider vision of contemporary logistics and its potential evolution through the Theory of Inventive Problem Solving. Following the systematic steps proposed by Altshuller, a general model for problem solution was used as the starting point for solving inventive problems. The framework of steps for the present research is as follows:
a) Define the main problem in logistics related to each one of the 39 parameters of the system, observing it as a technical system (the logistics chain).
b) Identify the three most frequent principles used to improve each parameter in the original contradiction matrix.
c) Identify, among the usual solutions (technologies / models), similar solutions or correlated inventive patterns.
d) Identify new solutions as possibilities related to the two other inventive principles.
e) Assess these solutions for their capacity to protect each parameter that could otherwise worsen.
It was observed during the research that trade-off analysis of the logistic system was easier with the application of the TRIZ contradiction matrix.
In driving this kind of methodological procedure, the study by Zhang (2003) on service applications of TRIZ was of fundamental importance. The dynamic and non-linear features of logistic systems make the consideration of probabilities and possibilities valuable. The main results of this research and the pertinent considerations are tabulated in Table 1. New solutions can also be created from principles not exemplified there. For parameter 7 (volume of moving object), principle 4 (asymmetry) can suggest the creation of a kind of movable distribution center that turns the important distribution function into a multilateral activity, facilitating usual practices such as the "milk run".
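Lookup step (b) of the framework above amounts to a table query: given an improving parameter, retrieve its three most frequent inventive principles and the associated logistic variable. The sketch below illustrates this with a few entries excerpted from Table 1; the dictionary structure and function name are our own illustrative assumptions, not part of the original method.

```python
# A few rows excerpted from Table 1 of the paper:
# parameter number -> (improving feature, most frequent principles, variable)
TABLE1_EXCERPT = {
    7:  ("Volume of moving object", (1, 4, 10), "Resource"),
    15: ("Duration of action of moving object", (3, 19, 27), "Information"),
    25: ("Loss of time", (4, 10, 28), "Time"),
    27: ("Reliability", (3, 10, 11), "Time"),
}

def most_frequent_principles(parameter: int):
    """Return (feature name, principles, logistic variable) for a parameter."""
    return TABLE1_EXCERPT[parameter]

name, principles, variable = most_frequent_principles(7)
print(f"Parameter 7 ({name}): principles {principles}, variable {variable}")
```

Steps (c) and (d) then compare these principles against known logistic models (e.g. multimodality for parameter 7) and against the remaining, less usual principles in the same cell.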

Table 1 - Models and technologies on logistics (part 1)

No. | Improving feature | Most frequent principles | Logistic variable | Analogue principle description | Logistic model | Innovative opportunity
1 | Weight of moving object | 10, 26, 29 | Resource | Pre-arrange objects or systems without losing time. | Supercontainer - Japanese project of consolidation. | Copying and intangibility.
2 | Weight of stationary object | 10, 28, 35 | Information | Prepare for the activity with anticipated results. | Simulation - To check the warehouse performance. | Automate operations and parameter changes.
3 | Length of moving object | 8, 17, 24 | Resource | Compensate the weight by merging with other objects. | TOFC - Trailer on flatcar, or Piggyback. | Intermediary and another dimension.
4 | Length of stationary object | 14, 26, 35 | Time | Change the sequence of stock moving. | FIFO - First-in, first-out stocking models. | Spheroidality and copying.
5 | Area of moving object | 4, 14, 30 | Time | Increase the asymmetry of the supply system. | BIBO - Bulk-in / Bag-out for sugar transport. | Spheroidality and flexible shells.
6 | Area of stationary object | 2, 18, 35 | Information | Take out steps of the stocking process. | WMS - Warehouse management system. | Mechanical vibration and parameter changes.
7 | Volume of moving object | 1, 4, 10 | Resource | Divide the system into independent parts. | Multimodality - Use of several transport means. | Asymmetry and preliminary action.
8 | Volume of stationary object | 2, 10, 35 | Resource | Remove non-value steps from the logistic process. | Cross docking - To cross the warehouse without stocking. | Preliminary action and parameter change.
9 | Speed | 8, 13, 28 | Time | Invert the action used to solve the problem. | On-demand - Moving on demand rhythm. | Anti-weight and mechanics substitution.
10 | Force (intensity) | 18, 28, 37 | Information | Change from static to movable fields. | e-Services - Enable real-time communication. | Mechanical vibration and strategic expansion.
11 | Stress or pressure | 10, 35, 36 | Resource | Pre-arrange objects or systems without losing time. | SMI - Supplier managed inventory. | Parameter changes and phase transition.
12 | Shape | 10, 15, 32 | Time | Design a process to change to be optimal. | Milk-run - Take advantage of the delivery route to collect. | Preliminary action and transparency.
13 | Stability of the object's composition | 13, 35, 39 | Time | Make fixed parts movable. | Merge-in-transit - Consolidate within transport. | Parameter changes and use of inert parts.
14 | Strength | 10, 14, 40 | Resource | Change systems from uniform to composite. | Value chain - Multiple departments adding value. | Preliminary action and spheroidality.
15 | Duration of action of moving object | 3, 19, 27 | Information | Put the system's function in the most suitable conditions. | e-Commerce - To shorten the purchase cycle. | Periodic actions and use of inexpensive objects.
16 | Duration of action by stationary object | 6, 16, 35 | Information | Use standardized features. | RFID - Radio frequency identification. | Partial or excessive actions and parameter changes.
17 | Temperature | 2, 19, 22 | Time | Use pauses between impulses to improve performance. | Calming - Decrease urban traffic in strategic points. | Taking out and use of harmful factors.
18 | Illumination intensity | 1, 19, 32 | Information | Increase the degree of control segmentation. | Dashboard - Increase performance visibility. | Periodic action and transparency.
19 | Use of energy by moving object | 12, 18, 19 | Resource | Change operating conditions to eliminate needs. | Fuel cells - To minimize costs and pollution. | Mechanical vibration and periodic action.

Table 1 - Models and technologies on logistics (part 2)

No. | Improving feature | Most frequent principles | Logistic variable | Analogue principle description | Logistic model | Innovative opportunity
20 | Use of energy by stationary object | 6, 27, 35 | Information | Change system flexibility. | Central of cargo - Web-form service contracts. | Universality and use of inexpensive components.
21 | Power | 10, 19, 38 | Information | Use of periodic or pulsating action. | PDCA - Plan-Do-Check-Act quality cycle. | Preliminary action and boosted interaction.
22 | Loss of energy | 7, 10, 15 | Resource | Nest activities one inside another to save resources. | Drop-hook - Replace cargo during the delivery period. | Preliminary action and dynamics.
23 | Loss of substance | 10, 31, 35 | Resource | System permissive for many functions. | Reverse logistics - To profit from all the transport. | Preliminary action and parameter change.
24 | Loss of information | 10, 22, 26 | Information | Use harmful factors to achieve a positive effect. | CRM - Customer relationship management. | Preliminary action and copying.
25 | Loss of time | 4, 10, 28 | Time | Perform the requirements before they are needed. | JIT - Just-in-time: putting cargo when needed. | Asymmetry and mechanics substitution.
26 | Quantity of substance | 3, 14, 29 | Resource | Spheroidality, changing the linear process. | Cyclic counting - Continuous checking of stocks. | Local quality and intangibility.
27 | Reliability | 3, 10, 11 | Time | Perform the required change before it is needed. | VMI - Vendor managed inventory. | Local quality and beforehand cushioning.
28 | Measurement accuracy | 6, 26, 32 | Information | Use of standardized features. | BSC - Balanced scorecard. | Copying and transparency.
29 | Manufacturing precision | 2, 28, 32 | Information | Improve the system's transparency. | Kanban - To change the system colors. | Taking out and mechanics substitution.
30 | Object-affected harmful factors | 22, 33, 35 | Time | Change the sequence during the operations. | Postponement - To leave some actions for later. | Use of harmful factors and transparency.
31 | Object-generated harmful factors | 1, 22, 39 | Information | Use harmful factors to improve the environment. | Yoshikawa - Schema to find the main causes. | Segmentation and inert atmosphere.
32 | Ease of manufacture | 1, 13, 28 | Information | Divide systems by their requirements. | MRP - Material requirements planning. | System inversion and mechanics substitution.
33 | Ease of operation | 2, 13, 34 | Resource | Maintain only the necessary parts of the system. | Lean logistics - Eliminate unnecessary steps. | System inversion and discarding.
34 | Ease of repair | 1, 11, 35 | Resource | Prevention for low reliability. | TPM - Total productive maintenance. | Segmentation and parameter change.
35 | Adaptability or versatility | 1, 29, 35 | Time | Change to agile strategies. | CPFR - Collaborative planning, forecasting and replenishment. | Segmentation and intangibility.
36 | Device complexity | 6, 16, 26 | Resource | Make part of the system perform multiple functions. | Routing - Use GPS to locate and support m-commerce. | Partial or excessive actions and copying.
37 | Difficulty of detecting and measuring | 16, 26, 28 | Information | Substitute mechanical resources with automated ones. | Dashboard - Replace audit with real-time control. | Partial or excessive actions and copying.
38 | Extent of automation | 13, 28, 35 | Information | Explore the automation systems in other ways. | e-Bid - To buy supplies through reverse auction. | Mechanics substitution and parameter change.
39 | Productivity | 10, 28, 35 | Time | Change the velocity according to the bottlenecks. | TOC - Theory of constraints. | Preliminary action and mechanics substitution.

For parameter 15 (duration of action of moving object), principles 19 (periodic action) and 27 (inexpensive objects) can suggest, respectively, the periodic use of transport surplus and the adaptation of tariffs in agreement with demand. For parameter 27 (reliability), principle 11 (cushioning) can give origin to new models of risk management. Certainly, other examples could be explored with potential for innovation.
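The conclusion later reports the most frequent principles per logistic variable (time, information, resource). The sketch below shows one plausible way such a ranking could be tallied from Table 1; the rows are a small excerpt, and the counting logic is our assumption rather than a procedure stated by the authors.

```python
from collections import Counter

# A small excerpt from Table 1: (improving feature, principles, variable).
ROWS = [
    ("Shape",        (10, 15, 32), "Time"),
    ("Loss of time", (4, 10, 28),  "Time"),
    ("Reliability",  (3, 10, 11),  "Time"),
    ("Productivity", (10, 28, 35), "Time"),
]

def top_principles(rows, variable, n=3):
    """Rank inventive principles by how often they appear for one variable."""
    counts = Counter(p for _, ps, v in rows for p in ps if v == variable)
    return [p for p, _ in counts.most_common(n)]

print(top_principles(ROWS, "Time"))
```

On this excerpt, principle 10 dominates the time-related rows, consistent with its appearance among the paper's most frequent time principles.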

[Timeline figure: models such as Brainstorm, TPM, BSC, Value chain, JIT, 5S, TRIZ, Six Sigma, CRM, Toyota Systems, Lean Production, ISO certifications and real-time solutions, plotted over 1950-2010 against an evolving emphasis on costs, process, quality and the customer.]

Figure 1 - 60 Years of Conceptual Evolution of Logistics

Among the models and solutions developed to solve logistic problems over the past years, the most important were those associated with improvements of engineering parameters. Within this process, new creative models may appear in the future, in line with the conceptual evolution of logistics itself, putting emphasis on costs, processes, quality and even customer satisfaction.

5. Conclusion

New inventive patterns were assumed here in order to optimize variables such as time, information and resources, allowing the identification of principles strongly associated with many different solutions to logistics problems. To solve time constraints, the most frequent principles were 10, 13 and 35; for information, 19, 28 and 32; and for resources, 2, 10 and 14. In the current phase, the wider acceptance of these new inventive patterns, facing the complexity of contemporary logistic systems, acts simultaneously on all variables, combining principles and simulating human intelligence through business intelligence, granular computing and evolutionary algorithms. This is how Altshuller's thinking may be materialized in new technologies. This study proposes important solutions within the logistic scope, identifying with clarity new inventive movements towards some specific demands. Besides contributing to a larger understanding of the innovation process in logistics, the present article presents new patterns from which new models can be elaborated to solve new challenges. Although limited by its empirical character, future studies can be derived from the present work, exploring new inventive potentials for specific contradictions through multi-case studies.

6. References

Arroniz, I., Sawhney, M. and Wolcott, R.C. (2005). 'The 12 Different Ways for Companies to Innovate', MIT Sloan Management Review, Vol. 47, No. 3, pp. 76.
Ballou, R.H. (2004). Business Logistics / Supply Chain Management, 5th ed., Pearson Education, pp. 33.
Dornier, P., Ernest, R., Fender, M. and Kouvelis, P. (2000). Logística e operações globais [Global Operations and Logistics], São Paulo, pp. 122.
Hamel, G. and Prahalad, C.K. (1990). 'The Core Competence of the Corporation', Harvard Business Review, pp. 79-91.
Kesteloo, M., Shorten, D. and Engel, H. (2005). 'The Missing Link: The High Performance Supply Chain', Strategy + Business, Booz Allen & Hamilton, pp. 45.
Laseter, T. and Oliver, K. (2005). 'When Will Supply Chain Management Grow Up?', idem, pp. 92.
Morin, E. (1999). O Pensar Complexo: Edgar Morin e a crise da Modernidade [Complex Thinking: Edgar Morin and the Crisis of Modernity], Rio de Janeiro, pp. 47.
Salamatov, Y. (1999). TRIZ: The Right Solution at the Right Time, Insytec BV, pp. 19.
Zhang, J., Chai, K. and Tan, K. (2004). '40 Inventive Principles with Applications in Services Operations Management', TRIZ Journal, pp. 2-15.

CONTRIBUTIONS OF TRIZ AND AXIOMATIC DESIGN TO LEANNESS IN DESIGN: AN INVESTIGATION

Rohan A. Shirwaiker
Penn State University, Industrial and Manufacturing Engineering Dept., Leonhard Bldg., PA 16802
[email protected]

Gül E. Okudan
Penn State University, School of Engineering Design, 213 Hammond Bldg., UP, PA 16802
[email protected]

Abstract
Lean applications, which focus mostly on manufacturing, are deemed important contributors to industrial success. Today companies are striving for leanness in other functional areas such as product design and development. In this paper, we review the state of the art on lean design, and the appropriateness of two tools for lean design applications: Theory of Inventive Problem Solving (TRIZ) and Axiomatic Design (AD). The literature review section reveals the need and scope for more research on lean design. We also enunciate how the lean design approach fits within the traditional product design and development process, and then evaluate TRIZ and AD for their contributions to leanness. Our evaluation reveals a close correlation between these tools and the lean design metrics. The paper concludes by proposing the use of a synergistic problem solving approach based on TRIZ and AD to increase the efficiency and quality of the process while also helping to achieve lean design goals for a company.
Keywords: Lean Design, TRIZ, Axiomatic Design

1. Introduction

Lean is an improvement methodology which includes a set of techniques that focus on eliminating inefficiency and wasteful processes (Cave, 2003). The National Institute of Standards and Technology Manufacturing Extension Partnership's Lean Network defines lean as a systematic approach for identifying and eliminating waste during the flow of the product at the pull of the customer in pursuit of perfection. The definition of waste includes anything other than the minimum amount of equipment, materials, space and time that is essential to add value to the product (Russell and Taylor, 1999). Barb (2003) classifies waste into seven types: defects, overproduction, waiting, transportation, non-value-added processing, excess inventory, and excessive motion. To this categorization, underutilization of resources (Kilpatric, 2003) and complexity (Nicholas, 1998) have also been added. Principles of lean thinking have been applied successfully across many disciplines such as manufacturing (VerDuft, 1999), construction (Freire and Alarcón, 2002) and software (Middleton et al., 2005). Its philosophy is rooted in 1) improving quality, 2) reducing total costs, 3) reducing lead times, and 4) improving the utilization of resources (human and material). Among the many application areas of the lean philosophy, manufacturing has been dominant. While lean manufacturing has been successful in improving productivity and product quality to a certain extent, further improvement is still desired. Currently, the manufacturing industry in the U.S. is passing through a recovery phase after the 2002-03 recession. Since the recession, manufacturing output has lagged that of earlier economic recoveries (Popkin and Kobe, 2006). This might point to the fact that applying lean techniques only during the manufacturing phase of the product lifecycle is insufficient. According to a report by A. T. Kearney,
Inc., current lean practices focus exclusively on manufacturing and supply chain productivity and ignore product design altogether. Secondly, in the face of rapidly growing competition and globalization, companies are forced to call into question the efficiency of their design methods to keep their competitive edge and ensure their survival (Cavallucci et al., 2002). All these factors necessitate the incorporation of lean concepts upstream into the product design and development stage. Lean design, emerging out of this need, is the application of lean principles, which promote the elimination of waste and non-value-adding activities in processes, to engineering and design (Freire and Alarcón, 2002). This paper reviews various aspects of lean design and explores the potential of TRIZ and AD as tools for lean design. It concludes with recommendations for future research.

2. Review of Lean Applications and Techniques in Product Development
Freire and Alarcón (2002) focus on the use of lean principles for improving the design process in construction projects. According to the authors, lean design promotes different views to model, analyze, and understand the design process. A methodology for lean design is proposed on the basis of the concepts and principles of lean production. The methodology was validated by applying it to four projects of a design company mainly dedicated to the engineering of civil, mining, and industrial projects. The only value-added activity in the projects was design, yet only 16.2% of the cycle time was used for it. The application of the proposed methodology resulted in an increase in overall productivity of 31%. However, the proposed methodology takes a broader engineering management perspective while neglecting the prime design phase, which consists of problem definition, analysis and solution. Haque (2003) describes the applications of lean thinking in aerospace engineering. The paper is inspired by the observation that the principles of lean thinking as suggested by Womack and Jones (1996) have been successfully applied in manufacturing and operations but are conspicuously absent in ‘engineering’ or new product introduction processes. The term ‘engineering’ encompasses activities employed by all engineering firms, including conceiving, designing, developing, testing and launching products. Based on the basic lean principles, the author suggests five lean principles applicable to ‘engineering processes’ and demonstrates them through three case studies. However, similar to the methodology proposed by Freire and Alarcón (2002), this methodology was applied at different levels of the product development phase but never focused on the basic design process.
Poppendieck (2002) discusses the application of lean principles within the software development framework to reduce the wastes associated with the software design process. Foyer (1995) advocates the notion of applying lean principles in the design phase itself. The author suggests that improvements in the basic product design, made concurrently with facilities improvement, can yield enormous benefits to both customer and producer. The basic idea that the paper brings forth is valuable: lean and agile manufacturing without reference to the product design can yield significant savings in overhead costs, but often leaves prime material, processing and other overhead costs unaltered (Foyer, 1995). The report by A. T. Kearney, Inc. stresses the importance of lean design in an organization. As it states, together, lean design and lean manufacturing provide companies with a complete arsenal to attack waste, both in how the product is produced and in what product gets produced. Through the example of a railway vehicle, the author explains how 58% of the vehicle cost is associated with design wastes and how the total cost of the vehicle could easily be reduced by 30% by simply attacking those wastes. A three-step lean design approach for the elimination of the wastes associated with design is proposed. A report by the ETI Group (2005) also suggests an integrated approach to new product development, primarily focusing on the detailed product design stage. They note that designs arriving late at the factory, with poor production yields, major manufacturing problems, and unresolved engineering problems, undermine the benefits of lean manufacturing. A seven-step methodology for product development based on standard lean tools is proposed. Thus, it can be seen that many techniques propose the application of lean principles in product design and development.
To be effective, these techniques must be applied at the appropriate stages of an integrated product development process. While most of them have demonstrated their effectiveness in enhancing productivity and quality, it might be easier to accomplish these objectives by applying lean design principles at the grassroots level. The following subsection discusses how lean design principles can be incorporated into the traditional product design and development framework.

2.1 Lean Design within the Product Design and Development Framework
80% of product costs are established in the design phase (Foyer, 1995); hence the need to focus on lean design. The simplified notion of lean design is to remove waste (in time, material, complexity and underutilization of resources) from all aspects of the product development process before the product ever reaches the manufacturing floor. The nature of the design process is complex; it involves thousands of decisions, sometimes over a period of years, with numerous interdependencies, under a highly uncertain environment (Freire and Alarcón, 2002). The main stages in this process where inefficiencies creep in are the problem definition and analysis phase, the concept generation phase and the concept selection phase. In many cases, products fail in the market because their designs were wrongly defined in the first place. If the design problem is improperly defined, the later steps in the design process will only add activities whose final output differs from what was actually desired. This renders the value-added activities futile, leading to waste of time and money. Although tools like QFD and the Kano model are powerful media for accounting for customer requirements and expectations, other tools might be necessary to convert generic specifications into a technical problem definition while achieving leanness. Once the problem is accurately defined, a pool of concepts is generated and the optimum solution is selected in the concept selection stage. However, most concept generation techniques, like brainstorming, are very time consuming, and very few ideas from the generated pool are feasible solutions. From the lean perspective, a better way to accomplish the concept generation phase would be to develop a group of few but accurate potential solutions rather than a wide pool of divergent concepts.
Once a pool of potential concepts is generated, the best concepts need to be selected before the product can go to manufacturing. Okudan and Shirwaiker (2006) have reviewed a number of concept selection methods with regard to their methodologies, advantages and disadvantages. An important inference from that paper is that the size of the pool of potential concepts affects the efficiency of the concept selection phase. Hence it is essential that the results of the concept generation stage be concise and practical.

3. TRIZ and AD as Tools for Achieving Lean Design
TRIZ and AD are two widely used problem solving tools which could also be very beneficial in achieving lean design. TRIZ is an algorithmic approach to solving technical and technological problems based on the study of more than two million worldwide patents. TRIZ gained its fame as a tool to guide designers to solutions for conflicts in an existing system (or design) (Kim and Cochran, 2000). Along with innovation, it brings efficiency into the process in that the suggested TRIZ principles provide definite guidelines to direct the engineers' thinking. However, TRIZ is a stronger conceptual tool than analytical tool. According to Hipple (2003), although TRIZ can be used as a problem definition tool, its greatest strength lies in resolving contradictions and solving problems defined by other techniques. Thus TRIZ, though a tool suitable for lean design, only partially fits into the ideal product design framework. AD aims at making human designers more creative, reducing the random search process, minimizing the iterative trial-and-error process, and determining the best design among those proposed. It is based upon two axioms: the independence axiom checks that all functional requirements (FRs) are satisfied independently of each other, and the information axiom selects the solution with the least information content. While the AD methodology of analyzing the problem by decomposing it into hierarchies of FRs and design parameters (DPs) helps in the detailed definition and analysis of the problem, the information axiom helps in eliminating waste by selecting the least complex design. However, AD guidelines concentrate more on problem definition than on solution generation. Although creating and optimizing solutions is a step in the AD methodology, it does not propose any specific techniques for generating accurate and efficient solutions.
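The information axiom mentioned above has a standard quantitative form in axiomatic design: each functional requirement FR_i has a probability p_i of being satisfied, the information content is I = Σ log2(1/p_i), and the design with the least total I is preferred. A minimal sketch of this selection rule follows; the candidate names and probabilities are illustrative, not taken from the paper.

```python
import math

def information_content(success_probabilities):
    """Information axiom: I = sum_i log2(1/p_i), where p_i is the
    probability that the design satisfies FR_i. Lower I is better."""
    return sum(math.log2(1.0 / p) for p in success_probabilities)

def select_least_information(candidates):
    """Pick the candidate design (name -> list of p_i) with the least
    total information content, i.e., the least complex design."""
    return min(candidates, key=lambda name: information_content(candidates[name]))

# Illustrative candidates: B satisfies FR1 with certainty, so it wins.
designs = {"A": [0.5, 0.5], "B": [1.0, 0.5]}
```

A design that satisfies every FR with probability 1 has I = 0, the theoretical ideal.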

3.1 The Synergistic Problem Solving Approach Based on TRIZ and AD
To overcome the drawbacks associated with using TRIZ and AD individually, methods based on the synergistic use of these tools have been suggested in the literature, such as those proposed by Zhang et al. (2004) and Shirwaiker and Okudan (2006). Both approaches make use of TRIZ within the AD framework. While Zhang et al. (2004) use TRIZ only for decoupling the design matrix if it is coupled, Shirwaiker and Okudan (2006) also use it while mapping between the functional and physical domains of the AD hierarchy. Additionally, Shirwaiker and Okudan (2006) suggest the use of the AD information axiom to evaluate all options at the end. This paper focuses on the use of the synergistic approach suggested by Shirwaiker and Okudan (2006) as a lean design tool. We assert that this approach contributes to leanness in design. From the lean design perspective, AD helps to eliminate the inefficiencies associated with complexity and time. By decomposing the main complex problem into smaller sets of problems, designers can specifically focus on the issues related to each specific set. This eliminates any complexity associated with the basic problem. It is also possible to integrate another concurrent engineering concept into this methodology – the concept of set-based design. Once a problem is decomposed into the FR hierarchy, design teams can work in parallel on each individual FR to create its respective DPs. The advantages of the set-based approach to design in the Toyota Production System have been discussed by Ward et al. (1994). Along with increasing the efficiency of the process, it also assists in the efficient utilization of human resources. The use of TRIZ concepts while developing the DPs and solutions to the smaller individual sets of problems will also eliminate the waste in time associated with traditional techniques like brainstorming.
Further, some of the TRIZ parameters are specifically oriented towards eliminating certain wastes. Weight of object (1 and 2), volume of object (7 and 8), waste of substance (23) and amount of substance (26) are parameters associated with material wastes. Waste of time (25) and productivity (39) are parameters related to time-based wastes. Complexity of device (36), complexity of control (37) and manufacturability (32) are related to complexity waste. At the end of the concept generation stage, the AD information axiom assists in selecting the final design with the least complexity. To summarize, from a lean perspective, the TRIZ and AD synergistic approach has the following advantages: (1) Accurate problem definition: the approach eliminates the inefficiencies associated with wrongly defining the problem (as discussed in section 2.1). (2) Detailed and faster analysis: the zigzagging and mapping of FRs and DPs help to account for even the minutest requirement of the design. (3) Accurate and faster solution generation: since TRIZ is used to generate the DPs and decouple the design matrix, the generated concepts are few but practical. (4) Elimination of wastes: the different tools used in the approach help in eliminating the wastes associated with time, material, complexity and utilization of resources.
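The parameter-to-waste grouping above amounts to a simple lookup table. The sketch below captures it directly; the dictionary and function names are illustrative helpers, while the parameter numbers are those listed in the text.

```python
# TRIZ engineering parameter number -> lean waste category it helps address,
# per the grouping given in the text.
PARAMETER_WASTE = {
    1: "material", 2: "material",      # weight of object
    7: "material", 8: "material",      # volume of object
    23: "material", 26: "material",    # waste/amount of substance
    25: "time", 39: "time",            # waste of time, productivity
    36: "complexity", 37: "complexity",  # complexity of device/control
    32: "complexity",                  # manufacturability
}

def wastes_addressed(parameters):
    """Return the set of lean waste categories touched by a list of
    TRIZ parameter numbers; unknown parameters are ignored."""
    return {PARAMETER_WASTE[p] for p in parameters if p in PARAMETER_WASTE}
```

For example, a contradiction formulated in terms of parameters 25 and 23 touches both time-based and material wastes.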

3.2 Applications of the Synergistic Approach in Eliminating Wastes
New concepts are emerging in construction technology. In a recent innovation, houses have been designed using scrap tires as the building material instead of conventional wood or bricks, which is very useful from a sustainability perspective. However, changing the building material necessitates a new design and building process. Using the first step of the synergistic approach, the problem needs to be defined and analyzed in terms of its basic functional requirement. While utilizing the scrap tires is the basic functional requirement in this case, the design parameter in the physical domain is building a house. This DP can be zigzagged into the functional domain in terms of three second-level FRs – foundation, walls and roof. Here, we review the development of the ‘walls’ DP from a lean perspective. The original idea for wall construction involved stacking tires in a pyramid form, filling the voids with a concrete mix, and finally covering the wall with a stucco layer for better aesthetics. However, the appearance of the walls would not be very attractive, and the process would be labor intensive considering that binding materials had to be filled in between every tire. A recommended solution for improving the aesthetics was to replace the entire tire with strips of tire tread instead. However, this did not solve the problems associated with the inconvenience of construction. The adhesive and concrete mix would still have to be filled in between every strip of tire tread to increase the wall strength. This was not only a waste of time and material but also resulted in improper utilization of human resources. In TRIZ terms, a physical contradiction exists: the concrete mix and adhesives must be used, to bind the strips together, yet should not be used, to improve the convenience of use.
Expressed as a technical contradiction: “As the strength (14) of the wall increases (improving parameter), the convenience of use (33) decreases (worsening parameter)”. The contradiction matrix suggests optical changes (32), composite materials (40), self-service (25) and separation (2). A unique design concept based on self-service (25) is being tested at Penn State University. In the new design, costly adhesives are replaced by precompression of the tire layers. The frictional forces created between the layers during compression accomplish the binding of the tread layers. Thus, the need for any external binding agents is completely eliminated. This is indeed a leaner design: savings have been achieved in time and materials and hence in cost. From a TRIZ perspective, the design is a step closer to ideality because a function is achieved without the use of any external resource. From an AD perspective, the design is acceptable since the functional requirement is satisfied with the least complexity. Similarly, the other DPs can be developed using the synergistic approach.
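The contradiction-matrix step used in this example can be mimicked with a lookup keyed on (improving, worsening) parameter pairs. The sketch below is populated with only the single cell used in the text (strength 14 versus convenience of use 33) and uses the principle names as the authors render them; a full implementation would carry the entire 39×39 matrix.

```python
# Partial contradiction matrix: (improving, worsening) -> principle numbers.
# Only the cell from the walls example is filled in; this is a sketch,
# not the complete Altshuller matrix.
CONTRADICTION_MATRIX = {
    (14, 33): [32, 40, 25, 2],
}

# Principle names as given in the text.
PRINCIPLE_NAMES = {32: "optical changes", 40: "composite materials",
                   25: "self-service", 2: "separation"}

def suggest_principles(improving, worsening):
    """Return the named inventive principles for a technical contradiction,
    or an empty list if the cell is not in this partial table."""
    return [PRINCIPLE_NAMES.get(p, str(p))
            for p in CONTRADICTION_MATRIX.get((improving, worsening), [])]
```

Looking up (14, 33) reproduces the four suggestions quoted in the text, of which self-service (25) led to the precompression design.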

4. Conclusion
Our review shows that research on lean design is in its preliminary stages and that more work is needed. This paper discusses how TRIZ and AD can serve as powerful means of attaining lean design. Through an innovative case study, it has been shown how a synergistic approach based on TRIZ and AD achieves leanness in design by eliminating the wastes associated with time, material, complexity and resource utilization, thereby improving productivity and cost.

5. References

Barb, R., 2003, http://www.isixsigma.com/dictionary/7_Wastes_Of_Lean466.htm
Cavallucci, D., Lutz, P., and Thiebaud, F., 2002, “Methodology for bringing the intuitive design method's framework into design activities”, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 216, no. 9, pp. 1303-1307
Cave, A., 2003, “Lean for all”, Metal Working Production, Vol. 147, no. 6, pp. 15
ETI Group, 2005, “Lean Design”, http://cpd.ogi.edu/LeanDesign.pdf
Foyer, P., 1995, “Smart design for lean manufacture”, IEE Colloquium (Digest), no. 151, pp. 4/1-4/4
Freire, J., and Alarcón, L., 2002, “Achieving Lean Design Process: Improvement Methodology”, Journal of Construction Engineering and Management, Vol. 128, no. 3, pp. 248-256
Haque, B., 2003, “Lean Engineering in the Aerospace Industry”, Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, Vol. 217, pp. 1409-1420
Hipple, J., 2003, “The Integration of TRIZ Problem Solving Techniques with other Problem Solving and Assessment Tools”, The TRIZ Journal, www.triz-journal.com
Kilpatrick, J., 2003, “Lean Principles”, http://www.mep.org/textfiles/LeanPrinciples.pdf
Kim, Y. S., and Cochran, D. S., 2000, “Reviewing TRIZ from the perspective of Axiomatic Design”, Journal of Engineering Design, Vol. 11, no. 1, pp. 79-94
Middleton, P., Flaxel, A., and Cookson, A., 2005, “Lean software management case study: Timberline Inc.”, Proceedings of Extreme Programming and Agile Processes in Software Engineering: 6th International Conference, pp. 1-9
Nicholas, J. M., 1998, Competitive Manufacturing Management, New York: Irwin McGraw-Hill
Okudan, G. E., and Shirwaiker, R. A., 2006, “A Multi-Stage Problem Formulation for Concept Selection for Improved Product Design”, PICMET’06, accepted
Popkin, J., and Kobe, K., 2006, “US Manufacturing Innovation at Risk”, report prepared for the Council of Manufacturing Associations and The Manufacturing Institute
Poppendieck, M., 2002, “Principles of Lean Thinking”, www.poppendieck.com/papers/LeanThinking.pdf
Russell, R. S., and Taylor, B. W., 1999, Operations Management, 2nd edition (Upper Saddle River, NJ: Prentice Hall)
Shirwaiker, R. A., and Okudan, G. E., 2006, “TRIZ and Axiomatic Design: A review of case-studies & their compatibility”, PICMET’06, accepted
VerDuft, J. L., 1999, “Lean Manufacturing Enterprise”, Annual Quality Congress Transactions, pp. 375-378
Ward, A. C., Liker, J. K., Sobek, D. K. II, and Cristiano, J. J., 1994, “Set-based concurrent engineering and Toyota”, American Society of Mechanical Engineers, Design Engineering Division (Publication) DE, Vol. 68, pp. 79-90
Womack, J. P., and Jones, D. T., 1996, Lean Thinking: Banish Waste and Create Wealth in Your Corporation (Simon and Schuster, London)
Zhang, R., Tan, R., and Cao, G., 2004, “Case-study in AD and TRIZ: A paper machine”, The TRIZ Journal, http://www.triz-journal.com

CONCEPTUAL DESIGN USING AXIOMATIC DESIGN IN A TRIZ FRAMEWORK

Madara Ogot Engineering Design Program, and The Department of Mechanical and Nuclear Engineering The Pennsylvania State University, University Park, PA 16802 [email protected]

Abstract
This paper explores the symbiotic relationship that can be established between axiomatic design and TRIZ, capitalizing on each method's strengths while simultaneously minimizing their weaknesses. Through a contextual example, the paper illustrates how the principles of the axiomatic design independence axiom can be utilized to select appropriate standard solutions once a physical contradiction has been identified. It concludes by showing ways to use the same AD principles to qualitatively evaluate generated designs.
Keywords: Axiomatic Design, Conceptual Design, TRIZ.
Nomenclature
AD Axiomatic Design
TRIZ Theory of Inventive Problem Solving
DP Design Parameter
FR Functional Requirement

1. Introduction
A broad view of the engineering design process reveals five steps that are generally followed: problem identification, problem formulation, concept generation, solution evaluation, and embodiment design. This paper extends earlier work that reviewed the use of TRIZ within an AD framework (Ogot, 2006) by looking at the reverse: reviewing the use of axiomatic design (AD) within a TRIZ framework and making implementation suggestions based on application similarities and differences found in the literature. The strength of AD lies in the problem identification and formulation steps, while TRIZ's main strengths are problem identification (making sure you are solving the correct problem) and concept generation. With reference to Figure 1, AD divides the design process into four major domains: customer, functional, physical and process. Our discussion will focus on the relationship between the functional domain and the physical domain, defined by the design functional requirements (FRs) and design parameters (DPs), respectively. Based on two design axioms (the information and independence axioms), AD provides an effective approach to problem formulation and clarification. The AD zig-zagging process is used to identify the FRs and the corresponding DPs relevant to the current problem. Ogot (2006) illustrated how the TRIZ system operator, in conjunction with EMS models, can be used to achieve this task. Once complete, AD design matrices (DM) that show the relationship between the FRs and the DPs can be constructed.

[FRs] = [DM][DPs] (1)

For brevity, the following abridged notation will be used throughout the paper:

|FRs| |DM| |DPs| (2)

The Independence Axiom states that the AD design matrix should be at least triangular, but ideally diagonal. The latter represents the ‘ideal’ design, where each FR is controlled by a single DP.
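The Independence Axiom test just described – diagonal matrices are uncoupled (ideal), triangular matrices are decoupled (acceptable), and anything else is coupled – can be written as a short check. A sketch follows, assuming the design matrix is given as a list of rows with 0/1 entries; the function name is illustrative.

```python
def classify_design_matrix(dm):
    """Classify an AD design matrix per the Independence Axiom:
    'uncoupled' if diagonal, 'decoupled' if triangular, 'coupled'
    otherwise. A non-square matrix can satisfy neither condition."""
    n = len(dm)
    if any(len(row) != n for row in dm):
        return "not square"
    off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
    if all(dm[i][j] == 0 for i, j in off_diag):
        return "uncoupled"   # the 'ideal' design: one DP per FR
    lower = all(dm[i][j] == 0 for i, j in off_diag if j > i)
    upper = all(dm[i][j] == 0 for i, j in off_diag if j < i)
    if lower or upper:
        return "decoupled"   # triangular: FRs can be satisfied in sequence
    return "coupled"
```

Applied later in the paper, the 3×2 matrix of Equation 3 would classify as not square, which is exactly the observation that motivates adding a third DP.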

Figure 1. Axiomatic Design domains

AD, however, does not provide ample guidance on how to achieve the conceptual solutions that solve the design problem. For example, once the problem has been formulated in terms of functional requirements and design parameters, and the resulting relationship between them is found to be coupled (bad) or too complex, AD does not provide ideas on how the design could be uncoupled or simplified, respectively. TRIZ, on the other hand, provides numerous concept generation tools that could be used for this purpose. From the TRIZ point of view, a very large number of conceptual solution directions can often be generated for the same problem using TRIZ's numerous tools (for example, trends of evolution and technical and physical contradictions). The number of solutions and solution paths often overwhelms TRIZ users. Although proper use of the Algorithm for Inventive Problem Solving (ARIZ) can significantly reduce this number, methods such as AD can help prioritize the suggested TRIZ solutions by giving higher priority to those that would help decouple the AD design matrix or reduce the information in the system. A holistic view of both methods shows that they have their strengths and weaknesses, but also that, collectively, they can significantly enhance early conceptual design. In the literature, most authors consider the use of TRIZ within an AD framework, primarily to decouple/uncouple AD design matrices or to develop new designs when AD design constraints are not met. For example, Ruihong et al. (2004) proposed an AD framework that incorporates TRIZ, illustrating the approach via the design of a paper machine. Similarly, Kan (2004) presents a discussion of uncoupling coupled and decoupled AD design matrices by recasting the coupled FRs as technical or physical contradictions; the redesign of a pile driver is used to illustrate the approach.
Kim and Cochran (2000) provide a comprehensive comparison of AD and TRIZ, especially between ideality and the AD design axioms, the TRIZ contradiction concepts and their applicability in AD, and su-field modeling and the AD zig-zagging process. Similar comparisons can be found in Karr (1998) and Norlund (1996). Yang and Zhang (2000) provide a tabular comparison of the common elements of the two methods. Hu, Yang and Taguchi (2000) illustrate how the two methods can be used within a Taguchi framework to enhance robust design. Very little is discussed in the literature, however, about the use of AD within a TRIZ framework. The contribution of this paper therefore is to illustrate how AD can be used to narrow down solution choices in TRIZ when confronted with a physical contradiction (much like the way the contradiction tables support technical contradictions), as well as to serve as a qualitative evaluation tool for generated TRIZ concepts. For a detailed description of axiomatic design, the reader is referred to Suh (2001). Similarly, Savransky (2000) and Orloff (2003) provide detailed introductions to TRIZ.

2. TRIZ Framework – AD Supporting Role

2.1 Simplified Steps for the Application of TRIZ Tools
A simplified flow chart illustrating the use of several common TRIZ tools is presented in Figure 2(a). It is by no means meant to be comprehensive, but provides a simplified framework to illustrate how AD tools can be used within TRIZ. The steps in the simplified algorithm are as follows:
1. Analyze the problem by defining the contradiction zones and creating an energy-material-signal (EMS) model (Ogot, 2005). This ensures that you understand the problem at hand and that you end up solving the right problem. In addition, at the end of this step you should have determined whether you have a physical or a technical contradiction(s). Define your Ideal Final Result.
2. If you believe you have a technical contradiction(s), formulate it in terms of the generalized engineering parameters. Once complete, use the contradiction matrices to seek the most probable design principles to solve the problem. Recall that the contradiction matrices list the most probable solutions to your problem. If none of the recommended design principles work, go through each of the other principles in search of a solution. If you find a solution, you are done.
3. If no solution is found in the previous step, or if the problem cannot be formulated as a technical contradiction, define the problem in terms of a physical contradiction. Redefine your Ideal Final Result in terms of the physical contradiction.
4. Apply the 76 Standards/Condensed Standards (Ogot, 2003) to seek a solution. If a solution is found, you are done.
5. If not, use the separation principles to separate the physical contradictions. Apply the 76 Standards/Condensed Standards to solve the new form of the problem. If a solution is found, you are done.
6. If not, revisit Step 1 and ensure the problem was defined correctly. Seek alternate forms of the contradictions and repeat all steps until a solution is found.
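The six steps above can be sketched as a control-flow skeleton. In the sketch, the boolean arguments are hypothetical stand-ins for the outcome of each TRIZ step (the real work of contradiction formulation and standards search is not modeled), so the function only traces which stage of the flow in Figure 2(a) would yield the solution.

```python
def triz_search(has_technical_contradiction, matrix_solves,
                standards_solve, separation_solves, max_iterations=3):
    """Trace the simplified TRIZ flow. Each boolean stands in for the
    outcome of the corresponding step; returns the name of the step that
    produced a solution, or None once the Step-6 loop is exhausted."""
    for _ in range(max_iterations):
        # Step 1: analyze problem, build EMS model, define Ideal Final Result.
        # Step 2: technical contradiction -> contradiction matrices.
        if has_technical_contradiction and matrix_solves:
            return "contradiction matrix"
        # Steps 3-4: recast as a physical contradiction, apply the
        # 76 Standards / Condensed Standards.
        if standards_solve:
            return "condensed standards"
        # Step 5: separation principles, then the standards again.
        if separation_solves:
            return "separation principles"
        # Step 6: revisit the problem definition and repeat.
    return None
```

The loop bound mirrors the fact that Step 6 sends the designer back to Step 1 rather than terminating.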

(a) (b) Figure 2. (a) Flow diagram of a simplified algorithm for use of three common TRIZ tools and (b) with the addition of AD tools

Figure 2(b) shows where AD tools can be integrated into the TRIZ framework. Details of the integration form the thrust of the paper.

2.2 The Condensed Standards
The 76 standard solutions are, to a large extent, based on the substance-field modeling method. In an effort to use the EMS models effectively, the 76 standard solutions were modified and articulated in terms of EMS models. In addition, as several authors have noted, the significant degree of repetition amongst the standard solutions has led them to suggest their own reduced versions (Soderlin, 2002; Orloff, 2003); Ogot (2005) likewise proposed a reduced set of 27 Condensed Standards. In addition to reducing the number of solutions, the Condensed Standards (a) use the language and jargon typical of engineering design, and (b) replace the substance-field models found in the original 76 solutions with EMS models. Further, the Condensed Standards reduce the classical 76 standard solutions from five classes (improving the system with little or no change, improving the system by changing the solution, system transitions, detection and measurement, and strategies for simplification) to three:
1. Condensed Standards I: Improving the system with little or no change
2. Condensed Standards II: Improving the system by changing the solution
3. Condensed Standards III: Detection and measurement
The complete list of Condensed Standards incorporating the EMS models can be found in Ogot (2005).

2.3 Contextual Example: Computer Hard Drive
An area of concern arises when the computer is off and receives a hard external knock. Without the hard drive disk spinning, the head can be knocked off its rest position and the data on the disk destroyed. In the rest position, the head is typically held in place by a magnetic latch. When the computer is powered on again, the airflow from the disk motion raises the head, and a permanent magnet/electromagnet system situated at the arm's axis of rotation (the pin) generates enough force to release the arm from the magnetic latch and move the head to wherever data needs to be written or read (Royzen, 1999). Starting with the first step in Figure 2, an EMS model of this scenario can be developed, as illustrated in Figure 3 (Ogot, 2003). In the model, one can track the sequence of events (the flow) from the moment the computer chassis receives a hard knock to the point where the read/write head damages (harmful effect) the disk surface. In the figure, the magnetic field is shown to be insufficient, and is therefore an area of concern to be addressed. In addition, several resources available in the system are included in the diagram. An obvious solution is to use a stronger magnetic latch. This, however, may present its own problem by making it difficult for the arm to be released during start-up. The two scenarios are modeled in Figure 4, where the top slot in the multiple-scenario symbols represents the hard knock scenario and the lower slot the computer start-up (Ogot, 2003). Note that by increasing the magnetic strength of the latch, a desirable effect is achieved with respect to reducing damage from external knocks, but an undesirable effect is produced during system start-up. With reference to Figure 2, a technical contradiction may not be as appropriate here as the next step, physical contradictions.
One could state that:
Physical contradiction #1: The magnetic field in the magnetic latch needs to be strong to hold the latch in place when the computer is turned off, to prevent the read/write head from damaging the disk surface, yet the field should be weak enough to allow arm release on computer start-up.
With reference to Figure 4, one could simply increase the strength of the voice coil (the electromagnet) that controls the arm. This would provide enough force to overcome the larger magnetic field in the latch. The downsides of this solution are that (1) a larger voice coil adds significant cost to the hard drive, and (2) it reduces the sensitivity/performance of the coil, which is required to accurately and rapidly position the arm (and by extension the head) at different positions over the disk surface. This solution yields yet another physical contradiction:
Physical contradiction #2: The strength of the voice coil needs to be high to overcome the stronger magnetic latch, but needs to be low to provide the desired sensitivity and performance.
Two solution paths are therefore available: (1) search the standard/condensed solutions or (2) use the separation principles first and then, if necessary, search the standards. A cursory look at the standards may be intimidating at first glance due to the large number of choices and possibilities. Unlike the 40 design principles, there is no contradiction table to provide a small subset of ‘best/likely’ solutions. Can AD principles therefore be used to serve this purpose?

2.4 Opportunities for AD in the TRIZ Framework
For the contextual example, three functional requirements can be defined:
1. FR1 – Passively hold the arm in place when the power is off
2. FR2 – Release the arm on power-on
3. FR3 – Control the arm during computer use

Figure 3. EMS model of the hard drive when the computer is turned off. A hard knock on the computer dislodges the arm, resulting in the head damaging the disk magnetic surface.
Figure 4. EMS model of the hard drive with a stronger magnetic latch.

Further, two design parameters can be identified:
1. DP1 – Strength of the permanent magnet (PM)
2. DP2 – Electromagnet (EM) in the arm controller
A possible AD design matrix can then be constructed with the form

FR1—Passively hold arm    | X    |    DP1—PM strength
FR2—Release arm           | X  X |    DP2—EM in arm controller    (3)
FR3—Control arm           |    X |

The ‘X’s indicate which DPs influence which FRs. For example, from Equation 3 the following observations can be made:
1. FR2 is influenced by two DPs
2. DP1 influences two FRs (FR1 and FR2)
3. DP2 influences two FRs (FR2 and FR3)
4. The matrix is not square, i.e., it can never be diagonal or triangular and will therefore never satisfy the independence axiom if left in its present form.
From an AD perspective, the design matrix must first be converted to a square matrix, by either adding a DP or removing an FR. Removal of an FR would adversely affect the desired functionality of the device, making the addition of a DP necessary. The resulting expression would therefore take the form

    FR1—Passively hold arm   [ X?  ?   ? ]   DP1—PM Strength or ??
    FR2—Release arm          [ X?  X?  ? ]   DP2—EM in arm controller or ??      (4)
    FR3—Control arm          [ ?   X?  ? ]   DP3—??

where ?? indicates that an existing DP can be changed or a new one sought where none exists, and ? indicates that an existing relationship can be changed or a new one established where none exists.

In addition to making the matrix square, AD seeks to de-couple FRs and DPs. Recall that, from a TRIZ perspective, when addressing the problem we were confronted with a large number of Condensed Standards from which to find a possible solution. With reference to the AD design matrix in Equation 4, and considering the need to add a DP to the problem description, a smaller subset of five standards (see Table 1), related to placing an additive in the system and most closely related to the desired design task, is extracted and explored for possible solutions. Use of the AD methodology can therefore cull the full list of standards to a smaller subset for physical contradiction design problems, much the same way the contradiction table works for technical contradictions. The placement of this AD intervention within the TRIZ framework was presented in Figure 2b.

2.5 Possible Solutions

Concept #1 – Based on Condensed Standard 1.3, a possible solution to physical contradiction #1 is to place the arm controller (and voice coil) on a spring-loaded, limited-rotation platform. The platform would also have an electromagnet that works against the spring force. With reference to Figure 5, when the computer is powered off, the arm is moved to the rest position by the arm controller, where the strong permanent magnet holds it securely in place. On turning the computer back on, the electromagnet in the limited-rotation platform is activated and moves the entire arm assembly (controller and arm) away from the permanent magnet – the electromagnet is strong enough to overcome the larger PM latch field – releasing the arm from the secured position. The arm controller then moves the arm to the active position. At the same time, the electromagnet on the rotation platform is turned off, and the spring returns the platform to its rest position. The spring also prevents any movement of the platform during the hard drive's active state – movement that would interfere with data transfer to the disc surface – as well as when the hard drive is turned off. The AD design matrix for this configuration based on Equation 2 is

Table 1. Reduced set of condensed standards that may be applicable to the hard drive example

#     Solution
1.1   Without changing the system, add a temporary or permanent, internal or external additive that may or may not be present in the system.
1.3   If a moderate amount of energy is insufficient, but higher energy is damaging, apply higher energy to an additive that acts on the original system.
2.1   Apply an additional energy source to the system.
2.2   Replace, or add to, energy existing in the system that is difficult to control with energy that is easier to control. From the Laws of Evolution, in order to improve controllability: mechanical to thermal to chemical to electric to magnetic to electromagnetic energy.
2.8   Add ferromagnetic materials (objects or liquids) and/or electrically generated magnetic fields (dynamic, variable or self-adjusting).

    FR1—Passively hold arm   [ X     ]   DP1—PM Strength
    FR2—Release arm          [ X  X  ]   DP3—EM/Spring Platform        (5)
    FR3—Control arm          [      X ]  DP2—EM in arm controller

A comparison of Equations 3 and 5 shows a significant improvement (in an AD sense) in that (1) the design matrix is now square, and (2) whereas DP1 and DP2 each influenced two FRs in the original design, only DP1 now influences two FRs. From the product point of view, the new design prevents damage to the disk surface due to hard knocks (strong permanent magnet), yet allows the arm to be released on start-up while maintaining the sensitivity of the voice coil.

Concept #2 – Based on Condensed Standard 2.2. The permanent magnetic latch, though passive as required, is always on, even when the arm is required to be released. In trying to decouple the latching (passive) and unlatching actions, a possible design would replace the permanent magnetic latch with a combination of an electromagnetically activated, spring-loaded pin and a latch mechanism; a possible manifestation is illustrated in Figure 6. With reference to the EMS model in the same figure, when the computer is turned off, the electromagnet is activated, raising the pin. The arm then moves to the rest position, at which point the electromagnet is turned off, and the spring-loaded pin returns to its original extended position, but this time resting in the hole in the latch attached to the arm. As a result, the arm is passively held in place by the pin. On computer start-up, the electromagnet raises the pin, allowing the arm controller to readily move the arm to the active position. The electromagnet is then turned off and the pin returns to its rest position. Based on Equation 4, the design matrix can be stated as

Figure 5. Schematic and EMS model of concept #1: limited rotation stage

Figure 6. Schematic and EMS model of concept #2: spring-loaded pin and latch mechanism

    FR1—Passively hold arm   [ X     ]   DP4—Pin/Spring/Latch mechanism
    FR2—Release arm          [ X  X  ]   DP3—EM in Pin mechanism       (6)
    FR3—Control arm          [      X ]  DP2—EM in arm controller

Similar to the first concept, a comparison of Equations 3 and 6 shows a significant improvement (in an AD sense) in that (1) the design matrix is now square, and (2) only DP2 influences two FRs.

2.6 Use of AD for Qualitative Evaluation Although significantly more concepts could be developed from the condensed standards in Table 1, generated concepts #1 and #2 are used to illustrate how AD tools can be further used to perform a qualitative evaluation of generated designs. Several points where the qualitative evaluation could occur within the TRIZ framework are shown in Figure 2b. A comparison of Equations 5 and 6 corresponding to Concepts 1 and 2, respectively, shows that the two designs are similar from the AD independence axiom perspective. If one however qualifies the relationships between the DPs and FRs as either helpful (positive) or detrimental (negative) to the product function, Equations 5 and 6 are recast as Equations 7 and 8, respectively.

    FR1—Passively hold arm   [ +     ]   DP1—PM Strength
    FR2—Release arm          [ -  +  ]   DP3—EM/Spring Platform        (7)
    FR3—Control arm          [      + ]  DP2—EM in arm controller

    FR1—Passively hold arm   [ +     ]   DP4—Pin/Spring/Latch mechanism
    FR2—Release arm          [ +  +  ]   DP3—EM in Pin mechanism       (8)
    FR3—Control arm          [      + ]  DP2—EM in arm controller

In Equation 7 (Concept #1), DP1 (PM strength) works counter to DP3 (EM/Spring platform) with respect to FR2 (release arm), providing a negative contribution to FR2. In Equation 8 (Concept #2), both DPs contribute positively to FR2. From a qualified AD independence axiom perspective, therefore, Concept #2, represented by Equation 8, is the better design. Note that one would have to take into account all other design considerations (e.g., size, cost, complexity) before settling on a final design.
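The signed comparison can also be sketched mechanically. The snippet below is an illustrative aid, not the authors' code: it encodes Equations 7 and 8 with +1 for a helpful relationship, -1 for a detrimental one, and 0 for no relationship, and counts detrimental couplings per concept.

```python
# Illustrative sketch (assumption, not from the paper): counting detrimental
# (negative) off-diagonal FR-DP relationships in a signed design matrix.
def negative_couplings(m):
    return sum(1 for i, row in enumerate(m)
                 for j, v in enumerate(row)
                 if i != j and v < 0)

# Equation 7 (Concept #1): the minus sign sits on the PM-strength entry of FR2,
# since the permanent magnet works against releasing the arm.
eq7 = [[+1,  0,  0],   # FR1 <- DP1 (PM strength)
       [-1, +1,  0],   # FR2 <- DP1 (hinders), DP3 (EM/spring platform)
       [ 0,  0, +1]]   # FR3 <- DP2 (EM in arm controller)

# Equation 8 (Concept #2): both DPs contribute positively to FR2.
eq8 = [[+1,  0,  0],   # FR1 <- DP4 (pin/spring/latch mechanism)
       [+1, +1,  0],   # FR2 <- DP4, DP3 (EM in pin mechanism)
       [ 0,  0, +1]]   # FR3 <- DP2 (EM in arm controller)

print(negative_couplings(eq7), negative_couplings(eq8))  # -> 1 0
```

Concept #2 has no detrimental couplings, matching the qualitative conclusion that it is the preferred design.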

3. Conclusion

Axiomatic design and TRIZ are two principle-based innovation methods that have recently gained popularity. This study illustrates how the AD independence axiom can be used within a TRIZ framework to (1) narrow down the list of possible standard solutions to apply when a physical contradiction is present, and (2) evaluate the resulting set of solutions using the same principles.

Acknowledgements

The author would like to thank the reviewers for their helpful comments, and the US National Science Foundation for partial support of this work through grant number CCLI-Educational Materials Dev 0442944.

References

Karr, T. (1998) Synthesis of the Principle-based and other Product Development Approaches with Emphasis on Concept Generation and Evaluation, Diploma Thesis, RWTH Aachen, WZL.
Kim, Y.-S. and Cochran, D.S. (2000) "Reviewing TRIZ from the Perspective of Axiomatic Design", Journal of Engineering Design, vol. 11, no. 1, pp. 79-94.
Norland, M. (1996) "An Information Framework for Engineering Design based on Axiomatic Design", Ph.D. Thesis, Stockholm: Royal Institute of Technology.
Ogot, M. (2005) "Problem Clarification in TRIZ using Energy-Material-Signal Models", Journal of TRIZ in Engineering Design, vol. 1, no. 1, pp. 27-39.
Ogot, M. (2006) "A Framework for Conceptual Design with Axiomatic Design and TRIZ", TRIZCON2006, Milwaukee, WI.
Orloff, M. (2003) Inventive Thinking through TRIZ: A Practical Guide, Springer, Berlin.
Royzen, Z. (1999) "Tool, Object, Product (TOP) Function Analysis", TRIZ Journal, no. 9.
Ruihang, Z., Runhua, T. and Guozhong, C. (2004) "Case Study in Axiomatic Design and TRIZ: A Paper Machine", TRIZ Journal, no. 3.
Savransky, S. (2000) Engineering of Creativity: Introduction to TRIZ Methodology of Inventive Problem Solving, Boca Raton: CRC Press.
Soderlin, P. (2002) "TRIZ the Simple Way", TRIZ Journal, no. 2.
Suh, N.P. (2001) Axiomatic Design: Advances and Applications, New York: Oxford University Press.
Yang, K. and Zhang, H. (2000) "A Comparison of TRIZ and Axiomatic Design", TRIZ Journal, no. 8.

LAW - ANTILAW Vladimir Petrov The TRIZ Association of Israel, President [email protected]

Abstract The basic thesis of this paper is that any system develops in two opposite directions. Therefore, each law of system development should be considered in two opposite directions. Keywords: TRIZ, Laws of Development, problem-solving

1. Introduction

Theories teaching that any object exists in two contradicting states and develops by balancing its opposite characteristics, such as the Chinese Yin-Yang philosophy, have been known for centuries. Similar ideas for the development of technical systems are suggested by the laws of dialectics [1]. Synectics uses comparable symbolic analogies, such as "transparent opacity" or "constant inconstancy". It is therefore worth considering whether a problem solver can utilize not only a principle, but also its anti-principle, while working on a system's evolution.

Many authors have been able to identify evolution tendencies opposite to those predicted by the TRIZ laws of development. Paired principles (a principle and its corresponding anti-principle) were researched by the author more than 20 years ago [2]. B. Zlotin demonstrated that the law of coordination has its antipode – the law of mismatch [3]. E. Zlotin, in her work "Laws of development of musical forms", concluded that new musical forms arise through elimination of the existing forms. The law of redundancy, developed by the author, can be considered the opposite of the law of ideality. The law of transformation of a system's scale predicts descending hierarchical development of a system – from the super-system to its sub-systems. This direction is opposite to the upward trend in a system's hierarchy defined by the original law of transition from the sub-system to the super-system level. As stated by the author in "Tendencies of System Development in Space" [8], a system can develop from a point to a line, to a plane, to a "Matrioshka" or virtual space – as well as in the opposite direction: from volume to a plane, line or point. A. Lyubomirsky arrived at the same conclusion in "Trend: Point – Line – Surface – Volume" [9]. The updated set of Standards contains both the law of system transition to the macro-level and the law of transition to the micro-level [10].
These examples suggest that every law of development can be matched with an antilaw – its antipode.

2. The law-antilaw system

The pairs of laws with corresponding antilaws can be systemized in the structure presented in Table 1.

Table 1: The law-antilaw pairs

Law                                                   | Antilaw
Increasing of the degree of ideality (Ideality)       | Decreasing of the degree of ideality (Anti-ideality); Redundancy
Increasing of the degree of dynamism (Dynamization)   | Decreasing of the degree of dynamism (Stabilization)
Coordination                                          | Mismatch
Transition to super-system                            | Transition to sub-system; Change of scale
Transition to micro-level                             | Transition to macro-level
Increasing of system controllability                  | Decreasing of system controllability
Increasing of the degree of Substance-Field           | Decreasing of the degree of Substance-Field
  interactions                                        |   interactions
Decreasing of the degree of coupling,                 | Increasing of the degree of coupling,
  increasing of the degree of fragmentation           |   decreasing of the degree of fragmentation
Increasing of information saturation                  | Decreasing of information saturation

The proposed systemic structure of laws and antilaws, in the author's opinion, expands the current understanding of the laws of development and establishes the need to examine system development in accordance with both laws and antilaws. The laws of development of technical systems are widely described in the TRIZ literature; therefore, the rest of this paper is devoted to examples of applications of antilaws.

3. Anti-ideality

The law of anti-ideality states that the number of functions fulfilled by a system is reduced to zero while infinite time and resources are utilized for the purpose. An anti-ideal system can be harmful. A special case of the law of anti-ideality is the law of redundancy.

Redundancy is a pattern according to which about 20% of a system's functions, elements and interfaces perform 80% of the work. When building workable systems, it is necessary to consider the fact that, in addition to the core 20% of functions, elements and interfaces necessary to perform the required work, another 80% of complementary, facilitating functions, elements and interfaces are needed, even though the latter fulfill only 20% of the required work. Therefore, whilst building a system, it is necessary to plan for some additional resources of material, energy and information: 20% of these resources will assure the main functionality, while the remaining 80% ascertain that additional and supporting functions are accomplished. A similar ratio is typical of any form of work: 20 percent of time and resources result in 80 to 90 percent of the functionality; complete functionality requires an investment of an additional 80 percent of time and resources.

Redundancy is particularly high in systems designed for special purposes. This is typical of security systems, life-saving equipment, medical equipment, military technology, devices for scientific research, sports equipment, luxury items, etc. Normally, such systems have redundant elements and utilize significant excess capacity in energy, medical preparations, ammunition, etc.

Ideality is aimed at reducing redundancy; anti-ideality leads to super-redundancy. Striking examples of extreme use of anti-ideality can be found in politics and the military. Unique objects, luxury and prestige items, besides their designated purposes, can be regarded as examples of anti-ideality when considering the material resources and efforts invested in their creation. Creating production facilities that harm the environment can also be considered an example of anti-ideality.

4. Transition to Macro-level

The law of system transition to the macro-level relates to the tendency of system parameters to increase in value. The development of some systems is directed at increasing the value of certain parameters. Let us consider some examples. The first television had a 2-inch screen; today one can purchase a Samsung TV with a 102-inch screen. The dimensions of street television screens are significantly greater: in January 2005, the biggest screen in the world – 457.2 m long and 13.7 m wide – was installed at a shopping center in Las Vegas. The general law of television screen development is shown in Figure 1.

Figure 1.

The described examples illustrate two opposite tendencies: transition to the macro-level drives the growth of screen dimensions, while transition to the micro-level rules the development of screen functionality. Today's super-liners are significantly bigger than the RMS Titanic; Table 2 compares the latter with the Queen Mary 2.

Table 2. RMS Titanic versus Queen Mary 2

Parameter                   RMS Titanic    Queen Mary 2
Load displacement, tons     52 310         150 000
Length, meters              259.83         345
Width, meters               28.19          41
Height, meters              55             72
Draught, meters             10.54          10
Number of decks             8              21
Passenger capacity          2201           2620
Service personnel           494            1254

Cities continuously grow in size and in population. A city-tower project has been planned with a height of 1228 m for 100,000 dwellers (see Figure 2).

Figure 2.

The world's highest suspension bridge was inaugurated in December 2004 at Millau, France, over the Tarn river valley, on the Paris–Barcelona road (Fig. 3). An earlier route at the same location required a lengthy and tiresome detour across the river valley. With a suspended length of 2.460 km and a width of 32 m, the bridge's automobile route is positioned 270 m above the ground. The tallest bridge pylon is 343 m high, that is, 20 m higher than the Eiffel tower. The bridge is built on 7 columns, each 87 m high.

Figure 3.

5. Transition to sub-system

The law of transition from system to sub-system is the antipode of the law of transition from system to super-system. The author has renamed this law the law of change of scale. A change in the scale of a technical system is realized by transition either from the super-system to the system level, or from the system to the sub-system level and further to matter. This antilaw is depicted in Figure 4.

Figure 4. Change of scale: super-system → system → sub-system → matter.

This transition can be demonstrated by the development of electronics. The first electronic systems, built on vacuum tubes, had massive chassis for holding the tubes, and all electrical connections were established by means of separate conductors (point-to-point wiring). Circuit components (capacitors, inductors, resistors, etc.) and the power supply module were of substantial dimensions and weight. Large tubes were then replaced by miniature "finger" tubes, which required less power that could be handled by smaller circuit components; this vastly reduced the size of the entire system. The next step was the revolutionary transition from miniature tubes to semiconductor devices (transistors, diodes, etc.), which made it possible to significantly reduce the sizes of circuit elements and the power used; yet the point-to-point wiring remained. The point-to-point wiring then gave way to the printed circuit board, which contained all the required circuit connections, and large systems were replaced by small, lightweight modules. All these steps correspond to transitions from super-system to system, or from system to sub-system. The transition from transistors to integrated and hybrid circuits made it possible to place various circuit elements, as well as their connections, within a single semiconductor chip: blocks were replaced by integrated circuits (transition from sub-system to matter). The invention of VLSI enabled the complete replacement of a computational unit, previously occupying significant space, by a single semiconductor crystal. Thus, the super-system completed its transition to matter.

6. Stabilization

The law of stabilization is the opposite of the law of increasing the degree of dynamism: a system, or some of its parameters, should remain unchanged in time or space. To achieve this, it is necessary to maintain conditions assuring that all the system parameters remain unchanged. The concept of dynamic stability can be considered a subset of this law. Several illustrations of the law can be found in maritime technology. An anchor keeps a vessel at a fixed location. Vessels are stabilized against rolling by a variety of means, including passive means (ballast) and active measures (valve- and pump-adjusted cisterns, automatically sliding horizontal rudders, reactive water jets, gyroscopes, etc.).

7. Increasing the degree of coupling

The law of increasing the degree of coupling is the antilaw of the law of decreasing the degree of coupling (specifically, the law of increasing the degree of fragmentation). The degree of coupling increases as depicted in Figure 8: from field (1), to gas (2), to aerosol (3), to liquid (4), to gel (5), to particles (6), to flexible objects (7), and to monolith (8).

Figure 8. Change in degree of coupling: (1) field, (2) gas, (3) aerosol, (4) liquid, (5) gel, (6) particles, (7) flexible objects, (8) monolith, (9) combination.

For example, reinforced concrete is widely used in construction. This technology was also used to create the pylons of the Millau bridge on the Paris–Barcelona motorway (Fig. 9).

8. Decreasing of system controllability

The law of decreasing system controllability identifies an evolution towards simple machines with no automation. This law is the opposite of the law of increasing system controllability. For example, an instrument for peeling oranges (fig. 10) consists of a single cast-plastic element. It is small, simple, and convenient to use. Such instruments have appeared in the past (for example, the can opener and the bottle opener) and will appear in the future.

The law of decreasing the degree of Substance-Field interactions and the law of decreasing information saturation can be considered sub-laws of decreasing system controllability. The law of decreasing the degree of Substance-Field interactions applies to objects comprising a single substance or a single field. First of all, the law can be applied to the simplest things, represented by a single item; single-piece cast plastic or metal items are good examples. Natural resources can also be used.

The law of decreasing information saturation is the opposite of the law of increasing information saturation. The latter aims at eliminating humans from technical systems (that is, at the mechanization, automation and computerization of technical systems). The history of the development of technical systems, though, gives many examples of manual labor processes existing alongside computerization. Moreover, various tools used by manual laborers are being improved, simplified and gradually made more convenient to use. This can also be looked at as a result of the law of ideality: an ideal system should consist of few elements. Hand-held tools, such as a hammer, a saw, a knife, a crowbar and a shovel, represent good examples of such simple systems.

9. Conclusion

The system of laws and antilaws described in this paper complements the existing system of laws and enables more accurate forecasting of system development, including opposing pathways of system development and combinations of these pathways.

10. References
1. Petrov, V.M., Laws of dialectics in technological creativity. Leningrad, 1978 (manuscript, Russian). This system of principles was first published in: Zhukov, R.F., Petrov, V.M., Modern methods of scientific and technological creativity (using the ship-building industry as example). Leningrad: IPK SP, 1980, p. 88, pp. 53-57. Petrov, V., Laws of dialectics of technical system development. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-03-dialekt.pdf
2. Petrov, V.M., Paired principles. Leningrad, 1974 (manuscript, Russian). This system of principles was first published in: Zhukov, R.F., Petrov, V.M., Modern methods of scientific and technological creativity (using the ship-building industry as example). Leningrad: IPK SP, 1980, p. 308. http://www.trizminsk.org/e/212002.htm
3. Petrov, V.M., Laws of dialectics in technological creativity. Leningrad, 1978 (manuscript, Russian). First published in: Zhukov, R.F., Petrov, V.M., Modern methods of scientific and technological creativity (using the ship-building industry as example). Leningrad: IPK SP, 1980, p. 88, pp. 53-57. Petrov, V., Laws of dialectics of technical system development. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-03-dialekt.pdf
4. Altschuller, G.S., Zlotin, B.L., Zussman, A.B., Filatov, V.I., Searching for new ideas: From inspiration to technology (Theory and practice of solving inventive problems). Kishinev: Kartya Moldoviyanske, 1989, p. 381 (Russian).
5. Zlotin, E.S., Musical form development patterns. Leningrad, 1985. Technology Creativity Journal, no. 1, 1999 (Russian). http://www.trizminsk.org/e/245003.htm
6. Altschuller, G.S., Zlotin, B.K., Zussman, A.B., Filatov, V.I., Searching for new ideas: From inspiration to technology (Theory and practice of solving inventive problems). Kishinev: Kartya Moldoviyanske, 1989, p. 381 (Russian).
7. Zlotin, E.S., Musical form development patterns. Leningrad, 1985. Technology Creativity Journal, no. 1, 1999 (Russian). http://www.trizminsk.org/e/245003.htm
8. Petrov, V.M., The law of redundancy. Leningrad, 1981 (manuscript, Russian). Petrov, V., The laws of technical system organization. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-06-organ-ts.pdf
9. Petrov, V.M., The law of scale change. Leningrad, 1981 (manuscript, Russian). Petrov, V., The law of system transition from macro- to micro-level. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-12-microlevel.pdf
10. Petrov, V., Tendencies of system development in space. Leningrad, 1987 (manuscript, Russian). Petrov, V.M., A system of laws of technical development. Tel Aviv, 2002, pp. 6-7. http://trizland.com/trizba.php?id=108
11. Lyubomirsky, A.L., The trend of Point – Line – Plane – Volume. TRIZ Readings, Proceedings of the "MATRIZ Fest 2005" conference, St. Petersburg, 2005 (Russian). http://www.metodolog.ru/00514/00514.html
12. Petrov, V.M., An extended system of standards. Proceedings of the "MATRIZ Fest 2005" conference, St. Petersburg, 2005. http://www.metodolog.ru/00462/00462.html
13. Petrov, V.M., Patterns of technical system development. Methodology and Methods of Creativity, Novosibirsk, 1984, pp. 52-54 (Russian). Petrov, V., Technical system change of scale. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-15-masshtab.pdf
14. Petrov, V.M., Increasing of the degree of fragmentation. Leningrad, 1974 (manuscript, Russian). Petrov, V., Increasing of the degree of fragmentation. Tel Aviv, 2002. http://www.trizland.ru/trizba/pdf-books/zrts-13-droblenie.pdf
15. Petrov, V.M., The law of increasing system controllability. Leningrad, 1982 (manuscript, Russian). Petrov, V., The law of increasing system controllability. Tel Aviv, 2002. http://www.trizland.ru/trizba/pdf-books/zrts-18-upravl.pdf
16. Petrov, V.M., The law of increasing information saturation. Leningrad, 1982 (manuscript, Russian). Petrov, V., The law of increasing system controllability. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-18-upravl.pdf
17. Petrov, V.M., Patterns of technical system development. Methodology and Methods of Creativity, Novosibirsk, 1984, pp. 52-54 (Russian). Petrov, V., Technical system change of scale. Tel Aviv, 2002 (Russian). http://www.trizland.ru/trizba/pdf-books/zrts-15-masshtab.pdf
