Using OpenEXR and the Color Transformation Language in Digital Motion Picture Production

August 2, 2007

Florian Kainz, Industrial Light & Magic
on behalf of the Academy of Motion Picture Arts & Sciences' Science and Technology Council Image Interchange Framework Committee

Abstract

We present a framework for the interchange of digital images among multiple companies working on a motion picture production. The framework makes it possible to unambiguously describe what the digital images are intended to look like on the screen, without baking this information into the images' pixels early in the production pipeline. This makes it possible to make decisions about a movie's "look" as early as on the set or on location, while retaining the capability to revise this look extensively later, for example, in the Digital Intermediate process. Visual effects work can be performed on "scene-linear" images, ensuring that 2D compositing and 3D rendering calculations are performed on physically meaningful data. This framework requires that companies exchange two kinds of data: images, typically in the form of OpenEXR files, and color transforms, represented as programs written in a Color Transformation Language. The framework described here is one possible implementation of concepts that have been developed by the Academy of Motion Picture Arts and Sciences' Image Interchange Framework committee.

Introduction

In late 2004 the Science and Technology Council of the Academy of Motion Picture Arts and Sciences formed an Image Interchange Framework committee. The goal of the committee is to define a conceptual framework, file formats, and recommended practices related to color management and the exchange of digital images during motion picture production and archiving. This work is ongoing, and the committee has not yet made any official recommendations.

In addition, the Science and Technology Council has recently published an open source interpreter for the Color Transformation Language (CTL), a programming language that has been designed for describing and implementing the operations that occur when images are moved from one color space to another during digital motion picture production. A reference manual for CTL has been submitted to the Society of Motion Picture and Television Engineers (SMPTE) for publication as a Registered Disclosure Document.

In the present document we give an overview of the conceptual framework that has so far been developed by the Image Interchange Framework committee, and we outline how these concepts might be implemented using the OpenEXR file format to exchange images, and CTL programs to exchange color transforms.

At this time the committee's work is not complete, and several aspects of what is described below are not fully resolved. This document does not attempt to define a standard. The goal of this document is to encourage interested parties to experiment with CTL and its use in actual production workflows, and to provide feedback to the author. However, the text below should not be regarded as a draft of what the Image Interchange Framework committee may recommend in the future. The committee's final recommendations may differ significantly from what is described here.

Source code for OpenEXR file I/O libraries, an interpreter for CTL, and CTL-enabled image viewing programs, as well as documentation for OpenEXR and CTL, are available online. For details see the URLs at the end of this document.

Scope

The production of a motion picture typically involves multiple companies and individuals who exchange film and digital images with each other. A good image interchange framework should at least define a common digital image file format, methods for converting images between film and digital representations, and a color management scheme to ensure that the meaning of the pixels in image files is well defined and that when an image is displayed it looks the same, no matter where it is displayed. In addition, the participants in a movie production have other requirements that are specific to the tasks they perform:

On the Set and on Location

While shooting a motion picture with a digital camera, the director of photography must be able to establish the look of the movie, and to view this look on a monitor. At the same time, the director of photography must not be forced to make final decisions about the look on the set. It must be possible to revise and change the look, perhaps drastically, during the later stages of the movie production pipeline.

Digital Intermediate Process

During the Digital Intermediate (DI) process images are color-corrected, scaled, and cropped. Simple compositing may also occur. The output of the DI process is the final movie; it looks as desired when viewed on the right display and with proper color rendering. The DI process should be as simple as possible. The pixels in the images are adjusted until they look right, resulting in the finished product.

Visual Effects

Visual effects production combines multiple image elements into composite images. Some elements are computer-generated; others are "practical," that is, they have been shot with a film or digital camera. A workable interchange framework must support the production and exchange of computer-generated elements as well as combining elements from multiple sources into believable composites.

Visual effects production typically happens before the DI process. The final look of a movie is known only approximately, and it may still change. Destructive color corrections that would restrict what kind of look can be achieved later in the DI process must be avoided.

Animation

An animated motion picture may be entirely generated via 3D computer graphics. The output of the 3D rendering software may go directly to film or to a Digital Cinema Distribution Master (DCDM), without a separate DI process. This workflow must be supported.

Film Input and Output

A digital motion picture production process must be able to exchange images with a traditional film-based production process. Conversion between film and digital images must be supported.

Video Input and Output

A digital motion picture production process must be able to import and export video images as well as various existing digital image file formats, for example, JPEG.

Digital Archival

When the final digital images for a motion picture are archived, it must be possible to reconstruct their look from the pixel data alone, without referring to color management metadata. This means that there must be a fixed, agreed-upon relationship between pixel values in image files and colors on a well-known reference display. (The rationale for this requirement is that archive maintenance procedures such as conversion to a different format may not preserve metadata related to color management. It should be possible to display the images as originally intended even if metadata are lost.)

An Image Interchange Framework

Output-Referred Images

Image interchange would be as simple as possible if the pixels in a digital image file directly represented the colors that are shown on a reference display, for example, on a theater screen. Image files would be self-descriptive, and no metadata related to color reproduction would have to be exchanged. However, two requirements make such an "output-referred" image representation undesirable for motion picture production:

• While it should be possible to make decisions about the "look" of a motion picture early, for example, while recording original images on the set, it should also be possible to revise those decisions later, for example, during the Digital Intermediate process.

• Many movies include visual effects whose production requires 3D computer graphics and 2D compositing. Those operations should be performed on an image representation that allows the mathematical operations employed by computer graphics and compositing to produce correct results.

Motion pictures have a distinctive "film look". The images that are shown on a theater screen are not faithful representations of the depicted scenes. An on-screen image typically has a higher contrast and more saturated colors than the original scene. At the same time the dynamic range and the overall brightness of a projected image are usually much lower than in the original scene, and the projected image may have a smaller color gamut. The dynamic range, maximum brightness and color gamut of projected images are limited by the projection technology, but contrast, color saturation, and various other aspects of tone and color reproduction are subject to artistic control, and differ between movies or even between scenes in a single movie.

Processing an image to impart the desired look to it generally loses information. Pixels whose brightness levels or hues are outside the dynamic range or color gamut of the display system are replaced with values that can be displayed.
For example, very dark or very bright pixels are replaced with values that correspond to the display's white or black. Information that has been lost cannot be recovered later. In a system where the pixels directly represent the on-screen colors this loss of information makes it difficult to revise the look of an image after an initial look has been baked into the pixels. Such a change tends to significantly degrade the quality of the image.

Another problem with directly storing on-screen colors in the pixels of digital images arises with visual effects: the mathematical operations implemented in 3D computer graphics rendering and 2D compositing packages assume that the pixel values in the images are closely related to light values in the depicted scene. If this assumption turns out to be incorrect, the 3D rendering and 2D compositing calculations do not work correctly, and producing believable visual effects shots becomes more difficult than necessary.
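The irreversibility described above is easy to demonstrate. In this minimal sketch (the [0, 1] display code-value range is an assumption for illustration, not part of the framework), two very different scene brightnesses clamp to the same displayable value, so no later transform can tell them apart:

```cpp
#include <algorithm>

// Hypothetical display range: code values in [0, 1]. Once an
// out-of-range scene value has been clamped, every value above the
// maximum maps to the same code value; the original brightness is lost.
float clampToDisplay(float v)
{
    return std::min(std::max(v, 0.0f), 1.0f);
}
```

A scene value of 4.0 and one of 8.0 both become 1.0; inverting the clamp is impossible.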

Scene-Referred Images

Instead of an output-referred image representation, one might choose a "scene-referred" representation, where the values stored in the pixels of an image directly represent the amount of light the camera receives from the corresponding objects in the depicted scene. With such a system, the desired on-screen look of an image is stored not in the pixels, but in separate "color rendering" metadata that describe how colors in the image files map to colors on the screen.

In such a system, the look of an image would be determined entirely by the color rendering metadata. Decisions about the look of the images could be revised at any time during the production pipeline, without any loss in image quality, simply by changing the color rendering metadata. In addition, 3D computer graphics and 2D compositing math would work very well because the built-in assumptions about relationships between pixel values and light in the depicted scene are correct.

A system where all images must be scene-referred would not work very well for motion picture production. The system would limit pixel modifications to operations that maintain the scene-referred state of the images. Discussions in the Image Interchange Framework committee have shown that this restriction quickly leads to conceptual difficulties. Resizing or cropping would clearly be allowed, since those operations maintain the relationship between scene colors and pixel values. Color balancing an image by multiplying each image channel by a constant value would probably be allowed, too, since the resulting image still depicts a physically plausible scene, although not the one that was actually present on the set or on location. However, other common and necessary operations, such as selectively altering the color of an individual object in the scene, stretch the definition of a scene-referred image to the point where that definition becomes meaningless.

ACES and Indirect-Output-Referred Images

An image interchange framework becomes much more manageable if we back off from the demand that image files be strictly scene-referred. The following describes a system where images live in an "Academy Color Encoding Space" or ACES.

The definition of the ACES is somewhat indirect: ACES is a color encoding space whose dynamic range and color gamut are conceptually unlimited. Pixels can be arbitrarily bright or dark, and they can represent any color. ACES images may or may not be scene-referred. In order to display an ACES image, a "rendering transform" (RT) and then an "output device transform" (ODT) are applied to the pixels. The combined effect of the RT and the ODT is to apply an overall "look" to an image: colors become more saturated, contrast increases, and the dynamic range and color gamut are limited. The RT and the ODT are chosen so that applying the transforms to a typical scene-referred image and displaying the result produces an on-screen image that looks pleasing and fairly natural (but as mentioned above, ACES images are not required to be scene-referred).

Essentially, an ACES image is one that looks right after an RT and an ODT have been applied to it. The Image Interchange Framework committee distinguishes this type of image from scene-referred and output-referred images by calling it "indirect-output-referred."

With ACES pixel data, the dynamic range and color gamut limiting operations that are part of producing the look of an image still lose information; applying the RT and ODT is not a reversible operation, but the resulting "color-rendered" image exists only temporarily, for display on a screen. This color-rendered image is not saved in an image file.

Since color rendering is performed on the fly every time an ACES image is displayed, it is typically not necessary to apply information-losing operations to the ACES pixels in order to achieve the desired look for an image.
Since the RT and ODT are chosen such that a typical image already looks pleasing, achieving exactly the desired look requires only minor tweaking of the pixel values. Tweaking is usually nondestructive in the sense that, when the pixels are modified, there exists an inverse modification that, except for rounding errors, restores the original image. Because they are nondestructive, look adjustments made early on can be revised late in a movie's production pipeline, for example, during the DI process.

ACES pixel data can also work well for visual effects production. For visual effects, images should ideally be in a state we will call "scene-linear." (We avoid terms such as "scene-referred" and "focal-plane-referred", whose usual definitions are too narrow for our purposes.)

An image is scene-linear if the values in the pixels are proportional to the amount of light that the camera receives from the corresponding objects in the scene. The pixels in a scene-linear image may represent the actual colors in the scene, but they don't have to. Linear operations such as scaling an image channel by a constant number (as in color timing, white balancing and exposure correction) change the image but maintain the linear relationship between the pixel values and light in the depicted scene.

When digital or film images are imported into the ACES, it is usually desirable that they are initially made as close to scene-linear as possible. Scene-linear images provide the widest latitude for later tweaking, and they are ideal for visual effects work. Color corrections during the DI process tend to make images non-linear and thus less suitable for visual effects. Whenever possible, visual effects should make use of scene-linear images that have not been color-corrected, except for white balancing and exposure adjustments.

In many cases producing scene-linear ACES images is relatively straightforward: The tonal response curves of the sensors in digital cameras are inherently linear. Digital cameras should be able to produce ACES images that are scene-linear with fairly good accuracy, although exact scene colorimetry may not necessarily be maintained.
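The nondestructive tweaking described above can be sketched with an exposure adjustment, one of the simplest look tweaks. The operation and its names are illustrative, not prescribed by the framework; the point is only that an inverse exists and restores the original pixel up to rounding error:

```cpp
#include <cmath>

// Sketch of a nondestructive look tweak on ACES pixels: an exposure
// adjustment by a number of photographic stops.
float tweakExposure(float v, float stops)
{
    return v * std::pow(2.0f, stops);    // scale by 2^stops
}

// The inverse modification: except for floating-point rounding errors,
// untweakExposure(tweakExposure(v, s), s) == v.
float untweakExposure(float v, float stops)
{
    return v * std::pow(2.0f, -stops);
}
```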

Scene-linear ACES images produced by scanning film negatives are usually less accurate than those produced by digital cameras. Film stocks have known nominal response curves, but the actual curves vary depending on which laboratory develops the film, and on what day. In addition, ACES images are often derived from film scans with the assumption that the film response curves are linear over their entire usable range, ignoring the film's toe and shoulder as well as non-linear interactions between different color layers in the film.

3D computer graphics rendering software internally works in a linear color space, and images generated by 3D rendering, such as scenes for computer-animated movies or elements for visual effects shots, are exactly scene-linear.

In some cases one may not have enough information to produce scene-linear pixel data when an image is imported into the ACES. For example, it may be necessary to scan film negatives that have not been sufficiently characterized, or to import output-referred digital images such as video, JPEG files or scanned film prints.

Overall Pipeline

In a nutshell, we present an image interchange framework that works as follows:

• Original images from digital cameras, scanned film negatives, and 3D computer graphics are imported into the ACES.

• All digital image files contain ACES pixel data. Image processing under user control, such as compositing, color correction or geometric transformation, operates on ACES images and generates new ACES image files.

• Displaying an ACES image on an output device requires color rendering to achieve the desired look.

The following sections describe the framework in more detail. The text introduces a number of new names for color encoding spaces and for transforms that convert images between those spaces. For a schematic diagram that shows the color spaces and transforms, see Figure 1.

It should be pointed out that implementing the proposed framework is not entirely trivial. However, once implemented, the system should be easy to use. Details, such as what color transforms must be applied to an image and when, can largely be hidden from artists who work with images every day. Most artists will never have to write a color transform or look at a transform's source code.

Input

An input device, such as a digital camera, produces image data in a device-specific format, for example, raw samples from a Bayer mosaic sensor. The data are not necessarily calibrated; two units of the same camera model may produce slightly different output.

An Input Device Calibration Transform, or IDCT, converts the raw image data into a representation called Input Device Color Encoding Space or IDCES. The IDCT is specific to an individual device.

The IDCES is specific to the type of input device. For example, IDCES pixels from a film scanner may contain printing density values, but IDCES pixels from a digital camera will typically contain RGB values related to the spectral sensitivities for the camera model.

IDCES data are calibrated. For each type of input device there is a defined relationship between IDCES data and the spectral power associated with the pixel values of the captured image. This relationship is the same for all units of a particular model of input device. Two different units of the same digital camera model may vary in their sensor output, but the IDCTs for the two units compensate for the variation.

An Input Device Transform, or IDT, converts the IDCES pixels to scene-linear ACES pixels. The IDT is specific to a particular input device model, but it is the same for all units of this model.
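The input side of the pipeline is a composition of two transforms. The sketch below captures only the order of operations from the text (raw device data, then IDCT into IDCES, then IDT into ACES); the example transform bodies are invented placeholders, not real device characterizations:

```cpp
#include <cmath>
#include <functional>

struct Pixel { float r, g, b; };
using Transform = std::function<Pixel(Pixel)>;

// Order of operations: raw -> IDCT -> IDCES -> IDT -> scene-linear ACES.
Pixel importPixel(const Pixel &raw, const Transform &idct, const Transform &idt)
{
    Pixel idces = idct(raw);   // unit-specific calibration into IDCES
    return idt(idces);         // model-specific conversion to ACES
}

// Illustrative IDCT: subtract a black level and apply a per-unit gain.
Pixel exampleIdct(Pixel p)
{
    auto cal = [](float v) { return (v - 0.1f) * 1.25f; };
    return { cal(p.r), cal(p.g), cal(p.b) };
}

// Illustrative IDT: a plain scale standing in for the model-specific
// conversion to scene-linear ACES.
Pixel exampleIdt(Pixel p)
{
    return { p.r * 2.0f, p.g * 2.0f, p.b * 2.0f };
}
```

Because the IDCT absorbs unit-to-unit variation, two units of the same camera model can share one `exampleIdt` while each carries its own calibration.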

Color Rendering

A Rendering Transform, or RT, performs color rendering by transforming ACES pixels into an Output Color Encoding Space, or OCES. OCES pixel data represent CIE 1931 XYZ colors on the screen of a reference display. Like the ACES, the OCES has an unlimited gamut and dynamic range.

The reference display is the display on which the final movie will be presented to viewers. The reference display may be a screen in a dark theater, illuminated by a digital projector or by a projected film print, or it may be a video monitor. The reference display is indicated by the choice of the Output Device Transform, or ODT (see below).

The RT converts ACES images into OCES images that look pleasing when reproduced on the reference display.

The Reference Rendering Transform

Developing a good rendering transform is not trivial, and most movie productions do not require project-specific rendering transforms. The Image Interchange Framework committee intends to provide a special Reference Rendering Transform, or RRT. The RRT has been carefully designed to make ACES images that are approximately scene-linear look pleasing and "film-like" when displayed on a screen.

In addition to producing a pleasing look, the RRT has been designed to be invertible. Inverting the rendering transform is necessary for importing output-referred material, such as video and JPEG images, into the ACES. Invertibility is also required in some cases to create images that are suitable for long-term archiving.

It is expected that most movie production projects will use the RRT for color rendering. However, for some projects, the RRT will be replaced by an alternative RT. For example, 3D computer animation is sometimes done such that displaying the resulting ACES images directly on the reference display yields the intended result. In this case color rendering is not needed and the RT is an identity transform.

Making the RRT the default for most movie productions reduces the chances of displaying images incorrectly if metadata are lost or not handled correctly. In the absence of metadata that indicate an alternative rendering transform, image display software and hardware will color-render images using the RRT, and in most cases this will be the correct choice.

Output

When an image is displayed, OCES pixel colors must be converted into pixels for the actual output device that is used to display the image.

An Output Device Transform, or ODT, transforms OCES pixel data into an Output Device Color Encoding Space, or ODCES. The ODT and the ODCES are suitable for a particular type of output device.

The ODT limits the color gamut by clamping out-of-range ODCES data. The ODT may perform gamut mapping to reduce artifacts that are introduced by limiting the gamut. Taking the capabilities and the viewing environment of the output device into account, the ODT may also try to achieve a closer color appearance match between the output device and the OCES colorimetry.

An Output Device Calibration Transform, or ODCT, transforms the ODCES pixel data into values that can be sent to a specific output device unit. The ODCT performs minor adjustments of the ODCES pixel values in order to compensate for differences between different units. The ODCT is specific to a particular unit.

For example, the ODT for an sRGB monitor produces data suitable for display on an ideal monitor that conforms exactly to the sRGB specification. The ODCT for an actual monitor compensates for the differences between the actual and the ideal monitor.
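A minimal sketch of what an ODT for the sRGB example above might do: convert OCES-style CIE XYZ to linear sRGB with the standard matrix from IEC 61966-2-1, clamp out-of-range values to the display gamut, then apply the sRGB transfer curve. This is an illustration of the clamping-plus-encoding structure, not the committee's ODT (which may also perform gamut mapping and appearance adjustments):

```cpp
#include <algorithm>
#include <cmath>

struct XYZ { float X, Y, Z; };
struct RGB { float r, g, b; };

// sRGB transfer curve, applied after clamping to the displayable range.
float srgbEncode(float c)
{
    c = std::min(std::max(c, 0.0f), 1.0f);   // clamp to display gamut
    return c <= 0.0031308f ? 12.92f * c
                           : 1.055f * std::pow(c, 1.0f / 2.4f) - 0.055f;
}

// Simplified ODT for an ideal sRGB monitor: XYZ -> linear sRGB (D65
// matrix per IEC 61966-2-1) -> clamp -> transfer curve.
RGB odtSrgb(const XYZ &p)
{
    float r =  3.2406f * p.X - 1.5372f * p.Y - 0.4986f * p.Z;
    float g = -0.9689f * p.X + 1.8758f * p.Y + 0.0415f * p.Z;
    float b =  0.0557f * p.X - 0.2040f * p.Y + 1.0570f * p.Z;
    return { srgbEncode(r), srgbEncode(g), srgbEncode(b) };
}
```

The D65 white point maps to display white and black maps to display black; anything brighter than the display maximum is clamped, which is exactly the information-losing step the framework confines to display time.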

Specific Input and Output Devices

Digital Camera

This section describes recording images for a motion picture with a hypothetical digital camera that directly supports the image interchange framework described in this document. (See Figure 2.) At this time there exists no camera with built-in support for our framework, but such a camera could reasonably be constructed. With existing digital cameras our framework can be supported by switching the camera to "raw" or "data" mode and implementing the functions described below in software that runs on a computer that is connected to the camera.

Our hypothetical camera captures images using a Bayer-pattern mosaic image sensor. The camera's built-in processor applies the IDCT to the raw image data. The IDCT includes demosaicing, conversion to a camera-model-specific scene-linear IDCES, as well as any corrections that may be needed to compensate for deviations of the sensor and analog electronics from the camera's specification.

An IDT, which may be applied by the camera or by supporting software, converts IDCES data to ACES data. The IDT will typically be specific to the scene illumination and any white-balancing filters used on the camera lens. The IDT may be selectable to allow tweaking of the camera color analysis characteristics, although in all cases the output of the IDT must be scene-linear.

The camera or software outputs scene-linear ACES images, which are stored on a recording medium such as tape or disk.

The director of photography (or anyone else who uses the camera) wants to preview the recorded images, and most likely he or she wants to tweak the look of the images, for example, by increasing contrast and color saturation.
Either the digital camera has a built-in preview monitor or it can be connected to an external preview monitor that allows the user to see a color-rendered version of the recorded ACES images. The camera also has a number of user-adjustable controls or "knobs" that, in combination with the RT, affect the overall color rendering or look of the image. (Of course the user-adjustable controls are not necessarily implemented as actual knobs and switches on the camera; they may be user interface elements on an external camera controller unit or in supporting software running on a computer that is connected to the camera.)

The effect of the camera's knobs is implemented in a Look Modification Transform, or LMT, which operates on the ACES images that are output by the IDT. The LMT is parameterized; its behavior depends on the knob settings. The LMT outputs a new ACES image, which is then color-rendered by the RRT. The resulting OCES data are displayed on the preview monitor, using an ODT and an ODCT as described above.

The LMT, RRT, ODT and ODCT are built into the camera or supporting software, and they process the scene-linear ACES images for display on the preview monitor. The preview images are not recorded. The settings of the camera's knobs and an identifier for the LMT are, however, recorded as metadata in the scene-linear ACES images.

The LMT is specific to a particular camera model, but independent of the knob settings with which it is used. The LMT changes only when the camera receives a hardware or software upgrade. Therefore it is not necessary to store the actual LMT along with the ACES images; a unique identifier is enough. The camera identifies the LMT by storing a string such as "Acme_Inc_Model_5_Version_2_20" in the image files, and the camera manufacturer makes a portable representation of the LMT available to users.

Recording scene-linear ACES images together with the knob settings and an LMT identifier allows the intended look of the images to be re-created. At the same time, recording images this way directly supports visual effects and other operations that are best performed before a look has been baked into the images.
Since the look has not been baked into the ACES pixels, it can be changed later without compromising image quality. Once a final look has been decided upon and all scene-linear operations have been performed, the look can be baked in, and the LMT can be discarded.
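The structure of the recorded metadata can be sketched as follows. The knob names ("exposure stops", "saturation") and the LMT body are invented for illustration; only the shape follows the text: the preview look is applied on the fly from the knob settings, and the settings plus an LMT identifier string are recorded with the scene-linear pixels rather than being baked into them:

```cpp
#include <cmath>
#include <string>

// Hypothetical knob settings for the camera's look controls.
struct Knobs { float exposureStops; float saturation; };

// Metadata recorded in each scene-linear ACES image: enough to
// re-create the intended look later, without storing the LMT itself.
struct AcesMetadata
{
    std::string lmtId;   // e.g. "Acme_Inc_Model_5_Version_2_20"
    Knobs knobs;
};

struct Pixel { float r, g, b; };

// Hypothetical parameterized LMT: an exposure scale plus a simple
// saturation adjustment around the pixel's channel average. Applied
// only to produce the preview image; its output is not recorded.
Pixel applyLmt(Pixel p, const Knobs &k)
{
    float gain = std::pow(2.0f, k.exposureStops);
    float avg  = (p.r + p.g + p.b) / 3.0f;
    auto sat = [&](float v) { return (avg + (v - avg) * k.saturation) * gain; };
    return { sat(p.r), sat(p.g), sat(p.b) };
}
```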

(Note: the hypothetical digital camera described above is similar to the camera-raw-based workflow that was described in a presentation at the 2007 HPA Technology Retreat: "Production to Presentation: CineForm Online RAW Post-Production Workflow" by David Newman, CineForm, and Mark Pedersen, Pedersen Media Group.)

Film Input

This section describes scanning and importing images that were shot on motion picture negative film.

Film images are digitized by a film scanner, which records raw negative density samples. The relationship between the raw samples and the actual density of the negative depends on the type of film scanner and may vary between different units of the same type of scanner.

The film scanner's IDCT transforms the raw density samples into the IDCES. In the IDCES for film scanners, pixels are represented as calibrated red, green, blue printing density triples. When the same negative is scanned on different film scanners, the scanners' IDCTs ensure that nearly identical IDCES data are produced.

The IDT transforms the IDCES data into approximately scene-linear ACES data. Usually a fairly simple IDT is used; it assumes that there is no non-linear interaction between the negative's red, green and blue sensitive layers and that the relationship between density and log exposure is a linear function. In some cases one may want to employ an IDT that is based on a more realistic model of the relationship between exposure and density. Such an IDT may produce more accurate scene-linear data by accounting for the film's toe and shoulder and for interactions between layers. However, using a highly accurate IDT is often difficult because it tends to be very sensitive to a film processing lab's day-to-day negative processing variations.
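The simple IDT described above, applied per channel, amounts to inverting a straight-line density curve. In this sketch the gamma and base-density constants are illustrative values, not a characterization of any real film stock:

```cpp
#include <cmath>

// Straight-line film model assumed by the simple IDT:
//     density = kBase + kGamma * log10(exposure)
// ignoring the film's toe, shoulder, and inter-layer interactions.
const float kGamma = 0.6f;   // assumed slope of the density curve
const float kBase  = 0.1f;   // assumed base (minimum) density

// Invert the model to recover scene-linear exposure from density.
float densityToLinear(float density)
{
    return std::pow(10.0f, (density - kBase) / kGamma);
}
```

With these constants, base density maps to an exposure of 1.0, and each additional 0.6 density units corresponds to a factor of 10 in exposure.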

Film Output

Recording ACES images on film can be done in two ways:

Traditional Mode

In this scenario images shot on camera original film are scanned and imported into the ACES. After processing the ACES images, for example, to perform visual effects work or to insert subtitles, the images are recorded on intermediate stock. The intermediate stock will be used together with camera original stock in a traditional film-based workflow that includes cutting and splicing, and optical color timing.

Film recording in traditional mode is the reverse of film scanning: the RT is an identity transform, as the rendering is provided by the film print; the ODT is the exact inverse of the IDT; and the film recorder's ODCT converts the printing density ODCES values into data that drive the film recorder.

If an image is imported into the ACES and then exported in the above fashion, without first altering the ACES data, then the resulting negative is essentially a duplicate of the camera original.

Previewing ACES images in traditional mode simulates printing and projecting the negative. This requires a helper transform that models the relationship between red, green, blue printing density triples and the on-screen color that results from printing the negative and projecting the print in a theater. This helper transform is called a Print Film Emulation Transform, or PFET. A given PFET is accurate only for a specific print film stock, developed at a particular laboratory.

The RT for previewing ACES images in traditional mode is formed by concatenating the inverse of the IDT with the PFET. This RT may produce colors that cannot be shown correctly because they fall outside the gamut of the preview display, but as the ACES data are not changed by previewing, the recorded film images will still be correct.

Reference Display Mode

In this scenario, images shot on camera original film are scanned and imported into the ACES. While artists work on them, the images are displayed using a video monitor or a digital projector. Color rendering is performed by the RRT or by another rendering transform that does not make use of any particular PFET.

In reference display mode, work on the ACES images is complete when the images look right on the reference display, that is, when they look right with the chosen RT applied. Recording the ACES images on film and projecting a print should result in on-screen images that look close to what is seen on the reference display, subject only to differences between the gamut and dynamic range of the film system versus the reference display.

In reference display mode, the RT for film recording is the same as for previewing, and the ODT for the film recorder is essentially the inverse of the PFET for the print film stock and laboratory that will be used to make release prints. The ODT may not be an exact inverse of the PFET because the color gamut of the reference display typically differs from the gamut of the projected film print, and the ODT may adjust the images to account for the differences.

Video and other Output-Referred Material

From time to time it will be necessary to import output-referred images into the ACES, for example, video, JPEG files or scanned film prints. In this case, it may be neither possible nor desirable to create scene-linear pixel data. Importing output-referred material should result in ACES images that look, when color-rendered with the RT, just like the original video, JPEG image or film print. This can be achieved by using an IDCT that converts from the material's native representation to CIE XYZ, followed by an IDT that is the inverse of the RT. Note that this implies that the RT for any project that uses output-referred material must be invertible.
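The round-trip requirement stated above can be checked mechanically: if the IDT is the exact inverse of the RT, then importing and subsequently color rendering must reproduce the original output-referred values. In this sketch a simple power curve stands in for a real rendering transform; the function names are illustrative:

```cpp
#include <cmath>

// Toy invertible RT: a power curve standing in for a real rendering
// transform. ACES -> rendered value.
float toyRt(float v)        { return std::pow(v, 0.6f); }

// Its exact inverse, used as the IDT for output-referred material.
float toyInverseRt(float v) { return std::pow(v, 1.0f / 0.6f); }

// Import an output-referred value into ACES, then color render it for
// display. The result must equal the original value.
float importAndDisplay(float outputReferred)
{
    float aces = toyInverseRt(outputReferred);  // IDT = inverse of the RT
    return toyRt(aces);                         // RT applied at display time
}
```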

Digital Projector and DCDM Output

DCDM-encoded images for a digital projector are created by starting with ACES values and applying an RT, followed by an ODT that converts from OCES to the DCDM encoding.

Long-Term Archiving

Metadata related to color rendering should not be part of archival copies of digital motion pictures. In order to achieve this, the rendering transform for archived movies must always be the RRT. The ACES files for any motion picture project that uses the RRT during production can be archived directly. When a motion picture project is produced using a different, project-specific RT, then a separate set of archival ACES files must be produced. This is done by applying first the project-specific RT and then the inverse of the RRT to the original ACES images.

ACES Pixel Data

ACES image files contain pixel data that represent CIE 1931 XYZ tristimulus values. Depending on the on-disk encoding, the files may either contain actual XYZ triples, or any other representation that can be converted to XYZ. (But see the note at the end of the File Formats section, below.)

File Formats

The workflow described above can be implemented using CTL for most of the transforms and OpenEXR as the ACES image file format.

Some of the transforms required for this workflow will be built directly into input and output devices, and no portable CTL representation of those transforms is needed. (In the case of a digital camera with a Bayer-mosaic sensor, the IDCT includes demosaicing and thus cannot be implemented in CTL anyway, because CTL processes each pixel in complete isolation from all others.)

If multiple companies work on a motion picture project, they need to share certain transforms, and those transforms should have CTL representations:

Transforms that are not shared:
- IDCT for any input device
- ODCT for any output device
- IDT for a digital camera

Transforms that are shared:
- IDT for a film scanner
- ODT for a film recorder
- ODT for an RGB display
- RT, including the standardized RRT
- LMT

The image interchange framework described here assumes that only ACES images are saved in image files. IDCES, OCES and ODCES images exist only temporarily and are never saved in files. Thus, the entire image interchange framework needs only two file formats, one for ACES images and one for color transforms.

In current practice, film scanning and recording houses deliver scanned images as DPX files and accept DPX files for recording. At present most scanning and recording houses tend to avoid dealing with image file formats other than DPX. In this framework, they would have to handle ACES files with corresponding transforms to and from density. The assumptions listed in the previous paragraph could be relaxed such that images from a film scanner and OCES images for film recording are saved as DPX files with a standardized and well-defined relationship between printing densities and DPX code values. In this case film scanning and recording houses would continue to handle only DPX files.

The existing OpenEXR image file format allows images with arbitrary RGB color encoding spaces; the encoding space of a given file is specified by a chromaticities attribute in the header. Software that reads OpenEXR files is expected to pay attention to this attribute. In some cases it may be difficult to modify existing software or hardware to handle arbitrary primaries correctly. A fairly simple extension to OpenEXR's existing RGB interface could implement "virtual primaries": when an image is opened for reading, the application specifies the desired RGB space, and the OpenEXR file I/O library automatically converts the pixels into this space by performing a matrix multiplication. When an image file is opened for writing, the application specifies the RGB space of the pixels; the library stores the pixels and the RGB specification in the file. At this time virtual primaries are not implemented in OpenEXR.

(Note: the Image Interchange Framework committee has not yet decided how pixels in ACES files should be encoded. The possibilities under consideration include parametric RGB spaces, virtual primaries as described in the previous paragraph, as well as a single, fixed ACES RGB space.)
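The matrix multiplication behind such a conversion can be derived from a file's chromaticities. Below is a minimal sketch, assuming the standard derivation of an RGB-to-XYZ matrix from primaries and white point; the function name is illustrative, not part of the OpenEXR API.

```cpp
#include <cassert>
#include <cmath>

// OpenEXR's Chromaticities attribute, as defined later in this document.
struct V2f { float x, y; };
struct Chromaticities { V2f red, green, blue, white; };

// Derive the 3x3 RGB -> CIE XYZ matrix (row-major) implied by a set of
// chromaticities: the columns are the XYZ coordinates of the primaries,
// scaled so that RGB = (1,1,1) maps to the white point with Y = 1.
void rgbToXYZMatrix (const Chromaticities &c, double M[3][3])
{
    // Convert each (x,y) chromaticity to XYZ with Y = 1.
    double Xr = c.red.x   / c.red.y,   Zr = (1 - c.red.x   - c.red.y)   / c.red.y;
    double Xg = c.green.x / c.green.y, Zg = (1 - c.green.x - c.green.y) / c.green.y;
    double Xb = c.blue.x  / c.blue.y,  Zb = (1 - c.blue.x  - c.blue.y)  / c.blue.y;
    double Xw = c.white.x / c.white.y, Zw = (1 - c.white.x - c.white.y) / c.white.y;

    // Solve Sr*(Xr,1,Zr) + Sg*(Xg,1,Zg) + Sb*(Xb,1,Zb) = (Xw,1,Zw)
    // for the per-primary scale factors, using Cramer's rule.
    double det = Xr * (Zb - Zg) - Xg * (Zb - Zr) + Xb * (Zg - Zr);
    double Sr  = (Xw * (Zb - Zg) - Xg * (Zb - Zw) + Xb * (Zg - Zw)) / det;
    double Sg  = (Xr * (Zb - Zw) - Xw * (Zb - Zr) + Xb * (Zw - Zr)) / det;
    double Sb  = (Xr * (Zw - Zg) - Xg * (Zw - Zr) + Xw * (Zg - Zr)) / det;

    double out[3][3] = {{Sr * Xr, Sg * Xg, Sb * Xb},
                        {Sr,      Sg,      Sb     },
                        {Sr * Zr, Sg * Zg, Sb * Zb}};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            M[i][j] = out[i][j];
}
```

Converting pixels between two RGB encoding spaces then amounts to multiplying by the source space's matrix followed by the inverse of the destination space's matrix.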

Color Rendering Metadata

ACES image files may be tagged with metadata attributes related to color rendering. The list below gives the names and types of the attributes as they appear in OpenEXR file headers. For other image file formats the attributes must be translated to an appropriate representation. The attributes are optional, that is, not every file is required to specify them. If a particular attribute is not present in a given file, then software that reads the file should use an appropriate default value, as specified below.

Chromaticities chromaticities
    Defines the CIE (x,y) coordinates of the primaries and white point of the image file. The C/C++ definition of type Chromaticities is as follows:

        struct V2f
        {
            float x;
            float y;
        };

        struct Chromaticities
        {
            V2f red;
            V2f green;
            V2f blue;
            V2f white;
        };

    The red, green, blue and white fields define the (x,y) coordinates of the RGB triples (v,0,0), (0,v,0), (0,0,v) and (v,v,v), where v is an arbitrary value greater than zero.
    If the chromaticities attribute is missing, then the primaries and white point are as specified in Rec. ITU-R BT.709.

V2f adoptedNeutral
    Specifies the CIE (x,y) coordinates that should be considered "neutral" during color rendering. Pixels in the image file whose (x,y) value matches the adoptedNeutral value should be mapped to neutral values on the display.
    If the adoptedNeutral attribute is missing, then the white value specified in the chromaticities attribute should be used. If both the adoptedNeutral and the chromaticities attributes are missing, then the Rec. ITU-R BT.709 white point should be used.

ImageState imageState
    Specifies the image state of the file. The C/C++ enumeration type ImageState has three possible values:

        enum ImageState
        {
            SCENE_LINEAR = 0,
            INDIRECT_OUTPUT_REFERRED = 1,
            OUTPUT_REFERRED = 2
        };

    SCENE_LINEAR means that the pixels in the file are scene-linear. The file contains an ACES image, and displaying it requires color rendering.
    INDIRECT_OUTPUT_REFERRED means that the file contains an ACES image whose pixels have been adjusted so that the image looks as intended when displayed with color rendering.
    OUTPUT_REFERRED means that the file contains an OCES image whose pixels have already been color-rendered. When the image is displayed, it is necessary to apply an ODT and an ODCT, but the RT must be skipped. (The image interchange framework described here does not make use of OCES image files; OCES images exist only temporarily, as intermediate results when an image is sent to an output device. However, saving an OCES image file may be useful for debugging.)
    If the imageState attribute is missing, software that processes the image should assume that the image state is INDIRECT_OUTPUT_REFERRED.

string renderingTransform
    Specifies the name of the CTL function that implements the intended RT for this image. If the renderingTransform attribute is missing, the name "transform_RRT" should be used.

string lookModTransform
    Specifies the name of the CTL function that implements the intended LMT for this image. If the lookModTransform attribute is missing, then the image has no LMT.
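The fallback rules above can be summarized in code. Here is a minimal sketch in which null pointers stand in for absent header attributes; the Header type and the effective* helper functions are illustrative, not part of the OpenEXR API.

```cpp
#include <cassert>
#include <string>

struct V2f { float x, y; };
struct Chromaticities { V2f red, green, blue, white; };
enum ImageState { SCENE_LINEAR = 0, INDIRECT_OUTPUT_REFERRED = 1, OUTPUT_REFERRED = 2 };

// Rec. ITU-R BT.709 primaries and white point, the documented defaults.
const Chromaticities REC709 = {{0.6400f, 0.3300f}, {0.3000f, 0.6000f},
                               {0.1500f, 0.0600f}, {0.3127f, 0.3290f}};

// A null pointer stands in for an attribute that is absent from the header.
struct Header
{
    const Chromaticities *chromaticities     = nullptr;
    const V2f            *adoptedNeutral     = nullptr;
    const ImageState     *imageState         = nullptr;
    const std::string    *renderingTransform = nullptr;
};

Chromaticities effectiveChromaticities (const Header &h)
{
    return h.chromaticities ? *h.chromaticities : REC709;
}

V2f effectiveAdoptedNeutral (const Header &h)
{
    if (h.adoptedNeutral)
        return *h.adoptedNeutral;
    return effectiveChromaticities (h).white;  // Rec. 709 white if both missing
}

ImageState effectiveImageState (const Header &h)
{
    return h.imageState ? *h.imageState : INDIRECT_OUTPUT_REFERRED;
}

std::string effectiveRenderingTransform (const Header &h)
{
    return h.renderingTransform ? *h.renderingTransform : "transform_RRT";
}
```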

Setting the Image State Attribute

During most stages in the production of a motion picture it is not important to know whether a given ACES image is scene-linear or not. Compositing and 3D computer graphics for visual effects are an exception; visual effects work is ideally done with scene-linear images. The imageState attribute listed in the table above distinguishes between scene-linear and other ACES images.

Original ACES images from a digital camera or a 3D renderer, and images produced by scanning film stock with known response curves, are at least approximately scene-linear. The digital camera, rendering software or film scanner should tag the images by setting the imageState attribute to SCENE_LINEAR.

Images imported from video, JPEG files and other output-referred sources, and images produced by scanning film with unknown response curves, are most likely not scene-linear. The imageState attribute in the corresponding ACES files should be set to INDIRECT_OUTPUT_REFERRED.

Simple color timing of an image, that is, adjusting its white balance and exposure by linearly scaling each channel, as is typically done prior to visual effects work, does not affect scene-linearity. Color-timing software or hardware should preserve the value of the imageState attribute.

Creatively color-correcting an image to achieve a desired look, or baking a look modification transform into the pixels of an image, destroys the image's scene-linearity. The resulting image files should be tagged by setting the imageState attribute to INDIRECT_OUTPUT_REFERRED or, equivalently, by removing the imageState attribute.
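These rules amount to a simple state-propagation policy. The following hypothetical sketch illustrates it; the Operation enumeration is invented for the example, and only the imageState values come from this document.

```cpp
#include <cassert>

enum ImageState { SCENE_LINEAR = 0, INDIRECT_OUTPUT_REFERRED = 1, OUTPUT_REFERRED = 2 };

// Hypothetical classification of pixel operations, for illustration only.
enum Operation
{
    LINEAR_PER_CHANNEL_SCALE,   // simple color timing: white balance, exposure
    CREATIVE_COLOR_CORRECTION   // look grading, or baking an LMT into pixels
};

// Simple color timing preserves scene-linearity; a creative grade destroys it.
ImageState imageStateAfter (ImageState in, Operation op)
{
    if (op == LINEAR_PER_CHANNEL_SCALE)
        return in;                       // preserve the attribute's value
    return INDIRECT_OUTPUT_REFERRED;     // look is now baked into the pixels
}
```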

Color Channels, CTL Inputs and Outputs

In order to make CTL transforms portable between different kinds of application software, the signatures of some transforms in this image interchange framework must be standardized. Below is a list of standard signatures. Transforms may have input and output parameters in addition to those listed. Additional input parameters must have a default value; a transform with additional input parameters must work even if application software does not specify values for those parameters. Additional output parameters must not alter the interpretation of the standard output parameters; application software or later CTL transforms must be able to ignore the additional output parameters.

The function names in the CTL function signatures below are not to be taken literally; they are placeholders for the actual names.

Rendering Transform, ACES to OCES

    void transform_name
        (input varying half R,
         input varying half G,
         input varying half B,
         input uniform Chromaticities chromaticities,
         input uniform V2f adoptedNeutral,
         output varying half X_OCES,
         output varying half Y_OCES,
         output varying half Z_OCES)

R, G and B define the red, green and blue components of the input pixels. chromaticities defines the chromaticities and white point of the RGB color space of the input pixels. The input pixels may or may not be scene-linear.

The R, G and B values are scaled such that "middle gray" corresponds to a pixel with (x,y) coordinates equal to adoptedNeutral and a luminance of 0.18. If adoptedNeutral is equal to chromaticities.white, middle gray corresponds to the RGB triple (0.18, 0.18, 0.18).

X_OCES, Y_OCES and Z_OCES define the CIE X, Y and Z components of the output pixels. The output values are linear with respect to the amount of light on the display. The X_OCES, Y_OCES and Z_OCES values are scaled such that middle gray corresponds to a pixel with Y_OCES equal to 0.18. The range of the output values is unlimited.
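To illustrate the scaling convention only, here is a trivial stand-in for an RT with an identity tone scale, written in C++ rather than CTL and assuming the input uses Rec. 709 primaries. This is not the RRT; a real rendering transform applies color rendering on top of the color space conversion.

```cpp
#include <cassert>
#include <cmath>

// Identity-tone-scale stand-in for an RT: converts Rec. 709 RGB to CIE XYZ
// using the standard matrix.  Because the Y row's coefficients sum to 1.0,
// a neutral middle-gray pixel (0.18, 0.18, 0.18) comes out with
// Y_OCES = 0.18, as the scaling convention above requires.
void identityRT (float R, float G, float B,
                 float &X_OCES, float &Y_OCES, float &Z_OCES)
{
    X_OCES = 0.4124f * R + 0.3576f * G + 0.1805f * B;
    Y_OCES = 0.2126f * R + 0.7152f * G + 0.0722f * B;
    Z_OCES = 0.0193f * R + 0.1192f * G + 0.9505f * B;
}
```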

Look Modification Transform, ACES to ACES

    void transform_name
        (input varying half R,
         input varying half G,
         input varying half B,
         input uniform Chromaticities chromaticities,
         output varying half ROut,
         output varying half GOut,
         output varying half BOut)

R, G and B define the red, green and blue components of the input pixels. ROut, GOut and BOut define the red, green and blue components of the output pixels. chromaticities defines the chromaticities and white point of the RGB color space of the input and output pixels.

Typically the LMT will have a number of parameters in addition to those listed above. The additional parameters are specific to a particular digital camera model; their values correspond to the settings of the camera's "knobs."
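In C++ terms, the rule that additional input parameters must have default values looks like this. The sketch below is a hypothetical LMT that scales exposure; "exposure" is an invented knob, not part of any standard signature.

```cpp
#include <cassert>

// Hypothetical LMT sketch: the standard inputs and outputs come first; the
// additional, camera-specific parameter has a default value, so software
// that knows only the standard signature can still call the transform.
void exampleLMT (float R, float G, float B,
                 float &ROut, float &GOut, float &BOut,
                 float exposure = 1.0f)  // additional "knob", defaulted
{
    ROut = R * exposure;
    GOut = G * exposure;
    BOut = B * exposure;
}
```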

Output Device Transform, OCES to ODCES for an RGB Display

    void transform_name
        (input varying half X_OCES,
         input varying half Y_OCES,
         input varying half Z_OCES,
         input uniform Chromaticities displayChromaticities,
         output varying half R_display,
         output varying half G_display,
         output varying half B_display)

X_OCES, Y_OCES and Z_OCES define the CIE X, Y and Z components of the OCES input pixels. R_display, G_display and B_display define the red, green and blue components of the output pixels. displayChromaticities defines the chromaticities and white point of the RGB color space of the display device. The RGB values are linear with respect to the amount of light on the display. Output values must be between 0.0 and 1.0, corresponding to the display's minimum and maximum light output.

The gamut of the display is implicitly defined by the displayChromaticities and by the limited range of the output values.
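A minimal sketch of the matrix-and-clamp core of such an ODT, assuming a Rec. 709 display and using the well-known XYZ to linear Rec. 709 RGB matrix. A production ODT would also have to decide how to render OCES values that exceed the display's range, rather than simply clamping.

```cpp
#include <cassert>
#include <cmath>

static float clamp01 (float v) { return v < 0.0f ? 0.0f : (v > 1.0f ? 1.0f : v); }

// XYZ -> linear display RGB for a Rec. 709 display, with out-of-range values
// clamped to the display's limited output range.  Illustrative only.
void odtRec709 (float X, float Y, float Z,
                float &R_display, float &G_display, float &B_display)
{
    R_display = clamp01 ( 3.2406f * X - 1.5372f * Y - 0.4986f * Z);
    G_display = clamp01 (-0.9689f * X + 1.8758f * Y + 0.0415f * Z);
    B_display = clamp01 ( 0.0557f * X - 0.2040f * Y + 1.0570f * Z);
}
```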

Output Device Transform, OCES to ODCES for Negative Film

    void transform_name
        (input varying half X_OCES,
         input varying half Y_OCES,
         input varying half Z_OCES,
         output varying half DR_film,
         output varying half DG_film,
         output varying half DB_film)

X_OCES, Y_OCES and Z_OCES define the CIE X, Y and Z components of the OCES input pixels.

DR_film, DG_film and DB_film define the desired red, green and blue printing density values on the negative. The values are linear with respect to density (negative log of transmittance). They must be between 0.0 and 1023.0, corresponding to densities of 0.0 and 2.046 respectively.
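The density encoding used by the two film transforms is a simple linear mapping of 0.002 density units per code value. A small sketch of the arithmetic (the helper names are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Code values 0.0 .. 1023.0 map linearly to printing densities 0.0 .. 2.046,
// i.e. 0.002 density units per code value.
float codeToDensity (float code)    { return code * (2.046f / 1023.0f); }
float densityToCode (float density) { return density * (1023.0f / 2.046f); }

// Density is the negative base-10 logarithm of transmittance.
float densityToTransmittance (float d) { return std::pow (10.0f, -d); }
```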

Input Device Transform, IDCES for Negative Film to ACES

    void transform_name
        (input varying half DR_film,
         input varying half DG_film,
         input varying half DB_film,
         input uniform Chromaticities chromaticities,
         output varying half R,
         output varying half G,
         output varying half B)

DR_film, DG_film and DB_film define the red, green and blue printing density values on the negative. The values are linear with respect to density. The values are between 0.0 and 1023.0, corresponding to densities of 0.0 and 2.046 respectively.

R, G and B define the red, green and blue components of the ACES output pixels. chromaticities defines the chromaticities and white point of the RGB color space of the output pixels.

Other Transforms

The IDTs for digital cameras, as well as IDCTs and ODCTs, are not typically shared. Transforms that are not shared must interface properly with their predecessors or successors. For example, the IDT for a digital camera must output ACES RGB pixels as expected by the input side of the RT. Beyond that, standardization of transforms that are not shared is not necessary, and no CTL implementations of those transforms are required.

Glossary

Transforms

IDCT: Input Device Calibration Transform. Converts raw values from an input device such as a camera or film scanner to calibrated IDCES data such that there is a known relationship between IDCES pixel values and light in the recorded scene. The IDCT compensates for differences between the specifications for a particular type of input device and an actual unit. Each unit has its own IDCT.

IDT: Input Device Transform. Converts calibrated IDCES data into ACES data. If possible, the ACES data will be scene-linear. The IDT can be shared between different units of the same type of input device.

LMT: Look Modification Transform. Modifies ACES data in order to impart a particular look to ACES images. The LMT is used by digital cameras. It allows a camera to record scene-linear ACES data as well as the intended look of those images. Since the look is not baked into the images, it can be revised later in the production pipeline by altering the LMT.

ODCT: Output Device Calibration Transform. Converts calibrated ODCES data into output device values. The ODCT compensates for differences between the specifications for a particular type of device and an actual unit. Each unit has its own ODCT.

ODT: Output Device Transform. Converts OCES data into ODCES data for a particular type of output device. The ODT can be shared between different units of the same type of output device.

RRT: Reference Rendering Transform. A default rendering transform that is used for the production of a movie, unless there is a specific reason to use another, project-specific rendering transform. Use of the RRT is mandatory for final ACES data that will be archived.

RT: Rendering Transform. Converts ACES pixel data into OCES pixel data.

Color Encoding Spaces

ACES: Academy Color Encoding Space. A color space with conceptually unlimited gamut and dynamic range. ACES images typically start out as scene-linear, but become increasingly non-linear as they are modified by various stages of a movie production pipeline.

IDCES: Input Device Color Encoding Space. A color encoding space that is specific to a particular type of input device. Where possible, the IDCES is chosen such that it is possible to construct an IDT that converts from IDCES data to scene-linear ACES data.

ODCES: Output Device Color Encoding Space. A color encoding space that is specific to a particular type of output device. The ODT for this type of device converts OCES data to ODCES data.

OCES: Output Color Encoding Space. A color space with conceptually unlimited gamut and dynamic range. OCES pixel data describe reference display colorimetry.

Source Code

The C++ source code for the reference implementation of the CTL interpreter can be downloaded from

    http://www.oscars.org/council/ctl.html

The CTL interpreter depends on a set of low-level utility libraries, called IlmBase, which can be downloaded from

    http://savannah.nongnu.org/projects/openexr

The OpenEXR_Viewers package, available at

    http://savannah.nongnu.org/projects/openexr

includes source code for OpenEXR still image and moving image viewers, both with CTL support. In addition to the packages listed above, building the image viewers requires the OpenEXR and OpenEXR_CTL packages, which are available at

    http://savannah.nongnu.org/projects/openexr

and

    http://www.oscars.org/council/ctl.html

The source code packages mentioned contain instructions for building and installing the respective libraries and executables.

The "Color Transformation Language User Guide and Reference Manual" is available at

    http://ampasctl.sourceforge.net/CtlManual.pdf

Documentation for the OpenEXR file format, the OpenEXR file I/O libraries and the OpenEXR image viewers is available at

    http://www.openexr.com/documentation.html

Figures

Figure 1: Color encoding spaces and color transforms. (Flow diagram: input device → input device values → IDCT → IDCES → IDT → ACES, modified by the LMT → RT → OCES → ODT → ODCES → ODCT → output device values → output device.)

Figure 2: A digital camera that directly supports this image interchange framework. (Flow diagram: camera sensor and A/D conversion → raw sensor data → IDCT → IDCES (scene-linear RGB values) → IDT → ACES (scene-linear XYZ values). User-controllable look parameters, the camera's "knob" settings, configure the LMT; the ACES data and an identifier for the LMT are written to the recording medium (tape, disk, etc.). For previewing only, the LMT output (modified XYZ values) passes through the RT → OCES → ODT to a preview monitor; previewed images are not recorded.)